Architectural Considerations for Building Cloud Applications

Cost Optimization as a Major Architectural Consideration for Cloud Apps


Though it is generally believed that the biggest challenges of architecting a cloud application are security and reliability, there is another major dimension that is generally overlooked: cost optimization.

In a recent survey, 59% of respondents identified data security as the main concern and 20% cited reliability. The fact that applications need to be designed differently to take advantage of the cloud, and thus reduce cost, did not even enter the conversation.

Traditionally, the actual cost of deployment has never been directly considered as a parameter in architectural tradeoffs – performance: yes; response time: yes; application partitioning: yes; load balancing: yes; choice of platform: yes; choice of software: yes; open vs. proprietary: yes – but actual cost of deployment: no. You are likely to do hardware sizing based on the projected load and arrive at a machine configuration. You may also tune parts of the application post-deployment if the response time is not acceptable. But in how many instances will you design your application to bring down the hardware requirement by 10%? What about tuning the application post-deployment to reduce the hardware requirement by 10% even though the response time is adequate?

Traditionally, your hardware and software are capital expenditures. Once the initial investment is made, you are unlikely to save any money by optimizing the application to use fewer resources. But when the application is deployed in the cloud, this is no longer true. What is driving CIOs to take a serious look at cloud computing is primarily the promise of cost reduction. Pay-for-what-you-use implies: don't pay for unutilized resources, and if you consume fewer resources, you pay less.

However, in cloud, you pay for:

  • actual CPU utilization
  • actual size of data storage
  • actual data read-write
  • actual input-output bandwidth used

You can always make an architectural tradeoff and increase or reduce usage along these four parameters. How that affects the overall cost depends on the cost structure. So your architectural decisions have a direct impact on cost – but the optimality of a decision will change as soon as the cloud provider adjusts the cost structure.
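The interaction between the four billed parameters can be sketched as a simple cost model. The unit prices below are hypothetical, purely to illustrate how a design tradeoff (here, adding a cache that trades storage for CPU and I/O) shifts the bill:

```python
# Hypothetical unit prices -- real cloud price sheets differ by provider and region.
PRICES = {
    "cpu_hour": 0.10,          # $ per CPU-hour consumed
    "storage_gb_month": 0.15,  # $ per GB stored per month
    "io_million_ops": 0.10,    # $ per million read/write operations
    "bandwidth_gb": 0.12,      # $ per GB transferred
}

def monthly_cost(cpu_hours, storage_gb, io_ops_millions, bandwidth_gb, prices=PRICES):
    """Aggregate the four usage dimensions into one monthly bill."""
    return (cpu_hours * prices["cpu_hour"]
            + storage_gb * prices["storage_gb_month"]
            + io_ops_millions * prices["io_million_ops"]
            + bandwidth_gb * prices["bandwidth_gb"])

# Tradeoff example: a cache reduces CPU and I/O but increases storage.
baseline = monthly_cost(cpu_hours=500, storage_gb=100, io_ops_millions=200, bandwidth_gb=50)
with_cache = monthly_cost(cpu_hours=450, storage_gb=140, io_ops_millions=80, bandwidth_gb=50)
print(f"baseline: ${baseline:.2f}, with cache: ${with_cache:.2f}")
```

Note that whether the cache wins depends entirely on the relative prices: raise the storage rate enough and the same design becomes the more expensive one, which is exactly why provider repricing can invalidate an "optimal" architecture.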

Pay as you use implies many more options for cost reduction

  • You need to minimize unutilized resources
  • Design and code efficiency becomes critical
  • Cost-effective design will depend on the relative costs of processing, storage, data read-write and bandwidth charges
  • Restructuring of cost by cloud provider may affect the optimality of design
  • Whenever there is an IT budget cut, you may be asked to optimize the code
  • Any outside consultant can come in and claim that there is an opportunity to save money

There is a lack of data on which to base these decisions – it has to be found out through experimentation.

Because providers interpret the application and machine boundary differently, how you package your application has a direct impact on cost.

The cloud marketplace has many players with different strategies, though I have not considered SaaS players like Salesforce.com. Every organization from IBM to Oracle to HP wants to make its presence felt in the cloud, but the following are the major players; here is a summary of the pricing structures of EC2, Azure and GAE for your quick reference.

Amazon EC2:

  • Pay for the duration a machine has been instantiated
  • Not dependent on what you run on the machine
  • Load variability needs to be managed through instantiation and de-instantiation of one or more machines

Implication: Given the choice between one machine of larger capacity and multiple machines of smaller capacity, the latter is preferable.
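The reasoning behind this implication can be sketched numerically. Under per-machine-hour billing, a fleet of small instances can be scaled down in quiet hours, while one large instance bills around the clock. Capacities and rates below are invented for illustration, not Amazon's actual prices:

```python
import math

# Hypothetical instance sizes and hourly rates.
SMALL_CAPACITY, SMALL_PRICE = 100, 0.10   # requests/hour, $/hour
LARGE_CAPACITY, LARGE_PRICE = 400, 0.40

def cost_small(load_profile):
    """Run just enough small instances each hour to cover that hour's load."""
    return sum(math.ceil(load / SMALL_CAPACITY) * SMALL_PRICE for load in load_profile)

def cost_large(load_profile):
    """One large instance stays instantiated regardless of load."""
    return len(load_profile) * LARGE_PRICE

# A 24-hour day: quiet nights, a busy afternoon, a moderate evening.
load = [40] * 8 + [250] * 8 + [120] * 8
print(cost_small(load), cost_large(load))
```

With this load shape the small-instance fleet costs roughly half as much, because de-instantiation turns idle capacity back into savings; a flat, always-peak load would erase most of that advantage.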

Microsoft Azure:

  • Pay for the duration an application has been instantiated
  • Not dependent on how much the application is used or how complex it is

Implication: Bundling of multiple unrelated applications into one may turn out to be more cost effective.
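A rough sketch of why bundling can pay off under per-deployed-instance billing. The hourly rate and apps-per-instance ratio are assumptions for illustration, not Microsoft's actual pricing:

```python
import math

HOURS_PER_MONTH = 730
RATE_PER_INSTANCE_HOUR = 0.12  # hypothetical $ per deployed instance-hour

def separate_cost(n_apps):
    """Each lightly used app deployed as its own instance."""
    return n_apps * HOURS_PER_MONTH * RATE_PER_INSTANCE_HOUR

def bundled_cost(n_apps, apps_per_instance=4):
    """Unrelated low-traffic apps packaged together into shared instances."""
    instances = math.ceil(n_apps / apps_per_instance)
    return instances * HOURS_PER_MONTH * RATE_PER_INSTANCE_HOUR

print(separate_cost(8), bundled_cost(8))
```

Since the charge depends on deployment rather than usage, eight low-traffic applications sharing two instances cost a quarter of what eight separate deployments would, at the price of tighter coupling between otherwise unrelated apps.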

Google GAE:

  • Pay for actual usage of the deployed application
  • Not dependent on how long it is deployed
  • CPU usage of individual transactions is aggregated for cost calculation

Implication: Optimization needs to be performed at the individual transaction level, not at the machine or application level.
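Transaction-level optimization starts with attributing cost to transaction types. The sketch below aggregates a hypothetical per-request CPU log into dollars per transaction; the rate and log data are invented, and real GAE billing uses Google's published unit prices:

```python
from collections import defaultdict

CPU_HOUR_RATE = 0.10  # hypothetical $ per CPU-hour

# (transaction name, CPU milliseconds) pairs, e.g. from request profiling.
request_log = [
    ("search", 120), ("search", 110), ("checkout", 900),
    ("search", 130), ("checkout", 870), ("report", 4500),
]

def cost_by_transaction(log):
    """Sum CPU-ms per transaction type and convert to dollars."""
    totals = defaultdict(int)
    for name, cpu_ms in log:
        totals[name] += cpu_ms
    return {name: ms / 3_600_000 * CPU_HOUR_RATE for name, ms in totals.items()}

costs = cost_by_transaction(request_log)
# The single expensive "report" transaction dominates; optimize it first.
print(max(costs, key=costs.get))
```

This is the inversion the GAE model forces: a rarely invoked but CPU-heavy transaction can cost more than a high-volume cheap one, so profiling per transaction replaces sizing per machine.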

Experience with one platform cannot be directly translated to another platform.

Storage options other than RDBMS are available, and these are expected to be optimized for the cloud.

  • Though AWS and Azure support RDBMS, they also provide other options
  • GAE only supports persistence of objects
  • AWS has multiple non-RDBMS options
  • In AWS, you can use their managed MySQL instance or run MySQL on your own mounted storage

Non-relational databases can be highly efficient in specific application scenarios. For example, if you have a complex domain object, it may be advantageous to store it as a single object, significantly reducing the number of disk I/O operations and thereby the cost. The challenge, however, is to stop thinking in terms of relational tables and SQL. This requires a lot of unlearning.
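The I/O difference can be made concrete with a toy aggregate. The order structure and write counts below are illustrative approximations, not measurements from any real datastore:

```python
import json

# A complex domain object: an order with customer, line items and payments.
order = {
    "id": 42,
    "customer": {"name": "Asha", "city": "Bangalore"},
    "lines": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}],
    "payments": [{"method": "card", "amount": 310.0}],
}

def relational_writes(order):
    """Normalized storage: one row each for order and customer,
    plus one row per line item and per payment."""
    return 2 + len(order["lines"]) + len(order["payments"])

def document_writes(order):
    """Aggregate storage: the whole object serialized and written once."""
    json.dumps(order)  # one document, one write
    return 1

print(relational_writes(order), document_writes(order))
```

Five row writes collapse into one document write; under a read-write-metered price sheet that ratio flows straight into the bill, which is the cost argument for thinking in aggregates rather than tables.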

Many traditional design principles may have to be revisited, and new ones need to be arrived at.


More Stories By Udayan Banerjee

Udayan Banerjee is CTO at NIIT Technologies Ltd, an IT industry veteran with more than 30 years' experience. He blogs at http://setandbma.wordpress.com.
The blog focuses on emerging technologies like cloud computing, mobile computing, social media aka web 2.0 etc. It also contains stuff about agile methodology and trends in architecture. It is a world view seen through the lens of a software service provider based out of Bangalore and serving clients across the world. The focus is mostly on...

  • Keep the hype out and project a realistic picture
  • Uncover trends not very apparent
  • Draw conclusion from real life experience
  • Point out fallacy & discrepancy when I see them
  • Talk about trends which I find interesting