IT consultant David Linthicum says ignore claims on performance and just test your application in a target environment.

David Linthicum, Contributor

March 6, 2014

5 Min Read

I'm always skeptical about claims that seem too good to be true, especially when it comes to technology. The latest instances are claims made around the Google Cloud's ability to provide "performance stability," as best highlighted in a commentary by Chandra Krintz, "Google Cloud's Big Promise: Performance Stability."

While cloud platforms, IaaS platforms in particular, do their best to provide consistent performance, the performance we see from most cloud providers is always a bit "bursty." They often deliver inconsistent performance throughout the day, largely because you ultimately share physical resources, such as CPUs, memory, disk, and network, with many different tenants that have many different requirements.

This is an old problem. Multiuser systems (as opposed to multitenant ones) have long had the same issue, and you can still see it today: the more users log in to a system, the slower it becomes. Eventually the system reaches a saturation point where it begins to thrash, and performance drops off significantly. Obviously, performance stability would be a good thing.

[Want to know more about Google Cloud's aspirations? See Google Compute Cloud Challenges Amazon.]

However, I'm more interested in how something is done, beyond the assertions that it can be done. Krintz states in her commentary:

Google Compute Engine is the next-generation of IaaS system and provides resource performance stability via a set of novel engineering advances. These advances include: customized virtualization under KVM; advanced resource isolation technologies, such as specialized Linux control groups that shield one process from others; clever data replication/redundancy strategies; novel datacenter design and geographic placement; and dedicated high-speed fiber networks between well thought out and proven software services, such as App Engine, Cloud Storage, BigQuery, and YouTube.

The assertion is that Google, through its IaaS and PaaS services, provides consistent resource performance at very low resource cost. In this world everything is better: applications redesigned to take advantage of the Google platform's cloud-native features deliver better and more consistent performance; specifically, better and more consistent than Amazon Web Services, which is the unstated target of these claims.

The problem with this model is that taking advantage of the consistent-performance features is really an application design, development, and deployment issue, more than just a matter of good infrastructure. Thus, Google Compute Engine as an IaaS is not as effective without the PaaS capabilities.

My suspicion? An unmodified application simply dropped onto the Google cloud platform won't realize the degree of benefit described. Google, like other providers, does require that its platform features be taken into consideration, to some degree, when deploying applications in its cloud.

To be fair to Google, the same issues exist with other IaaS and even PaaS players, including AWS, Rackspace, Microsoft, and others. Applications modified to take advantage of the native performance features of the cloud platforms they are deployed on, such as auto-scaling, will indeed deliver better performance and stability. That's why you hear the term "cloud native" a lot these days. I suspect it will become a more popular approach as those who deploy to cloud-based platforms better understand the performance and stability benefits.

The claim that Google provides more consistent performance will have to be tested by each enterprise that moves applications and data to its cloud. I would recommend creating a series of tests with different load profiles: for example, simulating end-of-year processing with steadily increasing loads on compute and storage, followed by sustained loads over several days. This type of testing will tell you a lot.
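A minimal sketch of the kind of load-profile harness described above: a ramp of increasing load followed by a sustained plateau, with per-phase latency statistics to judge consistency. All names and numbers here are illustrative, and `send_request` is a stand-in for a call against your own application endpoint; a real test would also run far longer and at far higher request rates.

```python
import statistics
import time

def build_load_profile(ramp_steps=5, start_rps=10, step_rps=10,
                       sustain_rps=60, sustain_periods=3):
    """Return a list of (label, requests_per_second) phases:
    an increasing ramp followed by a sustained plateau."""
    profile = [("ramp", start_rps + i * step_rps) for i in range(ramp_steps)]
    profile += [("sustain", sustain_rps)] * sustain_periods
    return profile

def run_phase(send_request, rps, duration_s=1):
    """Fire rps * duration_s requests and collect latencies in seconds."""
    latencies = []
    for _ in range(rps * duration_s):
        start = time.perf_counter()
        send_request()  # stand-in for a request to the system under test
        latencies.append(time.perf_counter() - start)
    return latencies

def stability_report(latencies):
    """Coefficient of variation of latency: lower means more consistent."""
    mean = statistics.mean(latencies)
    cv = statistics.stdev(latencies) / mean if mean > 0 else 0.0
    return {"mean_s": mean, "cv": cv}

if __name__ == "__main__":
    for label, rps in build_load_profile():
        report = stability_report(run_phase(lambda: None, rps))
        print(label, rps, report)
```

Comparing the coefficient of variation across providers, or between the ramp and sustain phases on one provider, is one simple way to put a number on "performance stability" rather than taking the marketing claim at face value.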

As most of us who have migrated applications to the cloud know, the characteristics and profile of each application are more of a determinant of performance than the platform is. Thus, I would also suggest that enterprises build test applications that leverage some cloud-native features and observe the performance and stability differences versus applications that are largely unmodified. Considering that you're likely to move hundreds of applications and data sets to a cloud, this information will go a long way toward understanding the performance profile of your applications, as well as reducing the cost of operations.

I doubt Google is the only provider that offers this benefit. However, Google's focus on performance stability may get it more play in the market over the next few years, as the second-place IaaS contender. Google is good at making things fast, and I suspect that its resulting IaaS and PaaS clouds won't disappoint long-term. My experience has been that Google's performance is good to great.

Keep in mind that, with any cloud-native features where applications are localized for specific cloud platforms, the tradeoff is portability for performance. If you write cloud-native features in your application to support performance stability, that application will have to be modified when it's moved to another cloud platform, or perhaps brought on-premises. This is a cost and risk that needs to be considered before committing to a specific flavor of cloud computing.


About the Author(s)

David Linthicum

Contributor

David S. Linthicum is senior vice president of Cloud Technology Partners and an expert in complex distributed systems, including cloud computing, data integration, service oriented architecture (SOA), and big data systems. He has written more than 13 books on computing and has more than 3,000 published articles, as well as radio and TV appearances as a computing expert. In addition, David is a frequent keynote presenter at industry conferences, with over 500 presentations given in the last 20 years.

