Cloud resources must be prepared to adapt to tumultuous events, from sudden surges in online shopping to the repercussions of the pandemic.

Joao-Pierre S. Ruth, Senior Writer

December 7, 2020

Penny Avril, Google Cloud

Keeping operations as disruption-free as possible is a necessity for enterprises, though that might seem more complicated with resources running in the cloud. Disasters, wild fluctuations in market demand, and the general upheaval of adjusting to the pandemic could raise questions about the ability of cloud native databases to adapt. No one wants a transformation strategy to implode in the middle of a storm.

In an interview with InformationWeek, Google Cloud’s senior director of databases, Penny Avril, spoke about making sure that unforeseen events do not cause disarray with resources that have been entrusted to the cloud.

How do you prepare for the unpredictable? How do you make plans for unknown issues that may come up?

“In dealing with unpredictability, there are two key aspects here. One is being agile in terms of scale and capacity. It’s not just renting out machines; it’s being able to use those machines without experiencing any downtime. The other aspect really is the speed of developing new applications. One thought is how do we adapt the application we’ve got to a greatly increased volume of traffic. The second is possibly the need for new applications.

“We’ve seen a lot of this this year with COVID, in cases like the New York State Department of Labor, which was managing unemployment claims. They saw a massive increase in traffic -- I think they went from 350,000 sign-ons in a week to 6 million in the first week. Their existing database, an on-prem mainframe, just couldn’t handle it. They put one of our cloud native databases in front of it, deployed it in a matter of days, and were able to handle that volume.

“The other aspect is developing new apps. With COVID, which is obviously on everyone’s mind, we saw a number of COVID-related apps, such as one for the City of Madrid, deployed in a number of weeks, that let residents track symptoms. That was more about speed of development than being able to cope with increases in the volume and velocity of data.

Are there types of scenarios that you test for in advance? Do you map out potential issues to prepare a response? What thought processes and strategizing do you go through?

“We want unpredictable times to be a non-event. That is our end goal. Sometimes we know there are going to be changes in the volume of traffic -- Black Friday and Cyber Monday being obvious ones. The best news is when it’s a non-event. How we plan for that is really in how we design these databases. They’re designed for unlimited scale -- scale that can be turned both up and down without any interruption to the application.

“That is really our core design principle. One of the strengths Google has here is that these cloud native databases -- such as Spanner, Bigtable, and Firestore -- were battle-tested by Google’s own services. YouTube, Gmail, the list goes on -- they had to have true unlimited online scale. Spanner is a unique database in that it is the only fully managed relational database that can scale horizontally without any limits. We prepared at the design phase of these databases.

Are there lessons that have been learned as you’ve adapted to the events of this year and other situations?

“One thing brought to our attention this year was the increase in customers wanting to move their databases to the cloud to take advantage of unlimited scale with no downtime. We’re seeing what I would call more mainstream or enterprise customers move. They are very familiar with old guard technology; they use a lot of existing ecosystem tools. This is one thing that’s really come home to us. It’s almost like Google has solved the difficult problems here.


“We have these databases that have unlimited, globally distributed scale. We almost just need to make them slightly more accessible for users. We’ve done that in a couple of ways. The big way is to work with the developer tools -- the ORMs, object-relational mapping tools -- so people have an easier time migrating existing apps to these databases or developing new apps against them, easing the onboarding.

How is the velocity of data changing for 2021 and beyond?

“We’ve seen a couple of big trends. One trend is in the old on-prem world, customers had monolithic applications that not only had problems scaling but problems in terms of developing new features against these large monolithic apps. We’re seeing a move to microservices and Kubernetes. In terms of data volumes, at any one point in time at Google Cloud, we’re seeing over half a million Kubernetes pods connecting to Cloud SQL.”
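The Kubernetes-to-Cloud SQL pattern Avril describes is often wired up with the Cloud SQL Auth Proxy running as a sidecar container, so each pod reaches the database over localhost. A minimal sketch of such a Deployment follows -- the project, instance, and image names are placeholders, and the proxy image tag should be checked against current releases:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service                # hypothetical microservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: app
          image: gcr.io/my-project/orders-service:latest  # placeholder image
          env:
            - name: DB_HOST
              value: "127.0.0.1"      # app talks to the sidecar, not the instance
            - name: DB_PORT
              value: "5432"
        # Cloud SQL Auth Proxy sidecar: opens an authenticated tunnel to the instance
        - name: cloud-sql-proxy
          image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.0
          args:
            - "--port=5432"
            - "my-project:us-central1:my-instance"  # placeholder connection name
```

Because each pod carries its own proxy, scaling the Deployment’s replica count up or down adds or removes database connections along with it.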

 

For content on cloud migration, follow up with these stories:

Where Cloud Spending Might Grow in 2021 and Post-Pandemic

IBM Research’s Chief Scientist Talks AI for Cloud Migration

Is Cloud Migration a Path to Carbon Footprint Reduction?

Is Your Pandemic-Fueled Cloud Migration Sustainable?

Study: Cloud Migration Gaining Momentum

 

About the Author(s)

Joao-Pierre S. Ruth

Senior Writer

Joao-Pierre S. Ruth has spent his career immersed in business and technology journalism first covering local industries in New Jersey, later as the New York editor for Xconomy delving into the city's tech startup community, and then as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight. Joao-Pierre earned his bachelor's in English from Rutgers University. Follow him on Twitter: @jpruth.
