Google offers a look at some additional features being introduced to Anthos and where it fits in cloud transformation.

Joao-Pierre S. Ruth, Senior Writer

September 20, 2019

5 Min Read
Jennifer Lin, Google Cloud. Image: Joao-Pierre S. Ruth

Getting organizations to see how Anthos might be used in cloud transformation has been on Google’s agenda of late. Earlier this week, the company held an event to put the application management platform on the radar of potential users and analysts. The event included discussions of how Anthos can be deployed across complex, hybrid environments and introduced Anthos Service Mesh, a microservices management layer for deploying and securing services.

The desire to modernize can at times conflict with security demands and other concerns about business disruption. Google introduced Anthos earlier this year for managing applications in an agnostic fashion, including across hybrid cloud environments. Financial institutions such as KeyBank, an Anthos user, have regulatory requirements that mean keeping certain operations on-premises. Jennifer Lin, product management director for Google Cloud, took time away from the stage to speak with InformationWeek about where Anthos fits in the still-evolving cloud transformation equation.

What demands are you seeing from organizations that have hybrid environments, and how does Anthos come into play in those circumstances?

“The choice today is based on flexibility moving forward. We started with the on-prem, hybrid scenario, with multicloud coming in the future. We know that customers today are already not locking themselves down to one cloud. From a technology perspective, it’s not like they can easily use one management layer and get the types of capabilities we’re providing through the Kubernetes API server. The fact that Kubernetes is becoming the compute orchestration layer of choice makes it easier for us to say, ‘If you want to register a cluster that sits in another cloud, today it’s on your own premises, but tomorrow it could be running in EC2 [Amazon Elastic Compute Cloud] or Azure.’ That is essentially the equivalent of an on-prem server. The compute and storage can sit somewhere else, with the intelligence layer managing it across a secure network.

“From a network perspective, you can interconnect two VPCs [virtual private clouds] today, but that’s not at the level of dynamic compute orchestration, or a service mesh that extends into another cloud environment, or a function that can fire and run in another cloud. Today, lots of people are running Kubernetes in other environments. Istio runs a lot in Azure and AWS, and Knative as well, because these are all open source projects, so many of our early adopters are running them in whatever compute environment they want.

“After we announced Anthos, some of our competitors were asked specifically what their reaction was. Certain CTOs said they had made it too hard to exit from their cloud. A two- to three-year investment is a big investment. Before, the thinking was about the lowest common denominator. Now, if the value is above the infrastructure as a service, we think about this as the service layer. For us, the cloud is now value added above the compute services. How do you monitor a service independent of what the bill of materials of the hardware is? How do you think about the intent of a function without knowing the hardware and software environment in advance?”

You mentioned that there is concern about vendor lock-in. How do you see Anthos helping to address those concerns?

“I think a lot of that is based on open source. A huge number of enterprise customers have gotten to understand Kubernetes, Istio, and Knative just by watching the open source projects. When they’re ready to move that into a production environment, they want someone else to manage it for them. Being grounded in open source means that if I need to go make another decision, it’s easier because it’s not tied up in proprietary protocols. We’ve quickly gotten thousands of ecosystem partners to write to a pluggable architecture that didn’t exist before. Even at Google Next, we showed six to seven open source database partners that were exiting AWS and coming to Google Cloud because, in that case, AWS was competing against them. We’ve been trying to be very clear about a platform framework that is pluggable -- there are some areas where we have our own database -- but these partners come our way because we’re being clear about how we loosely couple these components.

“This is based a lot on how Google operates. We have lots of parallel development teams, so we have to be clear about what the interfaces are between different parts of the distributed system. It’s not like Sun-Solaris-Oracle, where it’s all bound within the same hardware and software package, or even Exchange and Office. With Azure originally, they were giving away the cloud to get the renewal of their application layer. We see the next 10 years as more about what really happens above the infrastructure as a service. How do you drive new applications that use this new architectural framework? I think that’s why there’s been so much excitement about Kubernetes and what’s different about our cloud.”

What will the push-and-pull be like in the coming 10 years between what you’re offering and what the market is calling for? Are we reaching any sort of plateau in development where things settle down?

“Enterprise customers were thinking about virtualizing and then going to the cloud. Now we’re seeing people do that in one step. Maybe don’t finish virtualizing but move directly to containers, so you have one orchestration layer, and then over time modernize the application. That’s sort of the Anthos migration story. If you listen to OpenText, they’re their own SaaS provider. They’ve had their own private cloud to serve their customers, and they are using us as their platform so they can go to their customers with one service level agreement. The cloud has made this whole go-to-market very efficient for us. We got into the game very quickly without a huge sales organization. That’s because a lot of people like the idea that they can understand it and self-provision it on their own terms, without a lengthy sales cycle, channel partners, and integration that is very painful.”

About the Author(s)

Joao-Pierre S. Ruth

Senior Writer

Joao-Pierre S. Ruth has spent his career immersed in business and technology journalism, first covering local industries in New Jersey, later as the New York editor for Xconomy delving into the city’s tech startup community, and then as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight. Joao-Pierre earned his bachelor’s in English from Rutgers University. Follow him on Twitter: @jpruth.
