What Oracle’s Cloud Deals with OpenAI and Google Cloud Mean

The race for capacity, fueled by AI’s ascension, could lead to more bartering agreements in the cloud.

Joao-Pierre S. Ruth, Senior Editor

June 17, 2024


In a pair of deals announced last week, Oracle wove its cloud infrastructure even more into growing services from OpenAI and Google Cloud.

Last Tuesday, Oracle, Microsoft, and OpenAI announced a partnership that extends Microsoft’s Azure AI to Oracle Cloud Infrastructure (OCI) to give OpenAI more capacity. Oracle also announced a partnership with Google Cloud to combine OCI and Google Cloud tech to accelerate app modernization and migrations.

In a statement, OpenAI CEO Sam Altman said OCI would enable OpenAI to continue to scale, running some of its workloads on that infrastructure.

According to the statement on the partnership with Google Cloud, the agreement is meant to offer, among other resources, Oracle's database and applications in tandem with Google Cloud's platform and AI.

Sid Nag, vice president in the technology and service provider group at Gartner, says Oracle’s partnership with Google, comparable to a prior arrangement with Microsoft Azure, will let workloads in the Google environment communicate with Oracle’s database if needed.

“It kind of promotes the whole notion of multicloud,” he says. “A lot of cloud providers talk about multicloud, but they don’t really put their money where their mouth is. This is an example where they are actually doing something about it.”


Regarding the other announced partnership, Nag says it seems Microsoft will leverage capacity for OpenAI in OCI in order to support Azure AI platforms. “It seems like the need for capacity that OpenAI either has today or is predicting may or may not be met by Microsoft infrastructure, which means they’re leaning on OCI to do that,” he says. “What is not clear is if Microsoft doesn’t have that capacity or if they want to keep that capacity for other things.”

There may be a bigger dynamic at play beyond this specific deal. “I think we’re going to see more of this capacity bartering happen between cloud providers as cloud becomes a utility like your gas company or your electric company,” Nag says. He compares such bartering to arrangements between major telco providers such as AT&T and regional “Baby Bells” as well as other carriers. Further bartering could be fueled by AI’s continued rise.

“AI is going to drive a massive demand for capacity,” he says. “LLMs [large language models] are going to balloon in size.”

There seems to be some slowing of AI’s momentum, particularly generative AI’s, Nag says. It is unclear, he says, how quickly enterprises might adopt GenAI, which could generate revenue at a greater scale than consumers and hobbyists asking cute questions on ChatGPT. “The money’s going to be in the adoption of GenAI by the enterprise,” he says. “It’s not clear how quickly that’s going to happen and, therefore, what impact will that be to the capacity question.”


Another aspect of AI’s expansion across the cloud may be the desire to spread some compute workloads among providers beyond the hyperscalers. “This is obviously a canny move for Oracle in terms of having some of that capacity in their cloud,” says Spencer Kimball, CEO of Cockroach Labs. “Definitely Oracle’s OCI is less expensive than the hyperscalers.”

Production workloads might stay with hyperscalers, he says, but other workloads could run elsewhere. It does not matter where some AI training is done, Kimball says. “You’re doing all the hard compute work and then you get a model, and that model is relatively small and doesn’t require compute anywhere near the compute that was required for the training,” he says. “This makes it so that the hyperscalers are not the best place to run training.”

About the Author

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.

