At Interop ITX, JP Morgenthal of CSC will describe what changes when it comes to operations management in a move to the cloud.

Charles Babcock, Editor at Large, Cloud

March 14, 2017


When it comes to operations, the word most often associated with IT ops is infrastructure. Operations managers oversee the physical servers, switches, and disk drives that keep the business running.

As IT operations shift into the cloud, however, they will have to organize around some other form of management, because the cloud provider will be supplying the infrastructure.

What will have to change for IT operations to manage their systems in the public cloud? On premises, operations staffs have carefully built up run books that tell them what sequence of steps to take in most situations. As their systems move into the cloud, they'll declare, "This comes with me," and they'll seek to use those run books in the cloud, predicts JP Morgenthal, CTO of digital applications for the Americas at CSC.


"There's no bigger mistake than trying to take your existing run book into the cloud. It's a paradigm shift," said Morgenthal in an interview, and operations will have to make a similar shift if it's to succeed in the new environment.

For more on the elements of serverless computing, see Serverless Computing Isn't Simple To Explain.

Morgenthal will try to explain what operations is likely to look like in Developing Your Cloud Operating Model on May 18 at 10:30 a.m. at the Interop ITX conference in Las Vegas.

He proposes that IT operations "is moving toward an application-centric focus vs. an infrastructure focus." If that sounds a little too familiar, it's not what you think. Morgenthal sees an inevitable shift toward composing cloud applications both as small, distributed microservices and as even smaller functions, such as the event-driven functions of AWS Lambda, Google Cloud Functions, or Microsoft Azure Functions.

These functions will exist as application components at rest until needed. Some designated software event will trigger the service, after which the function is fired up quickly, executed, and put back to sleep.
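A minimal sketch of what such a component can look like in practice, written against AWS Lambda's Python handler convention; the event fields and the work done here are hypothetical, purely to illustrate the trigger-run-sleep cycle:

# Minimal Lambda-style handler: the function sits at rest until the platform
# invokes it with an event, does its brief unit of work, and goes idle again.
# The event fields (order_id, amount) are illustrative, not from the article.
import json

def handler(event, context):
    order_id = event.get("order_id")
    amount = event.get("amount", 0)

    # All needed state arrives in the event; there is no server to manage
    # before or after this call.
    result = {"order_id": order_id, "charged": amount}
    return {"statusCode": 200, "body": json.dumps(result)}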

These cloud applications will be part of the new serverless generation, in which developers never worry about which servers their code runs on. Cloud operations takes care of providing the servers automatically.

But this will drastically change how operations does things. A key tool for today's on-premises operations managers is the ability to analyze the logs of the servers their systems run on. But how will server logs be analyzed when the application consists of hundreds of microservices and functions-as-a-service scattered across the cloud supplier's infrastructure, Morgenthal asked.
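One hedged illustration of that shift: rather than reading a single server's log, each short-lived function emits its own structured, correlated records to a central aggregator. The field names and service name below are hypothetical, assuming only the Python standard library:

import json, logging, time, uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("orders")  # hypothetical service name

def record(event_name, correlation_id, **fields):
    # Emit one JSON line per event so a log aggregator can stitch together a
    # single request as it crosses many scattered functions and microservices.
    log.info(json.dumps({
        "event": event_name,
        "correlation_id": correlation_id,
        "timestamp": time.time(),
        **fields,
    }))

correlation_id = str(uuid.uuid4())   # generated once, passed along the call chain
record("order.received", correlation_id, order_id="A-1001")
record("payment.charged", correlation_id, amount=42.50)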

The traditional metrics of CPU usage, memory usage, and disk activity lose their meaning in the cloud when it comes to measuring application performance. Notice of an application failure has likewise rested on an anachronistic measure: whether the server hardware has failed. The cloud is designed to let a server fail; cloud software simply fails the virtual machine workloads over to another piece of hardware.

"Instrumentation will be your responsibility, not that of the operating system...The CEO and CIO offices, I don't think have people focused on this," he said.

Tools to notify administrators that an application has failed haven't yet been invented for cloud use. Operations managers migrating into the cloud will try to carry their existing practices and procedures with them. In an Oct. 4 post on his personal blog, The Tech Evangelist, Morgenthal had this to say:

"What we should expect to see going forward is operations being 'unbound' from infrastructure and bonded with applications. As more and more businesses realize the economic benefits of relinquishing ownership of their infrastructure to a (cloud) provider, IT should reorganize around operations of the applications and workloads."

That would mean abandoning on-premises systems-monitoring tools, such as IBM Tivoli and BMC Patrol, in favor of application-monitoring tools such as Dynatrace, AppDynamics, and New Relic when it comes to producing application metrics, he said. Dynatrace, for example, can remotely monitor application performance and report on it from a business perspective, showing a cost/benefit relationship, he said.


"That's the Holy Grail of application performance monitoring," Morgenthal noted in the interview. But in his blog he was pessimistic that migrators to the cloud would get to such a state any time soon.

It is likely, he wrote, that the needed tools "do not exist or are only starting to now appear in the market. Hence, skilled individuals that understand how to configure these tools most likely don’t yet exist. Thus, the likely outcome will be that businesses will attempt to manage (cloud operations) with the same knowledge and tools they use to manage the pre-Cloud Operating Model world, which will, unfortunately, fail."

Out of this failure will emerge a new generation of operations managers, the ones who will figure out what application-centric operations in the cloud really require. If you wish to question Morgenthal about this pessimistic outlook, or to trade views on what's possible in the near future, consider attending his May 18 session at Interop ITX.

About the Author(s)

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld, and former technology editor of Interactive Week. He is a graduate of Syracuse University, where he obtained a bachelor's degree in journalism. He joined the publication in 2003.
