But first we need an underlying cloud fabric that allows for flexibility of data and applications.

Ziv Kedem, CEO, Zerto

April 28, 2014

4 Min Read

Several high-profile organizations have migrated off the public cloud recently, taking all of their workloads back onto their own private clouds.

For example, Zynga, HubSpot, MemSQL, and even the CIA made headlines when they moved from Amazon Web Services to private clouds (in the case of the CIA, Amazon is building the organization a private cloud). 

A 2013 survey from CompTIA revealed that one-quarter of companies using public clouds are transferring IT services from public cloud providers to on-premises systems and/or private cloud models. With all of the alleged efficiency of using the public cloud, why would so many companies choose to take everything private? 

Workload inequality
The various types of workloads, and even the phases of the workload lifecycle, present steep challenges for cloud adoption. Test and development may be commonplace cloud-ready scenarios, but production workloads come in so many flavors that moving them to the cloud and operating them there is a complex undertaking.

Each type of workload employs datacenter infrastructure differently; those differences can include:

  • Multiple stacks

  • Different networks

  • Numerous tiers of the application

  • Complex management needs

  • Varied protection requirements, recovery timelines, and SLAs

Therefore, a "one-size-fits-all" approach does not give enterprises the flexibility to manage these disparate needs.

For those companies considering going the other way -- moving workloads from their on-premises datacenters to the public cloud -- the situation is even more complex. To accomplish this, production workloads need to be easily mobilized, centrally managed, and protected so that they can interoperate with, and reap the benefits of, the public cloud. But production workloads today are not easily mobilized, because they're siloed.

Breaking the silos
Workloads are siloed by the hypervisor, be it VMware ESXi, Microsoft Hyper-V, Citrix Xen, or Red Hat KVM. Workloads simply cannot move between these hypervisors easily: conversion processes have to happen behind the scenes, and those processes demand effort from both end users and the hypervisor platform.
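
To make that behind-the-scenes effort concrete, here is a minimal sketch (in Python) of just one step in a cross-hypervisor move: converting a VMware VMDK disk image to the VHD format that Hyper-V uses. It assumes the open-source qemu-img tool is installed and on the PATH, and the file names are hypothetical.

    import subprocess

    def convert_vmdk_to_vhd(src: str, dst: str) -> None:
        """Convert a VMDK disk image to VHD ("vpc" in qemu-img terms)."""
        subprocess.run(
            ["qemu-img", "convert", "-f", "vmdk", "-O", "vpc", src, dst],
            check=True,  # raise an error if the conversion fails
        )

    # Hypothetical usage: convert the disk of one tier of an application.
    convert_vmdk_to_vhd("app-server.vmdk", "app-server.vhd")

And even after the disk is converted, the VM's network settings, boot order, and drivers still have to be rebuilt by hand on the target platform -- exactly the kind of friction that keeps workloads siloed.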

They are also siloed by hardware, with "vendor lock-in" to a specific brand of hardware making it difficult to move workloads for purposes such as disaster recovery and data migration. Big storage vendors like it this way. These differentiated workloads will not change or magically become interoperable with any other datacenter environment, so a move to the cloud is fraught with problems from the start.

Workloads are siloed in clouds as well. A cloud computing model allows for converged infrastructure and resources shared across departments or even across clouds. But cloud providers often use proprietary infrastructure and tools that are incompatible with other clouds, and without shared management of those resources across cloud platforms, customers get locked in to one specific provider.

So where can enterprises turn? Lately, there has been a lot of buzz around the hybrid cloud, where an organization manages some resources in-house and has other resources provided by an external cloud provider. 

The hybrid cloud is interesting to IT departments and CIOs because it allows for cost reduction, cloud bursting, server migration, disaster recovery, and data portability. Gartner estimates that 70 percent of enterprises will pursue the hybrid cloud by 2015. Unlike the public cloud model, the hybrid cloud returns far more control to the IT department, allowing for greater flexibility and easier management of workloads. But how does a hybrid-cloud-based datacenter avoid or remove the datacenter silos?

We need a "cloud fabric" 
For the hybrid cloud to reach widespread adoption by removing these silos, we need to see the development and adoption of an underlying infrastructure layer that allows for seamless flexibility of data and applications across clouds, hypervisors, networks, and hardware -- a concept I like to call the "Cloud Fabric."

My company, Zerto, has identified the key capabilities production workloads need in order to utilize any cloud, and will be rolling out products in the next year that support the core principles of the cloud fabric concept (which is not Zerto's concept alone).

I see four critical components of the cloud fabric layer:

  • A powerful transport layer for data and applications, one that is cross-hypervisor and hardware-agnostic

  • Orchestration of the mobility of complex applications

  • Encapsulation of all of the dependencies that are part of an application, such as boot order and IP configuration (see the sketch after this list)

  • Production-level tools for the highest service levels of data mobility and protection, so that mobility of workloads is easy to manage and report on
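
To make the third component more concrete, here is a minimal, hypothetical sketch (in Python) of what encapsulating an application's dependencies might look like. The structure and field names are illustrative assumptions, not any vendor's actual format.

    from dataclasses import dataclass, field

    @dataclass
    class VirtualMachine:
        name: str
        boot_order: int   # lower numbers boot first (e.g., database before app tier)
        ip_config: dict   # per-site network settings, so no manual reconfiguration

    @dataclass
    class ApplicationManifest:
        name: str
        vms: list = field(default_factory=list)

        def boot_sequence(self):
            """Return the VMs in the order they must start at the target site."""
            return sorted(self.vms, key=lambda vm: vm.boot_order)

    # A three-tier app whose database must boot before its app and web tiers.
    crm = ApplicationManifest(name="crm")
    crm.vms = [
        VirtualMachine("crm-web", boot_order=3, ip_config={"prod": "10.0.1.10", "dr": "10.9.1.10"}),
        VirtualMachine("crm-app", boot_order=2, ip_config={"prod": "10.0.1.20", "dr": "10.9.1.20"}),
        VirtualMachine("crm-db",  boot_order=1, ip_config={"prod": "10.0.1.30", "dr": "10.9.1.30"}),
    ]
    print([vm.name for vm in crm.boot_sequence()])  # ['crm-db', 'crm-app', 'crm-web']

A fabric layer that carried a manifest like this along with the data could restart a multi-tier application in the right order, with the right addresses, on whatever cloud it lands on.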

With such a fabric in place, enterprises will be able to move production applications between clouds without incurring downtime and without changing configurations between sites. The hybrid cloud will be able to protect and recover applications without the need to purchase storage from the same manufacturer for both production and recovery sites.

In this "brave new world," organizations will be able to manage applications through a single console, whether those applications reside on-premises or in a cloud datacenter.

About the Author

Ziv Kedem

CEO, Zerto
