App Virtualization: Why We Need Better Options

Golden disk images are simple, but they won't cut it as implementations get more complex. Here's the problem and what vendors are doing about it.

Kurt Marko, Contributing Editor

October 26, 2011


The evolution of virtualization has made the job of configuring virtualized applications incredibly complex.

Where virtualization was once just about wringing the most efficient use out of a server, it's changed into a whole new application platform, whether that's a private cloud inside your data center or infrastructure as a service from a cloud provider. That evolution means IT now must maintain multiple virtual machine instances of an app, providing patches and updates on an increasingly diverse array of hardware, hypervisor platforms, and cloud services. Each of these application endpoints demands different tools and interfaces for configuration and ongoing administration.

As IT gets comfortable with virtualized infrastructures, companies also are deploying much larger and more complex production applications on them. They're also virtualizing more of their environments: 63% expect to have at least half of their servers virtualized by the end of next year, the InformationWeek 2011 Virtualization Management Survey finds (see chart, above). IT used to have to tend to only a few virtualized Windows development and test servers. Now it's multitier, mission-critical enterprise applications.

As a result, the longstanding practice of distributing VM disk images as virtual application appliances is showing its flaws. While these "golden images" are a fast way to deploy a new machine, the images quickly diverge from their pristine initial condition once they're in use, so you no longer have a single app type to maintain. Furthermore, they don't always work if you're using more than one vendor's hypervisor or using public infrastructure-as-a-service platforms. A similar problem afflicts IT teams trying to configure and instantiate virtualized resources--like compute cores, memory, networks, and storage--on different hardware platforms in a repeatable fashion.

In an ideal world, application and infrastructure requirements could be expressed in a machine-readable meta-format that IT could use to automatically configure and deploy instances on a variety of virtualized platforms--whether private or public. We're still a long way from this cross-platform "configure once, deploy anywhere" vision. But the industry has made some progress in creating tools for repeatable cross-cloud application deployment. So we'll focus on ways to decouple virtualized applications from the underlying physical infrastructure.



Application Bundling

The key strategic decision when determining how to deploy and administer virtualized applications is the mode of packaging and distribution. The default--ever since VMware, the market-share leader in virtualization software, adopted the format--is disk images. Now known as virtual appliances, images are bundles comprising an operating system, applications, and configuration details that completely describe a VM's runtime environment.

While this approach, which is borrowed from deployment practices pioneered for enterprise PC client distribution, is simplistic, its great benefit is repeatability: Each new virtual application is identical to the golden master. IT likes repeatability, with good reason.

Unfortunately, problems arise once the pristine image is subjected to the vagaries of operating system patches, application updates, and configuration changes. Applications, once released into the runtime wild, tend to quickly diverge from the golden image.

There are a couple of ways to address this problem. One is a reprise of the Unix diff patch management approach, in which the current state of an application or system is compared against a prior snapshot. Much like image-based disk backup software, this technique can identify changes at the disk-block layer to create a delta image that can transform applications from one golden image to another.
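To make the block-level delta idea concrete, here is a minimal sketch (not any vendor's implementation) that hashes fixed-size blocks of two equal-size images, records only the blocks that diverged, and can replay that delta to transform one golden image into another. The block size and in-memory byte strings are simplifications for illustration.

```python
import hashlib

BLOCK_SIZE = 4096  # arbitrary block size for illustration


def block_hashes(image: bytes, block_size: int = BLOCK_SIZE):
    """Hash each fixed-size block of a disk image."""
    return [hashlib.sha256(image[i:i + block_size]).hexdigest()
            for i in range(0, len(image), block_size)]


def compute_delta(golden: bytes, current: bytes, block_size: int = BLOCK_SIZE):
    """Return {block_index: new_block_bytes} for blocks that diverged."""
    old, new = block_hashes(golden, block_size), block_hashes(current, block_size)
    delta = {}
    for i, (h_old, h_new) in enumerate(zip(old, new)):
        if h_old != h_new:
            delta[i] = current[i * block_size:(i + 1) * block_size]
    return delta


def apply_delta(golden: bytes, delta: dict, block_size: int = BLOCK_SIZE) -> bytes:
    """Transform a golden image into the updated image by replaying the delta."""
    blocks = [golden[i:i + block_size] for i in range(0, len(golden), block_size)]
    for i, data in delta.items():
        blocks[i] = data
    return b"".join(blocks)
```

Real products work against live disk files and snapshots rather than in-memory buffers, but the core idea is the same: ship only the changed blocks, not a whole new image.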

This approach can be problematic, however. "If you can make an application totally stateless, then it works," says Shawn Edmondson, VP of product strategy at private cloud software provider rPath. But that's not possible for most applications. For deployed applications accumulating state information such as configuration settings or user information, IT ends up creating separate diff images for each VM instance. "Most people use golden images for new deployments but do updates the old-fashioned way," he says. Read: patch files and binary package distributions applied on each VM--which clearly isn't scalable.

Another, more flexible strategy entails using templates, which may include, for example, scripts to pull Perl packages from CPAN or build a standard J2EE stack. This tactic allows the same application model to be used on different cloud environments, both private and public. Such a template-driven approach facilitates updating stacks without rebuilding the entire bundled disk image. Nand Mulchandani, CEO and co-founder of cloud management system vendor ScaleXtreme, describes this approach as "dynamic VM assembly."
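A rough sketch of what such a template might look like, with all names and step types invented for illustration: a base image plus a declarative list of steps that is rendered into install commands at deploy time, so updating the stack means editing the template rather than rebuilding the disk image.

```python
# Hypothetical "dynamic VM assembly" template: a base image plus
# declarative build steps, resolved at deploy time rather than
# baked into a monolithic golden image.
TEMPLATE = {
    "base_image": "ubuntu-10.04-minimal",
    "steps": [
        {"type": "package", "name": "apache2"},       # OS package
        {"type": "cpan", "module": "JSON::XS"},       # Perl module from CPAN
        {"type": "script", "path": "deploy_app.sh"},  # app-specific setup
    ],
}


def render_install_commands(template: dict) -> list:
    """Turn template steps into shell commands for the target VM."""
    commands = []
    for step in template["steps"]:
        if step["type"] == "package":
            commands.append("apt-get install -y " + step["name"])
        elif step["type"] == "cpan":
            commands.append("cpan -i " + step["module"])
        elif step["type"] == "script":
            commands.append("sh " + step["path"])
    return commands
```

Because the same template can be rendered against different targets, the one application model carries across private and public clouds.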

This strategy extends traditional software-building approaches, like makefiles and package managers, which focus on source code and application libraries, into the realm of VMs and virtual appliances. Developers build a detailed model of all the application and OS binary files, reminiscent of source code makefiles or Linux binary packages. It's no coincidence that rPath is a leading advocate of this approach since the startup was co-founded by an originator of the Red Hat Package Manager.

However, Mulchandani cautions against commingling application and operating system configurations if you plan on deploying to public clouds, since even IaaS providers using the same Linux distribution--say Red Hat or CentOS--often use different kernel or library versions in their VMs. That's a critical detail that IT can't control. He also points out that application models can get quite detailed and that building them is painstaking, time-consuming work that can be impractical for large companies with scores of applications and thousands of machines.

So what's the answer? For now, that has a lot to do with whether you're using a public or a private cloud. Yes, SaaS products like ScaleXtreme's are trying to bridge that gap, but they are very new and are designed to do everything from their own service--so they don't integrate well, if at all, with existing internal management software.

Chart: What percentage of your company's production servers do you expect to have virtualized by the end of next year?

Private Vs. Public Cloud Tools

With few exceptions, tools for managing virtualized applications are aimed at either internal private clouds or public infrastructure-as-a-service environments, not both.

For private clouds, infrastructure management suite vendors, like Hewlett-Packard with Insight or CA with Cloud-Connected, and a host of point product providers, notably Opscode and Puppet Labs, have developed orchestration software that can automate the configuration and deployment of complete virtualized infrastructures and application sets using a single user interface. Under the covers, this generally entails creating a machine-readable document (in XML, for example) that controls a high-level scripting environment. That environment subsequently calls the low-level tools that do the real work of instantiating VMs, creating LUNs, configuring networks, and deploying VM images.
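The pattern behind these orchestration tools can be sketched as follows; the XML schema and element names here are invented, and the low-level tools are stubbed as log messages, but the shape is the same: one machine-readable document drives a high-level layer that dispatches to the tools doing the real provisioning work.

```python
import xml.etree.ElementTree as ET

# Invented schema, just to illustrate the pattern: a single document
# describes the VMs, storage, and networks to instantiate.
DEPLOYMENT_XML = """
<deployment>
  <vm name="web01" cpus="2" memory_gb="4" image="web-tier"/>
  <lun name="db-data" size_gb="100"/>
  <network name="app-net" vlan="120"/>
</deployment>
"""


def provision(doc: str) -> list:
    """High-level layer: parse the deployment document and call the
    low-level tools (stubbed here as action strings) that do the work."""
    actions = []
    for elem in ET.fromstring(doc):
        if elem.tag == "vm":
            actions.append("create VM %s (%s cpus, %s GB) from image %s"
                           % (elem.get("name"), elem.get("cpus"),
                              elem.get("memory_gb"), elem.get("image")))
        elif elem.tag == "lun":
            actions.append("create LUN %s (%s GB)"
                           % (elem.get("name"), elem.get("size_gb")))
        elif elem.tag == "network":
            actions.append("configure network %s on VLAN %s"
                           % (elem.get("name"), elem.get("vlan")))
    return actions
```

The value of the document layer is that the same description can, in principle, be replayed against different back-end tools, which is exactly the repeatability golden images lack.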

Alternatively, the virtualization vendors' hypervisor administration systems have largely morphed into all-encompassing private cloud management suites. At this year's VMworld, VMware made clear its ambition to be the center of the private cloud ecosystem. It's enhancing the scope and capabilities of its management stack, with products that replicate most of the key features found in conventional IT management suites from the likes of CA, HP, and IBM.

Yet IT teams aren't rushing to implement such advanced provisioning and administration capabilities. Our Virtualization Management Survey finds them ranked in the lower echelon of features (see chart, right). Maybe that's because such management and provisioning capabilities apply only if you're exclusively a VMware shop. Things aren't so rosy if, like 36% of our respondents, you use more than one hypervisor. And the number of heterogeneous shops is poised to increase as Microsoft and Citrix improve their capabilities--and as VMware's recent licensing changes, which for many customers amount to a price increase, take effect.

That's where a cross-platform product can come in handy. One such product comes from HotLink. Its new SuperVisor software essentially tricks VMware's vCenter into seeing alternative hypervisors as native VMware instances.

Think of it as a translation layer that sits between, say, Microsoft's Hyper-V hypervisor and VMware's vSphere. It's conceptually similar to CPU instruction set translators like Apple's Rosetta, which let software built for one instruction set, such as PowerPC, run on another, such as x86. The advantage, says HotLink CTO Oded Haner, is that companies can extend their investments and staff training related to VMware's management platform to Hyper-V, XenServer, and KVM environments.

Furthermore, since each hypervisor looks to VMware's management stack like a native VMware instance, it means IT can migrate workloads among them. Sadly, HotLink SuperVisor can't yet automatically migrate active VMs using VMotion. Since more than 60% of our survey respondents are using such virtual machine mobility tools, this is a key shortcoming. Haner's promising a fix in a future release.

Chart: How important are these technologies to your company's overall IT strategy?

Public Cloud: At Their Mercy

When it comes to managing VMs and applications on the public cloud, IT is pretty much at the mercy of whatever tool the cloud service provides. And here, Amazon has set a high bar for the competition.

Amazon Web Services' CloudFormation, released in February, provides a one-stop shop for creating and managing sets of AWS resources used by virtualized applications. Furthermore, these sets can be saved as application templates that describe all AWS resources, dependencies between them, and runtime parameters used by the application. Amazon calls this collection the "application stack." Like HP with Cloud Maps, Amazon has built a starter library of configurations for common applications like WordPress (blog), Drupal (content management), and Joomla (content management).

CloudFormation currently supports most AWS resources, including its EC2 virtual servers; storage from both elastic block store (EBS) and simple storage (S3); load balancers and IP addresses; relational database services; and queue services. For Java apps, Amazon's Elastic Beanstalk further simplifies the deployment process by automatically provisioning server capacity, load balancing, and resource-capacity scaling using the full panoply of AWS compute, storage, and network services; it figures out what the application needs by parsing the application archive file and monitoring its use.
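As a rough sense of what a stack template contains, here is a much-simplified CloudFormation-style template built as a Python dict and serialized to JSON. The resource names and properties are illustrative only, not a complete or validated template.

```python
import json

# Simplified stack template in the spirit of CloudFormation: parameters,
# plus the resources (a web server and a database) that make up the
# "application stack." Names and properties are illustrative.
stack_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Two-tier blog stack (illustrative)",
    "Parameters": {
        "InstanceType": {"Type": "String", "Default": "m1.small"},
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"InstanceType": {"Ref": "InstanceType"}},
        },
        "Database": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {"AllocatedStorage": "5", "Engine": "MySQL"},
        },
    },
}

template_json = json.dumps(stack_template, indent=2)
```

The point of capturing the stack this way is that the whole set of resources, dependencies, and parameters becomes a single versionable artifact that can be launched repeatedly.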

Rackspace's recent acquisition of Cloudkick is a sign that it too sees the need for improved management tools, even though that product is largely a VM management and monitoring tool, not one for cloud application packaging.

Finally, make sure you keep an eye on a new class of cloud management services, such as those from ScaleXtreme and RightScale. While initially focused on cross-platform and cross-cloud virtual server management, these online services hold the promise of evolving into full-fledged application administration products.

ScaleXtreme already supports the ability to script application installations, such that a full LAMP stack can be deployed on its target environments, including AWS, Rackspace, and VMware.

Likewise, RightScale offers the ability to create application deployment templates that describe the configuration of and connections among multiple virtual servers.

These templates start with a base machine image that typically includes just the guest operating system; it's customized using a set of scripts, which can execute at boot or runtime. Scripts can be used to install or configure applications, network parameters, or storage volumes. With either product, since the scripts are independent of the base OS, applications can be patched and reconfigured without generating a new system image. That's where ScaleXtreme's Mulchandani gets his concept of dynamic VM assembly.
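The base-image-plus-scripts model described above can be sketched in a few lines; the image name and script names are hypothetical. The key property is that patching a deployment means swapping scripts in the plan, while the base OS image stays untouched.

```python
# Sketch of the base-image-plus-scripts model: the OS image and the
# configuration scripts are kept separate, so an application can be
# patched by changing the script list without building a new image.
def build_boot_plan(base_image: str, boot_scripts: list) -> dict:
    """Pair a base OS image with scripts to run at boot time."""
    return {"image": base_image, "run_at_boot": list(boot_scripts)}


plan_v1 = build_boot_plan("centos-5.6-base",
                          ["install_mysql.sh", "configure_network.sh"])

# A patched deployment reuses the same base image with updated scripts.
plan_v2 = build_boot_plan("centos-5.6-base",
                          ["install_mysql_update.sh", "configure_network.sh"])
```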

Familiar Problems

The difficulties faced by developers and systems admins in deploying apps to the cloud are reminiscent of those we encountered transitioning from mainframes to the client-server era. In that conversion, developers were dealing with two very different environments: the Wintel PC and the Unix workstation. The Wintel PC provided a standard hardware and software platform that enabled "write once, run anywhere" simplicity. Unix workstations meant a hodgepodge of incompatible instruction sets, Unix variants, and memory byte orders--anyone else remember the big-endian, little-endian madness?

Madness is a fair description of today's virtualization and cloud landscape. Developers and IT have to address not only application and infrastructure requirements, but also how these are packaged for deployment on different platforms and services. There are plenty of vendors looking to address these problems, and many of their approaches are conceptually similar to those pioneered for mainframe-to-Unix application distribution decades ago. That's actually a good thing since it means the conceptual framework isn't new. We've solved similar problems and are largely just adapting well-worn concepts to the cloud era.

Still, given the diversity of efforts and cloud-based endpoints, we're not likely to have a clean, widely applicable fix soon. The best near-term strategy is to factor application deployment and management into your thinking when evaluating systems, hypervisor management software, and public cloud services, and investigate the growing array of cloud management point products.

Kurt Marko is a 15-year IT industry veteran.

InformationWeek: Nov. 7, 2011 Issue


About the Author(s)

Kurt Marko

Contributing Editor

Kurt Marko is an InformationWeek and Network Computing contributor and IT industry veteran, pursuing his passion for communications after a varied career that has spanned virtually the entire high-tech food chain from chips to systems. Upon graduating from Stanford University with a BS and MS in Electrical Engineering, Kurt spent several years as a semiconductor device physicist, doing process design, modeling and testing. He then joined AT&T Bell Laboratories as a memory chip designer and CAD and simulation developer. Moving to Hewlett-Packard, Kurt started in the laser printer R&D lab doing electrophotography development, for which he earned a patent, but his love of computers eventually led him to join HP's nascent technical IT group. He spent 15 years as an IT engineer and was a lead architect for several enterprisewide infrastructure projects at HP, including the Windows domain infrastructure, remote access service, Exchange e-mail infrastructure and managed Web services.
