Containers emerged at the DockerCon event in San Francisco this week as a technology that is backed by a surprisingly broad spectrum of users, including Google, which says its search engine and all other applications run in containers.
Docker is a particular format for Linux containers that has caught on with developers since its inception 15 months ago. Both Amazon Web Services and Microsoft are moving quickly to make Docker containers welcome guests on their respective cloud hosts.
Containers, sometimes described as lightweight virtualization, promise to move software around more easily and level the playing field between clouds. Does that mean IT should abandon its adoption of virtual machines and replace them with containers? What do containers represent in terms of IT's existing investment in VMware and other hypervisor-based management?
One way to answer those questions is to look at one of Docker's clearest predecessors, which casts light on what Docker means. Docker has nothing to do with hypervisors and little to do with the first containerized operating system, Solaris. Rather, it more closely resembles the simple Red Hat Package Manager, or RPM. Because open source code was frequently modified, Red Hat early on standardized how discrete modules of code could be packaged, assigning them dates of issuance and version numbers so that a package manager could check for compatibility with other modules and assemble thousands of modules into an operating system (Linux). The importance of RPM is not in the technology -- which is fairly simple -- but in the agreement it enforces among Linux developers to work together in a standard way. Docker does something similar, only for complex applications and on a much larger scale.
In the future, containers are expected to be nested. A software component that makes up a layer in one container might be called by another in a remote location. An update to the same layer might be passed on to any other containers that use the same component.
Ben Golub, CEO of Docker Inc., the firm that sponsors the Docker project, likes to draw an analogy with a shipping container: Docker makes it possible to move software around and handle it in a predictable way. But "shipping" falls short of all that Docker enables on the operational front.
Docker creates a sandboxed runtime on whatever computer it lands on. The container occupies a defined memory space and has access only to specified resources. It sets up networking for an application in a standard way and carries, as discrete layers, all the related software the application needs. This tweet from Red Hat's Dan Walsh came out of the second day of the conference: "A container is like Vegas, what happens in a container stays in that container."
The one exception to that self-containment is that the application in the container relies on its new host to provide the operating system, which the host is already running. The restriction is that the Linux kernel version on the host the application came from must match the kernel version on the host it is moving to -- a relatively simple standard to meet in exchange for a big gain in workload portability.
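The layered packaging described above is expressed in a Dockerfile, the recipe from which a container image is built. The sketch below is illustrative only -- the base image, file names, and port are hypothetical, not drawn from the article:

```dockerfile
# Base layer: a minimal Linux userland. The kernel itself is NOT
# packaged; it comes from whatever host runs the container.
FROM ubuntu:14.04

# Each instruction adds a discrete, reusable layer, much as an RPM
# bundles one module with its version metadata.
RUN apt-get update && apt-get install -y python

# The application code travels inside the container...
COPY app.py /opt/app/app.py

# ...along with a standard declaration of its networking needs.
EXPOSE 8080
CMD ["python", "/opt/app/app.py"]
```

Because each instruction produces its own layer, a change to the application code rebuilds only the layers above it, which is what lets updates to a shared layer propagate to the containers that use it.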
In addition to portability, Docker injects a DevOps flavor into the workload package. DevOps requires a higher level of cooperation between developers and operations managers. By accepting the Docker format, developers can produce code without worrying much about where it's going to run. Developers who change code can find their changes automatically tested and added to the correct layer in the Docker workload, without being burdened with maintenance. Operations managers can accept code that's already been tested, certified as formatted in a standard way, and guaranteed to be isolated from other code in a production environment. With Docker, developers and operations, two groups that have perennially been at war, can sit down at a table where a truce could break out, making it easier for both sides to get their jobs done.
On the opening day of the conference, Microsoft CEO Satya Nadella tweeted a link to a Microsoft.com blog post on Docker running on Azure, calling Docker "developer goodness."
With IBM, Google, Rackspace, Red Hat, and many others backing the emergence of Docker containers, it wasn't surprising that Stuart Miniman, principal research contributor and tech analyst at Wikibon, said in another tweet: "Fun fact -- Docker currently has 42 employees. Is it the answer to life, the universe, and everything?"
If enterprise IT is already committed to virtualization, will Linux containers supplant that? Can Docker with 42 employees displace VMware with 14,000? Can containers live up to what they seem to promise?
There's a tenuous relationship between containers and lightweight virtualization. Sun Microsystems executives used to refer to Solaris containerization as Sun's answer to virtualization. Both VMs and containers supply workload isolation on a shared host, but containers proved an inadequate answer on their own; Sun later introduced its own version of Xen. Containers and VMs are also different enough that replacement of VMs by containers any time soon in the enterprise looks highly unlikely.
The larger question is whether VMware, in virtualizing legacy systems and dominating the enterprise data center, is somehow not the right party to lead the management of the next generation of applications. Few people are more aware of this question than executives at VMware. They are trying their utmost to move beyond legacy systems into applications for the cloud and to become a supplier of hybrid cloud services. The spinning out of Pivotal from VMware and the establishment of an independent Cloud Foundry PaaS are key parts of VMware's effort to stay relevant to developers.
With those moves, its data center dominance, and its vCloud Hybrid Service, VMware is in a theoretically good position to realize its ambitions to extend virtualization beyond the enterprise data center into hybrid cloud operations. But I think Linux containers will in fact act as a curb on how far the VMware hypervisor-based software horizon can expand.
Containerization is going to have an appeal for the next generation of developers, partly because even sophisticated virtualization tools and management can't match it in every way. There's evidence from IBM that containers deploy more quickly and run more efficiently than virtual machines. They can also be more densely packed on servers. That's a big plus in the cloud, where overall efficiency remains a litmus test of who will thrive and who will die.
Containerization "is an important way to get standardization at the sub-virtual machine level, allowing portable apps to be packaged in a lightweight fashion and be easily and reliably consumed by PaaS clouds everywhere," wrote IDC software analyst Al Hilwa from the DockerCon 2014 event.
Cloud computing based on vCloud Hybrid Service will have ESX Server hypervisors in both the data center and public cloud. No hypervisor is required for cloud computing based on Docker, a point Google plans to illustrate with its Compute Engine service.
On the other hand, Docker workloads can be deployed in virtual machines, if the user chooses. It is conceivable containers and virtual machines will be used hand-in-glove in some cloud settings. In others, containers will run by themselves on bare metal for maximum efficiency.
For the foreseeable future, virtualization has several management advantages in the enterprise data center, with its potpourri of legacy applications. Those applications can be made independent of the hardware they were launched on and managed with pooled resources. Workloads can be moved around while running to maximize utilization of servers -- containers cannot. But the software-defined data center doesn't necessarily rule out Linux containers. They can be fit in alongside VMs.
The next generation of applications, many of which will run in the cloud, is more likely to be built with containers in mind than with virtualization. When applications are composed as assemblies of many moving and distributed parts, containers will be a better fit.
Google VP of Infrastructure Eric Brewer in a keynote Tuesday said that containers have been critical to how Google does cloud computing. In a blog post the same day, he said, "Everything at Google, from search to Gmail, is packaged and run in a Linux container. Each week we launch more than 2 billion container instances across our global data centers, and the power of containers has enabled both more reliable services and higher, more-efficient scalability."
Google also released Tuesday a container management system, Kubernetes, as open source code. Google uses Kubernetes to manage those 2 billion container instances, but few details of its operation are known yet. Nevertheless, other cloud providers and builders of enterprise private clouds now have a management system to start with.
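Kubernetes describes groups of containers declaratively. As a sketch of that style -- the field names reflect the open source project's pod concept, and every name and image below is hypothetical rather than anything Google has disclosed about its internal use:

```yaml
# A pod: one or more containers scheduled together onto a host.
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend          # hypothetical name
spec:
  containers:
  - name: web
    image: example/web:1.0    # a Docker image pulled onto the host
    ports:
    - containerPort: 8080
```

The management system, not the operator, decides which machine runs the pod, which is the kind of scheduling Google says it performs billions of times a week.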
As a better understanding of the attributes of containerization emerges, it will be the tools to create and manage containers that take center stage. It's too soon to know how flexibly containers will be managed or migrated, or the future tasks they may be able to undertake. But the giant step represented by the move to virtualization in the data center appears about to be repeated, this time with containerization in the cloud.
Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book.