At Interop ITX 2017, a container deployment expert will demonstrate how users can select an option for moving containers to the cloud.

Charles Babcock, Editor at Large, Cloud

March 1, 2017

6 Min Read
Scott Lowe


As the popular Docker container platform takes on more orchestration features, container users have faced a lengthening set of options with which to deploy the containers that they build.

They can turn to Docker Swarm, which creates and manages a Docker container cluster, or they can rely on the Swarm orchestration features added to the Docker Container Engine last June.

Or they can rely on the deployment expertise that Google embedded in Kubernetes, now open source and available as a stand-alone deployment system or embedded in third-party products, such as CoreOS' Tectonic.

If the container owner is deploying to the cloud, the list quickly gets longer: Amazon's EC2 Container Service, Google Cloud Platform's Google Container Engine (GKE), or some combination of a deployment engine launched in the cloud and that cloud's infrastructure as a service receiving the customer's containers.

Scott Lowe, an engineering architect with VMware, will conduct a three-hour workshop on May 16 at Interop ITX in Las Vegas unraveling the different options and demonstrating how each works. His workshop, titled "Deploying Applications to the Cloud with Containers," will run from 9 a.m. to noon.

"The idea is to help attendees understand the various options that exist," he said.  The focus of container users, he said, "has begun to shift to the public cloud. That doesn't mean we will exclude discussion of deploying containers on premises," he noted. Indeed, some of the options he will discuss can be used on premises or in the cloud.

Want to see how Docker made container orchestration a little simpler? See Docker Embeds Container Orchestration In Engine.

But a primary option he will discuss is deploying to a Docker Swarm server on Amazon Web Services. His workshop will include a demonstration of how to take a Docker container with its application and submit it to a Docker Swarm server that the customer has established on AWS.
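A rough sketch of that workflow, assuming a Swarm manager is already running on an EC2 host (the manager address and service details below are hypothetical, and the docker commands are shown as comments because they require a live Swarm manager):

```shell
# Point the Docker CLI at a hypothetical Swarm manager running on AWS.
export DOCKER_HOST="tcp://ec2-manager.example.com:2376"

# With Docker 1.12+ in Swarm mode, a containerized app is submitted as a service.
# These commands are commented out because they need a live manager to run:
# docker service create --name web --replicas 3 --publish 80:80 nginx
# docker service ls   # confirm the service and its replica count
```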

Lowe will also demonstrate deploying Docker containers on Amazon via Amazon's own EC2 Container Service, which Amazon introduced in November 2014 at its re:Invent conference.

And he will demonstrate deploying Docker containers to a Kubernetes server on AWS.

The fourth option he will describe is deploying containers on the Google Cloud Platform using Google Container Engine (GKE). Choosing that route automatically generates a deployment under Kubernetes because that's what GKE uses, Lowe said.

Part of Lowe's approach will be to demonstrate the use of tools that are available for container deployment in each environment. One will be the platform-independent Terraform tool from HashiCorp, a general-purpose infrastructure configuration tool that can build in scalability and be used on premises or with either the Google or Amazon cloud.
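For a flavor of how Terraform describes such infrastructure, here is a minimal sketch that writes a configuration provisioning a single EC2 host (the AMI ID and resource name are hypothetical placeholders, and the terraform commands are commented out because they require Terraform and AWS credentials):

```shell
# Write a minimal Terraform configuration for one EC2 instance.
cat > main.tf <<'EOF'
provider "aws" {
  region = "us-east-1"
}

# Hypothetical host that could run Docker; the AMI ID is a placeholder.
resource "aws_instance" "docker_host" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"
}
EOF

# terraform plan    # preview the changes (requires Terraform and AWS credentials)
# terraform apply   # provision the instance
```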

Lowe will also demonstrate use of AWS' own CloudFormation tool, which allows a user to build a configuration template with instructions on how the parts fit together. The template may include snippets of code that add customizations to a standard configuration.
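As an illustration of such a template, here is a minimal sketch describing one EC2 instance (the stack name, logical name, and AMI ID are hypothetical, and the aws CLI call is commented out because it requires AWS credentials):

```shell
# Write a minimal CloudFormation template describing one EC2 instance.
cat > template.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DockerHost:                            # hypothetical logical name
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0abcdef1234567890     # placeholder AMI ID
EOF

# aws cloudformation create-stack --stack-name docker-demo \
#     --template-body file://template.yaml
```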

In addition, Lowe will demonstrate Docker Compose, the tool that container builders may be most familiar with. Compose can be used to build a multi-container application, with a Compose file holding all the different application services. Upon command, the file can be used to start up all the services simultaneously.
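A minimal Compose file for a hypothetical two-service application might look like this (the docker-compose invocation is commented out because it needs a running Docker host):

```shell
# Write a Compose file defining two services that start together.
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"
  redis:
    image: redis
EOF

# docker-compose up -d   # one command starts both services simultaneously
```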

"I'd like attendees to see the advantages and disadvantages of each approach," Lowe said. When a tool produces a template, he'll store the template on Github so that those who see him demonstrate the tool will be able to retrieve the template and conduct their own experiments.

He'll also demonstrate use of kubectl, the Kubernetes command-line interface for running commands against Kubernetes container clusters. The user can specify operations to run against one or more container resources or dictate the creation of multiple instances of a particular pod. A pod under Kubernetes is a set of containers that share resources, such as networking and storage, and are scheduled together on the same host.
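As a sketch of that workflow, here is a hypothetical two-container pod manifest and the kubectl calls that would act on it (the kubectl commands are commented out because they require a running cluster):

```shell
# Write a manifest for a pod whose containers share networking and storage.
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-pod              # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]
EOF

# kubectl create -f pod.yaml   # submit the pod to the cluster
# kubectl get pods             # list running pods
```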

Although he's a VMware employee, Lowe said he doesn't plan to participate in the containers vs. virtual machines debate in his workshop. It's a fact of life that containers sent to the public cloud are deployed inside a virtual machine, so there's no need to debate the issue. Lowe expects container management to mature and eventually allow containers to run in the cloud without virtual machines. But he also thinks virtual machines are a known management entity and will be the management environment of choice for a long time to come.

Lowe said he'll be able to highlight the main difference between the Google Cloud Platform and Amazon. At Google, the customer's containers are automatically deployed to a Kubernetes cluster. "They will natively provide you with a Kubernetes cluster. It's how the service works, not something you have to do," he said.

At AWS, a user who wishes to use Kubernetes, Docker Swarm or some other orchestration engine has to launch it in EC2 and then launch his containers to it. The orchestration engine is a vital part because it sizes up each container and schedules compatible containers onto the same server.

Orchestrating containers can be a complex process, requiring knowledge of schedulers and networking, and needing to match up container affinities so that multiple containers operate on the same node without interfering with each other's need for server resources, Lowe explained.

Another container orchestrator and management system, Mesosphere's DC/OS, based on open source Mesos, may get a mention during Lowe's session, but he has staked out the primary areas of activity he wants to cover, and DC/OS isn't yet in enough demand for him to include it. He's focused on the most active areas of container use: containers running on AWS, Kubernetes as a container deployment system and cluster manager, Google Container Engine, and Docker Container Engine with Docker Swarm. Given the time he has available, that's enough, he said.

Lowe warned that he'll demonstrate the tools and approaches, provided the Amazon cloud cooperates. During the interview yesterday, AWS' S3 service in its popular Ashburn, Va., data center complex, US East-1, slowed drastically due to "increasing error rates." The slowdown caused many Web services to slow or stall as they waited for the S3 storage service to return to healthy operation.

"We'll demonstrate them, unless Amazon Web Services on the East Coast is acting up," he said.

About the Author(s)

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse University where he obtained a bachelor's degree in journalism. He joined the publication in 2003.
