What Inconsistent Environments Mean for Container Management

It’s easy to get lost in the allure of the “build once, run anywhere” mantra without fully appreciating how much the experience of container deployment can vary among environments.

Derek Ashmore, Application Transformation Principal

November 17, 2023

5 Min Read

Part of the reason why containers have become so popular is that they enable a “build once, run anywhere” approach to application deployment. With containers, the same application image can run in virtually any environment. Developers don’t have to recompile or repackage the app to support multiple environments.

That doesn’t mean, however, that the process of deploying a containerized app across different environments is actually the same. On the contrary, it can be quite different depending on factors like which cloud hosts your app and whether you manage it using Kubernetes or an alternative orchestration solution.

These environment-specific differences in container management are worth spelling out because they are often glossed over in conversations about containers. It’s easy to get lost in the allure of the “build once, run anywhere” mantra, without fully appreciating how much the experience of container deployment can actually vary between environments.

For that reason, I’d like to walk through the key ways in which container deployment and management can be quite different depending on which environment and orchestration service you use. None of these differences make one type of environment “better” for hosting containers than another, but they are important to keep in mind when assessing which skills and tools your team will need to support containerized apps in the environment you choose to deploy them.


Principles That Apply to All Container-Based Deployments

Before discussing environment-specific differences regarding containers, let’s talk about aspects of container deployment that are the same no matter where you choose to run containerized apps.

One constant across environments is security principles. You should always embrace practices like least privilege (which means granting containers access only to the resources they require, and no more) to mitigate risk. You should also enforce encryption over data at rest as well as data in motion.
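As a sketch of what least privilege can look like in practice, here is how a Kubernetes pod spec (one common deployment target) might lock down a container. The field names are standard Kubernetes; the pod and image names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                        # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0  # hypothetical image
      securityContext:
        runAsNonRoot: true                 # refuse to run as root
        allowPrivilegeEscalation: false    # block setuid-style escalation
        readOnlyRootFilesystem: true       # container can't modify its own filesystem
        capabilities:
          drop: ["ALL"]                    # grant no Linux capabilities
```

Other orchestrators express the same restrictions with different syntax, but the underlying principle -- grant only what the container needs -- is the same everywhere.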

Container networking, too, is generally standardized across environments, at least as far as connections between containers go. (As I explain below, container network configurations can be different when it comes to exposing container ports to outside networks, in which case orchestrator-specific networking tooling and integrations come into play.)

You’ll also always have to manage additional tools and services. No matter where you deploy containers, you’ll need to think about providing infrastructure to host them, deploying an orchestration service, balancing network load, and so on. The exact tools you use for these tasks can vary across environments, but the tasks themselves are fundamentally the same.


How Containers Vary Across Clouds

Now, let’s talk about differences in container management between environments, starting with how the cloud you choose for hosting your containers impacts the way you manage them.

Broadly speaking, there are not huge differences among the major public clouds -- Amazon Web Services, Microsoft Azure and Google Cloud Platform (GCP) -- with regard to container management. However, each cloud does offer different takes on container orchestration services.

For example, AWS offers both a proprietary container orchestrator, called Elastic Container Service (ECS), as well as a Kubernetes-based orchestrator called Elastic Kubernetes Service (EKS). For their part, Azure and GCP primarily offer only Kubernetes-based orchestration (although Azure supports limited integrations with certain other orchestrators, such as Swarm, via Azure Container Instances). This means that the service you use to manage your containers may vary depending on which cloud hosts them.
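To make that difference concrete, the same container image is described by a different deployment artifact on each service. A minimal ECS task definition excerpt (family, image and resource values are hypothetical) might look like this:

```json
{
  "family": "example-app",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "registry.example.com/app:1.0",
      "cpu": 256,
      "memory": 512,
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}
```

On EKS or another Kubernetes-based service, the same image would instead be referenced from a Kubernetes Deployment manifest. Moving between the two orchestrators means rewriting the deployment descriptor, even though the image itself doesn't change.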

Container security tooling and configurations vary between clouds, too. Each provider’s identity and access management (IAM) tooling is different, requiring different policies and role definitions. Likewise, if you configure containers to access specific cloud resources -- such as data inside an Amazon S3 storage bucket or Amazon SNS notifications -- those integrations will work only on the cloud platform that provides those resources. For both of these reasons, you can’t lift-and-shift container security policies from one cloud to another. You need to perform some refactoring to migrate your app between clouds.
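For instance, an AWS IAM policy that lets a container’s task role read objects from an S3 bucket (the bucket name below is hypothetical) is written entirely in AWS-specific terms:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

Azure and GCP model permissions with entirely different constructs (role assignments and IAM bindings, respectively), so a policy like this has to be rebuilt, not copied, when an app moves clouds.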

Similarly, if you use your cloud provider’s built-in monitoring and alerting services (such as Amazon CloudWatch or Azure Monitor), your monitoring and observability tools and processes will vary between clouds. That’s especially true if you embed cloud-specific monitoring agents within containers directly, in which case you’d have to update the agents to rehost the containers on a different cloud without breaking your monitoring and alerting workflow.

Kubernetes’ Impact on Container Management

If you opt to use Kubernetes to manage containers -- which you may or may not want to do, depending on the unique needs of your app -- your experience will also differ in key ways from most other approaches to container orchestration. That’s because Kubernetes takes its own distinctive approach to configuration management, environment management and more.

For example, because Kubernetes has its own approach to secrets handling, you’ll need to manage passwords, encryption keys and other secrets for containers running on Kubernetes differently than you would in other environments.
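In Kubernetes, secrets are first-class API objects that pods consume as environment variables or mounted files. A minimal sketch (the secret name and values are hypothetical; real credentials should come from a secrets manager, never source control):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials   # hypothetical secret name
type: Opaque
stringData:
  username: app_user     # hypothetical placeholder values --
  password: change-me    # inject real ones from a secrets manager
```

Other orchestrators and cloud services handle the same need through their own mechanisms (such as cloud key vaults or orchestrator-specific secret stores), so secrets workflows rarely transfer directly.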

Network integration, too, looks different for Kubernetes-based deployments. Kubernetes supports multiple methods (such as ClusterIP and NodePort) of exposing containers to public networks, but they are all based on concepts and tooling that are unique to Kubernetes. You can’t take a networking configuration that you created for, say, Docker Swarm and apply it to a Kubernetes cluster.
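As an illustration, exposing a container through a NodePort Service uses Kubernetes-specific objects and label selectors. A minimal sketch (service name, labels and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-app      # hypothetical service name
spec:
  type: NodePort         # expose the service on a static port of every node
  selector:
    app: example-app     # must match the (hypothetical) pod labels
  ports:
    - port: 80           # cluster-internal port
      targetPort: 8080   # container port
      nodePort: 30080    # externally reachable port (30000-32767 range)
```

Nothing in this manifest has a direct equivalent in, say, Docker Swarm’s published-port model, which is why networking configurations must be rebuilt for each orchestrator.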

As another example, most teams rely on tools purpose-built for Kubernetes, such as Helm, for environment and release management. Kubernetes also comes with its own administration tool, kubectl.

For all of these reasons, working with Kubernetes requires specialized expertise -- so much so that it’s common today to see enterprises building platform engineering teams dedicated to Kubernetes. Although the principles behind container management in Kubernetes may be the same as those for other orchestrators, the tools and practices you need to implement them in Kubernetes are quite different.

Conclusion: Build Once, Configure Multiple Times

Given the considerable differences that can affect container management in different types of environments, it’s a bit simplistic to think of containers as a solution that frees developers and IT engineers from having to think about host environments.

It’s true that you can typically deploy the same container image anywhere. But the security, networking and monitoring configurations and tools you use can end up looking quite different within different clouds and container orchestrators. You can build your app once, but don’t assume you’ll only have to configure it once if you want to deploy it across multiple environments.

About the Author(s)

Derek Ashmore

Application Transformation Principal, Asperitas

Derek Ashmore is Application Transformation Principal at Asperitas. Derek helps companies use cloud platforms cost-effectively and securely, with better availability and performance, to gain an advantage over their competitors.
