
Everything you need to know about creating a virtual container

It's tempting to think of containers as just another type of VM, but that's not accurate. Expert Mark Betz offers a look at what's easy and tricky about containers in the cloud.

Software may be "eating the world," but containers are eating the world of software. Containerization is a radical shift that brings dramatic improvements to how your enterprise software is deployed and managed. Those improvements stem directly from the main promise that a virtual container makes: An image built to run your application will capture all the necessary software dependencies and just work every time it is deployed to a compatible host system. This is a Big Thing. Little wonder that a colleague of mine professed to be in love with Docker a few days ago. As an engineer who has often had to configure systems and deploy software by hand, I know exactly how he feels.

Then again, a virtual container is a Big Change, too. You can install Docker in minutes and be experimenting as quickly as you can type 'docker run -it alpine:latest /bin/sh'. But actually migrating non-trivial applications to a containerized architecture involves a number of serious issues that should be considered up front. Here are a few key things that you should be aware of when you start thinking about a virtual container strategy for your organization.
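For instance, a first session on a machine with Docker installed might look something like this. The Alpine image is the same one mentioned above; the rest is just illustration:

docker run -it alpine:latest /bin/sh   # pulls the image if needed, then opens a shell inside a new container
# inside that shell, poke around and then exit; the container stops as soon as its one process exits
docker ps -a                           # back on the host, the stopped container is still listed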

Containers are not virtual machines

When you first start a container and open a shell, it's easy to draw mental comparisons between what you're experiencing and prior experiences with virtual machines. When I started with Docker over two years ago, I made the same comparison myself, but it was the wrong model and it led to some bad decisions. A virtual machine is a server, and a server has a boot disk, an init system and persistent storage. A virtual container has none of these things: it doesn't boot, it doesn't run init, and its storage is ephemeral unless you take steps to map it onto a persistent file system. A server is a semi-tangible asset onto which you might deploy many processes. A container, by comparison, is a lightweight sandbox that functions best when hosting a single process. You can run quite a few virtual machines on a big server, but you can run thousands of containers on even a small one. Docker has some great advice on this subject.
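A quick sketch of the storage difference, assuming a named volume called 'appdata' (the name and paths are purely illustrative):

docker run --rm alpine:latest sh -c 'echo hello > /tmp/greeting'    # written to the container's own layer; gone when the container is removed
docker volume create appdata                                        # a named volume that lives on the host
docker run --rm -v appdata:/data alpine:latest sh -c 'echo hello > /data/greeting'
docker run --rm -v appdata:/data alpine:latest cat /data/greeting   # prints "hello" -- the data survived the first container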

Containers are a significant tooling change

Deploying production software in containers involves tooling changes at every step. Developers have to become familiar with virtual container technology and comfortable using the tools in the Docker ecosystem to construct images, test them and push them to a repository. Many of the necessary tools and techniques will be familiar, but much of the workflow involving building and running images will be brand new.
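As a rough sketch of that loop, assuming a hypothetical Python service and image name, a developer's day-to-day workflow might look like this:

cat > Dockerfile <<'EOF'
FROM python:3-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
EOF
docker build -t myorg/myservice:dev .              # build an image from the application source
docker run --rm -p 8080:8080 myorg/myservice:dev   # run it locally to test
docker push myorg/myservice:dev                    # push it to a repository the team shares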

The good news is that it's fun and not difficult to master, but building consensus around how to work with code, images and repositories takes time, and missteps can be expected. Approaches to version control are also affected, as images become one more versioned asset that has to be managed. On the system administration or DevOps side of the world, containerization changes the deployment pipeline and server configuration requirements, and opens up a host of new possibilities for failover and auto-scaling.
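One common way to treat images as versioned assets is to tag each build with something traceable, such as the source revision that produced it. The registry and naming scheme here are assumptions, not a prescribed convention:

VERSION=$(git rev-parse --short HEAD)    # tie the image tag to the commit it was built from
docker build -t registry.example.com/myservice:"$VERSION" .
docker tag registry.example.com/myservice:"$VERSION" registry.example.com/myservice:latest
docker push registry.example.com/myservice:"$VERSION"
docker push registry.example.com/myservice:latest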

Containers work best in the cloud

Cloud computing transformed the IT landscape by converting racked servers, routers, switches and storage into virtual resources that can be created, allocated, managed and destroyed as requirements dictate. In the process it is rapidly removing the "system" from system administration and allowing organizations to focus more of their engineering resources on solving problems that add user value.

A virtual container completes the cloud computing value proposition by taking us to true "zero configuration" deployments. In this model, applications and the resources required to operate them can both be scaled up and down on demand, using automated configuration management and deployment tools that access cloud stack APIs to bring about changes in a controlled, declarative, repeatable way.

Stateful services are a hard challenge

Sure, you can create and destroy a virtual container willy-nilly and get all sorts of scalability and durability benefits. It's a shame data cannot be so accommodating. Stateless services like HTTP daemons and proxies are often the first parts of a system to get the Docker treatment, because it's easy to envision how they work in that fluid world. Sooner or later, though, you have to deal with state.

Databases, session caches, text search and message queueing are all examples of system components that require both a runtime and a chunk of hard-to-manage state, usually sitting on a disk attached to the host the runtime is on. Stateful services flip the container equation on its head -- where the runtime is everything for a stateless service, what we care about for stateful services are the stored bits. Pretty much everyone who has had to deal with services like these in containers -- which means nearly everyone using containers broadly in production -- has had to come up with ad hoc ways to associate containers with specific storage devices to keep the right containers bolted to the right stores. Until we have containers for data, these challenges will persist.
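One ad hoc pattern along those lines is to pin a database container to a named volume (or a mounted cloud disk) and make sure replacement containers always land where that data lives. This sketch uses the official postgres image with illustrative names:

docker volume create pgdata                # storage that outlives any single container
docker run -d --name db \
  -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data \
  postgres:9.6                             # the runtime is disposable; the volume is not
# the db container can be destroyed and recreated at will, but only on the host that holds pgdata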

Managing containers by hand is still hard

Learning to work with containers from a development perspective gets you part of the way to a flexible, responsive architecture, but it turns out that managing and deploying containers to your infrastructure is still a hard, error-prone problem. You can simply spin up a number of cloud instances with Docker installed, SSH to them, pull your images with 'docker pull' and start them with 'docker run'.
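Concretely, that hand-rolled process amounts to repeating something like this for every host; the host name, image, ports and environment below are placeholders:

ssh ops@web-01.example.com <<'EOF'
docker pull myorg/myservice:1.4.2               # fetch the image you intend to run
docker stop myservice && docker rm myservice    # tear down the previous copy, if any
docker run -d --name myservice \
  -p 80:8080 \
  -e DATABASE_URL=postgres://db.internal/app \
  myorg/myservice:1.4.2                         # start the new version by hand
EOF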

That's what I did the first time I ran containers in production, and even though I had a lot less work to do configuring each host, I quickly found that it was still possible to screw up. You can pull the wrong image, set the wrong environment variables, forget to mount a volume -- there are a number of ways to hose your system.

That's why container technology quickly gave rise to so-called orchestration platforms. Systems like Mesos, Google Kubernetes and Amazon EC2 Elastic Container Service aim to solve the problem of describing, creating and managing resources such as hosts, images, containers, networking and storage using a unified data model and API. In many ways these platforms are the necessary third leg of the cloud plus containers stool.
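For a flavor of that unified, declarative model, here is a minimal Kubernetes Deployment applied from the command line. The image name and replica count are assumptions, and the other platforms express the same idea in their own formats:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 3                  # desired state: three copies of the container, always
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: myorg/myservice:1.4.2
        ports:
        - containerPort: 8080
EOF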

Next Steps

For the contrarian point of view: why you don't need containers to do DevOps

Understand containers as a service

A Kubernetes hands-on how-to

