Software containers are the fastest-developing concept in cloud computing. Just about every positive thing you could say about cloud technology has been said about containers, and many enterprise planners wonder why anything else is even being considered. It's true that container technology is probably underused in enterprise clouds today, but there's also a real danger that jumping into containers could lead your cloud plans to a dark place.
In traditional virtual machine (VM) clouds, a hypervisor divides a server into virtual machines that nearly always operate just as separate physical servers would. Each VM runs its own applications, middleware and even operating system, and the hypervisor mediates the sharing of memory, I/O, network connections and other physical system elements.
This approach takes advantage of the fact that modern servers have powerful multicore processors that are usually underutilized by traditional applications. Sharing hardware through virtualization and cloud computing lowers hardware capital cost, and because there's only one real server no matter how many VMs it hosts, hardware management is also less costly. Each VM is essentially an independent server, which makes VM technology naturally multi-tenant and explains why it's widely used in public clouds.
The problem with the VM approach is that enterprises building private clouds don't need multi-tenant support -- they're the only tenant. Duplicating an operating system for each application wastes memory and adds complexity to building and maintaining machine images. Containers are a form of operating-system-level virtualization, in which a single OS runs multiple containers that share OS services and hardware.
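You can see this OS sharing directly. The following is a minimal sketch, assuming a Linux host with Docker installed and access to the public alpine image:

```shell
# On the host: report the running kernel release.
uname -r

# Inside a container: the same command reports the SAME kernel release,
# because the container shares the host's kernel rather than booting
# its own operating system, as a VM would.
docker run --rm alpine uname -r
```

A VM running the same command would instead report the kernel of its own guest OS, which is exactly the duplication containers avoid.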
Benefits of using software containers
Any evaluation of containers versus VMs for cloud environments should start with this first, basic benefit: Software containers are inherently more memory-efficient. This implies that they'll be most useful where there are many applications to run and, with VMs, many potential duplications of an OS. Small-scale container use probably won't gain much in memory efficiency, so don't turn to containers for simple server consolidation.
Beyond memory efficiency, the next issue in making a container decision is the need for application tenancy. Some companies have applications that demand access control and data security so strict that each application almost becomes a virtual tenant. Container technology, which shares an OS and builds fewer hard partitions between applications, is harder to use for these high-security applications. Check with your internal audit or compliance organization to make sure software containers can meet your application partitioning requirements.
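The reason auditors treat container boundaries differently from VM boundaries is that containers partition applications with kernel namespaces rather than with separate operating systems. A minimal sketch of what that looks like, assuming Docker on a Linux host:

```shell
# Inside a container, the PID namespace hides host processes;
# ps shows only the container's own process tree.
docker run --rm alpine ps aux

# That partition is enforced by the shared host kernel, not by a
# separate guest OS as in a VM. A kernel-level vulnerability therefore
# potentially exposes every container on the host, which is why
# compliance teams often treat container isolation as the softer boundary.
```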
Network connectivity issues
The next thing to look at is the complexity of the network connectivity that applications need. Most users know that all cloud and virtualization technologies, including the popular OpenStack VM model, have only very basic networking capabilities. Container tools, built with the goal of simplifying deployment, are usually even more basic; Docker, the most popular container platform, has significant limitations in its network model.
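To get a feel for how basic Docker's built-in networking is, consider this minimal sketch (assuming Docker is installed; the network and container names here are illustrative):

```shell
# Docker's default bridge gives containers IP connectivity but no
# name-based service discovery. The standard workaround is a
# user-defined bridge network, on which containers can reach each
# other by container name.
docker network create demo-net

# Start a container attached to the user-defined network.
docker run -d --name web --network demo-net nginx

# A second container on the same network can resolve "web" by name.
docker run --rm --network demo-net alpine ping -c 1 web
```

Anything beyond this single-host model -- multi-host overlays, network policy, integration with existing data center networks -- typically requires plugins or an external software-defined networking layer, which is the gap the surrounding discussion describes.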
Complex networking in the cloud usually means complicated application cloud bursting, failover strategies, large-scale use of dynamic application components like microservices, or a diverse community of users, each with their own set of cloud-hosted applications and tools. Any of these issues is a signal to look for a software-defined networking option that fits your needs and then incorporate either VMs or containers into the option you've picked. Users report this works for containers as well as VMs, but most admit that the tools available in the VM space are more flexible.
Staff readiness for using containers
The final consideration is the intersection of your cloud goals and your staff skills. Software containers are easier to use than VMs. If you take the time, or get professional services assistance, to set up your network and hosting environment upfront and apply DevOps tools thoughtfully; if you plan to run sophisticated applications in a private or hybrid cloud; and if you don't have skilled virtual-machine and network operations professionals on staff, containers will provide an easier path to fulfillment.
However, containers are currently not as versatile as VMs. If your staff already understands virtualization, and particularly if you have a virtualized data center from a supplier that supports private cloud deployment well, you may be better off staying with VMs. That's especially true if you plan extensive use of the cloud with dynamic services, microservices, or applications that involve some of the web service features of Amazon's or Microsoft's public clouds.
Nothing stands still in the cloud, especially in container implementation. Docker dominates containers today, but other options are emerging, including Virtuozzo/LXC, Lightweight Virtual Environments, rkt (pronounced "rocket") and Red Hat's OCID. Each of these alternatives presents a different feature set, trading off benefits and limitations differently from Docker. Together they create a competitive container market that will ultimately enhance them all and improve the container value proposition.
Using containers and virtual machines together
Even VM users should keep an eye on these trends. VMs were designed for a compute world where users relied on multiple operating systems and where operating systems didn't offer complete application security by partitioning applications and resources. The industry is increasingly committed to Linux as a platform, and in-OS isolation of applications and resources is constantly improving. We may be heading for a time when software containers and VMs converge, and when a container-like approach will be the right path for all.