Gunnar Hellekson, Red Hat's director of Product Management for Linux and Virtualization, sent some thoughts on where and how containers and virtual machine software fit. I've shamelessly lifted some of his thoughts for this article.
DK: When you’re speaking with your enterprise customers, how do they see containers and virtual machine software?
GH: To some, containers and virtualization (editor: Hellekson is using "virtualization" to stand in for "virtual machine software") are essentially the same thing: a rip-and-replace alternative, one for the other. To others, they are completely different technologies with different use cases. The truth is that containers and virtualization do have a lot in common, but not as much as some people think. To get the most out of each of these important technologies, we must understand the ins and outs of containers and virtualization and how they do and don't work together.
DK: Why are enterprises looking into the use of these technologies?
GH: Organizations struggling to meet increasing demand for more and better applications, which need to be delivered faster than ever before, are warming up, if not outright flocking, to container technology, especially when paired with OpenStack infrastructure. By offering a uniform application packaging paired with application isolation and improved workload density, Linux container technologies can help enterprises meet these new application challenges as well as end user expectations. This does, however, sound very similar to the benefits of virtualization, an already-proven technology adopted by a broad cross-section of enterprises.
Containers Are Not Virtual Machines, and Vice Versa
DK: Could you discuss how you see virtual machine software and containers?
GH: A common misconception is that containers are just an evolution of virtual machines (VMs), but there are some major differences between the technologies.
Like virtual machines, application containers keep all components of an application together — including the libraries and binaries on which they depend. By combining the ability to isolate applications with lightweight and image-based deployment capabilities, we can put more applications on a single machine and start them up much more quickly. How do containers achieve their light weight? Unlike virtual machines, they do not contain an operating system (OS) kernel; rather, they rely on the kernel of their host.
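To illustrate the image-based packaging Hellekson describes, here is a minimal, hypothetical container image definition. Note that the image and application names are placeholders, and that the image bundles only the application and its dependencies; there is no kernel inside it:

```dockerfile
# Hypothetical sketch of image-based packaging: the image carries the
# application binary and its libraries, but no OS kernel. At run time
# the container shares the kernel of whatever host it lands on.
FROM registry.access.redhat.com/ubi9/ubi-minimal
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Because the kernel is absent from the image, the resulting container is typically megabytes rather than gigabytes, which is what makes the fast startup and higher density possible.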
DK: What potential issues does the use of containers create?
GH: This flexibility can, however, introduce potential security and manageability issues. Because containers are more dependent on the environment that hosts them, they introduce more risk, and therefore more chances of being breached. If the host is compromised, then all of its containers are compromised as well, just as we would expect with virtual machines and hypervisors. Unlike virtual machines, though, if a single container is compromised, there is also a chance the intruder can gain access to the host OS. So while containers bring considerable benefits, IT departments need to ensure that the entire environment is secure when setting them up.
DK: Are enterprises using these technologies differently?
GH: Because of their speed and light weight, containerized applications are much more likely to be distributed and modular, where virtual machines are much more likely to be centralized and monolithic. That means that containerized applications rely on orchestration and management tools in a way that virtual machines do not. Because there are many orchestration solutions that connect different containerized components together into a single coherent application, there is a real chance of compatibility issues, or even future flaws that have yet to be discovered — but this is a challenge that container platforms, an evolution of Platform-as-a-Service, intend to address.
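As a sketch of the orchestration Hellekson mentions, a tool such as Kubernetes lets you declare a containerized component's desired state and keeps the containers running for you. The names and image below are hypothetical:

```yaml
# Hypothetical manifest: ask the orchestrator for three replicas of a
# containerized component. Kubernetes schedules them across hosts and
# restarts them if they fail, rather than an admin managing each one.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: registry.example.com/web-frontend:1.0
          ports:
            - containerPort: 8080
```

This declarative style is what makes distributed, modular containerized applications manageable at scale, and it is the layer where the compatibility questions Hellekson raises tend to surface.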
DK: Are there other differences in enterprise use of these technologies?
GH: Virtualization technology, of course, has been used and trusted in the enterprise for many years now. Virtualization enables servers to run multiple operating systems and applications. The main difference from containers is that each VM contains the full application stack, from the application server to the database down to its own OS. Many companies have even successfully used virtualization to consolidate server systems, with hardware abstraction creating an environment in which multiple operating systems and applications run on VMs. And, because virtualization runs workloads inside a guest operating system that is isolated from the host OS, it currently offers more security than container technology does. In addition, over time many products have been developed that boost the security and manageability of virtualization systems.
On the downside, VMs can be much slower to start up, and their size makes them much less flexible when it comes to implementing changes in the development process, such as adding new features. Virtualization also takes up a lot of system resources since each VM runs not just a full copy of an operating system, but a virtual copy of all the hardware that the operating system needs to run.
How to Get Virtualization and Containers to Live in Harmony
DK: Could you describe how these two technologies should be used together?
GH: In short, virtualization provides flexibility by abstraction from hardware, while containers provide speed and agility through lightweight application isolation. So, instead of thinking of containers as replacing VMs, organizations should be thinking about containers as a complement to VMs — with the workload determining what to use when.
For example, containers can be used in development environments for speedier deployment of new applications. Many companies are also using containers within VMs to take advantage of virtualization’s security and management features. Some virtualize their container hosts to maintain operational consistency between the “old” and the “new” infrastructures. Indeed, as companies evolve their use of containers, they are increasingly finding the need for a new “stack”—one that provides the level of security, management and standardization required for running any technology in enterprise production environments.
Companies looking to stay ahead of the competitive curve are not making a choice between containers and virtualization; they are looking for ways to use and integrate both technologies to their fullest potentials.
Dan’s Take – Use the proper tool at the proper time
As I pointed out in the article “Virtualization: Much More Than Virtual Machines” published on VirtualizationReview.com back in January 2016, each type of virtualization technology addresses a different set of needs and requirements. Although there is quite a bit of overlap, containers and virtual machine software are built to address different needs. Containers might be the best choice if all of the applications, application components, and the like are designed to execute on a single operating system. Virtual machine software addresses the environments in which several different operating systems or different versions of operating systems must coexist without injuring one another.
Typically, these technologies don’t live in a vacuum. Other types of virtualization technology are necessary to build a completely software defined computing environment. Let’s see what else is needed.
- Access virtualization — this technology makes it possible for applications to work effectively with many different types of endpoint devices without requiring the application to know too much about each one. Citrix, Microsoft and VMware offer this type of technology.
- Application virtualization — this technology makes it possible for applications to be encapsulated so that they can be easily delivered to remote systems on demand, or run with versions of the operating system other than the ones they were designed for. In some ways, the features of this technology appear to overlap with containers, even though it operates at the application layer while containers operate a bit lower in the stack.
- Processing virtualization — this technology makes it possible for many systems to appear to be a single computing resource for an application, or for a single system to appear to be many. Containers (also known as operating system virtualization or partitioning) and virtual machine software both make it possible for a single machine to appear to be many. Workload management and parallel processing monitors make it possible for the resources of many machines to support a single application or workload.
- Storage virtualization — this technology hides the actual physical properties of storage from the workloads, making it possible to use parallel processing techniques to speed up storage operations, to use compression and deduplication to reduce the size of objects being stored, or to hide where data objects are stored.
- Network virtualization — this technology hides the actual physical properties of the network from the workloads. This technology can be used to improve network performance, increase network agility, improve network security or make it possible for workloads designed for one type of network to work happily on another.
I always enjoy communicating with Gunnar to learn more about how he and Red Hat see this technology.