
Containers give developers flexibility, speed, and simpler deployment. Virtual machines offer superior workload isolation and security. We can have both.

[Image: peanut butter and jelly sandwich, better together. Credit: Jeffery Goldman]

The technology industry loves to redraw boundary lines with new abstractions, then proclaim that prior approaches are obsolete. It happens in every major arena: application architectures (monoliths vs. microservices), programming languages (JVM languages vs. Swift, Rust, Go), cloud infrastructure (public cloud vs. on-prem), you name it. 

False dichotomies are good at getting people excited and drawing attention, and they make for interesting debates on Reddit. But nearly without exception, what tends to happen in tech is a long period of co-existence between the new and the old. Then usually the old gets presented as new again. Monolithic application architectures sometimes prevail despite modern theology around microservices. On-prem data centers have not, in fact, been extinguished by public clouds. Serverless has not killed devops. The list goes on.

I think the most interesting false dichotomy today is the supposed line between virtual machines (VMs) and containers. The former has been maligned (sometimes fairly, sometimes not) as expensive, bloated, and controlled by a single vendor, while the latter is generally proclaimed to be the de facto application format for cloud-native deployments.

But the reality is the two worlds are coming closer together by the day. 

Now more than 10 years into the rise of containers, the relationship between containers and VMs can be better described in terms of melding, rather than replacement. It’s one of the more nuanced evolutionary themes in enterprise architecture, touching infrastructure, applications, and, most of all, security.

The rise of containers

The lineage of containers and virtual machines is rather involved. Linux namespaces, the kernel primitives that containers are built from, began appearing in 2006. The Linux Containers project (LXC) dates back to 2008. Linux-VServer, an operating system virtualization project similar to containers, began in 1999. Virtuozzo, another container technology for Linux that uses a custom Linux kernel, was released as a commercial product in 2000 and was open-sourced as OpenVZ in 2005. So containers actually predate the rise of virtualization in the 2000s.

But for most of the market, containers officially hit the radar in 2013 with the introduction of Docker, and started mainstreaming with Docker 1.0 in 2014. The widespread adoption of Docker in the 2010s was a revolution for developers and set the stage for what’s now called cloud-native development. Docker’s hermetic application environment put an end to the longstanding “it works on my machine” problem and replaced heavy, mutable development tools like Vagrant with the immutable patterns of Dockerfiles and container images. This shift sparked a renaissance in application development, deployment, and continuous integration (CI) systems. Of course, it also ushered in the era of cloud-native application architecture, which has seen mass adoption and become the default architecture in the cloud.

The container format was the right tech at the right time, bringing enormous agility to developers. Virtual machines by comparison looked expensive, heavyweight, and cumbersome to work with—and, most damning, were thought of as something you had to wait on “IT” to provision, at a time when public clouds let developers simply grab their own infrastructure without going through a centralized IT model.

The virtues of virtual machines

When containers were first introduced to the masses, most virtual machines were packaged up as appliances. The consumption model was generally a heavyweight VMware stack, requiring dedicated VM hosts. Licensing on that model was (and still is) very expensive. Today, when most people hear the term “virtualization,” they automatically think of heavyweight stacks with startup latency, non-portability, and resource inefficiency. If you think of a container as a small laptop, a virtual machine is like a 1,000-pound server.

However, virtual machines have some very nice properties. Over time there has been growing interest in micro-VMs and a general trend toward VMs getting smaller and more efficient. Virtualization support in the Linux kernel has evolved to the point where it is practical to run each customer application under its own kernel. These gains have made VMs a much friendlier platform, and today virtual machines are all around us.

On Windows, for example, installing on typical modern hardware means you are already running on a hypervisor: recent versions enable virtualization-based security by default. It has become widely accepted that hypervisors are a powerful security boundary. And containers running in virtual machines can run in any cloud environment, no longer just in the private environments of the hypervisor providers.

Containers and VMs join forces

The need for multi-tenancy and the lack of a good security isolation boundary for containers are the primary reasons why containers and virtual machines are coming together today.

Containers don’t actually contain. If you’re running multiple workloads and apps inside a Kubernetes cluster, all the containers on a given node share the same OS kernel. So should one of those containers be compromised, it could be a bad day for every other container on that node, plus the infrastructure that runs them.

By default, containers provide OS-level virtualization: a different view of the file system and processes. But if there is an exploitable flaw in the shared Linux kernel, an attacker can pivot to the entire system, escalate privileges, and either escape the container or execute code in a context they shouldn’t be allowed to reach.
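To make the shared-kernel point concrete, here is a minimal sketch in Go (the lingua franca of container tooling) that does what a container runtime does at its core: it launches a child process in new UTS, PID, and mount namespaces. The flags and commands are standard Linux, but the snippet is illustrative only, not how any particular runtime is implemented, and it needs root on a Linux host to run.

    // namespaces.go: a toy "container" built from Linux namespaces (Linux-only, run as root).
    package main

    import (
        "log"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Launch a shell in new UTS (hostname), PID, and mount namespaces.
        cmd := exec.Command("sh", "-c",
            "hostname isolated && hostname && echo PID=$$ && uname -r")
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
        // Inside the namespaces the hostname changes and the shell sees itself as PID 1,
        // but uname -r prints the host's kernel release: there is still only one kernel.
    }

The child gets its own hostname and its own PID numbering, yet uname -r reports exactly the kernel the host is running. A virtual machine container changes that last line: each workload boots its own guest kernel.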

So one of the fundamental reasons the container and virtual machine worlds are colliding is that virtualized containers (aka “virtual machine containers”) allow each container to run under its own kernel, in its own address space, never touching the Linux kernel the host system uses.

While security isolation in multi-tenant environments is the “hair on fire” reason virtual machine containers are on the rise, there are also broader economic reasons blurring the boundary between virtual machines and containers. With a virtual machine, you can assign memory and CPU limits to each workload in a more granular way, and even apply usage policies at a global level.

Introducing Krata

The container format represents nearly 20 years of evolution in how modern applications are created and operated across distributed cloud infrastructure. Although virtual machines were supposed to wither away as the world adopted containers, the reality is that VMs are here to stay. Not only do virtual machines still have a large installed base, but developers are drawing on VM design principles to address some of containers’ persistent challenges.

Krata, an open-source project by Edera, pairs a Type 1 hypervisor (the Xen microkernel) with a control plane that has been reimagined and rewritten in Rust to be container-native. Krata provides the strong isolation and granular resource controls of virtual machines while preserving the developer ergonomics of Docker, and it presents itself as an OCI-compatible container runtime on Kubernetes. That means Krata doesn’t require KubeVirt for virtualization: the system firmware boots and hands off to the Xen microkernel, which sets up interrupts and memory management for the virtual machines that run Kubernetes pods.

Because Krata uses a microkernel, it doesn’t require any changes to your existing Kubernetes operating system. You can use any OS—AL2023, Ubuntu, Talos Linux, and so on. Your containers run side by side on Kubernetes with no shared kernel state, preventing data access between workloads even in the event of a kernel vulnerability.
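For a sense of what this looks like from the developer’s side, the sketch below uses the standard Kubernetes Go API types to define a pod that opts into an alternative OCI runtime via a RuntimeClass, which is how Kubernetes normally selects a VM-backed runtime. The runtime class name “krata,” the pod name, the image, and the resource limits are all illustrative assumptions; check the project’s documentation for the class your installation actually registers.

    // pod_runtimeclass.go: selecting a VM-isolated runtime for one pod via RuntimeClass.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Hypothetical class name; use whatever RuntimeClass your cluster registers.
        runtimeClass := "krata"

        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "isolated-web"},
            Spec: corev1.PodSpec{
                // Ask the kubelet to hand this pod to the VM-backed runtime instead of the default.
                RuntimeClassName: &runtimeClass,
                Containers: []corev1.Container{{
                    Name:  "web",
                    Image: "nginx:1.27",
                    Resources: corev1.ResourceRequirements{
                        // Hard, per-workload limits that the isolation boundary enforces.
                        Limits: corev1.ResourceList{
                            corev1.ResourceCPU:    resource.MustParse("500m"),
                            corev1.ResourceMemory: resource.MustParse("256Mi"),
                        },
                    },
                }},
            },
        }

        out, err := json.MarshalIndent(pod, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out)) // Pipe to kubectl apply -f - to create the pod.
    }

From the scheduler’s point of view this is an ordinary pod; the swap from a shared-kernel runtime to a VM-backed one happens at the container runtime layer, invisible to the application.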

Containers give developers flexibility, speed, and simpler deployment. Virtual machines offer superior workload isolation and security. For years, developers have been forced to choose. That’s changing.

Alex Zenla is co-founder and CTO of Edera.

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.

