Containers

Virtual Machines

In the quest to maximise the efficiency of the computing power available on servers, Virtual Machines (VMs) came into existence, with products from firms like VMware and VirtualBox bringing the concept to general users.

“In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something, including virtual computer hardware platforms, operating systems, storage devices, and computer resources.” - Wikipedia

Virtual Machines are created on top of hypervisors, which run on the host machine’s operating system (OS). Hypervisors emulate hardware like CPU, disk, memory and network, so server machines can be configured to offer a pool of emulated hardware resources to applications, in the process making the actual hardware resources on those servers far better utilized. This concept of pooling emulated hardware resources into a bigger virtual resource is, by the way, a key enabler for cloud technologies.

From a developer’s point of view, one of the biggest problems in deploying code has been, and continues to be, inconsistency between environments (development, testing, production, etc.). Developers would create packages (like JARs, WARs or DLLs) bundling the application together with its dependencies and deploy them to different environments. This went a long way towards tackling the problem of application-level dependencies and their versions, however there were still issues with lower-level dependencies (like versions of the JDK, OS libraries, or utilities like ffmpeg).

Since VMs can run an entire operating system as a guest machine, they provide benefits in creating reproducible environments. The same guest OS can run on Windows or macOS hosts for developers and on Linux hosts in testing and production environments, for example. And since the whole OS is virtualized by the hypervisor on the host machine, the entire dependency chain for the application can be packaged in a minimal OS and saved as a VM image; as long as the host machine has a hypervisor that can run that particular VM image, the application can run on that machine.

The diagram below illustrates a layered view of two applications running on VMs on a single bare-metal server.

VMs

Application A is a Linux-based application; it has been bundled along with its dependent binaries and libraries (e.g. the JDK and ffmpeg) into a VM image, and this server runs two instances of the application as two VMs of that image. Application B is a Windows 10-based application; it has similarly been bundled with its dependencies into a single VM image, and this server runs a single instance of it.

By deploying these applications in this way, the following benefits are achieved:

  • The server’s hardware resources (CPU, memory, etc.) are utilized more efficiently.
  • Each application is bundled into a single VM image that contains all of its dependencies, simplifying the deployment process.
  • Since a VM image behaves the same way regardless of the host OS, as long as the correct hypervisor is available, reproducible environments (for development, testing and production) are possible.

Containers

A container is defined as “an object for holding or transporting something”, and the term is generally associated with shipping containers. Software containers make use of operating-system-level virtualization to run multiple isolated user-space instances (the containers) in parallel, each containing the application code, the required libraries and the runtime needed to run the application without any external dependencies.

The diagram below provides a layered illustration of the same two applications described previously, this time running in Docker containers rather than VMs.

Containers

In the containerised infrastructure there are no hypervisors and no guest operating systems, which means the resource footprint is much smaller than in a VM-based infrastructure. This greatly improves the efficiency of utilizing server hardware. And since there is no guest operating system to boot up and shut down, containers can start very fast, within milliseconds, which means they can be scaled up and down really quickly.
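As a concrete illustration of this bundling, a container image for something like Application A above could be described with a Dockerfile. The sketch below is illustrative only; the base image, package names and app.jar path are assumptions, not details from this article:

```dockerfile
# Illustrative sketch: bundle an application with its runtime dependencies.
FROM debian:bookworm-slim

# Install the lower-level dependencies the application needs
# (e.g. a Java runtime and ffmpeg, as in the VM example above)
RUN apt-get update && \
    apt-get install -y --no-install-recommends default-jre-headless ffmpeg && \
    rm -rf /var/lib/apt/lists/*

# Copy the application bundle into the image
COPY app.jar /opt/app/app.jar

# Command the container runs when started
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Built with `docker build`, the resulting image carries the application’s whole dependency chain, so it behaves the same on any host with a container engine, much like the VM image did but without a guest OS.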

Docker

Docker has been particularly instrumental in bringing containers to the masses by providing the Docker Engine, made up of the components in the diagram below, which handles the basics: image and container creation and management, networking, and management of data volumes.

Docker Components

The core of Docker Engine has been built with a very open, API-driven architecture, which makes it well suited to programming and automation. As a result, Docker has become a fundamental enabling technology for continuous deployment and the emerging DevOps world in general.

Docker has also built very useful tooling around its core offering that makes working with containers a lot less painful than it would otherwise be. These tools include Swarm for orchestration, Docker Machine, which helps install and manage Docker Engine locally and in the cloud, and Docker Compose, which helps define and run multi-container applications. There are, of course, many other tools constantly being added to the Docker ecosystem, as it is a very active and rapidly developing technology.
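To give a feel for Docker Compose, a multi-container application is described in a single YAML file. The sketch below is a hypothetical two-service setup; the service names and images are illustrative assumptions:

```yaml
# Illustrative Compose file: a web application with a database behind it.
version: "2"
services:
  web:
    build: .            # build the web service from a local Dockerfile
    ports:
      - "8080:8080"     # expose the application port on the host
    depends_on:
      - db              # start the database container first
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example
```

A single `docker-compose up` then creates and wires together both containers, which is far simpler than starting and networking each container by hand.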

There are also many third-party products around the Docker (and container) ecosystem, like Kubernetes and Mesos, for example, that help orchestrate containers in a cloud environment.

Micro OSes

The continuing adoption of containers has given rise to what are called Micro OSes, which are essentially trimmed-down Linux distributions that ship with only the essentials needed to run a container engine, like Docker, and nothing more. These are then run as lightweight VMs on a bare-metal server to create a cloud platform for containers. There are a few such distributions out there, like Project Atomic, RancherOS and CoreOS.
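On CoreOS, for example, the containers each lightweight VM should run can be declared up front in a cloud-config file, which systemd then manages. The sketch below is illustrative; the unit name and image are assumptions:

```yaml
#cloud-config
coreos:
  units:
    # A systemd unit that runs the application container on boot
    - name: myapp.service
      command: start
      content: |
        [Unit]
        Description=My application container
        After=docker.service
        Requires=docker.service

        [Service]
        ExecStart=/usr/bin/docker run --rm myapp
```

Because the Micro OS ships with little more than the container engine, this kind of declarative configuration is typically all that is needed to turn a fresh VM into a working node of the container cloud.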

Micro OSes

The diagram above illustrates an example implementation of Micro OSes: the application simply runs on a cluster, or cloud, of containers; underneath it, Docker engines run on a cloud of lightweight VMs running CoreOS. This maximises the efficiency of utilizing the hardware resources of the bare-metal server at the very bottom of the stack. By removing all resource-consuming processes that are not needed to run the application, a single server could run as many instances of the application as perhaps ten servers running the application straight on bare metal.

