VM or Docker?

How do you know whether you need Docker rather than a VM? Let's look at the main differences between the isolation of virtual machines (VMs) and that of Docker containers, whether the two are interchangeable, and how each can be used.

So what is the difference between Docker containers and VMs?

A virtual machine (VM) is a virtual computer with a full set of virtual devices and a virtual hard disk, onto which a new independent OS (the guest OS) is installed along with virtual device drivers, memory management, and other components. In other words, we get an abstraction of physical hardware that allows many virtual computers to run on a single physical one. The virtual hardware shows up in the system properties, and installed applications interact with it as if it were real. At the same time, the virtual machine itself is completely isolated from the real computer, although it may have access to its disk and peripheral devices.

An installed VM can take up space on a computer’s disk in different ways:

Using a VM brings extra overhead: emulating the virtual hardware, launching the guest OS, and supporting and administering the environment your application needs to run. And when you deploy a large number of virtual machines on a server, the disk space they occupy keeps growing, because each VM needs room at least for its guest OS and virtual device drivers.

Docker is software for building container-based applications. Containers and virtual machines solve the same problem, but in different ways. Containers take up less space because they share more of the host system's resources than VMs do: unlike a VM, a container is virtualized at the OS level rather than the hardware level. This approach means a smaller disk footprint, faster deployment, and easier scaling.

A Docker container provides a more efficient mechanism for encapsulating an application while exposing the necessary host-system interfaces. Containers share the host's kernel: each container runs as a separate process of the host OS with its own virtual address space, so data belonging to different memory areas cannot be modified across container boundaries.

Docker is the most common technology for using containers in an application. It has become the standard in this area, building on the cgroups and namespaces provided by the Linux kernel. The native OS for Docker is Linux, so running Docker containers on Windows happens inside a Linux virtual machine.

What is a container made of?

An image is the main element from which containers are created. An image is built from a Dockerfile added to the project and is a set of read-only file systems (layers) stacked on top of one another and grouped together; the maximum number of layers is 127.

At the heart of each image is a base image, specified by the FROM instruction - the entry point when building an image from a Dockerfile. Each layer is read-only and corresponds to a single file-system-modifying instruction written in the Dockerfile. This approach lets files and directories from different layers transparently overlay one another, forming a union file system. Layers carry metadata that preserves related information about each layer at build time and at run time. Each layer holds a reference to its parent layer; the base layer has no such reference.
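To make this concrete, here is a minimal hypothetical Dockerfile (the base image, package, and file names are illustrative assumptions, not taken from the article). Each file-system-modifying instruction produces one read-only layer:

```dockerfile
# Base image - the entry point, specified by FROM
FROM alpine:3.19

# Each instruction below that modifies the file system creates one layer
RUN apk add --no-cache python3    # layer: installed packages
COPY app.py /opt/app/app.py       # layer: application file

# These only change image metadata, not the file system
WORKDIR /opt/app
CMD ["python3", "app.py"]
```

After building such an image, `docker history <image>` lists its layers, one per instruction, from the topmost layer down to the base.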

Starting with Docker EE 17.06.02-ee-5 and recent versions of Docker Engine - Community, the overlay2 (or overlay) storage driver is used; earlier versions used AuFS (Advanced multi-layered Unification File System).

Container - how does it work?

A container is an application-level abstraction that combines code and dependencies. Containers are always created from images by adding a writable top layer and initializing various parameters. Because each container has its own writable layer where all its changes are stored, several containers can share access to the same underlying image. Each container can be configured through the project's docker-compose.yml file, which sets parameters such as the container name, ports, identifiers, and dependencies on other containers. If you do not specify a container name in the settings, Docker creates a new container each time and assigns it a random name.
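As a hypothetical sketch of such a configuration (the service names, images, and ports below are illustrative assumptions), a docker-compose.yml might look like this:

```yaml
services:
  web:
    image: nginx:1.25        # image the container is created from
    container_name: my-web   # without this, Docker generates a random name
    ports:
      - "8080:80"            # host:container port mapping
    depends_on:
      - db                   # dependency on another container
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```

Running `docker compose up` with this file creates both containers from their images; the fixed `container_name` means repeated runs reuse the same `my-web` container instead of creating a new randomly named one.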

When a container is started from an image, Docker mounts a read-write file system on top of all the layers below; this is where the container's processes run. The first time you start the container, the initial read-write layer is empty. When changes occur, they are applied to this layer: for example, if you modify a file, the file is first copied from the read-only layer below into the read-write layer. The read-only version of the file still exists, but it is now hidden underneath the copy.

How does a union file system work?

A union file system implements a copy-on-write (COW) mechanism. Its unit of work is the layer, and each layer should be treated as a separate, full-fledged file system with its own directory hierarchy starting at the root. Union mounting combines files and directories of different file systems (called branches) into a single coherent file system, transparently to the user. The contents of directories with the same path are shown together in one merged directory (a single namespace) of the resulting file system.
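The lookup and copy-on-write rules described above can be sketched in a few lines of Python. This toy model is purely illustrative (it is not how Docker is implemented): read-only layers are dicts of path to contents, reads search from the top layer down, and writes only ever land in the writable top layer.

```python
class UnionFS:
    """Toy model of a union file system: read-only branches plus a writable top layer."""

    def __init__(self, *readonly_layers):
        # Lower layers are read-only dicts of path -> contents, bottom first.
        self.readonly_layers = list(readonly_layers)
        self.top = {}  # writable layer, empty when the container starts

    def read(self, path):
        # Search top-down: the writable layer hides same-path files below it.
        if path in self.top:
            return self.top[path]
        for layer in reversed(self.readonly_layers):
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # Copy-on-write: the change goes only into the writable top layer;
        # the read-only version below still exists but is now hidden.
        self.top[path] = data


base = {"/etc/os-release": "alpine"}
app = {"/opt/app/config": "debug=false"}
fs = UnionFS(base, app)

fs.write("/opt/app/config", "debug=true")
print(fs.read("/opt/app/config"))   # top layer wins: debug=true
print(fs.read("/etc/os-release"))   # read through to the base layer: alpine
print(app["/opt/app/config"])       # lower layer is unchanged: debug=false
```

The last line shows the key property: modifying a file in one container's writable layer leaves the shared read-only image layers untouched, which is what lets many containers safely share one image.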

The layers are merged according to the following principles:

Conclusion

If you need to virtualize a system with guaranteed dedicated resources and virtual hardware, choose a VM. What a VM gives you:

If you want to isolate running applications as separate processes, Docker is a good fit. What Docker gives you:
