Windows Server 2019 – Understanding application containers

What does it mean to contain an application? We have a pretty good concept these days of containing servers, by means of virtualization. Taking physical hardware, turning it into a virtualization host, like Hyper-V, and then running many virtual machines on top of it is a form of containment for those VMs. We are essentially tricking them into believing that they are their own entity, completely unaware that they are sharing resources and hardware with other VMs running on that host. At the same time that we are sharing hardware resources, we are able to provide strong layers of isolation between VMs, because we need to make sure that access and permissions cannot bleed across VMs, particularly in a cloud provider scenario, as that would spell disaster.

Application containers are the same idea, at a different level. Where VMs are all about virtualizing hardware, containers are more like virtualizing the operating system. Rather than creating VMs to host our applications, we can create containers, which are much smaller. We then run applications inside those containers, and the applications are tricked into thinking that they are running on top of a dedicated instance of the operating system.
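As a quick illustration, here is roughly what that looks like in practice with Docker on a Windows host (a minimal sketch, assuming the Docker engine is already installed; the image tag shown is the Server Core base image published for Windows Server 2019):

    # Run a throwaway container from the Windows Server Core base image.
    # The process inside sees what looks like its own copy of Windows,
    # even though it is sharing the host's kernel.
    docker run --rm mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver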

A huge advantage to using containers is the unity that they bring between the development and operations teams. We hear the term DevOps all the time these days, which is a combination of development and operations processes intended to make the entire application-rollout process more efficient. The utilization of containers is going to have a huge impact on the DevOps mentality, since developers can now do their job (develop applications) without needing to account for the operations and infrastructure side of things. When the application is built, operations can take the container within which the application resides, and simply spin it up inside their container infrastructure, without any worries that the application is going to break servers or have compatibility problems.

I definitely foresee containers taking the place of many virtual machines, but this will only happen if admins jump in and try it out for themselves. Let’s discuss a few particular benefits that containers bring to the table.

Sharing resources

Just like when we are talking about hardware being split up among VMs, application containers mean that we are taking chunks of physical hardware and dividing them up among containers. This allows us to run many containers from the same server, whether a physical or virtual server.

However, in that alone, there is no benefit over VMs, because they simply share hardware as well. Where we really start to see benefits in using containers rather than separate VMs for all of our applications is that all of our containers can share the same base operating system. Not only are they spun up from the same base image, which makes it extremely fast to bring new containers online, it also means that they are sharing the same kernel resources. Every instance of an operating system has its own set of user processes, and it is often tricky business to run multiple applications together on a server, because those applications traditionally have access to the same set of processes and have the potential to be negatively affected by them. In other words, that is the reason we tend to spin up so many servers these days, keeping each application on its own server so that they can’t negatively impact each other. Sometimes apps just don’t like to mix.

The kernel in Windows Server 2019 has been enhanced so that it can handle multiple copies of the user-mode processes. This means you not only have the ability to run instances of the same application over many different servers, but it also means that you can run many different applications, even if they don’t typically like to coexist, on the same server.
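To make that concrete, here is a small sketch (again assuming Docker on a Windows Server 2019 host): both containers below are created from the same Server Core base image, so the image layers and kernel are shared, while each container keeps its own user-mode process space.

    # The base image is pulled and stored once on disk
    docker pull mcr.microsoft.com/windows/servercore:ltsc2019

    # Two containers created from that one image; ping simply keeps them running
    docker run -d --name app1 mcr.microsoft.com/windows/servercore:ltsc2019 ping -t localhost
    docker run -d --name app2 mcr.microsoft.com/windows/servercore:ltsc2019 ping -t localhost

    # One copy of the base image, two isolated sets of user-mode processes
    docker images
    docker ps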

Isolation

One of the huge benefits of application containers is that developers can build their applications within a container running on their own workstation! A host machine for containers can be a Windows Server, or it can be a Windows 10 workstation. When built within this container sandbox, developers will know that their application contains all of the parts, pieces, and dependencies that it needs in order to run properly, and that it runs in a way that doesn’t require extra components from the underlying operating system. This means the developer can build the application, make sure it works in their local environment, and then easily slide that application container over to the hosting servers where it will be spun up and ready for production use. That production server might even be a cloud-provided resource, but the application doesn’t care. The isolation of the container from the operating system helps to keep the application standardized in a way that makes it easily portable, and saves the developer time and headaches since they don’t have to accommodate differences in underlying operating systems during the development process.
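A rough sketch of that hand-off might look like the following. Everything here is illustrative: the application name, paths, and registry address are placeholders, not anything prescribed by Windows or Docker.

    # Dockerfile - declares everything the app needs (placeholder names throughout)
    FROM mcr.microsoft.com/windows/servercore:ltsc2019
    COPY app/ C:/app/
    CMD ["C:\\app\\myapp.exe"]

    # Developer builds and tests locally on a Windows 10 workstation...
    docker build -t myapp:1.0 .
    docker run --rm myapp:1.0

    # ...then hands the exact same image to operations, for example through a registry
    docker tag myapp:1.0 registry.contoso.com/myapp:1.0
    docker push registry.contoso.com/myapp:1.0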

The other aspect of isolation is the security aspect. This is the same story as multiple virtual machines running on the same host, particularly in a cloud environment. You want security boundaries to exist between those machines; in fact, most of the time you don’t want them to be aware of each other in any way. You even want isolation and segregation between the virtual machines and the host operating system, because you sure don’t want your public cloud service provider snooping around inside your VMs. The same idea applies with application containers.

The processes running inside a container are not visible to the hosting operating system, even though you are consuming resources from that operating system. Containers maintain two different forms of isolation. There is namespace isolation, which means the containers are confined to their own filesystem and registry. Then there is also resource isolation, meaning that we can define what specific hardware resources are available to the different containers, and they are not able to steal from each other. Shortly, we will discuss two different categories of containers, Windows Server Containers and Hyper-V Containers. These two types of containers handle isolation in different ways, so stay tuned for more info on that topic.
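As a preview of that, both the isolation mode and the resource limits are simply switches on the run command. This is a hedged sketch; the image and the specific limits are examples only, not recommendations.

    # Windows Server container: shares the host kernel directly (process isolation)
    docker run -d --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 ping -t localhost

    # Hyper-V container: wrapped in a lightweight utility VM for a stronger boundary
    docker run -d --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 ping -t localhost

    # Resource isolation: cap what a single container is allowed to consume
    docker run -d --cpus 2 --memory 2g mcr.microsoft.com/windows/servercore:ltsc2019 ping -t localhost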

We know that containers share resources and are spun up from the same base image, while still keeping their processes separated so that the underlying operating system can’t negatively affect the application and also so that the application can’t tank the host operating system. But how is the isolation handled from a networking aspect? Well, application containers utilize technology from the Hyper-V virtual switch in order to keep everything straight on the networking side. In fact, as you start to use containers, you will quickly see that each container has a unique IP address assigned to it in order to maintain isolation at this level.
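You can see this for yourself once a container is running. On a Windows container host, the default network is a NAT network backed by a Hyper-V virtual switch; the container name below (app1) is just an example from the earlier sketch.

    # List the container networks - "nat" is created by default on a Windows host
    docker network ls

    # Each container gets its own IP address on that network
    docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" app1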

Scalability

The combination of spinning up from the same base image and the isolation of the container makes a very compelling scalability and growth story. Think about a web application that you host whose use might fluctuate greatly from day to day. Providing enough resources to sustain this application during the busy times has traditionally meant that we are overpaying for compute resources when that application is not being heavily used. Cloud technologies provide dynamic scaling for these modern kinds of applications, but they often do so by spinning up or down entire virtual machines. There are three common struggles with dynamically scaling applications like this. First is the time that it takes to produce additional virtual machines; even if that process is automated, your application may be overwhelmed for a period of time while additional resources are brought online. Second is the effort the developer has to put in to make the application so agnostic that it doesn’t care about inconsistencies between the different machines it might be running on. Third is cost. There is the resource cost, since each new VM coming online consumes an entire set of kernel and operating system resources, and there is the monetary cost as well; spinning virtual machines up and down in your cloud environment can quickly get expensive. These are all hurdles that do not exist when you utilize containers as your method for deploying applications.

Since application containers are using the same underlying kernel, and the same base image, their spin-up time is extremely fast. New containers can be spun up or down very quickly, and in batches, without having to wait for the boot and kernel mode processes to start. Also, since we have provided the developer this isolated container structure within which to build the application, we know that our application is going to be able to run successfully anywhere that we spin up one of these containers. No more worries about whether or not the new VM that is coming online is going to be standardized correctly, because containers for a particular application are always the same, and contain all of the important dependencies that the application needs, right inside that container.
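Here is a hedged sketch of what that can look like in PowerShell: spinning up a small batch of identical containers from one image, and tearing them back down just as quickly (myapp:1.0 is the placeholder image from the earlier example).

    # Start five identical containers in one pass; each is ready in seconds,
    # because no operating system has to boot
    1..5 | ForEach-Object { docker run -d --name "web$_" myapp:1.0 }

    # Scale back down when demand drops
    1..5 | ForEach-Object { docker rm -f "web$_" }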
