Ubuntu Server 18.04 – Managing Docker containers


Now that Docker is installed and running, let’s take it for a test drive. Installing Docker gives us the docker command, which has various sub-commands to perform different functions with containers. First, let’s try out docker search:

docker search ubuntu

The docker search command allows us to search for container images matching a search term. By default, it will search the Docker Hub, which is an online repository that hosts container images for others to download and utilize. You could search for images based on other distributions, such as Fedora or CentOS, if you wanted to experiment. The command will return a list of Docker images available that meet your search criteria.
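If the default list is overwhelming, docker search supports a couple of flags that narrow it down (these are standard flags of the docker search command; running them requires a working Docker installation and access to the Docker Hub):

```shell
# Show only images marked as official on the Docker Hub
docker search --filter is-official=true ubuntu

# Cap the number of results returned
docker search --limit 5 ubuntu
```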

So, what do we do with these images? An image in Docker is roughly equivalent to a virtual machine image: a snapshot that contains the filesystem of a particular operating system or Linux distribution, along with some changes the author included to make it perform a specific task. This image can then be downloaded and customized to suit your purposes. You can choose to upload your customized image back to the Docker Hub, or you can choose to be selfish and keep it to yourself. Every image you download is stored on your machine, so you won’t have to re-download it every time you wish to create a new container.

To pull down a Docker image for our use, we can use the docker pull command, along with one of the image names we saw in the output of our search command:

docker pull ubuntu

With the preceding command, we’re pulling down the latest Ubuntu container image available on the Docker Hub. The image will now be stored locally, and we’ll be able to create new containers from it. The process will look similar to the following screenshot:

Downloading an Ubuntu container image

If you’re curious as to which images you have saved locally, you can execute docker images to get a list of the Docker container images you have stored on your server. The output will look similar to this:

docker images
Listing installed Docker images

Notice the IMAGE ID in the output. If for some reason you want to remove an image, you can do so with the docker rmi command, and you’ll need to use the ID as an argument to tell the command what to delete. The syntax would look similar to this if I were removing the image with the ID shown in the screenshot:

docker rmi 0458a4468cbc
Feel free to remove the Ubuntu image if you already downloaded it so you can see what the process looks like. You can always re-download it by running docker pull ubuntu.

Once you have a container image downloaded to your server, you can create a new container from it by running the docker run command, followed by the name of your image and an application within the image to run. The application you’re running is known as an ENTRYPOINT, which is just a fancy term to describe an application a particular container is configured to run. You’re not limited to the ENTRYPOINT though, and not all containers actually have an ENTRYPOINT. You can use any command in the container that you would normally be able to run in that distribution. In the case of the Ubuntu container image we downloaded earlier, we can run bash with the following command so we can get a prompt and enter any command(s) we wish:

docker run -it ubuntu:latest /bin/bash

Once you run that command, you’ll notice that your shell prompt immediately changes. You’re now within a shell prompt from within your container. From here, you can run commands you would normally run within a real Ubuntu machine, such as installing new packages, changing configuration files, and more. Go ahead and play around with the container, and then we’ll continue with a bit more theory on how this is actually working.

There are some potentially confusing aspects of Docker we should get out of the way before we continue with additional examples. The thing that’s most likely to confuse newcomers to Docker is how containers are created and destroyed. When you execute the docker run command against an image you’ve downloaded, you’re actually creating a container. Therefore, the image you downloaded with the docker pull command wasn’t an actual container itself, but it becomes a container when you run an instance of it. When the command that’s being run inside the container finishes, the container stops. Therefore, if you were to run /bin/bash in a container and install a bunch of packages, those packages exist only inside that container, and they’re gone for good once the container is removed.
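The lifecycle described here can be seen in a short session like the following sketch (each command runs on the host, except for exit, which is typed inside the container):

```shell
docker run -it ubuntu:latest /bin/bash   # creates and starts a new container
exit                                     # inside the container: bash ends, so...
docker ps                                # ...the container stopped; nothing listed
docker ps -a                             # it still exists, with an Exited status
```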

Every container you run has a container ID that differentiates it from others. If you want to remove a container for example, you would need to reference this ID with the docker rm command. This is very similar to the docker rmi command that’s used to remove container images.

To see the container ID for yourself, you’ll first need to exit the container if you’re currently attached to one. There are two ways of doing so. First, you could press Ctrl + D to disconnect, or type exit and press Enter. Exiting the container this way, though, will stop it. When you run the docker ps command (which is the command you’ll use any time you want a list of containers on your system), you won’t see it listed. Instead, you can add the -a option to see all containers, even those that have been stopped.

You’re probably wondering, then, how to exit a container without stopping it. To do so, while you’re attached to a container, press Ctrl + P and then Ctrl + Q (don’t let go of the Ctrl key while you press these two letters). This will drop you out of the container, and when you run the docker ps command (even without the -a option), you’ll see that it’s still running.

The docker ps command deserves some attention. The output will give you some very useful information about the containers on your server, including the CONTAINER ID that was mentioned earlier. In addition, the output will contain the IMAGE the container was created from, the COMMAND being run, when it was CREATED, and its STATUS, as well as any PORTS you may have forwarded. The output will also display randomly generated names for each container, which are usually quite comical. As I was going through the process of creating containers while writing this section, the names of my containers were tender_cori, serene_mcnulty, and high_goldwasser. This is just one of the many quirks of Docker, and some of these can be quite humorous.
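Putting the columns together, a docker ps -a listing looks something like this (the IDs and times here are illustrative examples; the names are the ones mentioned earlier):

```
CONTAINER ID   IMAGE           COMMAND       CREATED          STATUS                      PORTS   NAMES
dfb3e2c1a4b5   ubuntu:latest   "/bin/bash"   5 minutes ago    Up 5 minutes                        serene_mcnulty
353c6fe0be4d   ubuntu:latest   "/bin/bash"   20 minutes ago   Exited (0) 10 minutes ago           tender_cori
```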

The important output of the docker ps -a command is the CONTAINER ID, the COMMAND, and the STATUS. The ID, which we already discussed, allows you to reference a specific container to enable you to run commands against it. COMMAND lets you know what command was being run. In our example, we executed /bin/bash when we started our containers.

If we have any containers that were stopped, we can resume one with the docker start command, giving it a container ID as an argument. Your command will end up looking similar to this:

docker start 353c6fe0be4d

The output will simply return the ID of the container, and then drop you back to your shell prompt—not the shell prompt of your container, but that of your server. You might be wondering at this point: how do I get back to the shell prompt for the container? We can use docker attach for that:

docker attach 353c6fe0be4d

The docker attach command is useful because it allows you to attach your shell to a container that is already running. Most of the time, containers are started automatically instead of starting with /bin/bash as we have done. If something were to go wrong, we may want to use something like docker attach to browse through the running container to look for error messages. It’s very useful.

Speaking of useful, another great command is docker info. This command will give you information about your implementation of Docker, such as letting you know how many containers you have on your system, which should be the number of times you’ve run the docker run command unless you cleaned up previously run containers with docker rm. Feel free to take a look at its output and see what you can learn from it.

Getting deeper into the subject of containers, it’s important to understand what a Docker container is and what it isn’t. A container is not a service running in the background, at least not inherently. A container is a collection of namespaces, such as a namespace for its filesystem or users. When there’s no process left running within the container, there’s no reason for it to keep running, so it stops. If you’d like to run a container in a way that is similar to a service (it keeps running in the background), you would want to run the container in detached mode. Basically, this is a way of telling your container to run a process and not stop it until you tell it to. Here’s an example of creating a container and running it in detached mode:

docker run -dit ubuntu /bin/bash

Normally, we use the -it options to create a container; this is what we used a few pages back. The -i option triggers interactive mode, while the -t option gives us a pseudo-TTY. The -d option additionally runs the container in the background (detached). At the end of the command, we tell the container to run the Bash shell.

It may seem relatively useless to have another Bash shell running in the background that isn’t actually performing a task. But these are just simple examples to help you get the hang of Docker. A more common use case may be to run a specific application. In fact, you can even serve a website from a Docker container by installing and configuring Apache within the container, including a virtual host. The question then becomes: how do you access the container’s instance of Apache within a web browser? The answer is port redirection, which Docker also supports. Let’s give this a try.

First, let’s create a new container in detached mode. Let’s also redirect port 80 within the container to port 8080 on the host:

docker run -dit -p 8080:80 ubuntu /bin/bash

The command will output a container ID. This ID will be much longer than you’re accustomed to seeing, because docker ps -a only shows shortened container IDs. You don’t need to use the entire container ID when you attach; any beginning portion of it works, as long as it’s long enough to be unique among your containers:
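As an aside, the short IDs shown by docker ps are simply the first 12 characters of the full 64-character ID. A quick illustration with a hypothetical ID:

```shell
# A hypothetical full container ID (64 hexadecimal characters)
full_id="dfb3e8a91c27f5b40d6e2a8c9b1f3e5d7a0c2b4d6e8f0a1b3c5d7e9f1a2b4c6d"

# docker ps would display only the first 12 characters of it
short_id=$(printf '%s' "$full_id" | cut -c1-12)
echo "$short_id"
```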

docker attach dfb3e

Here, I’ve attached to a container with an ID that begins with dfb3e. I’m now attached to a Bash shell within the container.

Let’s install Apache. We’ve done this before, but there are a few differences that you’ll see. First, if you simply run the following command to install the apache2 package, as we normally would, it may fail for a couple of reasons:

sudo apt install apache2

There are two problems here. First, sudo isn’t included by default in the Ubuntu container, so the command won’t even be recognized. When you run docker attach, you’re actually attached to the container as the root user anyway, so the lack of sudo isn’t an issue. Second, the repository index in the container may be out of date, if it’s even present at all. This means that apt within the container won’t find the apache2 package. To solve this, we’ll first update the repository index:

apt update

Then, install apache2 using the following command:

apt install apache2

Now we have Apache installed. We don’t need to worry about configuring the default sample web page or making it look nice. We just want to verify that it works. Let’s start the service:

/etc/init.d/apache2 start

Apache should be running within the container. Now, press Ctrl + P and Ctrl + Q to exit the container, but allow it to keep running in the background. You should be able to visit the sample Apache web page for the container by navigating to localhost:8080 in your web browser. You should see the default It works! page of Apache. Congratulations, you’re officially running an application within a container.
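If you’d rather check from the server’s shell instead of a browser, curl works too (assuming curl is installed on the host; it queries the same forwarded port):

```shell
curl -s http://localhost:8080 | grep -i "it works"
```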

As your Docker knowledge grows, you’ll want to look deeper into the concept of an ENTRYPOINT. An ENTRYPOINT is a preferred way of starting applications in a Docker container. In our examples so far, we used an ENTRYPOINT of /bin/bash. While that’s perfectly valid, ENTRYPOINTs are generally Bash scripts that are configured to run the desired application and are launched by the container.
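To make the idea concrete, here’s a minimal sketch of how an image with a script ENTRYPOINT might be defined. This isn’t something we build in this chapter, and the file name entrypoint.sh is just an example:

```dockerfile
FROM ubuntu:latest
RUN apt update && apt install -y apache2

# entrypoint.sh is a hypothetical script; it could contain a single line such as:
#   apache2ctl -D FOREGROUND
# (running Apache in the foreground is what keeps the container alive)
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```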

Our Apache container is running happily in the background, responding to HTTP requests over port 8080 on the host. But what should we do with it at this point? We can create our own image from it so that we can simplify deploying it later. Before we do so, we should configure Apache to automatically start when the container is started. We’ll do this a bit differently inside the container than we would on an actual Ubuntu Server. Attach to the container and open the /etc/bash.bashrc file in a text editor within the container. In order to do this, you may need to install a text editor (such as nano) with the apt command, as the container may not have an editor installed:

apt install nano

Add the following to the very end of the /etc/bash.bashrc file inside the container:

/etc/init.d/apache2 start

Save the file and exit your editor. Exit the container with the Ctrl + P and Ctrl + Q key combinations. Next, let’s grab the container ID by running the docker ps command. Once we have that, we can now create a new image of the container with the docker commit command:

docker commit <Container ID> ubuntu/apache-server:1.0

That command will return the ID of our new image. To view all the Docker images available on our machine, we can run the docker images command to have Docker return a list. You should see the original Ubuntu image we downloaded, along with the one we just created. We’ll first see a column for the repository each image belongs to; in our case, ubuntu and ubuntu/apache-server. Next, we see the tag. Our original Ubuntu image (the one we used docker pull to download) has a tag of latest; we didn’t specify a tag when we first downloaded it, so it just defaulted to latest. In addition, we see an image ID for both, as well as the size.
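At this point, the docker images listing would look something like the following (the IDs, dates, and sizes are illustrative examples):

```
REPOSITORY             TAG      IMAGE ID       CREATED          SIZE
ubuntu/apache-server   1.0      f2d91a2c7c18   2 minutes ago    210MB
ubuntu                 latest   0458a4468cbc   3 weeks ago      86.2MB
```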

To create a new container from our new image, we just need to use docker run, but specify the tag and name of our new image. Note that we may already have a container listening on port 8080, so this command may fail if that container hasn’t been stopped:

docker run -dit -p 8080:80 ubuntu/apache-server:1.0 /bin/bash

Speaking of stopping a container, I should probably show you how to do that as well. As you can probably guess, the command is docker stop followed by a container ID. This will send the SIGTERM signal to the container, followed by SIGKILL if it doesn’t stop on its own after a delay:

docker stop <Container ID>
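Once stopped, a container still exists in the Exited state until you remove it. Cleanup is a matter of docker rm (as before, <Container ID> is a placeholder):

```shell
docker rm <Container ID>      # remove a stopped container
docker rm -f <Container ID>   # or stop and remove a running one in a single step
```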

Admittedly, the Apache container example was fairly simplistic, but it does the job as far as showing you a working example of a container that is actually somewhat useful. Before continuing on, think for a moment about all the ways you could use Docker in your organization. It may seem like a very simple concept (and it is), but it allows you to do some very powerful things. Perhaps you’ll want to try to containerize your organization’s intranet page, or some sort of application. The concept of Docker is simple, but with the right imagination it can go a long way.

Before I close out this section, I’ll give you a personal example of how I implemented a container at a previous job. At this organization, I worked with some embedded Linux software engineers who each had their own personal favorite Linux distribution. Some preferred Ubuntu, others preferred Debian, and a few even ran Gentoo. This in and of itself wasn’t necessarily an issue—sometimes it’s fun to try out other distributions. But for developers, a platform change can introduce inconsistency, and that’s not good for a software project. The build tools differ between distributions, because each ships different versions of the development packages and libraries. The application this particular organization developed was only known to compile properly in Debian, and newer versions of the GCC compiler posed a problem for it. My solution was to provide each developer with a Docker container based on Debian, with all the build tools they needed to perform their job baked in. At that point, it no longer mattered which distribution they ran on their workstations; the container was the same regardless of the underlying OS, so everyone had the same tools. This gave each developer the freedom to run their preferred distribution of Linux (and the stranger ones used macOS) without impacting their ability to do their job. I’m sure there are some clever use cases you can come up with for implementing containerization.

Now that we understand the basics of Docker, let’s take a look at automating the process of building containers.
