Docker – Third-party (remote) network drivers


As mentioned previously in the What is a Docker network? section, in addition to the built-in (or local) network drivers provided by Docker, the CNM supports community- and vendor-created network drivers. Some examples of these third-party drivers include Contiv, Weave, Kuryr, and Calico. One benefit of using a third-party driver is full support for deployment in cloud-hosted environments, such as AWS. To use one of these drivers, it must be installed in a separate installation step on each of your Docker hosts. Each third-party network driver brings its own set of features to the table; Docker provides a summary description of these drivers in its reference architecture document.

Although each of these third-party drivers has its own unique installation, setup, and execution methods, the general steps are similar: first you download the driver, then you handle any configuration setup, and finally you run the driver. These remote drivers typically do not require swarm mode and can be used with or without it. As an example, let's take a deep dive into using the weave driver. To install the weave network driver, issue the following commands on each Docker host:

# Install the weave network driver plug-in
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
# Disable checking for new versions (optional)
export CHECKPOINT_DISABLE=1
# Start up the weave network; on the 2nd, 3rd, and later hosts,
# optionally pass the hostname or IP of the 1st host running weave
weave launch
# Set up the environment to use weave
eval $(weave env)
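That final eval step is what wires your Docker client to weave: weave env prints shell export statements that point DOCKER_HOST at the weave Docker API proxy, so subsequent docker commands in that shell transparently attach new containers to the weave network. Here is a minimal sketch of the effect; the socket path shown is weave's documented default, but confirm it by running weave env on your own host:

```shell
# A sketch of what `eval $(weave env)` does in the current shell:
# `weave env` emits an export statement like the one below, pointing
# the Docker client at the weave proxy socket (the path shown is
# weave's documented default; confirm with `weave env` on your host)
export DOCKER_HOST=unix:///var/run/weave/weave.sock

# From here on, docker commands in this shell go through the proxy,
# which attaches new containers to the weave network automatically
echo "Docker client will talk to: $DOCKER_HOST"
```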

The preceding steps need to be completed on each Docker host that will run containers communicating over the weave network. The launch command can be given the hostname or IP address of the first Docker host, which has already been set up and is running the weave network, so that the new host peers with it and their containers can communicate. For example, if you have already set up node01 with the weave network, then when you start up weave on node02, you would use the following command:

# Start up weave on the 2nd node
weave launch node01

Alternatively, you can connect new (Docker host) peers using the connect command, executing it from the first host configured. To add node02 (after weave has been installed and started on it), use the following command:

# Peer host node02 with the weave network by connecting from node01
weave connect node02

You can utilize the weave network driver without enabling swarm mode on your hosts. Once weave has been installed and started, and the peers (other Docker hosts) have been connected, your containers will automatically utilize the weave network and be able to communicate with each other regardless of whether they are on the same Docker host or different ones.

The weave network shows up in your network list just like any of your other networks.
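Running docker network ls after launching weave should show a weave-created network alongside the default ones. An illustrative listing (the network IDs here are made up, and the weavemesh driver name reflects the weave versions I have used; yours may differ):

```
$ docker network ls
NETWORK ID     NAME     DRIVER      SCOPE
d1f32fd4be7b   bridge   bridge      local
5a6ffdd057dd   host     host        local
e88e1a4fda1b   none     null        local
a42da34339ef   weave    weavemesh   local
```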

Let's test out our shiny new network. First, make sure that you have installed the weave driver on all of the hosts you want connected by following the steps described previously. Make sure that you either use the launch command with node01 as a parameter, or run the connect command from node01 for each additional node you are configuring. For this example, my lab servers are named ubuntu-node01 and ubuntu-node02.

First, on ubuntu-node01:

# Install and set up the weave driver
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
weave launch
eval $(weave env)

Then, on ubuntu-node02:

# Install and set up the weave driver
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
weave launch
eval $(weave env)

Now, back on ubuntu-node01:

# Bring node02 in as a peer on node01's weave network
weave connect ubuntu-node02
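Before starting any containers, you can verify that the peering took effect with weave's status subcommand, run from either host. A console sketch with placeholder values; the exact output format varies between weave versions:

```
$ weave status peers
<peer-id>(ubuntu-node01)
   <- <node02-ip>:<port>   <peer-id>(ubuntu-node02)   established
```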

Now, let's launch a container on each node, naming them for easy identification. Start with ubuntu-node01:

# Run a container detached on node01
docker container run -d --name app01 alpine tail -f /dev/null

Now, launch a container on ubuntu-node02:

# Run a container detached on node02
docker container run -d --name app02 alpine tail -f /dev/null

Excellent. Now, we have containers running on both nodes. Let’s see whether they can communicate. Since we are on node02, we will check there first:

# From inside the app02 container running on node02,
# let's ping the app01 container running on node01
docker container exec -it app02 ping -c 4 app01
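If the network is working, you will see standard ping replies coming back. A sketch of the output; the 10.32.0.x address is illustrative, since weave allocates container IPs from its own subnet (10.32.0.0/12 by default):

```
$ docker container exec -it app02 ping -c 4 app01
PING app01 (10.32.0.1): 56 data bytes
64 bytes from 10.32.0.1: seq=0 ttl=64 time=1.308 ms
...
4 packets transmitted, 4 packets received, 0% packet loss
```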

Yeah! That worked. Let’s try going the other way:

# Similarly, from inside the app01 container running on node01,
# let's ping the app02 container running on node02
docker container exec -it app01 ping -c 4 app02

Perfect! We have bi-directional communication. Did you notice anything else? We also have name resolution for our app containers; we were able to ping by container name rather than by IP address. Pretty nice, right?
