Docker – Built-in (local) Docker networks

The out-of-the-box install of Docker includes a few built-in network drivers. These are also known as local drivers. The two most commonly used drivers are the bridge network driver and the overlay network driver. Other built-in drivers include none, host, and MACVLAN. Even before you create any networks, a fresh install will have a few networks pre-created and ready to use. Using the docker network ls command, we can easily see the list of pre-created networks available in the fresh installation.
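A quick sketch of what that looks like (network IDs will differ on every host, so the IDs below are placeholders):

```shell
# List the networks present on a fresh Docker installation.
docker network ls

# Typical output on a fresh install (IDs will vary):
# NETWORK ID     NAME      DRIVER    SCOPE
# a1b2c3d4e5f6   bridge    bridge    local
# f6e5d4c3b2a1   host      host      local
# 0123456789ab   none      null      local
```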

In this list, you will notice that each network has its unique ID, a name, a driver used to create it (and that controls it), and a network scope. Don’t confuse a scope of local with the category of driver, which is also local. The local category is used to differentiate the driver’s origin from third-party drivers that have a category of remote. A scope value of local indicates that the limit of communication for the network is bound to within the local Docker host. To clarify, if two Docker hosts, H1 and H2, both contain a network that has the scope of local, containers on H1 will never be able to communicate directly with containers on H2, even if they use the same driver and the networks have the same name. The other scope value is swarm, which we’ll talk more about shortly.

The pre-created networks that are found in all deployments of Docker are special in that they cannot be removed. It is not necessary to attach containers to any of them, but attempts to remove them with the docker network rm command will always result in an error.
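For example, trying to remove the built-in bridge network fails with an error along these lines:

```shell
# Attempting to remove a pre-created network always fails.
docker network rm bridge

# Error response from daemon: bridge is a pre-defined network
# and cannot be removed
```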

There are three built-in network drivers that have a scope of local: bridge, host, and none. The host network driver leverages the networking stack of the Docker host, essentially bypassing Docker's networking. All containers on the host network are able to communicate with each other through the host's interfaces. A significant limitation of the host network driver is that each port can only be used by a single container; for example, you cannot run two nginx containers that are both bound to port 80. As you may have guessed, because the host driver leverages the network of the host it is running on, each Docker host can only have one network using the host driver.
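A minimal sketch of using the host driver (the nginx image and container name are illustrative):

```shell
# Run nginx directly on the host's network stack.
# No -p port mapping is needed; the container binds host port 80 directly.
docker run -d --name web-host --network host nginx

# A second nginx container on the host network would fail to bind port 80,
# since each host port can be used by only one container at a time.
```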

Next up is the null, or none, network. Using the null network driver creates a network that, when a container is connected to it, gives the container a full network stack but does not configure any interfaces (other than loopback) within the container. This renders the container completely isolated. This driver is provided mainly for backward-compatibility purposes, and like the host driver, only one network of the null type can be created on a Docker host.
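A quick way to see the isolation for yourself (the alpine image is just a convenient small image for this check):

```shell
# Start a container on the none network and inspect its interfaces.
docker run --rm --network none alpine ip addr

# Only the loopback interface appears; the container has no route
# to other containers, the host, or the outside world.
```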

The third network driver with a scope of local is the bridge driver. Bridge networks are the most common type. Any containers attached to the same bridge network are able to communicate with one another. A Docker host can have more than one network created with the bridge driver. However, containers attached to one bridge network are unable to communicate with containers on a different bridge network, even if the networks are on the same Docker host. Note that there are slight feature differences between the built-in bridge network and any user-created bridge networks. It is best practice to create your own bridge networks and utilize them instead of the built-in bridge network.
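Here is a sketch of creating and using a user-defined bridge network (the network and container names are illustrative). One of those feature differences works in your favor here: on user-defined bridges, containers can resolve each other by name:

```shell
# Create a user-defined bridge network.
docker network create --driver bridge my-bridge

# Run a container attached to it.
docker run -d --name web --network my-bridge nginx

# A second container on the same bridge can reach the first by name.
docker run --rm --network my-bridge alpine ping -c 2 web
```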

In addition to the drivers that create networks with local scope, there are built-in network drivers that create networks with swarm scope. Such networks span all the hosts in a swarm and allow containers attached to them to communicate in spite of running on different Docker hosts. As you probably have surmised, use of networks that have swarm scope requires Docker swarm mode. In fact, when you initialize a Docker host into swarm mode, a special new network is created for you that has swarm scope. This swarm scope network is named ingress and is created using the built-in overlay driver. This network is vital to the load balancing feature of swarm mode that we saw used in the Accessing container applications in a swarm section of Chapter 5, Docker Swarm. There's also a new bridge network created during the swarm init, named docker_gwbridge. This network is used by swarm to communicate outward, kind of like a default gateway.
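A sketch of what a new swarm's network list looks like (IDs are placeholders and will differ on your host):

```shell
# Initialize swarm mode, then list the networks it created.
docker swarm init
docker network ls

# Alongside the local bridge, host, and none networks, you should see:
# NETWORK ID     NAME              DRIVER    SCOPE
# abcdef012345   docker_gwbridge   bridge    local
# 543210fedcba   ingress           overlay   swarm
```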

Using the overlay driver allows you to create networks that span Docker hosts. These are layer 2 networks. There is a lot of network plumbing that gets laid down behind the scenes when you create an overlay network. Each host in the swarm gets a network sandbox with a network stack. Within that sandbox, a bridge is created and named br0. Then, a VXLAN tunnel endpoint is created and attached to bridge br0. Once all of the swarm hosts have the tunnel endpoint created, a VXLAN tunnel is created that connects all of the endpoints together. This tunnel is actually what we see as the overlay network. When containers are attached to the overlay network, they get an IP address assigned from the overlay's subnet, and all communications between containers on that network are carried out via the overlay. Of course, behind the scenes that communication traffic is passing through the VXLAN endpoints, going across the Docker hosts' networks, and through any routers connecting each host to the networks of the other Docker hosts. But you never have to worry about all the behind-the-scenes stuff. Just create an overlay network, attach your containers to it, and you're golden.
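A minimal sketch of that last step, assuming a swarm manager node (the subnet, network name, and service name are all illustrative):

```shell
# On a swarm manager, create a user-defined overlay network.
docker network create --driver overlay --subnet 10.0.9.0/24 my-overlay

# Attach a service to it; its replicas can communicate over the
# overlay even when scheduled on different Docker hosts.
docker service create --name app --network my-overlay --replicas 2 nginx
```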

The next local network driver that we're going to discuss is called MACVLAN. This driver creates networks that allow containers to each have their own IP and MAC addresses, and to be attached to a non-Docker network. What that means is that in addition to the container-to-container communication you get with bridge and overlay networks, with MACVLAN networks you are also able to connect with VLANs, VMs, and other physical servers. Said another way, the MACVLAN driver allows you to get your containers onto existing networks and VLANs. A MACVLAN network has to be created on each Docker host where you will run containers that need to connect to your existing networks. What's more, you will need a different MACVLAN network created for each VLAN you want containers to connect to. While using MACVLAN networks sounds like the way to go, there are two important challenges to using them. First, you have to be very careful about the subnet ranges you assign to the MACVLAN network. Containers will be assigned IPs from your range without any consideration of the IPs in use elsewhere. If you have a DHCP system handing out IPs that overlap with the range you gave to the MACVLAN driver, it can easily cause duplicate IP scenarios. The second challenge is that MACVLAN networks require your network cards to be configured in promiscuous mode. This is usually frowned upon in on-premises networks and is pretty much forbidden in cloud-provider networks such as AWS and Azure, so the MACVLAN driver will have very limited use cases.
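A sketch of creating a MACVLAN network, assuming a host interface named eth0 and an existing 192.168.1.0/24 physical network (both are illustrative; substitute your own values):

```shell
# Create a MACVLAN network bound to the host interface eth0.
# The subnet and gateway must match the existing physical network.
# Use --ip-range to carve out addresses your DHCP server will never
# hand out, to avoid the duplicate-IP scenario described above.
docker network create --driver macvlan \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  --ip-range 192.168.1.192/27 \
  -o parent=eth0 my-macvlan

# A container on this network gets its own IP and MAC on the LAN.
docker run --rm --network my-macvlan alpine ip addr
```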

There is a lot of information covered in this section on local or built-in network drivers. Don't despair! They are much easier to create and use than this wealth of information seems to indicate. We will go into creating and using them shortly in the Creating Docker networks section, but next, let's have a quick discussion about remote (also known as third-party) network drivers.
