Networking with Docker: Don’t settle for the defaults.

Behold the Docker
Ship the full environment
All problems solved
 – ancient developer haiku

That was a bit of Docker humor for you, but onto the true purpose of this blog post: Networking with Docker!

Docker provides many types of isolation, one of which is network isolation. As a network monitoring company, we take great interest in this topic.

In the Docker world, networking is extremely important. It’s the primary way containers communicate with other isolated processes without writing to disk and without directly invoking each other’s binaries. Most networking configurations are run-time options that can be controlled via either Docker or Docker Compose.

Docker has two types of networks: single-host and multi-host. For this topic, we will focus on single-host virtual networks – multi-host container networking pertains more to software like Kubernetes or Docker Swarm.

Docker Networking Basics

Generally, each container has a loopback interface, a private interface on the container, a virtual interface on the host, and an IP address. Each virtual interface on the host communicates with a Docker bridge interface that acts as the router for containers.
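On a running Docker host you can see these pieces with standard tooling. A quick sketch (interface and network names vary per host):

```shell
# List Docker's networks; "bridge" is the default single-host bridge
docker network ls

# Inspect the default bridge to see its subnet and attached containers
docker network inspect bridge

# On the host, the bridge typically appears as docker0, with one
# veth* virtual interface per running container attached to it
ip addr show docker0
```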

Docker Network Types

There are four different types of network configurations for containers, listed here from most secure/restrictive to most open/unrestricted:

  1. Closed
  2. Bridged
  3. Joined
  4. Open

1. Closed Containers

Closed containers are completely offline, providing only a loopback interface. While it may not seem useful on the surface, closed containers actually have their place. For example, at Netbeez we use the sidecar logging pattern. These sidecar containers are great candidates to run with closed networking. They require no network access, only mounting in volumes and running tail.

Another container that runs in a closed network uses dmidecode, which needs access to low-level system information. We consider this container a security risk because it requires the sys_rawio Linux capability and access to /dev/mem to function properly. By removing all network access (among other options to harden the container), we can limit the risk this process poses to the system.
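A minimal sketch of running a sidecar-style container with no network (the image, volume path, and log file are illustrative):

```shell
# --network none gives the container only a loopback interface
docker run -d --network none \
  -v /var/log/myapp:/logs:ro \
  alpine tail -f /logs/app.log
```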

2. Bridged Containers

Bridged containers are the most popular networking type as well as the Docker default. They are a great balance between closed networks (which are clearly too restrictive for most use cases) and joined networks. By default, bridged containers are all on the same network bridge. All containers can see each other and have outbound access.
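Nothing special is required to get this behavior; it is the default (image name illustrative):

```shell
# No --network flag: the container joins the default bridge, can see
# other containers on that bridge, and has outbound access
docker run -d --name web nginx
```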

User-Defined Bridges 

User-defined bridges isolate bridged containers from each other. At Netbeez, we have a back-network, a front-network and a mid-network. Containers in the back-network cannot communicate with containers on the front-network. From an organizational standpoint, both of these networks are separated by our mid-network. Not only does this improve security, but it’s also somewhat self-documenting. From a high level, we can easily see which containers talk to each other within a docker-compose.yml.
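A docker-compose sketch of this tiered layout (service and image names are illustrative, not our actual configuration):

```yaml
services:
  web:
    image: myorg/web          # front tier, reachable from mid
    networks: [front-network, mid-network]
  api:
    image: myorg/api          # bridges mid and back tiers
    networks: [mid-network, back-network]
  db:
    image: myorg/db           # back tier only; web cannot reach it
    networks: [back-network]

networks:
  front-network:
  mid-network:
  back-network:
```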

Internal Networks

Using the internal network option allows a container to communicate with other containers on the same network, but no outbound access is allowed. For example, I may be able to ping our Ruby on Rails container but not google.com.
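From the CLI, this is a flag on network creation (network and image names are illustrative):

```shell
# Create a network with no outbound access
docker network create --internal backend

# Containers on it can reach each other, but not the internet
docker run -d --network backend --name rails myorg/rails
```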

User-Defined + Internal

Specifying an internal user-defined bridge network is as close as a container can get to being a closed container, without being a closed container. It allows no outbound access but allows inter-container communication to containers on the same network bridge.
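In docker-compose form, that combination is one key on a user-defined network (names illustrative):

```yaml
services:
  app:
    image: myorg/app
    networks: [private]
  worker:
    image: myorg/worker
    networks: [private]

networks:
  private:
    internal: true   # peers on "private" can talk; no outbound access
```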

3. Joined Containers

Joined containers have a network stack in common. It’s similar to running multiple services on a host from the networking standpoint, except there is still host isolation. These types of containers are not used by Netbeez. Joined containers still have their use cases though. For example, it may be desirable to monitor the internal network traffic of one container from another.
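A sketch of the traffic-monitoring use case (image names are illustrative; netshoot is a popular troubleshooting image):

```shell
# Start a primary container
docker run -d --name app myorg/app

# Join a second container to app's network stack and watch its traffic
docker run --rm --network container:app \
  nicolaka/netshoot tcpdump -i eth0
```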

4. Open Containers

Open containers are not recommended. This type of container runs on the host’s network stack. An open container that binds to port 80, binds to the host’s port 80. Considering that containers run as root by default (which should always be changed anyway), this is extremely dangerous. This removes all forms of network isolation. There are not many use cases in which an open container would be required.
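For completeness, this is what an open container looks like (shown so it can be recognized, not recommended):

```shell
# --network host: the container shares the host's network stack.
# A process binding :80 inside the container binds the host's :80.
docker run -d --network host nginx
```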

Link Aliases

Now that we have all of these containers with randomly assigned IP addresses, we need a way for the containers to find each other. Containers could be assigned static IP addresses, but that’s a lot of work for a messy solution. Thankfully, Docker provides link aliases.

Link aliases allow containers with dependencies on each other (for example, Rails may depend on MySQL) to access each other by link name. Really, it’s an efficient way to assign hostnames to containers for easy access.
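A docker-compose sketch of the Rails/MySQL example (image names illustrative; note that links are considered a legacy feature in current Docker, where service names on a user-defined network resolve automatically):

```yaml
services:
  rails:
    image: myorg/rails
    links:
      - mysql:db     # reachable inside the rails container as hostname "db"
  mysql:
    image: mysql:8
```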

Inbound Access

Closed containers cannot receive any inbound requests at all, while open containers are exposed to inbound access by default. Bridged and joined containers cannot be reached from the outside world by default.

To allow inbound access, a container port must be mapped to a host port at runtime. This can either be done one-off via Docker or written as a docker-compose configuration. For example, a web-server could listen on 9586 on its private interface. Then Docker Compose can remap that to 443 on the host.
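The web-server example as a compose fragment (names illustrative):

```yaml
services:
  web:
    image: myorg/web    # listens on 9586 on its private interface
    ports:
      - "443:9586"      # host port 443 maps to container port 9586
```

The one-off CLI equivalent is the `-p` flag, e.g. `docker run -p 443:9586 myorg/web`.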

Linux Capabilities

Linux capabilities, a large topic outside the scope of this post, are a way to deny or allow certain privileges/actions, even affecting the root user. These privileges can be set on individual containers. In most cases, all Linux capabilities can be removed from a container with some reorganization effort. This RedHat article is a good primer on Docker and Linux capabilities.

net_bind_service Capability

This capability allows binding to privileged ports. Dropping this capability from a container prevents it from binding to ports lower than 1024 on the container network interface.

If a container that binds to a privileged port has this capability dropped, the container will crash. Therefore, the internal container process needs to be reconfigured to listen on a non-privileged port. Then this non-privileged port on the container can be remapped to a privileged port on the host with a docker-compose configuration change. With five minutes of work, we eliminated unnecessary capabilities.
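Those two steps together, as a compose sketch (names and ports illustrative):

```yaml
services:
  web:
    image: myorg/web     # reconfigured to listen on 8080 internally
    cap_drop:
      - NET_BIND_SERVICE # container can no longer bind ports below 1024
    ports:
      - "80:8080"        # host's privileged port 80 maps to unprivileged 8080
```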

Additional Capabilities

There are many other Linux capabilities. Some pertain to networking, but most don’t. net_bind_service is probably the capability most relevant to networking, and it’s very easy to drop and fix. Netbeez drops all capabilities and opts in as needed. Very few of our containers have any capabilities.
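The drop-all, opt-in pattern is a two-key compose sketch (the added capability is purely illustrative):

```yaml
services:
  app:
    image: myorg/app
    cap_drop: [ALL]      # deny every capability by default...
    cap_add: [CHOWN]     # ...then opt back in only to what's needed
```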

TL;DR

Docker gives us networking isolation and it should always be used to its fullest extent. Different options can be combined in countless ways to create the perfect level of isolation. Don’t just accept the defaults.