Docker networking enables isolated, secure, and flexible communication between containers, the host, and external networks. At its core, a Docker network is a logical construct that defines how containers exchange data — whether via IP addressing, DNS resolution, or port mapping.
Default Network Behavior
By default, Docker assigns all newly launched containers to the built-in bridge network unless explicitly overridden. This network uses a virtual Ethernet bridge (typically named docker0) managed by the Docker daemon, allowing containers on the same bridge to communicate directly using IPv4 addresses while remaining isolated from those on other networks.
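A quick way to confirm this is to inspect a freshly started container's network attachments; a minimal sketch (the container name probe is illustrative):

docker run -d --name probe alpine sleep infinity
docker inspect --format '{{ json .NetworkSettings.Networks }}' probe   # shows a single "bridge" entry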
Built-in Network Drivers
Docker provides several preconfigured network drivers:
- none: Disables all networking for the container; no interfaces except lo.
- host: Bypasses Docker's network namespace isolation; the container shares the host's network stack directly.
- bridge: The default driver. Uses software-based bridging to connect containers on the same local network segment. Supports subnet assignment, gateway routing, and basic inter-container reachability.
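The driver is selected per container with the --network flag. A quick sketch of the three built-ins on a Linux host, using throwaway containers that are removed on exit:

docker run --rm --network none alpine ip addr   # only the loopback interface lo
docker run --rm --network host alpine ip addr   # the host's own interfaces
docker run --rm alpine ip addr                  # default bridge: lo plus eth0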
Advanced drivers like overlay, ipvlan, and macvlan enable multi-host networking, fine-grained IP address control, and L2-level MAC address management respectively — but are beyond the scope of basic bridged setups.
Practical Demonstration with Alpine Containers
Start two lightweight containers using the minimal alpine image:
docker run -d --name net-node-a alpine sleep infinity
docker run -d --name net-node-b alpine sleep infinity
Verify they reside on the default bridge network:
docker network ls
Output confirms the presence of the built-in bridge network (network IDs will differ on your host):
NETWORK ID     NAME      DRIVER    SCOPE
c8e2262fa6ef   bridge    bridge    local
5050dfe48c19   host      host      local
ac4d3767e763   none      null      local
Inspecting the bridge reveals its configuration:
docker network inspect bridge
The output shows:
- Subnet: 172.17.0.0/16
- Gateway: 172.17.0.1
- Container assignments: net-node-a → 172.17.0.2/16, net-node-b → 172.17.0.3/16
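When scripting, individual fields can be extracted with a Go template instead of parsing the full JSON output; for example, pulling the subnet out of the default bridge:

docker network inspect bridge --format '{{ range .IPAM.Config }}{{ .Subnet }}{{ end }}'
# 172.17.0.0/16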
Validating Connectivity
Open a shell in one container and examine its interface configuration (docker exec is used here rather than docker attach, since attaching would connect to the container's sleep process instead of a shell):
docker exec -it net-node-a sh
Inside the container, run:
ip addr show eth0
This displays the assigned IPv4 address and MAC, confirming integration into the bridge network.
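The same check can be run non-interactively from the host, which is convenient for scripting:

docker exec net-node-a ip addr show eth0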
Test reachability:
- External connectivity: ping -c 3 google.com
- Inter-container ping: ping -c 3 172.17.0.3 (from net-node-a to net-node-b)
Both succeed, confirming that the default bridge permits both inter-container and outbound traffic.
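The same probes can also be driven from the host in one line each, assuming the containers from above are still running:

docker exec net-node-a ping -c 3 google.com
docker exec net-node-a ping -c 3 172.17.0.3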
Creating and Using a Custom Bridge Network
To enforce network segmentation, define an isolated bridge:
docker network create --driver bridge isolated-net
This generates a new network with a distinct subnet (e.g., 172.19.0.0/16) and gateway (172.19.0.1).
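Docker picks the next free address block automatically; if a deterministic layout is needed, the subnet and gateway can be pinned at creation time. A minimal sketch (the name pinned-net and the 172.20.0.0/16 range are illustrative):

docker network create --driver bridge \
  --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  pinned-net

Launch containers attached exclusively to isolated-net: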
docker run -d --name service-alpha --network isolated-net alpine sleep infinity
docker run -d --name service-beta --network isolated-net alpine sleep infinity
docker run -d --name legacy-app alpine sleep infinity
Now verify isolation:
- service-alpha can ping service-beta (same subnet: 172.19.0.2 ↔ 172.19.0.3)
- service-alpha cannot reach legacy-app (different subnets: 172.19.0.2 vs 172.17.0.2)
This demonstrates effective layer-3 segmentation without hand-written iptables rules or manual routing: Docker installs the inter-network isolation automatically.
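A convenient side effect of user-defined bridges is Docker's embedded DNS: container names resolve automatically on the shared network, so the same checks can use names rather than raw IPs:

docker exec service-alpha ping -c 3 service-beta   # resolves via the embedded DNS server
docker exec service-alpha ping -c 3 legacy-app     # fails: the name is not resolvable across networks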
Proxy Configuration for Outbound Traffic
When containers require HTTP(S) proxy access (e.g., in restricted corporate environments), inject proxy settings via environment variables:
docker run -d \
--env HTTP_PROXY="http://10.0.2.2:8080" \
--env HTTPS_PROXY="http://10.0.2.2:8080" \
--name proxied-service \
alpine sleep infinity
These variables are honored by most package managers (apk, apt) and runtime tools (curl, wget) within the container.
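As an alternative to per-run flags, the Docker client can inject these variables into every container it creates via ~/.docker/config.json. A minimal sketch, with the proxy address carried over from above and illustrative noProxy values:

{
  "proxies": {
    "default": {
      "httpProxy": "http://10.0.2.2:8080",
      "httpsProxy": "http://10.0.2.2:8080",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}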