sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# If the kernel was compiled with CONFIG_IKCONFIG_PROC:
zcat /proc/config.gz | grep CONFIG_VIRTIO_NET

# On a Debian/Ubuntu guest that has the config in /boot:
grep CONFIG_BRIDGE /boot/config-$(uname -r)

# Check what modules are loaded right now:
lsmod
CONFIG_VIRTIO_NET → guest NIC driver. Without it, the VM has no network at all.
CONFIG_BRIDGE → Linux bridge (docker0). Without it, Docker cannot create a bridge interface.
CONFIG_VETH → virtual ethernet pairs. Without it, containers have no host-side interface.
CONFIG_NETFILTER → the entire packet filtering framework. Without it, no iptables, no NAT.
CONFIG_NF_TABLES → nf_tables subsystem (modern iptables backend). Missing means iptables-nft fails with EPROTONOSUPPORT.
CONFIG_IP_NF_IPTABLES → x_tables-based IPv4 tables (legacy iptables backend). Missing means iptables-legacy fails.
CONFIG_NF_NAT → NAT support (MASQUERADE, DNAT). Without it, no port publishing.
CONFIG_NF_CONNTRACK → stateful connection tracking. NAT depends on it: the NAT mappings live in the conntrack table, so without it NAT cannot work at all.
CONFIG_BRIDGE_NETFILTER → lets iptables see bridged traffic. Without it, bridged containers bypass NAT entirely.
CONFIG_CGROUPS → control group framework. Docker needs this to exist before it will start.
CONFIG_CGROUP_DEVICE → device access control per container.
CONFIG_CGROUP_NET_PRIO → network priority per cgroup.
CONFIG_INET → basic IPv4 support. Catastrophic if missing.
CONFIG_IPV6 → IPv6. Some tooling breaks without it even if you're not using IPv6 addresses.
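The checklist above can be automated. A minimal sketch, assuming an uncompressed config file and treating both =y and =m as present (the REQUIRED list and the check_config name are illustrative, not standard tooling):

```shell
# Illustrative list of options to require; trim or extend as needed.
REQUIRED="CONFIG_VIRTIO_NET CONFIG_BRIDGE CONFIG_VETH CONFIG_NETFILTER \
CONFIG_NF_TABLES CONFIG_NF_NAT CONFIG_NF_CONNTRACK CONFIG_BRIDGE_NETFILTER \
CONFIG_CGROUPS"

check_config() {
    # $1 = path to an uncompressed kernel config file
    missing=""
    for opt in $REQUIRED; do
        # built-in (=y) and loadable module (=m) both count as present
        if ! grep -Eq "^${opt}=(y|m)$" "$1"; then
            missing="$missing $opt"
        fi
    done
    if [ -n "$missing" ]; then
        echo "missing:$missing"
        return 1
    fi
    echo "ok"
}
```

Point it at /boot/config-$(uname -r) directly, or zcat /proc/config.gz to a temp file first.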
#!/bin/sh
set -e

mount -t proc none /proc
mount -t sysfs none /sys
mount -t devtmpfs none /dev
mount -t tmpfs none /run
mount -t tmpfs none /tmp

# cgroup v2 unified hierarchy (for kernels 5.8+ and modern Docker)
mount -t cgroup2 none /sys/fs/cgroup

# IP forwarding — needed if containers want to reach the internet
echo 1 > /proc/sys/net/ipv4/ip_forward

# configure the guest NIC
ip addr add 192.168.0.2/24 dev eth0
ip link set eth0 up
ip route add default via 192.168.0.1

exec /usr/bin/dockerd --host unix:///run/docker.sock
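If dockerd still refuses to start, it helps to confirm the script's mounts actually happened. A verification sketch (check_mounts is a hypothetical helper; pass it /proc/mounts on a live guest, and note the required list simply mirrors the init script above):

```shell
check_mounts() {
    # $1 = path to a file in /proc/mounts format
    # each entry is "fstype mountpoint", matching the init script's mounts
    for want in "proc /proc" "sysfs /sys" "devtmpfs /dev" "tmpfs /run" "cgroup2 /sys/fs/cgroup"; do
        fstype=${want% *}
        mnt=${want#* }
        # /proc/mounts columns: device mountpoint fstype options dump pass
        if ! awk -v t="$fstype" -v m="$mnt" '$3 == t && $2 == m { found = 1 } END { exit !found }' "$1"; then
            echo "missing: $fstype on $mnt"
            return 1
        fi
    done
    echo "all mounts present"
}
```

On a live guest: check_mounts /proc/mounts.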
container process (172.17.0.2)
└─ eth0 (veth1, inside container net namespace)
     ↕ veth pair — kernel memory copy
vethXXXXXX (veth0, host end, in guest net namespace)
└─ docker0 bridge (172.17.0.1, Layer-2 switch)
   └─ routing + netfilter FORWARD + MASQUERADE
      └─ eth0 (virtio-net, guest uplink)
           ↕ virtio ring buffer
TAP device (host kernel)
└─ host routing + host MASQUERADE
   └─ host physical NIC → internet
# NAT table
$ iptables -t nat -L POSTROUTING --line-numbers
Chain POSTROUTING (policy ACCEPT)
num target prot opt source destination
1 MASQUERADE all -- 172.17.0.0/16 !172.17.0.0/16

$ iptables -t nat -L DOCKER --line-numbers
Chain DOCKER (2 references)
num target prot opt source destination
1 RETURN all -- anywhere anywhere
2 DNAT tcp -- anywhere anywhere tcp dpt:5432 to:172.17.0.2:5432

# Filter table
$ iptables -L FORWARD --line-numbers
Chain FORWARD (policy DROP)
num target prot opt source destination
1 DOCKER-USER all -- anywhere anywhere
2 DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
3 ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
4 DOCKER all -- anywhere anywhere
5 ACCEPT all -- anywhere 172.17.0.0/16
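The DNAT rules are easier to audit from iptables-save output than from -L listings. A parsing sketch (list_published is a made-up helper, not Docker tooling; feed it the output of iptables-save -t nat):

```shell
list_published() {
    # reads iptables-save-format rules on stdin,
    # prints "hostport -> containerdest" for each DNAT rule
    awk '/-j DNAT/ {
        port = ""; dest = ""
        for (i = 1; i <= NF; i++) {
            if ($i == "--dport") port = $(i + 1)
            if ($i == "--to-destination") dest = $(i + 1)
        }
        if (port != "" && dest != "") print port " -> " dest
    }'
}
```

Usage: iptables-save -t nat | list_published.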
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
# On the host
ip link show tap0 # is the TAP up?
ip addr show tap0 # does it have an IP?
ping [guest IP] -c3 # can the host reach the guest?
# Inside the guest
ip link show eth0 # is virtio-net up?
ip addr show eth0 # does it have an IP?
ip route show # is there a default route?
ping 8.8.8.8 -c3 # can the guest reach the internet?
cat /proc/sys/net/ipv4/ip_forward # should be 1
mount | grep cgroup # is cgroup filesystem mounted?
ls /sys/fs/cgroup # can Docker see the cgroup hierarchy?
mount | grep "tmpfs on /run" # is /run a tmpfs?
ls /run/docker.sock # does the socket exist?
ip link show docker0 # does the bridge exist?
bridge link show # are any veth peers attached?
ip addr show docker0 # is 172.17.0.1 assigned?
docker network inspect bridge # what does Docker think is happening?
iptables --version # nf_tables or legacy?
iptables -t nat -L DOCKER # are DNAT rules present?
iptables -t nat -L POSTROUTING # is MASQUERADE present?
iptables -L FORWARD | grep DOCKER # are forward rules present?
conntrack -L 2>/dev/null | head -20 # are connections being tracked?
cat /proc/net/nf_conntrack | wc -l # how many entries in the table?
sysctl net.netfilter.nf_conntrack_max # what's the limit?
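Those two numbers are worth comparing directly: when the table reaches nf_conntrack_max, new NATed connections are dropped. A small sketch (the conntrack_usage name and the 90% threshold are arbitrary choices for this example):

```shell
conntrack_usage() {
    # $1 = current entry count, $2 = nf_conntrack_max
    pct=$(( $1 * 100 / $2 ))
    if [ "$pct" -ge 90 ]; then
        echo "WARN: conntrack table ${pct}% full"
    else
        echo "ok: ${pct}%"
    fi
}

# on a live system:
# conntrack_usage "$(wc -l < /proc/net/nf_conntrack)" "$(sysctl -n net.netfilter.nf_conntrack_max)"
```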
# If /proc/config.gz exists:
zcat /proc/config.gz | grep -E "CONFIG_(BRIDGE|VETH|NF_TABLES|NF_NAT|NF_CONNTRACK|BRIDGE_NETFILTER|VIRTIO_NET)"

# Expected output for a Docker-capable kernel:
CONFIG_BRIDGE=y
CONFIG_VETH=y
CONFIG_NF_TABLES=y # or m, if the module is loadable
CONFIG_NF_NAT=y
CONFIG_NF_CONNTRACK=y
CONFIG_BRIDGE_NETFILTER=y
CONFIG_VIRTIO_NET=y
- Guest has no default route. Ping 8.8.8.8 and nothing happens. ip route show inside the guest shows no default.
- TAP device is down on the host. ip link show tap0 shows DOWN. Bring it up with ip link set tap0 up.
- ip_forward is disabled. Packets from the guest arrive at the host's TAP interface and go nowhere. The guest can ping the host's TAP IP but not anything beyond it.
- Missing MASQUERADE rule. Guest can reach the host but not the internet. iptables -t nat -L POSTROUTING shows nothing relevant.

- Clone the Linux source at the version Firecracker targets. Firecracker's repo documents this under resources/.
- Start from Firecracker's reference config (resources/guest_configs/microvm-kernel-x86_64-*.config).
- Enable the missing options via make menuconfig. Each one shows as y (built-in), m (loadable module), or not set.
- Compile: make vmlinux -j$(nproc).
- Replace the vmlinux in your boot config.
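The enable step doesn't have to go through menuconfig: scripts/config, which ships in the kernel source tree, can toggle options non-interactively. A sketch, assuming you are inside the source tree with the reference config already copied to .config:

```shell
# enable the options this setup needs
scripts/config --enable CONFIG_BRIDGE \
               --enable CONFIG_VETH \
               --enable CONFIG_NF_NAT \
               --enable CONFIG_NF_CONNTRACK \
               --enable CONFIG_BRIDGE_NETFILTER

make olddefconfig        # resolve any newly exposed dependencies
make vmlinux -j"$(nproc)"
```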