In certain scenarios, we may need to set up a chained VPN structure: first connecting internal devices to a self-hosted WireGuard server (e.g., a personal VPS), then routing all traffic through that server to another VPN (a corporate, school, or commercial VPN service) before reaching the internet. Building this “VPN over VPN” network architecture offers the following clear advantages:
- Robust Privacy Protection: Internal devices' data flows are first encrypted through a self-hosted WireGuard tunnel to your personal VPS. The VPS then forwards them through a corporate or commercial VPN before finally exiting to the internet via that VPN's IP address. This completely avoids using the VPS's own IP address to access target networks, greatly reducing the risk of your personal IP being tracked or leaked.
- Centralized, Convenient Management: All client devices connect to a single WireGuard service, eliminating the need to configure each device individually. More importantly, it neatly bypasses the device-count limits set by many commercial VPN providers: one subscription, all devices covered.
- Flexible & Effortless Switching: Because the corporate or commercial VPN configuration only needs to be changed once on your self-hosted WireGuard server, downstream clients (phones, laptops, etc.) need no extra adjustments. This makes switching or reconfiguring the upstream VPN much quicker and easier, greatly improving day-to-day maintenance.
- A Private Virtual LAN for Cross-Device Communication: Once all devices connect to your self-hosted WireGuard server, they automatically receive addresses in a unified VPN virtual LAN (e.g., 10.8.0.0/16). Devices within this network can communicate directly with each other without additional setup.
This convenience means that no matter where you are, you can easily achieve:
- Remote SSH Login: Access servers at home or in the office at any time.
- Private NAS Media Access: Seamlessly access your home NAS media files while on the go.
- Remote Work: Securely access internal corporate network resources anytime, anywhere.
Solution Overview
Through my exploration and practice, I have developed a complete chained VPN solution. It is fully Docker-based, and the core components are:
- gluetun: A Docker image that embeds OpenVPN/WireGuard clients and can connect to various VPN service providers. In this example, we configure it to provide VPN gateway functionality within a Docker network.
- wg-easy: A streamlined WireGuard Docker image with a built-in web interface, making it easy to create/manage WireGuard configuration files. Downstream devices connect to this service via a WireGuard client, and all traffic is forwarded to the gluetun container, which then sends it to the upstream VPN.
The advantage of this solution lies in its entirely containerized deployment, avoiding pollution of the server/VPS environment. It is also quite stable—during my personal usage, I did not notice any significant latency, stuttering, or disconnection issues, and performance was good (tested on a typical home bandwidth). Additionally, this solution now supports IPv6, allowing devices on IPv4-only networks to tunnel and access IPv6 resources.
In terms of network topology, the entire system only requires a single Docker bridge network:
- vpn: Used as the “VPN network” between gluetun and wg-easy. gluetun acts as a gateway within this network, and all traffic from wg-easy goes through this network to gluetun, which then forwards it to the upstream VPN.
[Client Device]
↓ WireGuard (connects in)
[wg-easy Container] ——→ Docker Network vpn ——→ [gluetun Container] ——→ Upstream VPN Provider
In this setup, wg-easy listens on UDP port 51820. After a client (your phone or computer) connects, the wg-easy container forwards traffic through the Docker network vpn to the gluetun container, which then uses WireGuard/OpenVPN to connect to the upstream VPN. All downstream traffic ultimately exits via gluetun's tun0 interface, achieving the chained VPN effect.
Environment & Prerequisites
- A Linux server/VPS that supports Docker and Docker Compose, with a kernel that supports WireGuard and iptables/nftables. Kernel version ≥ 5.10 is recommended.
- Docker and Docker Compose v2+ installed.
- A valid account with a commercial VPN provider that supports OpenVPN/WireGuard.
- The host must allow the CAP_NET_ADMIN and SYS_MODULE capabilities and be able to load the WireGuard kernel module (verify with lsmod | grep wireguard).
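A quick way to sanity-check these prerequisites on the host (exact output varies by distribution):

# Kernel version (WireGuard has been in-tree since 5.6; >= 5.10 recommended)
uname -r
# Confirm the WireGuard module loads
sudo modprobe wireguard && lsmod | grep wireguard
# Confirm Docker and Compose v2 are available
docker --version
docker compose version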
Create a Custom Docker Network
Before deployment, create the Docker bridge network on the host:
docker network create \
--driver bridge \
--subnet 172.32.0.0/16 \
--subnet fd01:beee:beee::/48 \
vpn
Note: If an existing network conflicts with these subnets, adjust them accordingly. This example assumes the Docker bridge supports IPv6 (for demonstration). If you don't use IPv6, omit the related configuration or set disable_ipv6=1.
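If you're unsure whether the subnets are free, you can list existing networks and their subnets first (a quick check; the format string uses Docker's Go templating):

docker network ls
docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)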
Docker Compose Configuration Details
version: "3"
services:
gluetun:
image: qmcgaw/gluetun
cap_add:
- NET_ADMIN
- SYS_MODULE
devices:
- /dev/net/tun:/dev/net/tun
environment:
- VPN_SERVICE_PROVIDER=${VPN_SERVICE_PROVIDER}
- VPN_TYPE=${VPN_TYPE}
- WIREGUARD_PRIVATE_KEY=${WIREGUARD_PRIVATE_KEY}
- WIREGUARD_PRESHARED_KEY=${WIREGUARD_PRESHARED_KEY}
- WIREGUARD_ADDRESSES=${WIREGUARD_ADDRESSES}
- SERVER_REGIONS=${SERVER_REGIONS}
configs:
- source: post-rules.txt
target: /iptables/post-rules.txt
volumes:
- /lib/modules:/lib/modules:ro
- /data/gluetun-ws:/gluetun
sysctls:
- net.ipv4.ip_forward=1
- net.ipv4.conf.all.src_valid_mark=1
- net.ipv6.conf.all.disable_ipv6=0
- net.ipv6.conf.all.forwarding=1
- net.ipv6.conf.default.forwarding=1
networks:
vpn:
ipv4_address: 172.32.0.4
ipv6_address: "fd01:beee:beee::4"
restart: unless-stopped
wg-easy:
image: ghcr.io/wg-easy/wg-easy:15
ports:
- "51820:51820/udp"
networks:
vpn:
ipv4_address: 172.32.0.8
ipv6_address: "fd01:beee:beee::8"
cap_add:
- NET_ADMIN
- SYS_MODULE
sysctls:
- net.ipv4.ip_forward=1
- net.ipv4.conf.all.src_valid_mark=1
- net.ipv6.conf.all.disable_ipv6=0
- net.ipv6.conf.all.forwarding=1
- net.ipv6.conf.default.forwarding=1
volumes:
- /lib/modules:/lib/modules:ro
- /data/wg-easy:/etc/wireguard
restart: unless-stopped
networks:
vpn:
external: true
configs:
post-rules.txt:
content: |
iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
ip6tables -I INPUT -p icmpv6 -j ACCEPT
ip6tables -I OUTPUT -p icmpv6 -j ACCEPT
ip6tables -A FORWARD -i eth0 -o tun0 -j ACCEPT
ip6tables -A FORWARD -i tun0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
ip6tables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
Below is a breakdown of the key configuration items:
- image: qmcgaw/gluetun: A lightweight, Swiss-army-knife-style VPN client that can connect to many VPN providers from inside a container and offer VPN connectivity to other containers. The official gluetun image supports various VPN protocols and providers.
- cap_add: [NET_ADMIN, SYS_MODULE]: Grants the container network administration rights and the ability to load kernel modules (needed by both wg-easy and gluetun). This ensures the container can create a TUN device and set up routing, enabling WireGuard tunneling and routing functionality.
- devices: ["/dev/net/tun:/dev/net/tun"]: Mounts the host's TUN device into the container so gluetun can create its virtual network interface.
- image: ghcr.io/wg-easy/wg-easy:15: The official wg-easy image (adjust the version as needed).
- "51820:51820/udp": Maps the host's UDP port 51820 to the container, allowing downstream devices to connect to the WireGuard service via the host or directly via the Docker network. If you also need to expose wg-easy's web management interface, add "51821:51821/tcp".
The environment variables referenced here can be found in the gluetun official documentation.
The two network configurations below specify fixed IP addresses for each container on the Docker bridge, which are heavily referenced in subsequent wg-easy routing and firewall rules. If you replace these values, make sure to update subsequent configurations accordingly.
networks:
vpn:
ipv4_address: 172.32.0.4
ipv6_address: "fd01:beee:beee::4"
networks:
vpn:
ipv4_address: 172.32.0.8
ipv6_address: "fd01:beee:beee::8"
Explanation of gluetun's Additional Startup Script (post-rules.txt):
# 1. IPv4 forward rules (allow bidirectional forwarding between eth0 and tun0)
iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
# 2. IPv4 NAT masquerade (SNAT) for all packets exiting tun0
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
# 3. IPv6 ICMP allow (ensure Neighbor Discovery Protocol, etc., works properly)
ip6tables -I INPUT -p icmpv6 -j ACCEPT
ip6tables -I OUTPUT -p icmpv6 -j ACCEPT
# 4. IPv6 forward rules (allow bidirectional forwarding between eth0 and tun0)
ip6tables -A FORWARD -i eth0 -o tun0 -j ACCEPT
ip6tables -A FORWARD -i tun0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
# 5. IPv6 NAT masquerade (SNAT) for all IPv6 packets exiting tun0
ip6tables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
With these rules, downstream traffic (from wg-easy) inside the gluetun container is forwarded to the tun0 interface and into the upstream VPN tunnel. Return packets from upstream likewise reach the Docker network, then wg-easy, and finally the client, effectively implementing a VPN gateway.
WG-Easy “Post Up” Script Explanation
In addition to gluetun's network rules, you need to set up "Post Up" and "Post Down" scripts within wg-easy to dynamically add routing and ip rule entries after the WireGuard tunnel is established, ensuring all downstream traffic goes through gluetun. This example targets wg-easy v15, since IPv6 support was introduced in v15. For the v14 configuration, see my GitHub post.
For wg-easy v15.0.0, you need to convert the example below into a single line (semicolon-separated commands) and paste it into the Hooks input box shown in the screenshot; a sketch for producing that single line follows. Alternatively, you can write the script to a file inside the wg-easy container and reference it. Then click "Save" and wait for wg-easy to confirm the configuration. Restart the wg-easy container if needed.
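Assuming you have saved the PostUp script below to a file (postup.sh is a hypothetical name), one way to collapse it into the single semicolon-separated line wg-easy expects is to strip comments and blank lines first (otherwise a # comment would swallow the rest of the joined line) and then join:

grep -v '^\s*#' postup.sh | grep -v '^\s*$' | paste -s -d ';' -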
Configuration Example
# WG-Easy Post Up Script (Example)
# Resolve the interface that holds wg-easy's address (172.32.0.8) on the vpn bridge
VPN=$(ifconfig | grep -B1 172.32.0.8 | grep -o "^\w*")
# Default to DROP all forwarding traffic
iptables -P FORWARD DROP
ip6tables -P FORWARD DROP
# Allow WireGuard UDP port ({{port}} will be replaced by WireGuard port, typically 51820)
iptables -A INPUT -p udp -m udp --dport {{port}} -j ACCEPT
ip6tables -A INPUT -p udp -m udp --dport {{port}} -j ACCEPT
# Allow forwarding between local and WireGuard tunnel
iptables -A FORWARD -i wg0 -j ACCEPT
iptables -A FORWARD -o wg0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s {{ipv4Cidr}} -d {{ipv4Cidr}} -i wg0 -o wg0 -j ACCEPT
ip6tables -A FORWARD -i wg0 -j ACCEPT
ip6tables -A FORWARD -o wg0 -m state --state RELATED,ESTABLISHED -j ACCEPT
ip6tables -A FORWARD -s {{ipv6Cidr}} -d {{ipv6Cidr}} -i wg0 -o wg0 -j ACCEPT
# IPv4 NAT masquerade, then a dedicated routing table (table 200) for downstream traffic
iptables -t nat -A POSTROUTING -s {{ipv4Cidr}} -o $VPN -j MASQUERADE
# Add ip rule so that traffic from {{ipv4Cidr}} uses table 200
ip rule add from {{ipv4Cidr}} table 200
ip route add default via 172.32.0.4 dev $VPN table 200
ip route add 172.32.0.0/16 via 172.32.0.1 dev $VPN table 200
ip route add {{ipv4Cidr}} dev wg0 table 200
# Configure IPv6 routing table (also using table 200)
ip -6 rule add from {{ipv6Cidr}} table 200
ip -6 route add default via fd01:beee:beee::4 dev $VPN table 200
ip -6 route add fd01:beee:beee::/48 via fd01:beee:beee::1 dev $VPN table 200
ip -6 route add {{ipv6Cidr}} dev wg0 table 200
The three variables below are wg-easy template variables, automatically filled in by wg-easy, but users must ensure they are valid:
- {{port}}: The WireGuard listening port, typically mapped to 51820 in Docker.
- {{ipv4Cidr}}: The downstream client IPv4 subnet assigned by wg-easy, e.g., 10.8.0.0/24.
- {{ipv6Cidr}}: The downstream client IPv6 subnet assigned by wg-easy, e.g., fdcc:ad94:bacf:61a4::/64.
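For instance, with {{ipv4Cidr}} = 10.8.0.0/24 and $VPN resolving to eth0 (both values assumptions for illustration), the key IPv4 lines render as:

iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
ip rule add from 10.8.0.0/24 table 200
ip route add default via 172.32.0.4 dev eth0 table 200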
Below is the PostDown script, which simply replaces -A with -D and add with del to remove the rules:
VPN=$(ifconfig | grep -B1 172.32.0.8 | grep -o "^\w*")
iptables -P FORWARD DROP
ip6tables -P FORWARD DROP
iptables -D INPUT -p udp -m udp --dport {{port}} -j ACCEPT
ip6tables -D INPUT -p udp -m udp --dport {{port}} -j ACCEPT
iptables -D FORWARD -i wg0 -j ACCEPT
iptables -D FORWARD -o wg0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -D FORWARD -s {{ipv4Cidr}} -d {{ipv4Cidr}} -i wg0 -o wg0 -j ACCEPT
ip6tables -D FORWARD -i wg0 -j ACCEPT
ip6tables -D FORWARD -o wg0 -m state --state RELATED,ESTABLISHED -j ACCEPT
ip6tables -D FORWARD -s {{ipv6Cidr}} -d {{ipv6Cidr}} -i wg0 -o wg0 -j ACCEPT
iptables -t nat -D POSTROUTING -s {{ipv4Cidr}} -o $VPN -j MASQUERADE
ip rule del from {{ipv4Cidr}} table 200
ip route del default via 172.32.0.4 dev $VPN table 200
ip route del 172.32.0.0/16 via 172.32.0.1 dev $VPN table 200
ip route del {{ipv4Cidr}} dev wg0 table 200
ip -6 rule del from {{ipv6Cidr}} table 200
ip -6 route del default via fd01:beee:beee::4 dev $VPN table 200
ip -6 route del fd01:beee:beee::/48 via fd01:beee:beee::1 dev $VPN table 200
ip -6 route del {{ipv6Cidr}} dev wg0 table 200
Using Multiple Docker Bridges
In my setup, wg-easy connects to multiple Docker bridges—besides the vpn bridge, there are additional bridges dedicated to other services. If you need to attach wg-easy to another bridge network, it’s recommended to keep using the $VPN variable from the above configuration instead of wg-easy’s {{device}} template variable. The $VPN variable automatically binds to the correct interface for each subnet, ensuring that your routing and firewall rules only apply to the intended interface and won’t inadvertently affect other networks.
For a second Docker bridge, you can add the following in your PostUp script:
VPN_IF=$(ifconfig | grep -B1 172.32.0.8 | grep -o "^\w*")
Note that you should replace 172.32.0.8 with the wg-easy container's IP on that specific Docker bridge. You could also adjust the regex to match by subnet rather than a single IP. This way, $VPN_IF resolves to the interface used for that particular subnet. Then update your routing table and firewall rules accordingly to ensure proper isolation and forwarding across each service network:
iptables -t nat -A POSTROUTING -s <NETWORK_CIDR> -o <NETWORK_INTERFACE> -j MASQUERADE
ip6tables -t nat -A POSTROUTING -s <NETWORK_CIDR> -o <NETWORK_INTERFACE> -j MASQUERADE # if this bridge supports IPv6
ip route add <NETWORK_CIDR> via <NETWORK_GATEWAY> dev <NETWORK_INTERFACE> table 200
ip -6 route add <NETWORK_CIDR> via <NETWORK_GATEWAY> dev <NETWORK_INTERFACE> table 200 # if this bridge supports IPv6
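As a concrete sketch, suppose wg-easy is also attached to a hypothetical second bridge 172.33.0.0/16, where the wg-easy container holds 172.33.0.8 and the Docker gateway is 172.33.0.1 (all values illustrative):

# Resolve the interface on the hypothetical second bridge
VPN_IF2=$(ifconfig | grep -B1 172.33.0.8 | grep -o "^\w*")
# Masquerade downstream WireGuard traffic leaving via that bridge
iptables -t nat -A POSTROUTING -s {{ipv4Cidr}} -o $VPN_IF2 -j MASQUERADE
# Route that bridge's subnet via its gateway in table 200
ip route add 172.33.0.0/16 via 172.33.0.1 dev $VPN_IF2 table 200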
For readers who need to provide VPN connectivity to multiple containers but don’t want to attach wg-easy to several separate bridges, there’s a more granular alternative: bind each service that requires a VPN to its own WireGuard client container (for example, using linuxserver/wireguard or a gluetun-like image). Then, use wg-easy (or a similar management UI) to create a separate WireGuard “account” for each service and import the corresponding configuration into that service’s WireGuard client container. This approach offers several advantages:
- Each service has its own WireGuard configuration file, providing stronger traffic isolation.
- You can more easily monitor and restrict an individual service’s network access in wg-easy.
- You avoid using broad iptables rules like iptables -A FORWARD -i eth+ -o tun0, and instead authorize only specific container interfaces.
This model gives you fine-grained control over each container—enabling real-time traffic monitoring or revoking a service’s VPN access at any time. However, running multiple WireGuard client containers will incur additional CPU and memory overhead.
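A minimal docker-compose sketch of this per-service pattern, added under services: (service names, the app image, and paths are illustrative; the exact location where the image expects the WireGuard config varies by image and version):

  wg-client-app1:
    image: linuxserver/wireguard
    cap_add:
      - NET_ADMIN
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    volumes:
      - /data/wg-client-app1:/config   # place the peer config exported from wg-easy here
    restart: unless-stopped
  app1:
    image: nginx   # hypothetical service that should egress via the VPN
    network_mode: "service:wg-client-app1"   # share the client container's network namespace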
Deployment
Create and Load Environment Variables
In the same directory as docker-compose.yml, create a .env file and add (refer to the gluetun documentation mentioned above):
VPN_SERVICE_PROVIDER=XXX
VPN_TYPE=XXX
WIREGUARD_PRIVATE_KEY=YOUR_PRIVATE_KEY
WIREGUARD_PRESHARED_KEY=YOUR_PSK
WIREGUARD_ADDRESSES=XXX
SERVER_REGIONS=XXX
Ensure the Docker Network Exists
As mentioned earlier, ensure that a custom network named vpn has been created.
Start the Containers
Run the following command and watch the logs of the wg-easy and gluetun containers to confirm that both the WireGuard client and server come up correctly.
$ docker compose up -d
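For example:

$ docker compose logs -f gluetun
$ docker compose logs -f wg-easy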
Add a WireGuard Client
- Access the wg-easy web interface at http://<WG-Easy-Container-IP>:51821 (depending on your wg-easy configuration). Alternatively, add a TCP mapping for port 51821 in Docker Compose and browse to http://<Host-IP>:51821, or put it behind an Nginx reverse proxy.
- In the Admin Panel under Hooks, paste the Post Up and Post Down configurations shown above, then save. This step is critical.
- Create a new peer (enter a peer name); wg-easy will generate the corresponding configuration file with AllowedIPs = 0.0.0.0/0, ::/0 and let you download the .conf file.
- Import that configuration file into your local device (Windows/Mac/Linux/phone) and start the WireGuard client.
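On a Linux client, for example, the downloaded file can be activated directly with wg-quick (the filename is illustrative and becomes the interface name):

$ sudo wg-quick up ./laptop.conf
$ sudo wg show   # confirm the handshake succeeded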
Verify the VPN-over-VPN Tunnel
- After connecting via WireGuard on the client, visit https://ipleak.net or https://ifconfig.co to check your exit IP.
- If it shows the upstream commercial VPN IP, the chained VPN tunnel is working correctly.
- You can also run curl ifconfig.co inside the gluetun container to verify the commercial VPN's IP, then repeat it inside the wg-easy container to confirm that the exit IP matches.
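Concretely, assuming both images ship a busybox wget (adjust to whatever tool is available):

$ docker compose exec gluetun wget -qO- https://ifconfig.co
$ docker compose exec wg-easy wget -qO- https://ifconfig.co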
Check Traffic Routing
- Inside the wg-easy container, run ip rule show and ip route show table 200 to confirm that routing table 200 is active and its default route points to 172.32.0.4 (gluetun); sample output follows this list.
- Inside the gluetun container, run iptables -t nat -L -nv to view the corresponding MASQUERADE rules.
- For troubleshooting, use tcpdump -i <interface> in both containers to capture packets and verify traffic flows.
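If the hooks applied cleanly, ip route show table 200 inside wg-easy should contain entries along these lines (the interface name will vary):

default via 172.32.0.4 dev eth0
172.32.0.0/16 via 172.32.0.1 dev eth0
10.8.0.0/24 dev wg0 scope link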
Notes and Considerations
DNS Leakage
If no additional DNS security measures are taken in this setup, there is a risk of DNS leaks: when a container or client uses the host’s or a public DNS server directly, DNS queries can bypass the VPN tunnel and connect upstream directly, exposing the real IP and browsing records. In fact, gluetun creates its own internal DNS server and forces all DNS traffic through it to prevent leaks via the local DNS. However, gluetun by default only listens for DNS requests inside its container, so external containers or hosts cannot access it. This means wg-easy cannot simply forward its DNS traffic to gluetun’s internal DNS service.
To avoid these issues, it is usually necessary to deploy a separate DNS service (e.g., AdGuard Home or Pi-hole) and route all DNS queries through that service. The service then performs upstream resolution through the VPN exit, ensuring that DNS queries also go over the VPN while providing ad blocking and malicious domain filtering.
Deploying AdGuard Home or a Similar DNS Service
Run a standalone AdGuard Home container on the same Docker network (e.g., vpn network). AdGuard Home will listen on UDP/TCP port 53 and offers stronger ad blocking and malicious domain filtering capabilities. For example, you can add the following service to your docker-compose.yml (for demonstration only):
adguard:
image: adguard/adguardhome
restart: unless-stopped
networks:
vpn:
ipv4_address: 172.32.0.16
ipv6_address: "fd01:beee:beee::16"
ports:
- "53:53/udp"
- "53:53/tcp"
- "3000:3000/tcp" # Web interface port
volumes:
- /data/adguard/workdir:/opt/adguardhome/work
- /data/adguard/conf:/opt/adguardhome/conf
This way, AdGuard Home is exposed on the host's port 53 and has a fixed IP (172.32.0.16) on the Docker vpn network. We can then reach this AdGuard container directly from wg-easy, from clients, or from other containers. You can also choose not to expose port 53 and provide DNS service only via the vpn bridge.
Visit AdGuard's web interface (typically at http://<Host-IP>:3000) to verify that the upstream DNS is correctly configured and that ad-blocking policies are working. Then test the DNS path, preferably by running nslookup example.com 172.32.0.16 inside the wg-easy container to ensure resolution from wg-easy to AdGuard Home works.
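For example, assuming the wg-easy image ships busybox nslookup:

$ docker compose exec wg-easy nslookup example.com 172.32.0.16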
After deploying, configuring, and successfully testing AdGuard, we need to adjust the client DNS settings generated by wg-easy, since wg-easy defaults to the DNS servers 1.1.1.1 and 2606:4700:4700::1111. To do this, change the DNS in wg-easy's Admin Panel to the AdGuard container's addresses: 172.32.0.16 and fd01:beee:beee::16.
Note: For devices already added, the DNS configuration will not automatically update to follow the DNS settings in the Admin Panel. You need to manually update the DNS configuration in WireGuard on the device, or adjust this device individually in wg-easy and then re-download and apply the configuration file to the device.
Finally, to strictly limit wg-easy’s external DNS access and prevent DNS leaks, we can add the following instructions in the PostUp hook script:
# Allow packets to AdGuard’s DNS port
iptables -A OUTPUT -o $VPN -d 172.32.0.16 -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -o $VPN -d 172.32.0.16 -p tcp --dport 53 -j ACCEPT
ip6tables -A OUTPUT -o $VPN -d fd01:beee:beee::16 -p udp --dport 53 -j ACCEPT
ip6tables -A OUTPUT -o $VPN -d fd01:beee:beee::16 -p tcp --dport 53 -j ACCEPT
# Reject other network stack DNS queries to avoid leaking to the host
iptables -A OUTPUT -p tcp --dport 53 -j REJECT
iptables -A OUTPUT -p udp --dport 53 -j REJECT
ip6tables -A OUTPUT -p tcp --dport 53 -j REJECT
ip6tables -A OUTPUT -p udp --dport 53 -j REJECT
The corresponding PostDown script is as follows:
iptables -D OUTPUT -o $VPN -d 172.32.0.16 -p udp --dport 53 -j ACCEPT
iptables -D OUTPUT -o $VPN -d 172.32.0.16 -p tcp --dport 53 -j ACCEPT
ip6tables -D OUTPUT -o $VPN -d fd01:beee:beee::16 -p udp --dport 53 -j ACCEPT
ip6tables -D OUTPUT -o $VPN -d fd01:beee:beee::16 -p tcp --dport 53 -j ACCEPT
iptables -D OUTPUT -p tcp --dport 53 -j REJECT
iptables -D OUTPUT -p udp --dport 53 -j REJECT
ip6tables -D OUTPUT -p tcp --dport 53 -j REJECT
ip6tables -D OUTPUT -p udp --dport 53 -j REJECT
IPv6 Support
- If not using IPv6, remove the ipv6_address entries and ip6tables rules from docker-compose.yml, and do not enable IPv6 when creating the network.
- If using IPv6, ensure both the host and the VPN provider support IPv6 tunneling.
Performance & Resources
- WireGuard requires significant CPU resources for high-throughput encryption. Ensure your hardware can handle the expected bandwidth.
Multi-Hop (Extra VPN Layers)
This architecture provides a double-hop tunnel (downstream → wg-easy → gluetun → upstream VPN → internet). If you want additional hops, you can chain another VPN on top of the upstream VPN, but be aware of performance degradation and increased maintenance complexity.
Firewall & Security
By default, the wg-easy container only exposes UDP port 51820. If you need public access, configure the host firewall (iptables/nftables) to only allow traffic from trusted IPs.
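A hedged sketch using Docker's DOCKER-USER chain on the host (the trusted address 203.0.113.10 is a placeholder; note that published container ports traverse the FORWARD path, so plain INPUT rules will not filter them):

iptables -I DOCKER-USER -p udp --dport 51820 ! -s 203.0.113.10 -j DROP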
PostDown Script
To avoid leaving orphaned rules after teardown, be sure to add the corresponding iptables -D, ip rule del, and ip route del commands to wg-easy's PostDown script. Otherwise, restarting the container may create duplicate or conflicting rules.
Network Troubleshooting Example
When a client cannot access the LAN or the Internet, here are some common steps to quickly pinpoint the issue. This is the process I often use. I believe that by following this coarse-to-fine approach, you can effectively identify the root cause of most network connectivity failures.
Below are the specific steps for testing, shown only for IPv4. For IPv6, readers can easily adapt the commands as needed.
Phased Ping Testing (Local)
- ping 10.8.0.1: Verify that the VPN tunnel is established.
- ping 172.32.0.8: Check whether the client can reach the wg-easy container (Docker bridge network).
- ping 172.32.0.4: Confirm connectivity from the bridge network to the gluetun container.
- ping 8.8.8.8: Test whether gluetun can reach the Internet.
If any step fails, inspect the corresponding routing or firewall rules.
DNS Resolution Verification (Local)
After confirming that the IP-level links are working, the next step is to check for DNS issues.
- nslookup example.com: Use the default DNS server.
- nslookup example.com 172.32.0.16: Test via AdGuard, if you have deployed it.
- nslookup example.com 8.8.8.8: Verify whether a public DNS server is reachable.
If you do not get a correct response, it means the DNS request is not reaching the intended server.
Inspecting Inside Containers
If DNS also looks fine, the problem may lie deeper in routing tables or firewall rules. We can inspect from within each container:
docker exec -it <container-name-or-id-of-wg-easy> /bin/bash
docker exec -it <container-name-or-id-of-gluetun> /bin/sh
In the wg-easy/gluetun container:
ip rule show # Check if routing rules are correct
ip route show table 200 # Verify routing table 200
ping -c 2 8.8.8.8 # Test outbound connectivity from the container
ping -c 2 example.com # Test DNS resolution inside the container
iptables-save # Review firewall rules
ip route # Verify the default routing table
Advanced Methods: tcpdump, Routing Table, and Firewall Inspection
If the above simple methods still cannot locate the problem, there may be deeper conflicts in the routing tables or firewall configuration. At this point, use tcpdump -i <interface> to capture packets and observe whether DNS (UDP/TCP 53) or other traffic reaches the container or is being dropped. Then carefully examine both the host and container routing tables (ip rule show, ip route show) for overlapping or conflicting entries, and ensure that ip rule priorities and table entries are taking effect. Finally, review all iptables/ip6tables rules, especially those in the FORWARD, OUTPUT, and nat tables, to confirm that no incorrect or duplicate rules are blocking traffic.
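For example, to watch whether DNS queries actually leave via the expected interface inside the wg-easy container (interface name illustrative; if tcpdump isn't available in the container, run it on the host against the bridge interface):

tcpdump -ni eth0 'udp port 53 or tcp port 53'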
Issues Caused by Updates
Be aware that your problem could also be caused by a container image update. In this case, check whether an automatic update occurred recently, and try rolling back to a previous version to see if the issue disappears. If it does, pin the image to the older version until the underlying issue is resolved.