Always-On Claude Code in Docker: The LAN Exposure You Missed
Your Claude Code container can curl your router's admin page. Here's the exact finding from a real always-on production setup — and the 3-step hardening fix one developer built, named, and shipped.
When you run Claude Code 24/7 inside a Docker container on the default bridge network, that container inherits default gateway access to your host machine's local subnet — including your router admin page, any LAN services, and any device on your home or office network. An autonomous AI agent with --dangerously-skip-permissions set has no internal gate between a task and a raw network call. This post explains exactly why it happens, shows you how to reproduce the exposure in 90 seconds, and walks through the hardening steps that one developer codified into a named wrapper called hermit after months of production always-on use.
TL;DR
Docker bridge networking does not isolate containers from the host LAN. The container's default gateway routes through the host's docker0 interface, which has forwarding enabled to all host subnets, including your local network. To lock it down: (1) create a custom bridge network, (2) add iptables rules in the DOCKER-USER chain to block RFC 1918 egress while preserving internet access, and (3) enforce this via a container launch wrapper so the policy can't accidentally be skipped. If your Claude Code container is running unattended with permission prompts disabled, treating this as optional is a mistake.
The Production Setup That Exposed This
A thread in r/ClaudeAI documented the specific finding: a developer running Claude Code continuously in Docker — with Discord-based remote control for managing tasks away from the desk — discovered that the container could successfully curl the router admin page on the local network. The setup had been in production for months. The exposure wasn't the result of a misconfiguration the developer knew about; it was a consequence of Docker's defaults.
The architecture that surfaced the issue:
[Developer's phone]
↓ Discord bot command
[Docker container: Claude Code, always-on, --dangerously-skip-permissions]
↓ docker0 bridge / default gateway
[Host machine: Linux, connected to LAN + internet]
↓ IP forwarding enabled (required by Docker)
[LAN: 192.168.1.0/24 — router at .1, printers, NAS, smart home devices]
This matters disproportionately in autonomous agent setups compared to interactive sessions. When you're watching each prompt interactively, you notice unusual tool behavior. When the agent runs overnight on a long-horizon task, there is no human watching the outbound connections it — or the code it executes — initiates.
Why Docker Bridge Networking Is Not the Isolation You Think
Docker bridge networking (the default mode for docker run without explicit --network flags) creates a virtual switch called docker0 on the host. Containers get IPs in a private range — typically 172.17.0.0/16. The host machine acts as the default gateway for all container traffic.
Here's what that means concretely for routing:
Container IP: 172.17.0.2
Container gateway: 172.17.0.1 ← the docker0 interface on the host
Host interfaces:
docker0: 172.17.0.1
eth0: 192.168.1.100 ← same LAN subnet as your router
When the container sends a packet to 192.168.1.1 (a common router admin address), it sends it to the gateway at 172.17.0.1. The host has net.ipv4.ip_forward=1 set — Docker requires this and sets it at startup. The packet arrives on eth0, which is directly attached to 192.168.1.0/24. No default firewall rule prevents this.
This is not a Docker bug — bridge networking is designed for containers to reach the internet. But it means bridge networking provides no isolation from the host's LAN. Docker does add iptables rules to protect the host's own services from inbound container connections, but it does not add outbound egress rules to block containers from routing to your local subnet.
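You can confirm both defaults from the host in a few seconds (output shapes vary slightly by distribution):
# Forwarding is on; Docker sets this at startup
sysctl net.ipv4.ip_forward
# Expected: net.ipv4.ip_forward = 1
# The operator-facing chain is empty on a stock install:
# a single RETURN rule and nothing else
sudo iptables -L DOCKER-USER -n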
The --internal flag creates a truly isolated network, but it also cuts off all internet access. That won't work for Claude Code, which needs to reach api.anthropic.com for inference.
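You can see that tradeoff directly. A quick sketch using the public curlimages/curl image (any image with curl works):
# An internal network blocks the LAN -- and everything else
docker network create --internal claude-airgapped
docker run --rm --network claude-airgapped curlimages/curl \
  -s --connect-timeout 3 https://api.anthropic.com/ \
  || echo "no route to the internet either"
docker network rm claude-airgapped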
The accidentalrebel.com post on running AI agents in a box captures the gap well: "Docker isn't a security boundary the way a VM is. But it's enough friction that an AI agent can't accidentally (or intentionally) do something I'd regret." The friction they describe must be explicitly applied — Docker's defaults don't provide it.
How to Reproduce the LAN Exposure in Your Own Setup
Before hardening anything, verify the actual exposure. This takes about 90 seconds from inside your running container:
# Get a shell in your Claude Code container
docker exec -it your-claude-container sh
# Check the default gateway
ip route show default
# Output: default via 172.17.0.1 dev eth0
# Find your host's LAN IP (run this on the host)
# ip addr | grep -E "192\.|10\.|172\."
# Then test reachability from inside the container:
curl -s --connect-timeout 3 http://192.168.1.1/ | head -10
If you receive any HTTP response — a router login page, a redirect, an error from the admin interface — the exposure is confirmed. The developer in the r/ClaudeAI thread got exactly this: a valid HTTP response from the router admin page from inside a bridge-networked container running a Claude Code session.
You can also probe the broader subnet to understand the blast radius:
# Quick sweep of your LAN from inside the container
for i in $(seq 1 254); do
result=$(curl -s --connect-timeout 0.5 -o /dev/null -w "%{http_code}" http://192.168.1.$i/ 2>/dev/null)
[ "$result" != "000" ] && echo "192.168.1.$i responded: HTTP $result"
done
This is the network visibility an autonomous agent has when permission prompts are bypassed. Any code the agent writes and executes — a Python script, a bash one-liner, a test runner — inherits this network access.
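To underline that the exposure isn't a curl quirk, here is the same probe from a runtime the agent might reach for instead (a sketch, assuming python3 is present in your image):
# Same reachability test, no curl involved
python3 - <<'EOF'
import urllib.request
try:
    r = urllib.request.urlopen("http://192.168.1.1/", timeout=3)
    print("router reachable, HTTP", r.status)
except Exception as e:
    print("blocked or unreachable:", e)
EOF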
The Root Cause: IP Forwarding With No Egress Firewall
Two defaults combine to create the exposure:
1. Linux IP forwarding is enabled by Docker. Docker sets net.ipv4.ip_forward=1 on the host as a requirement for bridge networking to function. This allows the kernel to forward packets between interfaces — including from docker0 to eth0.
2. No egress rules exist in DOCKER-USER for LAN traffic. Docker populates iptables chains to allow container traffic out to the internet and to protect host ports from inbound container access. It does not add rules to block containers from reaching the host's LAN subnets. The DOCKER-USER chain — which is the correct place for operator-defined custom rules — is empty by default.
Anthropic's own documentation on securely deploying AI agents flags this class of issue in the context of agent sandboxing: without explicit network restrictions, an agent running in a container can reach any host that the container host itself can reach. For always-on DIY setups, this means the threat model extends beyond filesystem writes to include outbound network calls to local infrastructure.
The 3-Step Container Hardening Fix
This is the approach the hermit wrapper encodes. Apply all three steps — each one alone is insufficient.
Step 1: Create a Custom Bridge Network
Replace the default docker0 bridge with a dedicated named network using an explicit subnet. This gives you a controlled scope for iptables rules and separates your always-on agent container from any other containers sharing the default bridge:
docker network create \
--driver bridge \
--subnet 172.25.0.0/24 \
--gateway 172.25.0.1 \
--opt com.docker.network.bridge.name=claude-bridge \
claude-isolated
Then launch your Claude Code container on this network:
docker run -d \
--name claude-agent \
--network claude-isolated \
--restart unless-stopped \
your-claude-image
This step alone does not block LAN access — it just makes step 2 easier to scope correctly.
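A quick sanity check that the network was created with the expected subnet:
docker network inspect claude-isolated \
  --format '{{ (index .IPAM.Config 0).Subnet }}'
# Expected: 172.25.0.0/24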
Step 2: Add iptables Rules to Block RFC 1918 Egress
Add rules in the DOCKER-USER chain to drop packets from your container subnet to RFC 1918 addresses, while preserving internet access. The DOCKER-USER chain is processed before Docker's own chains and is not overwritten by Docker daemon restarts or docker network operations:
CONTAINER_SUBNET="172.25.0.0/24"
# Block LAN egress from the container subnet
iptables -I DOCKER-USER -s $CONTAINER_SUBNET -d 192.168.0.0/16 -j DROP
iptables -I DOCKER-USER -s $CONTAINER_SUBNET -d 10.0.0.0/8 -j DROP
iptables -I DOCKER-USER -s $CONTAINER_SUBNET -d 172.16.0.0/12 -j DROP
# Allow intra-container traffic on the same subnet. Run this last:
# -I inserts at the head of the chain, so the last rule added is
# evaluated first -- required because 172.25.0.0/24 falls inside
# the 172.16.0.0/12 DROP range
iptables -I DOCKER-USER -s $CONTAINER_SUBNET -d $CONTAINER_SUBNET -j ACCEPT
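Because the ACCEPT must outrank the DROPs, verify the resulting order before trusting it (schematic output; real iptables -L formatting has more columns):
sudo iptables -L DOCKER-USER -n --line-numbers
# Expected order (first match wins):
# 1  ACCEPT  172.25.0.0/24 -> 172.25.0.0/24
# 2  DROP    172.25.0.0/24 -> 172.16.0.0/12
# 3  DROP    172.25.0.0/24 -> 10.0.0.0/8
# 4  DROP    172.25.0.0/24 -> 192.168.0.0/16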
To persist these across reboots on Debian/Ubuntu:
apt-get install -y iptables-persistent
iptables-save > /etc/iptables/rules.v4
If you need to allow specific LAN resources (a local database, an internal API), add explicit ACCEPT rules that sit above the DROP rules in the chain, since iptables stops at the first matching rule. Because -I inserts at the head of the chain, running the ACCEPT command after the DROPs from step 2 places it first:
# Example: allow a specific internal service
iptables -I DOCKER-USER -s $CONTAINER_SUBNET -d 192.168.1.50 -p tcp --dport 5432 -j ACCEPT
# The DROP rules still catch everything else in the subnet
Step 3: Enforce the Policy Via a Launch Wrapper (the hermit Pattern)
The developer in the r/ClaudeAI thread named their wrapper "hermit" precisely because manually remembering network flags is error-prone. Running docker run once without --network claude-isolated re-exposes the container. A launch wrapper makes the secure configuration the only available path:
#!/usr/bin/env bash
# hermit-launch.sh — enforces network isolation for always-on Claude Code
set -euo pipefail
NETWORK_NAME="claude-isolated"
CONTAINER_SUBNET="172.25.0.0/24"
CONTAINER_NAME="${1:-claude-agent}"
IMAGE="${2:-your-claude-image}"
# Ensure the isolated network exists
if ! docker network inspect "$NETWORK_NAME" &>/dev/null; then
echo "[hermit] Creating isolated network $NETWORK_NAME..."
docker network create \
--driver bridge \
--subnet "$CONTAINER_SUBNET" \
--gateway 172.25.0.1 \
--opt com.docker.network.bridge.name=claude-bridge \
"$NETWORK_NAME"
fi
# Ensure egress rules are in place (idempotent check)
if ! iptables -C DOCKER-USER -s "$CONTAINER_SUBNET" -d 192.168.0.0/16 -j DROP 2>/dev/null; then
echo "[hermit] Applying egress firewall rules..."
# Insert the DROPs first and the ACCEPT last: -I puts each new rule
# at the head of the chain, so the last-inserted ACCEPT is evaluated
# before the DROPs (the subnet falls inside 172.16.0.0/12)
iptables -I DOCKER-USER -s "$CONTAINER_SUBNET" -d 192.168.0.0/16 -j DROP
iptables -I DOCKER-USER -s "$CONTAINER_SUBNET" -d 10.0.0.0/8 -j DROP
iptables -I DOCKER-USER -s "$CONTAINER_SUBNET" -d 172.16.0.0/12 -j DROP
iptables -I DOCKER-USER -s "$CONTAINER_SUBNET" -d "$CONTAINER_SUBNET" -j ACCEPT
fi
# Launch the container on the isolated network
echo "[hermit] Launching $CONTAINER_NAME on $NETWORK_NAME..."
docker run -d \
--name "$CONTAINER_NAME" \
--network "$NETWORK_NAME" \
--restart unless-stopped \
"$IMAGE"
echo "[hermit] Done. Container $CONTAINER_NAME is LAN-isolated."
The core principle behind the hermit pattern: the security policy belongs in the launch mechanism, not in the operator's memory. You can't misconfigure what you can't skip. The infralovers.com analysis of sandboxing Claude Code on macOS makes the same point — the goal is to make the secure path the default, not the opt-in.
How to Verify the Fix Worked
After applying all three steps, verify from inside the container:
docker exec -it claude-agent sh
# This should now time out
curl -v --connect-timeout 3 http://192.168.1.1/ 2>&1
# Expected: curl: (28) Connection timed out after 3001 milliseconds
# This should still succeed (Anthropic API reachable)
curl -s --connect-timeout 5 https://api.anthropic.com/v1/messages \
-H "x-api-key: invalid" \
-H "anthropic-version: 2023-06-01" \
-H "content-type: application/json" \
-d '{"model":"claude-sonnet-4-6","max_tokens":1,"messages":[{"role":"user","content":"hi"}]}' \
| python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('error',{}).get('type','ok'))"
# Expected: authentication_error (connection succeeded, API key invalid but LAN is blocked)
If the router curl times out and the Anthropic API returns any JSON response (even an auth error), your isolation is working correctly. The agent can reach Anthropic for inference; your local network is inaccessible.
Add a verification step to your startup sequence:
# Add to hermit-launch.sh after container start
sleep 2
echo "[hermit] Verifying LAN isolation..."
# Note: curl -s alone would also silence the error text we grep for,
# so use -sS to keep errors on stderr (captured here via 2>&1)
RESULT=$(docker exec "$CONTAINER_NAME" curl -sS --connect-timeout 2 http://192.168.1.1/ 2>&1 || true)
if echo "$RESULT" | grep -qi "timed out\|connection refused\|network is unreachable"; then
echo "[hermit] LAN isolation confirmed."
else
echo "[hermit] WARNING: LAN may still be reachable. Review iptables rules."
fi
Also note: iptables rules are not persistent across reboots on most distributions, and docker network rm followed by recreation can require re-applying rules. Automate the rule check as part of your container orchestration rather than relying on manual application.
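One way to automate that check is a systemd oneshot that re-runs the rule block after Docker starts. This is a sketch, not part of the original setup: the unit name and hermit-firewall.sh path are assumptions, and hermit-firewall.sh would contain just the idempotent iptables section of hermit-launch.sh:
# /etc/systemd/system/hermit-firewall.service
[Unit]
Description=Reapply hermit egress rules for Claude Code containers
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/hermit-firewall.sh

[Install]
WantedBy=multi-user.target
Enable it once with systemctl enable --now hermit-firewall.service.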
This connects to a pattern worth understanding: why Claude Code PreToolUse hooks can still be bypassed. Hooks intercept tool calls the agent makes through the SDK. They do not intercept network calls made by code the agent writes and runs — a Python script executing requests.get("http://192.168.1.1") bypasses hooks entirely. Network-level controls are the enforcement layer that applies unconditionally, regardless of how the agent structures its actions.
How Grass Makes This Workflow Better
The three-step hardening process above is correct and will close the LAN exposure. But there's a class of failure it doesn't solve: configuration drift over time.
You apply the iptables rules today. In two months, a kernel update clears the persistence layer and you don't notice until something alerts you. You spin up a second always-on container for a different project and forget to use the hermit wrapper. A teammate sets up the same stack on their server and doesn't know hermit exists.
This is the practical reason to look at Daytona-backed cloud VMs versus DIY container setups. The isolation model is fundamentally different: a Daytona workspace runs in a dedicated VM in the cloud, not in a container on your LAN. There is no 192.168.1.1 reachable from inside the workspace — your router is simply not on the same network as the VM. The isolation is architectural, not rule-based.
Grass provides this as the default environment for always-on Claude Code sessions:
[Your phone]
↓ Grass mobile app (QR scan or saved server)
[Grass cloud VM: Daytona-backed, always-on]
└── Claude Code (your API key, BYOK — Grass never touches it)
└── Network: isolated cloud subnet, no path to your home LAN
↓ outbound to api.anthropic.com only
[Anthropic API]
For the always-on remote-control use case, Grass also addresses the second half of the problem that drives --dangerously-skip-permissions in the first place: handling approval gates remotely. When Claude Code hits a tool call that needs approval, Grass surfaces it as a native modal on your phone — syntax-highlighted preview of what will execute, one tap to allow or deny. That means you get genuinely unattended operation without disabling the permission layer.
This matters for the security posture beyond just LAN exposure. As documented in the AI agent disaster postmortems, the incidents that cause real damage share a common factor: an agent with broad capabilities and no human checkpoint between intent and execution. Keeping approval gates active and accessible remotely is how you avoid that pattern without sacrificing the always-on workflow.
To get started (Grass is recommended for this workflow, not required):
The hermit pattern above works today on any Docker host — you don't need Grass.
If you want the managed path: codeongrass.com has a free tier (10 hours, no credit card). Your Claude Code session runs in a pre-configured Daytona workspace, accessible from your phone, with permission forwarding built in.
FAQ
Can a Docker container really reach my router admin page by default?
Yes. On a Linux host running Docker with the default bridge network (docker0), containers can reach any IP address the host can route to — including 192.168.1.1 or 192.168.0.1, the typical addresses for home router admin interfaces. This is because Docker enables net.ipv4.ip_forward=1 on the host, and no default egress firewall rules exist to block RFC 1918 traffic from container subnets to the host LAN. The finding surfaced in a real r/ClaudeAI production thread with months of always-on use behind it — not a theoretical edge case.
Does this affect Docker Compose setups too?
Yes. Docker Compose creates a named bridge network per project by default (e.g., myapp_default), but that network has the same routing behavior as the default bridge for purposes of LAN access. Containers in a Compose stack can reach the host's LAN subnet unless you explicitly apply egress iptables rules or mark services that don't need internet access with internal: true in the network definition.
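As a sketch, a Compose file that pins the agent to a known subnet (so the DOCKER-USER rules from step 2 apply to it) and marks a no-internet helper as internal might look like this; service and network names are illustrative:
# docker-compose.yml (sketch)
networks:
  claude-isolated:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/24
  no-internet:
    internal: true
services:
  claude-agent:
    image: your-claude-image
    networks:
      - claude-isolated
  local-helper:
    image: your-helper-image
    networks:
      - no-internet
Note that Compose networks get the same bridge routing as a manually created one: the iptables rules remain the enforcement layer.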
Does --network host make this worse?
Significantly. With --network host, the container shares the host's entire network stack — no bridge, no NAT, no translation layer. The container can bind to any port and reach any interface the host has. --network host is sometimes used for convenience in local development, but for an always-on agent container it means Claude Code is effectively running as a privileged network process with full host network access. Don't use it for unattended agent workloads.
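A quick way to see the difference (assuming the alpine image; its busybox build includes the ip applet):
# With host networking the container sees the host's own interfaces
docker run --rm --network host alpine ip addr show
# Output lists eth0, docker0, etc. -- the host's real network stack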
What if my container legitimately needs to reach a LAN resource (local database, internal API)?
Use an allowlist approach rather than a full block. Add explicit ACCEPT rules in DOCKER-USER before the DROP rules for the specific IPs and ports you need:
# Block the LAN range (insert this first)
iptables -I DOCKER-USER -s $CONTAINER_SUBNET -d 192.168.0.0/16 -j DROP
# Allow the specific LAN host and port (insert this last, so -I
# places it above the DROP in the chain)
iptables -I DOCKER-USER -s $CONTAINER_SUBNET -d 192.168.1.50 -p tcp --dport 5432 -j ACCEPT
iptables evaluates rules top to bottom and stops at the first match. Since -I inserts at the head of the chain, the ACCEPT added last sits above the DROP: the specific host matches the ACCEPT first, and everything else in the subnet falls through to the DROP.
Is this specific to Claude Code, or does it affect any container workload?
Any process in a bridge-networked Docker container has this LAN access. The reason it matters more for always-on AI agent setups is that autonomous agents — especially those with --dangerously-skip-permissions set — execute network calls (directly or through code they write) without a human reviewing each one. The combination of unexpected network reach and unchecked autonomous execution is what creates real risk. A cron job in Docker with the same network access is less concerning because its behavior is deterministic and known in advance.