How to Store Your API Key Securely When Running Coding Agents on a VPS
TL;DR
When you run a coding agent like Claude Code on a VPS, the agent inherits your shell environment and has full read access to any .env file you hand it — which means a misconfigured agent or a prompt injection could exfiltrate your API keys. The practical mitigation is a three-level stack: env vars for convenience, a permission-restricted .env file for baseline protection, and a secrets manager (Bitwarden Secrets Manager or Doppler) for anything team-facing or production-adjacent. If you want a platform that sidesteps the "key passes through a third party" problem entirely, Grass uses a BYOK model — your key lives in your VM's environment and never touches Grass infrastructure.
Why does this matter more for agents than for regular scripts?
A regular script reads one hardcoded env var and exits. An AI coding agent is different: it has a shell, it can list files, it can cat your .bashrc, and it executes arbitrary commands as part of its normal workflow. That's the design — it's what makes it useful.
The consequence is that any secret your agent can reach from its working directory or shell environment is effectively readable. A compromised tool call, a malicious package in the repo the agent is working on, or even a confused agent following a prompt injection in a file it reads — any of these can lead to credential exfiltration.
This is not a reason to avoid agents. It is a reason to be deliberate about what your agent can see.
Level 1: Environment variables in .bashrc (convenient, not great)
The most common pattern you'll see in tutorials:
# ~/.bashrc or ~/.zshrc
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
Then you source ~/.bashrc and launch your agent. The agent picks up the key via process.env or os.environ.
What this gets you: It works. The key isn't in a file in your project directory. Most casual attackers who get a web shell into your app won't immediately find it.
What this doesn't get you: If the agent has shell access (and it does), it can trivially read this:
env | grep API_KEY
cat ~/.bashrc | grep KEY
Your agent can do both of those things on your behalf. So can any command it runs. The .bashrc approach is fine for personal, single-user, low-stakes work. It is not a security boundary.
One marginal improvement: Move the exports from ~/.bashrc to ~/.profile, which only login shells source. That keeps the key out of the file an agent is most likely to cat, and shells that never sourced ~/.profile won't have it at all. But any process that inherits the variable — including the agent you launch from a login shell — can still read it with env, so treat this as tidiness, not a security boundary.
Level 2: Dedicated .env file with restricted permissions
Create a dedicated secrets file outside your project tree, owned by your user, not readable by others:
# Create the file
mkdir -p ~/.secrets
touch ~/.secrets/agent-keys.env
chmod 600 ~/.secrets/agent-keys.env
# Add keys
cat > ~/.secrets/agent-keys.env <<'EOF'
ANTHROPIC_API_KEY=sk-ant-...
EOF
Then source it only when you need it, rather than adding it to .bashrc:
# In a launch script or manually before starting the agent
set -a
source ~/.secrets/agent-keys.env
set +a
Or use a wrapper script:
#!/usr/bin/env bash
# ~/bin/launch-agent
set -a
source ~/.secrets/agent-keys.env
set +a
exec claude "$@"
chmod +x ~/bin/launch-agent
What chmod 600 actually protects against: Other users on a shared system. If you're on a $5 VPS that only you SSH into, this mainly keeps web app processes or other services from stumbling onto your key file.
What it does not protect against: The agent itself. Once you source the file and the variable is in the environment, the agent can read it. That's unavoidable — the agent needs the key to call the API.
The real value here is defense-in-depth and auditability. You know exactly where the key lives, you can rotate it in one place, and it's not scattered across shell init files.
Scoping the environment further with env -i
If you want to run an agent with a minimal environment that only contains what you explicitly pass:
env -i \
HOME="$HOME" \
PATH="$PATH" \
ANTHROPIC_API_KEY="$(grep '^ANTHROPIC_API_KEY=' ~/.secrets/agent-keys.env | cut -d= -f2-)" \
claude
This strips inherited environment variables, which reduces what a confused agent can read. In practice it causes friction (some tools expect TERM, LANG, etc.), but for automated non-interactive agent runs in CI-like environments it is worth the setup cost.
Level 3: Secrets manager injection (recommended for teams and production)
For anything beyond personal use — multiple developers, a shared VPS, automated agent pipelines — you want a secrets manager that:
- Stores secrets encrypted at rest with access controls
- Injects secrets at process launch time without writing them to disk
- Lets you rotate keys in one place and audit access
Option A: Bitwarden Secrets Manager (bws run)
Bitwarden has a CLI tool specifically for this. After storing your key in Bitwarden Secrets Manager:
# Install the bws CLI (check the Releases page for the exact asset name for your version and architecture)
curl -L https://github.com/bitwarden/sdk-sm/releases/latest/download/bws-x86_64-unknown-linux-gnu.zip -o bws.zip
unzip bws.zip
sudo mv bws /usr/local/bin/
sudo chmod +x /usr/local/bin/bws
# Authenticate (one time per session, or use BWS_ACCESS_TOKEN env var)
export BWS_ACCESS_TOKEN="your-bws-machine-account-token"
# Run the agent with injected secrets
bws run -- claude
bws run resolves the secrets you've mapped to env var names in the Bitwarden Secrets Manager UI and injects them into the child process's environment. The plaintext key is never written to disk.
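To confirm the injection worked without printing the secret, run a throwaway child command first — a sketch; the variable name must match the mapping you configured in the Secrets Manager UI:

```shell
# Prints "key present" or "key missing" instead of the key itself
bws run -- sh -c 'test -n "$ANTHROPIC_API_KEY" && echo "key present" || echo "key missing"'
```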
You can also pull a specific secret inline:
export ANTHROPIC_API_KEY=$(bws secret get <secret-id> | jq -r '.value')
Option B: Doppler
Doppler has a similar injection model:
# Install
curl -Ls https://cli.doppler.com/install.sh | sudo sh
# Authenticate and configure project
doppler login
doppler setup # run in project directory, links to a Doppler project/config
# Run agent with injected secrets
doppler run -- claude
Your keys live in Doppler's encrypted store, team members each authenticate with their own credentials, and the doppler run wrapper handles injection. Access logs are automatic.
# Expected output when running doppler run -- claude
Doppler: injecting 3 secrets into environment
Option C: pass (GPG-backed, self-hosted)
If you don't want a SaaS dependency:
# Initialize pass store (requires GPG key)
gpg --gen-key
pass init your-gpg-email@example.com
# Store a key
pass insert agents/anthropic-api-key
# Inject at launch time
ANTHROPIC_API_KEY=$(pass show agents/anthropic-api-key) claude
This keeps everything local, but you're now managing GPG key backup and distribution if you have a team.
What about putting keys in a .env file in the project directory?
Don't. This is the most common mistake and the highest-risk pattern for agent workflows.
When Claude Code or any other agent is working in your repo, it reads files to understand context. A .env file in the project root is a natural target — agents scan for configuration files, linters check them, documentation tools index them. Even if you add .env to .gitignore, you're relying on every tool in the chain respecting that.
The Bitwarden team documented this explicitly: coding agents like Claude Code and Cursor can and do read .env files as part of their normal context-gathering behavior. That's not a bug — it becomes a bug when secrets are in there.
If you use a .env file at all, put non-secret config there (log levels, feature flags, API endpoints) and load actual credentials from a secrets manager or the restricted file approach above.
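What a secrets-free .env might look like in practice (values are illustrative):

```shell
# .env — configuration only, no credentials
LOG_LEVEL=info
FEATURE_PREVIEW=false
API_BASE_URL=https://api.example.com
# ANTHROPIC_API_KEY is injected at launch (bws run / doppler run), never stored here
```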
What about using a non-root user with restricted permissions?
This is a good practice regardless of key storage approach. Run your agent as a dedicated user that doesn't have access to other services on the box:
# Create a dedicated user for agent runs
sudo useradd -m -s /bin/bash agent-runner
sudo passwd agent-runner
# Set up keys for that user only
sudo -u agent-runner mkdir -p /home/agent-runner/.secrets
sudo -u agent-runner chmod 700 /home/agent-runner/.secrets
This limits blast radius: if the agent executes something malicious, it runs as agent-runner, not as your main user that has access to other services, SSH keys, or root-adjacent permissions.
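Launching a session as that user then looks like this — a sketch; the secrets file follows the Level 2 layout, under agent-runner's home:

```shell
# Start a login shell as agent-runner, source only its own secrets, exec the agent
sudo -u agent-runner -i bash -c '
  set -a
  source ~/.secrets/agent-keys.env
  set +a
  exec claude
'
```

Your main user's environment, SSH agent, and dotfiles are never visible to the agent process.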
Troubleshooting common issues
Agent can't find ANTHROPIC_API_KEY after you set it in .bashrc
~/.bashrc is only sourced by interactive shells. If you launched the agent from cron, systemd, tmux send-keys, or a plain ssh host 'command', the file was never read and the variable was never set. Verify without echoing the key itself:
test -n "$ANTHROPIC_API_KEY" && echo set || echo unset
claude # if "unset" above, the agent won't see it either
Fix: explicitly export it in the same terminal session before launching, or use a wrapper script that sources the file.
bws run fails with "missing access token"
The BWS_ACCESS_TOKEN env var needs to be set before calling bws run:
export BWS_ACCESS_TOKEN="0.machine-account-token..."
bws run -- claude
Store this token (and only this token) in your .bashrc. It's a machine account credential with scoped permissions, not your actual API key — rotating it doesn't require changing your Anthropic key.
Doppler doppler setup asks for a project but you haven't created one
You need to create the project and config in the Doppler dashboard first, then add your secrets there, then run doppler setup to link the local directory to that project.
Agent reads .env from project root despite your setup
Check if there's a .env file already in the repo you handed the agent:
find /path/to/project -name ".env" -o -name "*.env" 2>/dev/null
If there is, either remove it and replace with .env.example containing only placeholder values, or ensure .gitignore excludes it and audit what's in it.
FAQ
Can Claude Code read my .env file without me explicitly telling it to?
Yes. When Claude Code explores a codebase — reading config files, understanding project structure — it may open .env files it finds in the working directory. This is documented behavior, not a vulnerability in Claude Code specifically. It's how any agent with file-read access works. Keep secrets out of project-local .env files.
Is it safe to put ANTHROPIC_API_KEY in a systemd service file?
No, not in plaintext. If you're running an agent as a systemd service, use EnvironmentFile=/path/to/restricted.env with chmod 600 on that file, owned by the service user. Better: use a secrets manager that your service can authenticate to at startup and pull credentials from.
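A minimal unit following that advice might look like this — a sketch; the unit name, paths, and ExecStart prompt are examples, not a tested service:

```ini
# /etc/systemd/system/agent-task.service (example)
[Unit]
Description=One-shot coding agent run

[Service]
Type=oneshot
User=agent-runner
# File is chmod 600 and owned by agent-runner; keeps the key out of the unit file,
# which is world-readable under /etc/systemd/system by default
EnvironmentFile=/home/agent-runner/.secrets/agent-keys.env
ExecStart=/usr/local/bin/claude -p "run the scheduled task"
```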
Does running the agent in Docker help with key security?
Somewhat. Docker gives you isolation at the container level, but if you pass the key in via -e ANTHROPIC_API_KEY=... or an --env-file, it's still plaintext in the container environment. Docker secrets (for Swarm) or Kubernetes secrets with proper RBAC are the right tools if you're containerizing agent workloads at scale.
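Short of Swarm or Kubernetes, one middle-ground sketch (the image name is hypothetical) is to mount the restricted key file read-only and source it inside the container, which at least keeps the key out of `docker inspect` output:

```shell
# Key never appears in the container's static config, only in the process environment
docker run --rm -it \
  -v ~/.secrets/agent-keys.env:/run/secrets/agent-keys.env:ro \
  my-agent-image \
  sh -c 'set -a; . /run/secrets/agent-keys.env; set +a; exec claude'
```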
What's the difference between a .env file and shell environment variables for an agent?
From the agent's perspective, if the variable is in its environment (visible via env), the source doesn't matter. The distinction matters for you: env vars set in the shell disappear when the session ends, while a .env file persists on disk. The on-disk persistence is what creates the security surface.
Should I rotate my API key if I've been storing it in .bashrc on a VPS?
Yes, especially if anyone else has ever had SSH access to the box or if you've run untrusted code on it. Rotate the key in your Anthropic account settings, update the value in your secrets file, and audit recent API usage for anomalies.
BYOK: why your key should stay in your infrastructure
One threat model people underestimate is the platform risk of coding agent services that proxy your API key through their own infrastructure. If a platform sits between you and the Anthropic API, your key passes through their servers on every request. That's an additional attack surface: their infrastructure, their logging, their breach response.
Grass takes the opposite approach. Its BYOK (bring your own key) model means your API key lives in your VM's environment and is used directly from there — it never passes through Grass's infrastructure. When you use Grass to monitor or steer an agent session remotely from your phone, the permission approvals and session state are managed through Grass, but the actual API calls go directly from your VM to Anthropic.
That's a meaningful architectural distinction if you're evaluating platforms for running agents long-term: a platform that proxies your key is a different risk profile than one where you retain full control of the credential.
Check out Grass if you want always-on agent sessions with mobile control and a free tier to start.