Coordinate Multiple Claude Code Sessions on a Shared Repo
Two Claude Code sessions hit the same file and one silently wins. Your CI pipeline fires twice for the same branch. Here's the coordination architecture — MCP presence + Daytona isolation — that stops both.
Running multiple Claude Code sessions against the same codebase without a coordination layer produces three silent failure modes: conflicting file writes that corrupt shared state, duplicate CI jobs triggered by parallel branch pushes, and Browser MCP session state lost to 5-minute timeouts mid-execution. This tutorial walks through a layered coordination architecture — claude-presence MCP for real-time session awareness, per-session Daytona sandboxes for CI isolation, and persistent Browser MCP configuration — so you can parallelize agent work without any of these blowing up on you.
TL;DR: When two or more Claude Code sessions run against the same repo simultaneously, they have no awareness of each other. The fix is three-layered: (1) install the claude-presence MCP server to give sessions visibility into what others are touching, (2) run each session in an isolated Daytona sandbox to prevent CI thrashing, and (3) configure Browser MCP with session-pinned persistence to avoid the 5-minute state loss that causes tool integration failures and hallucinations. All three layers can be set up in under 30 minutes.
What breaks when parallel Claude Code sessions share a repo
Before building a fix, it's worth being precise about what actually fails. A recent r/mcp thread announcing the claude-presence tool documented three distinct failure modes that are easy to miss until they've already cost you work.
Conflicting writes and corrupted shared state. Two sessions editing the same file produce a last-writer-wins outcome — whichever session flushes to disk last silently overwrites the other's changes. With agents operating at speed across dozens of files simultaneously, by the time you notice the conflict, one session's work may be entirely gone. No merge conflict marker, no warning — just overwritten.
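To see how silent this failure is, here's a minimal shell reproduction (file name and contents are illustrative): two uncoordinated writers flush to the same path, and only the later flush survives.

```shell
# Minimal reproduction of last-writer-wins: two "sessions" write the same
# file with no coordination layer between them.
echo "session A: payments handler added" > shared-module.txt   # session A flushes
echo "session B: auth middleware added" > shared-module.txt    # session B flushes later
cat shared-module.txt
# Only session B's line remains; session A's work is gone, with no warning.
```

There is no error to catch here: both writes succeed from each session's point of view.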
Duplicate CI jobs and PR thrashing. Both sessions reach a natural commit point, both push to the same branch (or related branches), and your CI pipeline fires twice for what is logically one change. Worse, if both sessions open PRs, you end up with duplicate review requests, duplicate status checks, and reviewers confused about which PR to act on. This is the failure mode that most often causes teams to abandon multi-agent workflows entirely.
Browser MCP session state loss. A separate active thread in r/mcp documented Browser MCP extensions losing their session state after roughly 5 minutes of inactivity, sometimes mid-task. When Claude Code resumes after a disconnect or a pause, it's operating with stale or missing browser context, leading to tool integration failures and, in more severe cases, the agent acting on browser state it assumes is present but isn't.
Prerequisites
- Claude Code installed and authenticated (`claude --version` ≥ 1.0)
- Node.js 18+ on each machine or sandbox where sessions run
- A Daytona account (free tier is sufficient for testing)
- Git repository with CI (GitHub Actions, GitLab CI, or equivalent)
- Optional: Grass CLI for mobile monitoring (`npm install -g @grass-ai/ide`)
Step 1: Add claude-presence MCP for inter-session conflict detection
The claude-presence MCP server is a community tool built specifically for this problem. It gives each running Claude Code session a live view of what other sessions are doing — which files they've claimed, which branches they're on, and what tasks they're actively executing.
Clone, build, and link the package from GitHub (garniergeorges/claude-presence), then register it in your MCP configuration (for Claude Code, the project-level .mcp.json, or via claude mcp add):
git clone https://github.com/garniergeorges/claude-presence
cd claude-presence && npm install && npm run build && npm link
The npm link step makes the claude-presence-mcp binary available globally. (The package isn't yet on the npm registry — clone-and-link is the only working install path for now.)
{
"mcpServers": {
"claude-presence": {
"command": "claude-presence-mcp",
"args": ["--workspace", "/absolute/path/to/your/repo"],
"env": {
"PRESENCE_BROADCAST_INTERVAL": "5000"
}
}
}
}
Each session broadcasts its current working files and task description over a local pub/sub channel. Every other session on the same workspace subscribes and can see conflicts before they happen.
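The broadcast pattern itself is simple to picture. The sketch below is a hypothetical illustration only: the directory layout and JSON fields are invented for this example and are not claude-presence's actual wire format. Each session drops a heartbeat record keyed by session ID, and peers scan for overlapping working paths before they edit.

```shell
# Hypothetical sketch of a presence broadcast (paths and fields invented here;
# claude-presence's real format may differ).
PRESENCE_DIR=/tmp/presence-sketch
mkdir -p "$PRESENCE_DIR"

# Session A announces what it is working on:
printf '{"session":"a7f3","working":"src/api/payments.ts"}\n' \
  > "$PRESENCE_DIR/a7f3.json"

# Session B, before editing, scans every peer heartbeat for its target path:
TARGET="src/api/payments.ts"
if grep -l "\"working\":\"$TARGET\"" "$PRESENCE_DIR"/*.json > /dev/null; then
  echo "conflict: another session already claims $TARGET"
fi
```

The real server automates exactly this loop over a local pub/sub channel, with stale heartbeats aging out on the broadcast interval.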
In practice, instruct Claude Code to check presence before starting file edits. Add this to your repo's CLAUDE.md:
## Multi-session coordination
Before editing any file, check claude-presence to confirm no other active
session is working in the same directory or file. If a conflict is detected,
report it and pause — do not overwrite the conflicting session's work.
You can also inspect presence state directly from a terminal:
# List all active sessions on the workspace
claude-presence-mcp status --workspace /absolute/path/to/your/repo
Expected output when two sessions are running:
Active sessions (2):
session-a7f3 │ branch: feat/payments │ working: src/api/payments.ts
session-b2c1 │ branch: feat/auth │ working: src/middleware/auth.ts
No conflicts detected.
This visibility is the minimum viable coordination layer. It's a cooperative protocol — it relies on sessions honoring the presence data, which they will when instructed via CLAUDE.md. For the complementary post-run view, How to Audit What Your AI Agent Actually Did After the Session covers a diff-based review workflow that works alongside presence monitoring.
Step 2: Per-session Daytona isolation to prevent CI thrashing
Presence detection handles file conflicts, but it doesn't solve the CI problem. If two sessions are on different feature branches and both push at the same time, you still get parallel CI runs competing for the same runner pool and potentially queuing behind each other — or triggering the same pipeline twice on different branches that touch overlapping code.
The structural fix is to give each session its own isolated sandbox. Daytona's composable sandbox model is purpose-built for this: each workspace is a fully isolated environment with its own filesystem, git state, and process namespace. Sessions can't accidentally share working tree state because they physically can't reach each other's files.
Install the Daytona CLI and create a workspace per session:
# Install Daytona CLI
curl -sfL https://download.daytona.io/daytona/install.sh | sudo bash
# Workspace for session A — payments feature
daytona create \
--git-url https://github.com/your-org/your-repo \
--branch feat/payments \
--name agent-payments
# Workspace for session B — auth feature
daytona create \
--git-url https://github.com/your-org/your-repo \
--branch feat/auth \
--name agent-auth
Install Claude Code inside each workspace:
daytona exec agent-payments -- npm install -g @anthropic-ai/claude-code
daytona exec agent-auth -- npm install -g @anthropic-ai/claude-code
Start each session in its own workspace from separate terminals:
# Terminal 1
daytona code agent-payments
claude # runs inside the isolated agent-payments workspace
# Terminal 2
daytona code agent-auth
claude # completely separate filesystem and git state
With this setup: each session has its own working tree with no shared file state, each branch pushes from its own sandbox without cross-contamination, and CI jobs trigger per-branch as intended. If one workspace gets into a bad state, tearing it down and recreating it takes under a minute and has no effect on any other running session.
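If you run more than two sessions, the per-workspace steps above are worth scripting. The sketch below uses the same `daytona create` and `daytona exec` flags shown above; the `run` wrapper just echoes each command so you can dry-run the loop safely (remove the wrapper to provision for real). The repo URL and branch names are placeholders.

```shell
# Dry-run provisioning loop: one isolated workspace per feature branch.
# `run` echoes commands instead of executing them; drop it to provision.
run() { echo "+ $*"; }

REPO_URL=https://github.com/your-org/your-repo
for branch in feat/payments feat/auth; do
  name="agent-${branch#feat/}"          # feat/payments -> agent-payments
  run daytona create --git-url "$REPO_URL" --branch "$branch" --name "$name"
  run daytona exec "$name" -- npm install -g @anthropic-ai/claude-code
done
```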
For a detailed comparison of Daytona against other sandbox options for this use case, Daytona vs AgentBox vs DIY: Sandbox Runtime for AI Agents covers the tradeoffs. If you're already using a git-level isolation approach, How to Keep Parallel Coding Agents from Stepping on Each Other covers worktree-based ownership patterns as a lighter-weight alternative.
Throttle CI with branch concurrency groups
Even with isolated sandboxes, two sessions pushing to different branches simultaneously will trigger two independent CI runs. That's usually what you want — but if your runner pool is small, add a concurrency group to your workflow to prevent queueing:
# .github/workflows/ci.yml
on:
push:
branches:
- 'feat/**'
- 'fix/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: false # queue rather than cancel
Scoping the group to github.ref means two pushes to different branches run in parallel, while two pushes to the same branch queue behind each other — which is the right behavior for single-session workspaces.
Step 3: Manage Browser MCP session state across reconnects
The 5-minute Browser MCP timeout documented in the r/mcp complaint thread happens because most Browser MCP configurations treat the browser session as ephemeral — it's tied to the MCP server process lifetime, which isn't guaranteed to survive a Claude Code session disconnect or a long idle period.
The fix is to configure Browser MCP to use a persistent session store. If you're using a Browserbase-backed MCP server, configure it with an explicit session ID and persistence flag:
{
"mcpServers": {
"browser": {
"command": "npx",
"args": ["@browserbasehq/mcp"],
"env": {
"BROWSERBASE_API_KEY": "your-key",
"BROWSERBASE_PROJECT_ID": "your-project-id",
"PERSIST_SESSION": "true"
}
}
}
}
If you're using a local Playwright-backed Browser MCP, configure a persistent user data directory scoped to the session:
{
"mcpServers": {
"browser": {
"command": "npx",
"args": [
"@playwright/mcp",
"--user-data-dir", "/tmp/mcp-browser-sessions/session-a",
"--no-sandbox"
]
}
}
}
The key is using a directory path that persists across MCP server restarts. Use a different path per Claude Code session to avoid two sessions sharing a browser profile.
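A small helper makes the per-session path convention concrete. The directory layout matches the config above; the session name is whatever unique label you assign each Claude Code session.

```shell
# Create a distinct, persistent browser profile directory per session so no
# two sessions ever share cookies, storage, or open-tab state.
SESSION_NAME=session-a                      # unique label per Claude Code session
PROFILE_DIR="/tmp/mcp-browser-sessions/$SESSION_NAME"
mkdir -p "$PROFILE_DIR"
echo "$PROFILE_DIR"    # wire this value into the --user-data-dir arg
```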
For long-running sessions with browser interactions, add a liveness check to your session prompts:
Before continuing any browser-dependent task, verify the browser session
is active by navigating to about:blank and confirming the browser responds.
If the session has expired, reinitialize before proceeding.
This prevents the hallucination mode where the agent attempts to interact with browser state it assumes is there but isn't — which is exactly the failure pattern the r/mcp thread describes.
How do you verify the coordination setup is working?
Run a controlled two-session test on throwaway branches before using this on production work.
Test 1 — presence conflict detection:
# In workspace A
claude --print "Create file test-a.txt with content 'session A was here', \
then use claude-presence to confirm no other session is \
working in this directory"
# Simultaneously in workspace B
claude --print "Create file test-b.txt with content 'session B was here', \
then list all active sessions via claude-presence"
Success criteria:
- `claude-presence status` shows both sessions with distinct session IDs
- Each file was created without overwriting the other
- Presence output lists both sessions' claimed working paths
Test 2 — CI isolation:
# Commit and push from each workspace; wrap the chain in sh -c so every
# command (not just the first) runs inside the workspace
daytona exec agent-payments -- sh -c 'git add test-a.txt && git commit -m "test" && git push'
daytona exec agent-auth -- sh -c 'git add test-b.txt && git commit -m "test" && git push'
Check your Actions/Pipeline logs — you should see two distinct runs, one per branch, with no cross-triggered jobs.
Test 3 — Browser MCP persistence:
Start a browser session in Claude Code, navigate to a URL, kill and restart the MCP server, then reconnect. Verify the browser returns to the previous page rather than opening a blank tab.
Troubleshooting common issues
claude-presence not showing other sessions
Check that all sessions are using the same --workspace path resolved to an absolute path — symlinks can cause mismatches. Also confirm PRESENCE_BROADCAST_INTERVAL hasn't been set above 30000ms; above that threshold, sessions can appear offline to each other.
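A quick way to rule out the symlink mismatch is to resolve the path canonically before wiring it into the config. The demo below builds a throwaway symlink to show the resolution; substitute your real workspace path.

```shell
# A symlinked path and its canonical target register as *different* workspaces.
# realpath collapses them to one canonical key every session agrees on.
mkdir -p /tmp/presence-path-demo/real-repo
ln -sfn /tmp/presence-path-demo/real-repo /tmp/presence-path-demo/link-repo

realpath /tmp/presence-path-demo/link-repo   # resolves to the real-repo path
```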
Two Daytona workspaces pushing to the same remote branch
Each workspace should push to a different branch. Enforce this in each workspace's CLAUDE.md:
# CLAUDE.md
This session operates exclusively on branch: feat/payments.
Never push to main, develop, or any branch not named feat/payments.
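CLAUDE.md instructions are cooperative. For hard enforcement, you can also install a git pre-push hook inside each workspace that rejects pushes from any branch the workspace doesn't own. This is a sketch run in a throwaway repo for demonstration; set ALLOWED_BRANCH per workspace.

```shell
# Hard-enforcement sketch: a pre-push hook that rejects pushes from any branch
# other than the one this workspace owns. Demo runs in a throwaway repo.
git init -q /tmp/enforce-demo && cd /tmp/enforce-demo

cat > .git/hooks/pre-push <<'EOF'
#!/bin/sh
ALLOWED_BRANCH=feat/payments                # set per workspace
branch=$(git symbolic-ref --short HEAD)
if [ "$branch" != "$ALLOWED_BRANCH" ]; then
  echo "refusing push: this workspace owns $ALLOWED_BRANCH, not $branch" >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-push

# Dry-run the hook directly; the fresh repo's default branch won't match:
if .git/hooks/pre-push 2>/dev/null; then echo "push allowed"; else echo "push blocked"; fi
```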
Browser MCP session still expiring
If using Browserbase, confirm your plan supports persistent sessions and that PERSIST_SESSION=true is set in the MCP env block. For local Playwright, confirm the --user-data-dir path exists, is writable, and doesn't conflict with another session's path.
Duplicate CI runs despite isolated sandboxes
Check for webhook configurations that trigger on workspace creation events rather than just git pushes. Some Daytona integrations can trigger CI on workspace provisioning. Scope all CI triggers to push events on specific branch patterns only.
How Grass makes this workflow better
The coordination architecture above works from any terminal. Grass adds a specific capability that matters when you're running sessions across a multi-hour parallel task: you can monitor every session, handle approval gates for all of them, and intervene — from your phone, away from a terminal.
When session A in agent-payments hits a permission prompt (a bash command, a file write, a network request), that approval gate appears as a native modal in the Grass mobile app in real time. You tap Allow or Deny, and the agent continues. The same happens for session B in agent-auth simultaneously — Grass surfaces both permission queues in a single interface, one modal at a time, so you never accidentally approve the wrong action. How to Manage Multiple Coding Agents from Your Phone covers the multi-session dashboard in detail.
Here's the setup inside each Daytona workspace:
# Install Grass CLI inside the workspace
npm install -g @grass-ai/ide
# Start Grass server — auto-selects an available port from 32100-32199
grass start --network tailscale
Each workspace gets its own Grass server on a distinct port. The Grass mobile app supports multiple server connections — one per workspace — giving you a live view of every parallel session from a single app:
Workspace A → http://100.64.x.x:32100 (agent-payments, feat/payments)
Workspace B → http://100.64.x.x:32101 (agent-auth, feat/auth)
Scan the QR code for each server on your phone and save both connections. From there you get:
- Real-time streaming output from both sessions — you can watch them work in parallel without sitting at two terminals
- Permission gate modals for both sessions — native iOS/Android modals with haptic feedback
- Diff viewer to review what each session has changed before approving a git push
- Session history per workspace, so you can trace what each agent did and when
The permission forwarding is particularly valuable in the multi-session context. When two agents are running in parallel and both hit approval gates within a few seconds of each other, managing them from separate terminal windows is error-prone. Grass serializes them — one modal at a time — and gives you enough context (tool name, command preview, syntax-highlighted diff) to make a confident decision fast.
For the complete Tailscale + Daytona + Grass setup walkthrough, Setting Up Grass with a Daytona Remote Server has step-by-step instructions and troubleshooting.
Grass is available with a free tier (10 hours, no credit card) at codeongrass.com. The CLI is MIT-licensed: npm install -g @grass-ai/ide.
FAQ
How do you prevent two Claude Code sessions from editing the same file at the same time?
The claude-presence MCP server gives each session visibility into which files other sessions are claiming. By instructing Claude Code via CLAUDE.md to check presence before editing any file, you prevent simultaneous writes. This is a cooperative protocol — it works when sessions respect the presence data, which they consistently do when the instruction is included in the session's system context.
Why do parallel Claude Code sessions cause duplicate CI runs?
Sessions running on the same branch push overlapping changes, which triggers CI multiple times for the same logical work. The fix is to run each session in an isolated sandbox — like a separate Daytona workspace — on its own dedicated branch, so each CI run corresponds to exactly one logical change from exactly one session.
What causes Browser MCP session state to be lost mid-execution?
Browser MCP sessions are typically tied to the MCP server process lifetime. If Claude Code disconnects and reconnects, or if the MCP server restarts (which can happen during a long session), the browser session context is lost. Configuring Browser MCP with a persistent user data directory or a session-pinned cloud browser session prevents this. The 5-minute timeout is a default inactivity timeout in some Browser MCP implementations — not a hard platform limit.
Can claude-presence MCP coordinate sessions running across different machines?
Not out of the box — claude-presence uses local pub/sub and assumes sessions share a common filesystem path for the workspace. For cross-machine coordination, you'd need all sessions to point at a shared network path or use a presence backend that supports remote session registration. This is a current gap in the tooling; per-session Daytona isolation plus branch naming conventions is the more reliable cross-machine approach today.
How many parallel Claude Code sessions can Daytona handle?
Daytona's sandbox model is designed for elastic scaling — each workspace is an isolated container and the limit is your account plan, not a technical ceiling on the sandbox runtime itself. Workspaces can be created and torn down programmatically via the Daytona SDK, which means you can script workspace provisioning as part of your agent dispatch workflow rather than managing them manually.
This post is published by Grass — a machine built for AI coding agents that gives every agent an always-on cloud VM, accessible and controllable from your phone. Works with Claude Code and OpenCode.