Automated Quality Gates for Agent Code: Beyond Passing Tests

Your agent's PR passed CI. Tests are green. But hardcoded secrets, hallucinated imports, and convention drift all survive standard checks. Here's the three-layer pipeline that catches what tests miss.

Automated quality gates for agent-generated code are layered checkpoints — running at local, pre-push, and PR levels — that catch hardcoded secrets, convention drift, hallucinated imports, and business logic risk before a human reviewer ever opens the diff. This post walks through implementing all three layers as a complete pipeline.

TL;DR

Standard CI validates behavior at boundaries. It doesn't inspect the code itself for secrets, bad imports, or convention drift — and agents generate code fast enough that those issues routinely reach human review. This tutorial builds three gates: a local Claude Code verification skill you invoke on demand, a pre-push hook running static analysis, and a PR-level standards gate that generates a focused reviewer brief. Each layer catches a different class of risk. All three together close the gap between "tests pass" and "safe to ship."


Why "Passes the Tests" Is No Longer Enough

As one developer noted in a recent r/AI_Agents discussion on managing agent-generated code quality: "agents can ship PRs faster than senior devs can meaningfully review them. once agents start touching business logic, 'passes the tests' isn't good enough."

Standard CI pipelines validate behavior at the boundary — given inputs produce expected outputs. They don't validate:

  • Hardcoded credentials baked into config, test fixtures, or environment setup scripts
  • Hallucinated package imports that resolve against your local node_modules but fail on a clean install
  • Unbounded loops or tight polling patterns that pass unit tests but saturate production under real load
  • Convention drift — the agent follows a pattern from a different codebase or training sample, not your team's STANDARDS.md
  • Stale or vulnerable dependencies added mid-session that cleared lockfile checks but carry known CVEs

Another developer described the pattern directly: "we keep seeing clean-looking code that clears basic checks but has real risk underneath — edge cases, stale dependencies."

The problem isn't agent code quality — it's volume and velocity. The implicit assumption behind most CI pipelines is that a human reviewed the logic before pushing. Agents break that assumption. SonarQube's documentation on quality gates for AI code puts it plainly: AI-generated code requires strict quality control on both new and overall code, not just the diff. A purpose-built automated layer fills that gap before human reviewers ever open the PR.


What You'll Build

A three-layer automated review pipeline:

| Layer | Runs when | What it catches |
| --- | --- | --- |
| Local verification skill | On demand during development | Secrets, hallucinated imports, unbounded loops, convention violations |
| Pre-push git hook | Before git push | Static analysis issues, dependency CVEs |
| PR gate | On PR open/update | Convention drift, cross-agent standards violations, reviewer brief |

Prerequisites

  • Claude Code installed and authenticated (claude --version)
  • Node.js 18+ for CLI tooling
  • gitleaks for secret scanning: brew install gitleaks (macOS) or see releases
  • semgrep for static analysis: pip install semgrep
  • A GitHub-hosted repository with Actions enabled
  • Optional: Grass for real-time execution-layer oversight — covered in the Grass section below
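
A quick sanity check that the tooling above is actually on PATH before you start (versions will differ; these are just the commands):

claude --version
node --version
gitleaks version
semgrep --version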

Layer 1: The Local Verification Skill

A verification skill is an instruction block in your CLAUDE.md that Claude Code executes when triggered by a natural-language command. It runs a defined set of checks against modified files and returns a structured report — all without leaving your existing agent session.

A developer recently open-sourced one on r/AI_Agents specifically designed for agent-generated code. The install is a CLAUDE.md addition; the trigger is typing verify agent.

Adding the skill to CLAUDE.md

Add the following block to ~/.claude/CLAUDE.md for global availability, or to your project-level CLAUDE.md for repo-specific checks:

## verify agent

When asked to "verify agent", identify all files modified since the last commit
using `git diff --name-only HEAD`, then run these checks against each modified file.
Do not proceed to any further operation if ❌ issues are found.

### Checks

1. **Secret detection**
   Run: `gitleaks detect --source=. --no-git --report-format=json --report-path=/tmp/gl-report.json`
   Parse /tmp/gl-report.json and flag every reported finding as ❌.

2. **Hallucinated imports**
   For each import/require statement in modified files, check whether the package
   exists in package.json (or requirements.txt / go.mod as appropriate).
   Flag any unresolved package as ❌.

3. **Unbounded loops**
   Search modified files for `while (true)`, `while True:`, `for (;;)`.
   For each match, check whether a break/return condition exists within 25 lines.
   If no break condition is visible, flag as ⚠️.

4. **Dependency CVEs**
   Run: `npm audit --audit-level=high --json`
   Flag any HIGH or CRITICAL vulnerability as ❌.

5. **Convention compliance**
   If STANDARDS.md exists at the repo root, read it and check modified files
   for pattern violations. Flag each violation as ⚠️.

### Output format

Verification Report — [timestamp]
Scope: [N files, N checks]

✅ [N] checks passed
⚠️ [N] warnings
❌ [N] issues

[For each ⚠️ or ❌:]
[SEVERITY] [file:line] — [description]


What the output looks like

Verification Report — 2026-04-29 14:32:11
Scope: 7 files, 10 checks

✅ 8 checks passed
⚠️  3 warnings
❌ 2 issues

❌ src/config/database.ts:14 — Hardcoded credential: DB_PASSWORD = "prod_secret_xyz"
❌ src/lib/fetcher.ts:3 — Unresolved import: 'axios-retry' not found in package.json
⚠️  src/workers/poller.ts:88 — Unbounded loop: while(true) — no break condition found within 25 lines
⚠️  src/api/routes.ts:22 — Convention drift: Express v4 pattern; STANDARDS.md requires v5 routing
⚠️  src/api/routes.ts:45 — Convention drift: user-supplied ID used without ownership validation

The agent will not proceed with any further operation until the ❌ items are resolved. The ⚠️ items are surfaced for explicit acknowledgment before the next step.
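
If you want to spot-check the hallucinated-import detection (check 2) outside an agent session, the same idea fits in a short standalone script. A rough sketch, assuming a Node project with package.json at the repo root; it only looks at bare import ... from specifiers and skips relative paths and require() calls:

#!/usr/bin/env bash
# Rough standalone version of check 2: list bare import specifiers in changed
# files that are neither declared in package.json nor Node built-ins.

KNOWN=$(node -e "
  const pkg = require('./package.json');
  const deps = Object.keys({ ...pkg.dependencies, ...pkg.devDependencies });
  console.log([...deps, ...require('module').builtinModules].join('\n'));
")

git diff --name-only HEAD -- '*.ts' '*.tsx' '*.js' | while read -r f; do
  [ -f "$f" ] || continue
  grep -ohE "from ['\"][^./][^'\"]*['\"]" "$f" 2>/dev/null \
    | sed -E "s/from ['\"]([^'\"]+)['\"]/\1/" \
    | while read -r spec; do
        case "$spec" in
          @*) name=$(echo "$spec" | cut -d/ -f1-2) ;;  # keep @scope/pkg intact
          *)  name=$(echo "$spec" | cut -d/ -f1)   ;;  # lodash/fp -> lodash
        esac
        echo "$KNOWN" | grep -qx "$name" \
          || echo "❌ $f — unresolved import: '$name' not in package.json"
      done
done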

This is the development-time gate — it runs in your session, on your machine, before anything is committed. For a deeper look at what happens when PreToolUse hooks still let things slip through, this post on hook bypass and blast radius covers the gap this skill is designed to close at the code level.


Layer 2: The Pre-Push Git Hook

The verification skill is on-demand. The pre-push hook is enforcement — it runs automatically before code leaves your machine and blocks pushes that contain unresolved issues.

Create .git/hooks/pre-push:

#!/usr/bin/env bash
set -e

echo "Running pre-push quality gate..."

# 1. Secret scanning on the commits about to be pushed.
#    'gitleaks protect --staged' only sees uncommitted changes, so scan the
#    range between the upstream branch and HEAD (full history if no upstream).
UPSTREAM=$(git rev-parse --abbrev-ref --symbolic-full-name '@{u}' 2>/dev/null || true)
if ! gitleaks detect --source=. ${UPSTREAM:+--log-opts=$UPSTREAM..HEAD}; then
  echo ""
  echo "❌ Secrets detected in outgoing commits. Push blocked."
  echo "   Run 'gitleaks detect --source=.' for details."
  exit 1
fi

# 2. Semgrep static analysis (runs when any changed source file is in scope)
CHANGED=$(git diff --name-only @{u} 2>/dev/null || git diff --name-only HEAD~1 2>/dev/null || true)
if echo "$CHANGED" | grep -qE "\.(ts|js|tsx|py)$"; then
  if ! semgrep scan --config=p/secrets --config=p/owasp-top-ten \
       --error --quiet; then
    echo "❌ Semgrep found blocking findings. Push blocked."
    exit 1
  fi
fi

# 3. Dependency audit
if [ -f package.json ]; then
  VULN=$(npm audit --audit-level=high --json 2>/dev/null \
    | python3 -c "
import sys, json
d = json.load(sys.stdin)
v = d.get('metadata', {}).get('vulnerabilities', {})
print(v.get('high', 0) + v.get('critical', 0))
" 2>/dev/null || echo "0")
  if [ "$VULN" -gt "0" ]; then
    echo "❌ $VULN high/critical vulnerabilities found. Run 'npm audit' for details."
    exit 1
  fi
fi

echo "✅ Pre-push gate passed."

Make it executable:

chmod +x .git/hooks/pre-push

For team-wide enforcement (so the hook ships with the repo and runs for every contributor), use husky:

npm install --save-dev husky
npx husky init
# Then move the script above to .husky/pre-push
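
If you'd rather not add a dependency, git's built-in core.hooksPath does the same job: commit the hook to a tracked directory and have each contributor point git at it once (for example from an npm prepare script). A minimal sketch:

mkdir -p .githooks
cp .git/hooks/pre-push .githooks/pre-push
git add .githooks/pre-push

# each contributor runs this once (or add it to "prepare" in package.json)
git config core.hooksPath .githooks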

The Autonoma vibe coding quality gate guide covers a compatible five-layer stack for teams that want to extend this further — their Semgrep configuration section is worth reviewing if you're seeing false-positive complaints from the broad --config=auto ruleset. For most agent workflows, p/secrets and p/owasp-top-ten give you high signal with low noise.


Layer 3: The PR-Level Cross-Agent Standards Gate

The first two layers protect your local machine. Layer 3 runs in CI and does the thing neither layer above can: enforce standards consistency across all agents and all contributors, and generate a brief that tells human reviewers exactly where to focus.

The key insight from this six-lesson orchestration writeup on r/ClaudeCode is the recommendation to have "multiple auto review loops configured on every agent workflow at multiple levels: within a local tool, within a cloud run, within a PR." The PR gate is where cross-agent standards enforcement lives — one agent following its local CLAUDE.md perfectly can still drift from the conventions every other agent on the team is supposed to share.

Creating STANDARDS.md

Add a STANDARDS.md to your repo root. This becomes the source of truth for both the verification skill (Layer 1) and the PR gate:

# STANDARDS.md

## TypeScript
- `strictNullChecks: true` — no implicit nulls
- No `any` types without an explicit `// safe: <reason>` comment on the same line
- All exported functions require JSDoc with `@returns` type annotation

## API Routes
- All user-supplied IDs validated against the database before use
- No string interpolation in SQL — parameterized queries only
- Rate limiting required on all public endpoints

## Dependencies
- No packages with HIGH or CRITICAL CVEs in npm audit
- No packages with last publish date > 24 months without explicit approval comment in PR
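
Rules written this way are mechanically checkable. As a rough illustration (not how Surmado implements it, and assuming origin/main is the comparison branch), the no-any-without-annotation rule reduces to a grep over the changed files:

#!/usr/bin/env bash
# Sketch of one STANDARDS.md rule as a CI-friendly check:
# flag `any` types that lack a same-line `// safe: <reason>` comment.
status=0
for f in $(git diff --name-only origin/main...HEAD -- '*.ts' '*.tsx'); do
  [ -f "$f" ] || continue                      # skip deleted files
  if grep -nE ': *any([^A-Za-z0-9_]|$)' "$f" | grep -v '// safe:'; then
    echo "❌ $f — 'any' without // safe: annotation [STANDARDS.md §TypeScript]"
    status=1
  fi
done
exit "$status"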

The GitHub Actions workflow

name: Agent Code Quality Gate

on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  quality-gate:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - run: npm ci

      # Layer: secret scanning
      - name: Secret scanning (gitleaks)
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      # Layer: static analysis
      - name: Static analysis (semgrep)
        uses: semgrep/semgrep-action@v1
        with:
          config: >-
            p/secrets
            p/owasp-top-ten
            p/sql-injection
        env:
          SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}

      # Layer: dependency audit
      - name: Dependency audit
        run: npm audit --audit-level=high

      # Layer: standards compliance + reviewer brief
      # Surmado Code Review handles this step for cross-agent convention enforcement
      # See: https://surmado.dev — reads STANDARDS.md, outputs structured reviewer brief
      - name: Standards compliance check
        run: |
          npx surmado-review \
            --standards=STANDARDS.md \
            --base=${{ github.base_ref }} \
            --output=reviewer-brief.md

      - name: Post reviewer brief to PR
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const brief = fs.readFileSync('reviewer-brief.md', 'utf8');
            await github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `## Automated Code Review\n\n${brief}`
            });

The reviewer brief

The reviewer brief posted to every PR tells the human reviewer exactly where to look. As one developer described the result: "The review runs on every push and tells you what's good, what drifted, and gives your human reviewer a brief so they know where to actually focus."

A brief for a typical agent-generated PR looks like:

## Automated Code Review — 2026-04-29

✅ Secret scanning: clean
✅ Static analysis: 0 HIGH issues
✅ Dependency audit: 0 critical CVEs

⚠️ Convention drift detected (2 files):
  - src/api/users.ts:34 — user-supplied ID not validated before DB query [STANDARDS.md §API Routes]
  - src/api/payments.ts:89 — string interpolation in SQL query [STANDARDS.md §API Routes]

**Reviewer focus:** Lines 34 and 89 above are the only items that need eyes.
Everything else passed automated checks.

Surmado Code Review was built specifically for this cross-agent standards gating use case — enforcing the same STANDARDS.md against PRs from Claude Code, Codex, OpenCode, and human contributors alike.

For teams already running SonarQube, the "Sonar way for AI Code" quality gate provides a concrete set of thresholds to adopt: 80% test coverage on new code, a Security rating of A on overall code, and all security hotspots reviewed. These conditions exist because AI-generated code exhibits different risk patterns than human-written code. The helio post on quality gates in the age of agentic coding covers how to write targeted GenAI prompts for each gate layer if you want to add an LLM-assisted review step.


How to Verify the Pipeline Works

Run a deliberate injection test before relying on any layer in production:

# Add a detectable secret to a test file
echo 'const DB_PASSWORD = "prod_secret_xyz_real_looking";' >> src/config/test-inject.ts
git add src/config/test-inject.ts

# Test Layer 1: run "verify agent" in your Claude Code session
# Expected: ❌ Hardcoded credential detected at test-inject.ts:1

# Test Layer 2: attempt a push
git commit -m "test injection"
git push
# Expected: ❌ Secrets detected in outgoing commits. Push blocked.

# Test Layer 3: open a PR with the file
# Expected: gitleaks-action fails the check; reviewer brief flags the finding

# Clean up
git reset HEAD~1
rm src/config/test-inject.ts

A working pipeline produces:

Pre-push gate: ✅ No secrets | ✅ Semgrep clean | ✅ 0 critical CVEs
PR gate:       ✅ Secret scan | ✅ Static analysis | ⚠️ 1 convention drift (see brief)

How Grass Makes This Workflow Better

Automated gates inspect code that's already been written. They don't intercept what the agent is about to execute.

There's a class of risk between "code passes review" and "code runs in production": the agent's runtime decisions during the session. The bash command it wants to run. The file it wants to overwrite without a backup. The external API it decides to call. These decisions don't appear in any diff — they happen live, before anything is committed.

This is the gap Grass fills.

When you run a Claude Code or OpenCode session on Grass — either on the always-on cloud VM or via the local @grass-ai/ide CLI — every permission request is forwarded to your phone in real time. When the agent hits a tool call you didn't anticipate, a native modal appears before it executes:

Tool: Bash
Command: rm -rf ./migrations/archived_2024

[Allow]  [Deny]

You're not reviewing a diff after the fact. You're approving or denying at the moment of execution — from your phone, from wherever you are.

This execution-layer gate completes the picture that the three automated layers above can't:

| Gate | Catches | When |
| --- | --- | --- |
| Verification skill | Secrets, bad imports, loop issues | Before commit |
| Pre-push hook | CVEs, static analysis errors | Before push |
| PR gate + standards | Convention drift, cross-agent inconsistency | On PR open |
| Grass permission forwarding | Unexpected runtime executions | During session |

A verification skill catches an unbounded loop in committed code. It can't stop the agent from running curl prod-api.internal/users/reset during the session, because that command was a runtime decision — never in the code at all.

The combination of all four layers is what practitioners who've shipped at scale consistently land on. As discussed in how to build human-in-the-loop approval gates for AI coding agents, the execution-layer gate is the one most teams skip — and the one that catches the incidents that make it past every automated check.

If you want the full picture on the human-reviewer checkpoint side, how to review AI-generated code that ships faster than you can read it covers the four-checkpoint workflow that sits on top of this automated pipeline.

Grass is free to try (10 hours, no credit card). Install: npm install -g @grass-ai/ide, then grass start and scan the QR code.


Troubleshooting

gitleaks flags a false positive on a test fixture

Suppress inline:

const MOCK_API_KEY = "test-key-not-real"; // gitleaks:allow

Or add a .gitleaks.toml allowlist:

[allowlist]
paths = ["tests/fixtures/.*"]
regexes = ["^MOCK_|^FAKE_|^TEST_"]

Semgrep generates noise from broad rulesets

Narrow from --config=auto to specific packs:

semgrep scan --config=p/secrets --config=p/sql-injection --config=p/owasp-top-ten --error

This eliminates style warnings while keeping the security signal. The SoftwareSeni guide on building quality gates for AI-generated code has a detailed section on ruleset tuning for agent-heavy workflows.

Pre-push hook doesn't run in CI

By design — git hooks are local only. The pre-push hook is a developer-side first line; the PR gate (Layer 3) is the non-bypassable CI enforcement layer. Developers can pass --no-verify to skip the hook; the PR gate cannot be bypassed without an admin override.
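
That non-bypassable property only holds when the quality-gate job is a required status check on the protected branch. A rough sketch of configuring that with the gh CLI (OWNER, REPO, and the quality-gate context name are assumptions based on the workflow above; the same setting is available in the branch protection UI):

gh api -X PUT "repos/OWNER/REPO/branches/main/protection" --input - <<'EOF'
{
  "required_status_checks": { "strict": true, "contexts": ["quality-gate"] },
  "enforce_admins": false,
  "required_pull_request_reviews": null,
  "restrictions": null
}
EOF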

Reviewer brief is too long and nobody reads it

Cap the brief to ❌ and ⚠️ items only — strip the ✅ passing checks. If the PR has zero findings, the comment should say "All automated checks passed — no items for review." Reviewers should only see the brief when it contains actionable findings.
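
One way to get there without touching the review tool is to trim the brief in the workflow before the github-script step posts it. A rough sketch, assuming the ✅/⚠️/❌ line format shown earlier:

# drop the passing ✅ lines; if no findings remain, post a one-line all-clear
grep -vE '^✅' reviewer-brief.md > reviewer-brief.trimmed.md || true
if ! grep -qE '(❌|⚠️)' reviewer-brief.trimmed.md; then
  echo "All automated checks passed — no items for review." > reviewer-brief.trimmed.md
fi
mv reviewer-brief.trimmed.md reviewer-brief.md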

STANDARDS.md grows stale after a few sprints

Treat STANDARDS.md as a living document with a clear owner. Add a CI check that fails if STANDARDS.md hasn't been modified in more than 90 days — it forces a deliberate review of whether the standards still reflect what the team actually wants to enforce.
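
A rough sketch of that staleness check as a CI step (it needs full git history, which the fetch-depth: 0 checkout in the workflow above already provides):

LAST_TOUCHED=$(git log -1 --format=%ct -- STANDARDS.md)
if [ -z "$LAST_TOUCHED" ]; then
  echo "❌ STANDARDS.md is missing or has no commit history."
  exit 1
fi
AGE_DAYS=$(( ( $(date +%s) - LAST_TOUCHED ) / 86400 ))
if [ "$AGE_DAYS" -gt 90 ]; then
  echo "❌ STANDARDS.md was last modified $AGE_DAYS days ago. Review it before merging."
  exit 1
fi
echo "✅ STANDARDS.md last reviewed $AGE_DAYS days ago."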


Frequently Asked Questions

Why don't standard CI tests catch quality issues in agent-generated code?

Standard CI validates behavior at system boundaries — given inputs produce expected outputs. It does not inspect the code itself for secrets, hallucinated imports, unbounded loops, or convention drift. These issues survive test suites because tests verify intended behavior, not implementation quality. Agents generate code at a volume and velocity that makes assuming human pre-review of every line untenable, which is the assumption most CI pipelines were designed around.

What is a verification skill for Claude Code?

A verification skill is an instruction block in CLAUDE.md that defines a named command ("verify agent") Claude Code executes on demand. It runs a defined set of checks — secret scanning, import validation, static analysis, convention compliance — against the files modified since the last commit, and returns a structured report with ✅/⚠️/❌ severity ratings. It's a local, session-integrated quality gate that requires no separate CI configuration to start using.

How do I enforce the same coding standards across Claude Code, Codex, and OpenCode on a shared repo?

Maintain a STANDARDS.md at your repo root that encodes conventions as explicit, checkable rules rather than guidelines in wiki documentation. Run a standards compliance check at the PR level against every push, regardless of which agent or which developer authored the code. Tools like Surmado Code Review were built specifically for this cross-agent enforcement use case. The PR gate is the only layer that treats all contributors — human and agent — identically.

What's the difference between a pre-push hook and a PR quality gate, and do I need both?

A pre-push hook runs locally before code leaves your machine — fast, catches issues early, but bypassable with --no-verify. A PR gate runs in CI on every PR open and update — non-bypassable without admin override, and is where cross-agent standards checks and reviewer briefs are generated. You need both: the hook gives developers immediate feedback during their own session; the PR gate is the team-level enforcement backstop.

How does Grass permission forwarding complement automated code quality gates?

Automated quality gates inspect code that has been written and committed. Grass permission forwarding intercepts the agent's runtime decisions — bash commands, file overwrites, API calls — before they execute, during the live session. These execution-time decisions never appear in a diff, so they're entirely outside the scope of code-level quality gates. The combination covers both the code layer (automated gates) and the execution layer (real-time approval forwarding).