How I Shipped a Production iOS App Solo with Claude

A CPO stepped away from iOS in 2014 and came back 12 years later — no team, just Claude. Here's how he shipped BaselineBody at full-team pace.

A developer who walked away from iOS development in 2014 — before Swift even existed — returned in 2026 and shipped a production app called BaselineBody without a team. Not a prototype. An App Store release. The workflow that made it possible wasn't some clever prompt hack. It was treating Claude as a structured pair programmer: a collaborator who absorbed a twelve-year platform gap, surfaced the right framework choices for each problem, and kept the shipping pace from collapsing under the weight of being a team of one.

TL;DR: Solo mobile developers — especially returning devs with large platform knowledge gaps — can match team output using three Claude workflows in sequence: a structured API gap audit, a framework pre-flight check before each major subsystem, and a continuous decision offload loop. The BaselineBody iOS build is a production-grade proof point that this works. Each workflow below is tool-agnostic; a dedicated section at the end covers how Grass extends these sessions beyond your laptop.


The Problem: What Solo Mobile Dev Actually Looks Like

If you've shipped production mobile apps before, you know the team tax. A developer writes the feature. Someone else catches the API edge cases. A senior reviews the architecture. QA flags the regression you didn't test. Strip that down to one person and the math turns ugly — not because you can't write code, but because you can't hold all that context simultaneously.

The developer behind BaselineBody had shipped two #1 App Store apps and then left iOS in 2014 to become a CPO. When he returned in 2026, the gap was substantial: Swift 6.0, SwiftUI, structured concurrency, StoreKit 2, WidgetKit, and twelve years of WWDC sessions he'd missed. His summary:

"I used Claude as a pair programmer for the entire build. Not to generate the app. To get back up to speed and move at a pace that would've been impossible solo otherwise."

That distinction — pair programmer, not code generator — is the operational frame that makes this work. Claude didn't write BaselineBody. Claude compressed a twelve-year platform gap into days of structured orientation, then stayed on-call to absorb the questions that would otherwise stall a solo developer every few hours.


Goal

Ship a production-quality iOS (or Android) app as a solo developer by running three structured Claude workflows that replicate what a team absorbs naturally: platform orientation, architecture guidance, and continuous code review.


Prerequisites

  • Claude Pro or Max subscription (long context sessions are essential — free tier hits limits too quickly)
  • Xcode 16+ with a physical test device
  • A scoped MVP: this workflow breaks down for vague or undefined projects
  • A project CLAUDE.md file — covered in Workflow 1
  • Recommended: Grass CLI (@grass-ai/ide) for sessions that survive stepping away from your desk

Workflow 1: How to Close a Platform API Gap Before You Write a Line of Code

The hardest part of returning to a platform after years away isn't relearning syntax. It's not knowing what you don't know. iOS moved from Objective-C to Swift. UIKit is still present but SwiftUI is the idiomatic starting point for new apps. Completion handlers became async/await. Core Data has a successor in SwiftData. StoreKit 2 is a complete API replacement. A developer who shipped production apps on iOS 7 is missing over a decade of tribal knowledge, distributed across hundreds of WWDC sessions they never watched.
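One of those deltas can be made concrete. The completion-handler-to-async/await shift looks roughly like this — a hypothetical fetch for illustration, not BaselineBody code:

```swift
import Foundation

// 2014 shape: completion handler, result delivered via callback on
// whatever queue the callee chooses. (Hypothetical fetch, stubbed data.)
func loadWorkouts(completion: @escaping (Result<[String], Error>) -> Void) {
    DispatchQueue.global().async {
        // A real version would call URLSession here; stubbed for the sketch.
        completion(.success(["run", "lift"]))
    }
}

// 2026 shape: the same operation as async/await — linear control flow,
// errors surfaced through `throws`, cancellation via structured concurrency.
func loadWorkouts() async throws -> [String] {
    try await Task.sleep(nanoseconds: 1_000_000) // stand-in for network latency
    return ["run", "lift"]
}
```

The call sites diverge the same way: nested closures in the old shape versus a single `let workouts = try await loadWorkouts()` line in the new one.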

The pattern that works is a structured gap audit at the start of the project — before feature code, before scaffolding, before anything:

You are my iOS pair programmer. I'm returning to iOS development after 
12 years away (last shipped on iOS 7, Objective-C). I'm building a 
fitness tracking app targeting iOS 17+ in SwiftUI.

Walk me through:
1. What has fundamentally changed in iOS app architecture since 2014 
   that I will encounter in the first two weeks of building?
2. What specific APIs I would have used in 2014 are now deprecated 
   or fully replaced?
3. What is the idiomatic 2026 approach for: background data sync, 
   in-app purchases, and local persistence?

Be specific to my domain. I don't need iOS history — I need a 
senior developer's two-paragraph brief before I start.

Three constraints in that prompt do most of the work. First, the explicit prior knowledge level ("iOS 7, Objective-C") calibrates Claude's output toward genuinely useful deltas rather than an introduction to Swift. Second, scoping to your app domain ("fitness tracking") filters out irrelevant framework changes. Third, "two-paragraph brief from a senior developer" forces density over comprehensiveness — you want decision signal, not survey.

After the gap audit, capture what you learned. Create a CLAUDE.md file at the repo root:

# BaselineBody — Project Context for Claude

## Platform
iOS 17+, SwiftUI-first, Swift 6.0, single developer

## Architecture decisions (as of project start)
- SwiftData for local persistence (not Core Data)
- StoreKit 2 for in-app purchases
- BGAppRefreshTask for background sync
- async/await throughout (no completion handlers)

## What I'm building
[Brief app description and MVP scope]

## What not to suggest
Do not suggest UIKit unless I ask. Do not suggest Combine unless async/await 
can't handle the use case.

Claude Code reads CLAUDE.md automatically. For manual Claude sessions, paste it at the start of every conversation. This file is the primary fix for context loss across sessions — without it, you re-orient Claude every time and lose the compounding benefit.

During development, run the gap-fill variant of this prompt every time you reach for an API you remember and it doesn't behave as expected:

I'm trying to implement background data refresh. In 2014 I would have 
used application:performFetchWithCompletionHandler: in my app delegate. 
What's the current approach in iOS 17? Show me a minimal BGAppRefreshTask 
setup that compiles in Xcode 16 with the import statement included.

The explicit "show me code that compiles with the import statement" constraint is not optional. Without it you get conceptual explanation. With it you get something you can paste into Xcode and verify.
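For reference, the answer that prompt should produce has roughly this shape — a hedged sketch with a hypothetical task identifier, not verified on a device (the `#if canImport` guard is only there so the file also compiles outside an iOS toolchain):

```swift
import Foundation
#if canImport(BackgroundTasks)
import BackgroundTasks
#endif

// Hypothetical identifier — it must also be listed in Info.plist under
// BGTaskSchedulerPermittedIdentifiers.
let refreshTaskID = "com.example.baselinebody.refresh"

#if canImport(BackgroundTasks)
// Call once, early in app launch, before the app finishes launching.
func registerBackgroundRefresh() {
    _ = BGTaskScheduler.shared.register(forTaskWithIdentifier: refreshTaskID, using: nil) { task in
        guard let refresh = task as? BGAppRefreshTask else { return }
        scheduleNextRefresh() // chain the next wake-up before doing work
        refresh.expirationHandler = {
            // Cancel in-flight work if the system reclaims the time slice.
        }
        // ...perform the sync, then report the outcome:
        refresh.setTaskCompleted(success: true)
    }
}

func scheduleNextRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: refreshTaskID)
    request.earliestBeginDate = Date(timeIntervalSinceNow: 4 * 60 * 60) // no earlier than ~4h out
    try? BGTaskScheduler.shared.submit(request)
}
#endif
```

The system treats `earliestBeginDate` as a floor, not a schedule — verification on a physical device is still the only way to confirm the task actually fires.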


Workflow 2: How to Navigate Unfamiliar Frameworks Before Committing to Them

Returning developers face a specific trap: they're too confident to look up everything from scratch, but not current enough to trust their instincts. The result is writing code that compiles but uses the wrong tool for the job — a wrong-architecture decision you discover halfway through implementation.

The pre-flight check pattern runs before implementing any major subsystem:

Before I build local push notifications for this app, tell me:
1. What frameworks should I be choosing between for this on iOS 17?
2. What is the recommended approach given I need: persistent scheduling, 
   user-configurable timing, and background delivery?
3. What are the two or three decisions I'll regret if I get them wrong now 
   before I've written anything?

I don't want an overview. I want the brief a senior iOS developer would 
give a junior before they started — opinions included.

The "opinions included" instruction matters. Claude without that instruction tends toward balanced, hedge-everything answers. With it, you get the actual recommendation a senior dev would make based on your specific constraints.

A second version of this prompt handles mid-implementation uncertainty — when you've started and something feels wrong:

I'm halfway through implementing background location using 
CLLocationManager with allowsBackgroundLocationUpdates. I'm seeing 
[specific issue]. Is this the right API for what I'm actually trying 
to do, or did I pick the wrong approach at the start?

This is the pattern that catches wrong-API decisions before you've committed several hundred lines to them. For the BaselineBody build, an early-stage check like this caught a Core Motion vs HealthKit architecture question that would have required a full rewrite to fix post-launch.

For a broader map of where Claude fits in the 2026 mobile toolchain — including session management, memory, and how different tools complement each other — the Claude Code Ecosystem 2026 overview covers it in detail.


Workflow 3: How to Maintain Shipping Pace When You're a Team of One

Velocity for solo developers doesn't collapse on the big decisions. It collapses on the accumulation of small ones. Should this be a struct or a class? Is this the right layer for this logic? Is this naming idiomatic? In a team, these questions get resolved in five-minute conversations. Solo, they either slow you down or produce a codebase full of half-considered choices you'll pay for during the next feature.

The continuous decision offload pattern is a lightweight review you run throughout the day — not a thorough review, a fast one:

Quick take needed: I'm deciding between putting this network fetch in 
the view model or a dedicated service layer. The app has 3 screens that 
need this data, no shared state yet, I'm the only developer.

Give me the pragmatic answer for my situation, not the theoretically 
correct answer.

The "pragmatic answer for my situation, not the theoretically correct one" framing consistently produces decisions rather than essays. Without it, you get a balanced architecture analysis. With it, you get a call.
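For that particular question, the "call" usually lands on a thin service layer. A minimal sketch of what that looks like — hypothetical names and stubbed data, not BaselineBody's actual code:

```swift
import Foundation

// Hypothetical names — a sketch of the service-layer shape, not app code.
protocol WorkoutService {
    func fetchWorkouts() async throws -> [String]
}

// One concrete implementation shared by all three screens, so the fetch
// logic lives in exactly one place. A real one would use URLSession.
struct RemoteWorkoutService: WorkoutService {
    func fetchWorkouts() async throws -> [String] {
        ["run", "lift", "swim"] // stubbed response
    }
}

// View models depend on the protocol, which keeps them trivially
// testable with a stub service.
final class WorkoutListViewModel {
    private let service: WorkoutService
    private(set) var workouts: [String] = []

    init(service: WorkoutService) {
        self.service = service
    }

    func load() async {
        workouts = (try? await service.fetchWorkouts()) ?? []
    }
}
```

The pragmatic trade: for three screens and no shared state, one protocol plus one concrete type is enough. Resist adding repositories and caches until a screen actually needs them.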

The code review variant replaces what disappears entirely when you go solo:

I'm going to paste 80 lines of Swift. Flag anything that:
1. Will cause a definite bug
2. Is unidiomatic for Swift 6.0 — not style preferences, actual 
   wrong choices for the language
3. Will create a problem when I eventually add a second developer

Skip everything else. I'm not asking for a rewrite.

Combining this with the Mobile UI Quality-Control Checklist for AI-Generated Code gives you a structured review loop — not a rubber stamp, an actual gate. The checklist covers what Claude won't surface on its own: platform-specific UI behaviors, accessibility gaps, and edge cases that only appear on real hardware.

The r/androiddev community is asking the same question the BaselineBody developer already answered: does AI actually help with real mobile production work, or just toy projects? The answer is yes — but structured pair programming is what separates useful from gimmicky. Developers running Cursor and Claude Code in mobile workflows report a consistent pattern: the feedback loop between AI code generation and live device testing is what makes the difference, not the AI doing everything.


How to Verify the Workflow Is Working

Concrete signals that the pair programmer pattern is functioning:

You're making fewer wrong-API decisions mid-implementation. If you're frequently scrapping and restarting a subsystem, the pre-flight check prompt is too vague or you're skipping it. Add more constraint to the prompt or run it earlier.

Your daily decision queue is clearing. If you finish the day with unresolved architecture questions, the continuous offload loop isn't running often enough. Treat Claude like a Slack channel you message throughout the day, not a tool you open for big problems.

Code review is catching real things. The review prompt should surface at least one genuine issue per 50–100 lines. If it's returning "looks fine," you're not giving it enough context. Paste the file, the function calling it, and the CLAUDE.md section for this subsystem.

You're not re-explaining context every session. If onboarding Claude takes more than two or three messages, the CLAUDE.md file needs more detail or isn't being pasted at session start.


Troubleshooting: What Breaks This Workflow

Claude invents an API that doesn't exist. This happens most often with recently changed APIs or platform-specific methods. Add to every code request: "If you are not certain this API exists in iOS 17, say so. Include the exact import statement." The explicit uncertainty prompt reduces confident hallucinations significantly.

Sessions lose context mid-build. Long Claude.ai conversations hit context limits. The CLAUDE.md file at project root solves this. Paste it at the start of every new session. For Claude Code CLI, it's read automatically from the repo root.

You're getting generic advice instead of iOS-specific answers. Add a platform prefix to your prompts: "iOS 17+, SwiftUI-first, Swift 6.0, solo developer, production app." Restate it at the start of each conversation. Without platform context, Claude defaults to cross-platform, framework-agnostic responses.

The workflow stalls when you step away from your desk. This is a structural problem with laptop-bound sessions — covered in the next section.

For more on keeping output quality high as the agent writes code faster than you can review it, the AI-generated code review workflow covers the four-checkpoint system that keeps you genuinely in control.


How Grass Makes This Workflow Better

The three workflows above run on any machine with a Claude subscription. But there's an architectural problem with laptop-bound Claude Code sessions: they live and die with your hardware.

A real mobile app build doesn't fit in a single sitting. The API gap audit above runs for an hour. The StoreKit 2 framework orientation might turn into an afternoon of back-and-forth as you implement and re-check. And Claude Code sessions on your laptop die the moment your machine sleeps, you close the terminal, or you walk away. Return to your desk an hour later and you're starting over — re-establishing context, re-pasting CLAUDE.md, losing the thread.

Grass is a machine built for AI coding agents. It runs Claude Code on an always-on cloud VM so the session stays alive when you step away. An API gap audit you start in the morning is still running when you come back from a meeting. A framework orientation that spans your workday doesn't get interrupted when your laptop lid closes. You pick up exactly where you left off.

The practical workflow:

npm install -g @grass-ai/ide
cd ~/projects/baseline-body
grass start

Scan the QR code on your phone. Fire off the gap audit prompt. Set it running, then leave your desk.

When Claude Code wants to write a file or run a bash command — which it will throughout a long session — a permission request surfaces on your phone as a native modal. You tap Allow or Deny. The session continues without you being at your laptop. For a mobile app build specifically, this means you can handle approval gates throughout your day — between meetings, on a commute, when you step away to do the CPO work that pays for the side project.

The local CLI (@grass-ai/ide) is open-source under MIT and runs a direct WiFi connection between your phone and laptop — no cloud relay, nothing leaves your network except Claude's own API calls. The cloud VM product at codeongrass.com extends this further with a Daytona-powered VM that runs even when your laptop is off — one surface for every agent, always on. The free tier includes 10 hours with no credit card.

The BaselineBody workflow is the use case Grass was built for: long, multi-hour Claude Code sessions that span the gaps in a developer's day, where permission gates need to be handled in real time without being chained to a desk. One surface. Every agent. Always on.


FAQ

How is this different from using GitHub Copilot or ChatGPT for mobile development? The specific prompt constraints matter more than the model. The pre-flight check and continuous offload patterns work because they force opinionated, production-scoped output rather than generic code suggestions. Claude's larger context window makes it better for long framework orientation sessions where you need several related API decisions in context simultaneously. That said, the workflows above are prompt structures — they can be adapted to any model with a long enough context window.

Can this workflow actually replace a senior iOS developer? Not entirely. Claude handles routine decisions well: API selection, idiomatic patterns, code review on specific files. What it doesn't replicate is the pattern recognition that comes from a senior who has seen your specific architecture fail in production. Use these workflows to eliminate the 80% of decisions that don't require that judgment, and invest the time saved in the 20% that do.

What if I'm starting from scratch with no prior mobile experience? The gap audit prompt structure needs adjustment — instead of "what changed since I last shipped," ask Claude to help define an MVP scope before writing any code. The YouTube walkthrough on building a first app with Claude Code covers the zero-to-running-app path in detail. The Claude Code Handbook on freeCodeCamp is the most thorough written reference for the underlying capabilities.

How do I handle conflicting advice across sessions? Document decisions in your CLAUDE.md as you make them. Paste the relevant section at the start of each new session. Claude stays consistent with documented decisions; it only drifts when it doesn't know what you've already committed to.
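Documenting those decisions can be as lightweight as appending dated lines to CLAUDE.md — a sketch with illustrative dates and choices, not entries from the actual BaselineBody file:

```markdown
## Decision log
- 2026-01-12: SwiftData over Core Data (solo project, iOS 17+ floor)
- 2026-01-15: StoreKit 2 only; no server-side receipt validation for MVP
- 2026-01-20: BGAppRefreshTask for sync; no silent push for v1
```

Each entry is one line: date, the choice, and the constraint that drove it. That's enough for Claude to stay consistent across sessions.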

Do these workflows translate to Android? Directly. The gap audit prompt changes platform context: Kotlin, Jetpack Compose, Android 14+, Gradle. The pre-flight check and decision offload prompts are identical in structure. The r/androiddev community's question about whether AI tools actually help with real mobile performance work has the same answer as the BaselineBody case study — yes, when you use it as a structured pair, not a generator.
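The CLAUDE.md from Workflow 1 translates the same way. A sketch of the Android variant — Room, WorkManager, and the Play Billing Library are common idiomatic defaults here, not prescriptions from the BaselineBody build:

```markdown
# Project Context for Claude (Android)

## Platform
Android 14+, Jetpack Compose-first, Kotlin, single developer

## Architecture decisions (as of project start)
- Room for local persistence
- Play Billing Library for in-app purchases
- WorkManager for background sync
- Coroutines and Flow throughout (no RxJava)

## What not to suggest
Do not suggest XML layouts unless I ask.
```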


Next Steps

Start with the API gap audit — one prompt, one session, before you write any feature code. It's the highest-leverage use of Claude for returning mobile developers. Block two hours and run it before you scaffold anything.

If your sessions routinely run longer than a sitting, get started with Grass in 5 minutes to keep them alive when you step away. The BaselineBody build ran Claude Code sessions across full workdays — the workflow only scales if those sessions survive the hours you're away from your desk.