
How I Build: Stack, Workflow, and Shipping as a Solo Dev

engineering · workflow · ai · solo-dev

I've been building software under Cone Crows full-time since last May — almost ten months now. In that time, I've shipped Builds to both app stores, launched Harken, and have Grimly, Corvus, and Augur in various stages of development. Along the way, my tech stack and workflow have gone through their own evolution. This is a snapshot of where things stand today and how I actually get work done.

The Stack Evolution

When I started Builds, everything was new territory. The backend was Node.js, TypeScript, Fastify, and Apollo GraphQL. Supabase handled auth, the database, and file storage. The mobile app was React Native with Expo and TypeScript, using Apollo on the client side to talk to the API. It was a monorepo, with npm as the package manager. After a few months of development, Builds shipped to both app stores.

For Grimly, I followed the same pattern with one key change: REST instead of GraphQL. (I wrote about why I moved away from GraphQL in a previous post.) Grimly is also a monorepo. It started with a Svelte web front end, but we recently replaced that with Next.js. I kept the landing page inside the repo this time instead of maintaining it separately.

Harken is where things started to shift more noticeably. There was no mobile app — just a developer-facing SDK that needed to support React Native and Expo. For the backend and developer console, I reached for Next.js based on a recommendation, not prior experience. The rest of the stack was familiar: TypeScript, Fastify, REST, and Supabase. But I also discovered the Supabase CLI during Harken's development, and it immediately became a first-class part of my local development and CI/CD workflow. Harken uses pnpm instead of npm, and it's a monorepo with separate front ends for the developer console and landing page.
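For anyone unfamiliar with the Supabase CLI, the local loop it enables looks roughly like this (a sketch; the migration name and output path are made up, not from any of my repos):

```shell
supabase init                          # scaffold supabase/ config in the repo
supabase start                         # run Postgres, Auth, Storage locally in Docker
supabase migration new add_profiles    # create a new SQL migration file
supabase db reset                      # rebuild the local database from migrations
supabase gen types typescript --local > src/types/db.ts   # typed client bindings
```

Being able to rebuild the whole database from checked-in migrations is what makes it slot so cleanly into CI/CD.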

By the time I got to Corvus and Augur, the stack had converged. Every new project now starts with the same recipe:

  • Runtime & Language: Node.js + TypeScript
  • API: Fastify + REST
  • Data & Auth: Supabase (Postgres, Auth, Storage) + Supabase CLI
  • Front End: Next.js
  • Package Manager: pnpm
  • Repo Structure: Monorepo
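Concretely, the monorepo side of that recipe is just a standard pnpm workspace. A hypothetical layout (folder names here are illustrative, not from my actual repos):

```yaml
# pnpm-workspace.yaml: declares which folders are workspace packages
packages:
  - "apps/*"      # e.g. apps/web (Next.js), apps/api (Fastify)
  - "packages/*"  # shared TypeScript libs, generated Supabase types
```

With that in place, `pnpm --filter` runs scripts in one package, and shared code is just another workspace dependency.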

That's the stack I reach for without thinking about it. It's not the result of exhaustive research or careful benchmarking — it's what I arrived at through building real products and paying attention to what felt productive.

I've also found that I genuinely enjoy web development more than mobile. Not having to deal with app store submissions and the gatekeeping that comes with them is a real quality-of-life improvement. Builds will always have its mobile apps, but for everything else, the web is where I'd rather be.

The Workflow

Tracking Work

I use Linear for issue tracking. I started on the free tier, burned through the 250-ticket limit quickly, and upgraded to a paid plan. I keep it as lean as possible — no team or org account. That means nobody else can create or edit tickets, but as a solo dev, that's not a problem.

When it's time to pick up work, I try to respect the priority order I've set in Linear. But honestly, especially early in a project's lifecycle, I'm just iterating as fast as I can and not always bothering with tickets. Either way, once I decide what's next, I sit down and open Claude Code.

Refinement

This is where the real work starts — and it's the part of the process I've invested the most in refining.

After Claude orients itself on the repo (it reads the CLAUDE.md automatically, and I provide any additional context it needs), I tell it we're going to refine a feature. Sometimes I paste in something I've already thought through, like a Linear ticket. Sometimes I reference a feature document we've already started. And sometimes I just start talking.

In that last case, Claude becomes my sounding board. I'll work through everything iteratively: the technical approach, non-functional requirements, data flows, UI elements — all of it. Depending on complexity, this can take hours. It's not wasted time. This is where bad ideas get caught early and good ideas get sharpened.

As we refine, I ask Claude to capture everything into a feature document that lives in the repo, usually under /feature-log. I use a simple naming convention: 001-feature-name, 002-feature-name, with the index incrementing monotonically. Nothing fancy, but it works well enough for a solo dev.
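The "increment the index" step is trivial enough to sketch in shell. This is a hypothetical demo, not a script I actually use; the directory and doc names are made up:

```shell
# Next feature-doc index = highest existing index + 1, zero-padded to 3 digits.
dir=$(mktemp -d)                                   # stand-in for /feature-log
touch "$dir/001-onboarding.md" "$dir/002-billing.md"
last=$(ls "$dir" | sort | tail -n1 | cut -d- -f1)  # "002"
next=$(printf '%03d' $((10#$last + 1)))            # 10# forces base-10, since "008" would break as octal
echo "$next-new-feature.md"                        # → 003-new-feature.md
```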

That feature doc is essentially the PRD. It covers the background, the vision, and a phased implementation plan. Once it's in good shape, I hand it to Codex for review. This is a recent addition to my process — I use OpenAI's Codex CLI as a complement to Claude Code. Codex is good at critical analysis, and it regularly surfaces things that Claude and I didn't consider during refinement.

Development

Once the feature doc is solid, I start a fresh Claude Code context and point it at the document. From there, I let Claude work through the implementation plan until it hits an agreed-upon milestone.

When Claude tells me it's done, I don't push the code. First, I ask Codex to do a code review — all local, before anything hits a remote branch. Codex usually finds plenty of things to address: edge cases, style inconsistencies, potential issues. I feed those findings back to Claude, which addresses them. Then Codex reviews again. I repeat this loop several times, and each round typically surfaces fewer issues than the last. Once I'm satisfied, I package it up, commit, and push.

This Claude-then-Codex ping-pong has become one of the most valuable parts of my workflow. Using different AI tools for generation versus review gives me a form of cross-validation that catches things neither tool would catch alone. It's not perfect, but it's dramatically better than just shipping whatever the first pass produces.

Concurrency

Here's the part that makes all of this scale: I don't wait around.

When I tell Claude to go implement something, that can take a while. So I don't sit and watch. I switch to another session — usually a different project entirely — and start working on something else. I have several projects all running in parallel, and the backlog across them is practically infinite. At any given time, I have three to five active features in development simultaneously, spread across three different workstations.

I know a lot of people use git worktrees for this kind of thing. I tried them. I gave up as soon as I found out dotfiles weren't copied to the worktree. I'm sure there are ways to make that work, but since I'm already working across multiple projects and multiple repos, I didn't pursue it. Separate checkouts on separate machines work fine for me.
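The dotfile gap is easy to demonstrate. Worktrees share the repository's history, but untracked files like a gitignored `.env` stay behind in the original checkout (paths and branch names below are made up):

```shell
base=$(mktemp -d)
git init -q "$base/repo" && cd "$base/repo"
echo 'API_KEY=dev' > .env                      # untracked dotfile in the main checkout
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m 'init'
git worktree add -b feature-x "$base/feature-x" >/dev/null 2>&1
ls -A "$base/feature-x"                        # only .git; the .env did not come along
```

You can symlink or copy the dotfiles over per worktree, which is the usual workaround.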

That's It

I'm not trying to present this as the way to build software. There are a million blog posts about coding with AI, and I don't have anything novel to add to the discourse. This is just what I've found works well for me after ten months of daily development. The stack has converged to something predictable and productive. The workflow — refinement, implementation, cross-tool review — has made me significantly more effective than I was before AI tooling was part of my process. And the concurrency model means I'm rarely blocked and rarely idle.

If any of this resonates, great. If not, that's fine too. The best process is the one that helps you ship.


This is the third post on the Cone Crows engineering blog. Subscribe to the RSS feed to follow along.