I spent five weeks building an orchestration system for AI agents. Two months later I deleted it. Here’s what that taught me about building on ground that won’t stop moving.
Six Windows
I’d been applying park documents manually. End of session, park the learnings. Open the project, ask Claude to apply them. Repeat for each project.
One morning I realized I could parallelize this. Six projects. Six VS Code windows. Six Claude Code sessions. Each applying parked learnings to its project’s documentation.
Coffee in hand, scanning across windows. Answer a question here. Approve a change there. Context switches every thirty seconds. For about an hour, it felt like I’d figured something out.
Then my working memory started to fail. Which project was waiting for input? Did I already answer that question or just think about answering it? By hour two I was approving things I should have questioned and missing notifications entirely.
The parallelization worked. My brain didn’t scale.
What I Built
Six agents, one human, zero coordination. I needed to fix the coordination part.
I called it Hive. One orchestrator, many workers, structured communication through a file protocol:
```
Orchestrator (you, in notes repo)
|
+-- Worker: project-a (background Claude session)
+-- Worker: project-b (background Claude session)
+-- Worker: project-c (background Claude session)
+-- ... spawned on demand
```
Workers wrote questions to status.json and waited. The orchestrator polled for changes. You handled questions when you were ready, not when they interrupted.
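A minimal sketch of what that protocol could look like. The field names and directory layout here are my guesses, not Hive’s actual schema, and the sed-based extractor stands in for `jq` to keep the sketch dependency-free:

```shell
# One worker directory per project; a blocked worker writes its state
# and question to status.json, then waits.
WORKER_DIR=$(mktemp -d)
mkdir -p "$WORKER_DIR/project-a"

cat > "$WORKER_DIR/project-a/status.json" <<'EOF'
{
  "project": "project-a",
  "state": "waiting_for_input",
  "question": "Overwrite CONTRIBUTING.md or append to it?"
}
EOF

# Tiny field extractor. A real script would use jq; sed works here
# because the JSON is flat, one field per line.
field() { sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p" "$2"; }

# Orchestrator side: one pass of the polling loop over all workers,
# surfacing queued questions instead of being interrupted by them.
poll_once() {
  for f in "$WORKER_DIR"/*/status.json; do
    if [ "$(field state "$f")" = "waiting_for_input" ]; then
      printf '%s asks: %s\n' "$(field project "$f")" "$(field question "$f")"
    fi
  done
}

poll_once   # in practice, run in a loop with a sleep between passes
```

The point of the shape, not the details: questions become data on disk, so nothing is lost if you answer them an hour later.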
| Manual Parallel | Hive |
|---|---|
| Interrupt-driven | Poll-driven |
| Questions demand immediate attention | Questions queue until you’re ready |
| Context in your head | Context in status.json |
| Can’t step away | Workers continue while you’re gone |
It evolved over five weeks. Each week fixed the previous week’s bottleneck:
- Week 1: Manual multi-window chaos.
- Week 2: Built the status.json protocol.
- Week 3: Added transcript logging. Could review what happened while away.
- Week 4: Added session resume. Workers could pause and continue.
- Week 5: Used Hive to extract learnings from 50+ park documents across 9 projects.
Bash scripts for spawning, monitoring, resuming. A polling loop. A work queue with priorities and dependencies. Rules I learned the hard way: the orchestrator never does the work, workers self-correct before asking, always run pre-flight checks.
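The spawn and resume scripts could be sketched like this. Everything here is illustrative: the `CLAUDE.md` pre-flight check is a hypothetical stand-in for “workers need full project context,” and the `claude -p` / `claude --resume` flags are my assumptions about the CLI, so the sketch only prints the command it would run:

```shell
# Hypothetical spawn-with-pre-flight, dry-run only.
spawn_worker() {  # usage: spawn_worker <project-dir> <task>
  dir=$1; task=$2
  # Pre-flight: refuse to spawn a worker without project context on disk.
  [ -f "$dir/CLAUDE.md" ] || { echo "preflight failed: no CLAUDE.md in $dir" >&2; return 1; }
  echo "spawning" > "$dir/state"
  # Real version would background the session and log the transcript:
  #   (cd "$dir" && claude -p "$task" > transcript.log 2>&1 &)
  echo "would run: claude -p \"$task\" in $dir"
}

# Hypothetical resume for a paused worker session.
resume_worker() {  # usage: resume_worker <project-dir> <session-id>
  dir=$1; sid=$2
  #   (cd "$dir" && claude --resume "$sid" >> transcript.log 2>&1 &)
  echo "would run: claude --resume $sid in $dir"
}
```

The orchestrator only ever calls functions like these; it never edits a project file itself.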
Real infrastructure. It worked.
Then the Platform Shipped It
In early 2026, Claude Code released Agent Teams. Built-in task lists. Built-in message passing. Built-in team coordination. Native context sharing instead of file protocols. Proper message delivery instead of polling.
I ran an audit with three agents comparing Hive to Agent Teams. Verdict: mostly obsolete. 233MB of worker directories. 2,550 lines of lesson index. 153 extracted lessons. All that value fit into a 75-line memory file.
I started archiving.
Five Weeks of Tuition
Five weeks building. Two months using. One afternoon to archive. Sounds like wasted effort.
It wasn’t. I didn’t understand orchestration until I built it by hand. The principles I extracted (orchestrator never executes, context lives where it lives, workers need full project context) transferred directly to how I use Agent Teams now. I’m faster with the built-in tool because I built the handmade version first.
What Hive taught me:
- When parallelization helps vs. when it’s premature
- File-based state is more durable than in-memory state
- Polling beats interrupting for human attention management
- The orchestrator’s job is coordination, never execution
- Workers with full project context outperform workers with instructions only
Every one of those applies to Agent Teams. None of them are obvious from reading the docs.
Build Like It’s Compost
Any custom AI tooling you build today has a shelf life measured in months. Stop treating that as a problem. Treat it as a design constraint.
Build for learning, not permanence. The value of Hive was the orchestration intuition I developed writing it, not the bash scripts.
Extract principles before archiving. When I retired Hive, I pulled 153 lessons into a compact memory file. Implementation gone. Knowledge compounds.
Notice when you’re defending your tools. The moment I caught myself arguing “but mine does X better” against Agent Teams, I knew I was protecting the implementation instead of using the insight.
Assume the platform will ship it in six months. This changes how you invest. Less polish, more learning. Ship the ugly version. Get the lesson. Move on.
What You Keep
When the platform absorbs your pattern, the code is gone. The architecture diagrams are gone. The bash scripts are gone.
You keep the judgment. When to parallelize, when not to. How to isolate context. Why durable state matters. You learn those things by building something, using it under pressure, and watching where it breaks. Documentation doesn’t teach that.
Five weeks building. Two months using. One afternoon to archive. 75 lines of extracted principles that make me better at every orchestration tool I touch from here on.
That’s a good trade.
Sources: Claude Code best practices, The Rutter Pattern