Master Orchestrator

Let Claude Code and Codex collaborate on your large-scale coding tasks. Decompose goals into DAGs, execute in parallel, converge on verified results.

The Problem with Single-Agent Coding

Large tasks break single AI sessions. You end up manually stitching results together.

🚨

Context Overflow

Refactoring 15 files? A single agent loses track of changes made earlier. Inconsistent edits, missed dependencies, broken imports.

🔄

No Parallelism

Independent tasks run sequentially. A 100-file fix that could finish in 2 minutes takes 30. Transient failures aren't retried.

🤔

Wrong Tool for the Job

Planning needs reasoning. Execution needs speed. Using one model for everything means compromising on both.

🔀

Manual Coordination

You become the orchestrator — running agents, checking outputs, feeding results to the next step. That's not automation.

How Master Orchestrator Solves This

One command. Automatic decomposition, parallel execution, cross-agent verification.

Input:   Your Goal (natural language)
Phase 1: Decompose (Claude)
Phase 2: Execute (Codex, parallel)
Phase 3: Review (Claude)
Output:  Verified (tested & merged)
# One command replaces hours of manual coordination
$ mo do "Add JWT auth with middleware, routes, tests, and docs"

# Decomposing goal into 6 tasks...
# Task 1: auth-middleware (claude) ─── running
# Task 2: token-utils (codex) ─── running
# Task 3: auth-routes (codex) ─── waiting [depends: 1,2]
# Task 4: auth-tests (codex) ─── waiting [depends: 1,3]
# Task 5: api-docs (codex) ─── waiting [depends: 3]
# Task 6: integration-review (claude) ─── waiting [depends: 4,5]

# All 6 tasks completed. 0 failures. Duration: 4m 23s

Single Agent vs. Master Orchestrator

| Capability | Single Agent | Master Orchestrator |
| --- | --- | --- |
| Multi-file refactoring | Context overflow, inconsistent edits | DAG decomposition, parallel execution |
| Bulk operations (100+ files) | Sequential, no retry | 16 parallel workers, auto-retry |
| Error recovery | Manual restart | Classification, backoff, fallback |
| Provider flexibility | Stuck with one model | Mix Claude + Codex per phase |
| Convergence detection | None | Plateau, regression, deterioration |
| Crash recovery | Start over | Checkpoint + resume |

Key Features

DAG Decomposition

Goals are automatically broken into dependent sub-tasks. Independent tasks run in parallel.
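The idea can be sketched as grouping a task graph into "waves" where every task in a wave depends only on earlier waves. The task ids and dependencies below mirror the example session above; the actual decomposition logic inside Master Orchestrator may differ.

```python
# Sketch: group a dependency graph into parallel execution waves (Kahn-style).
def parallel_waves(deps):
    """deps maps task id -> list of prerequisite task ids."""
    remaining = set(deps)
    done = set()
    waves = []
    while remaining:
        # A task is ready once all of its prerequisites have finished.
        ready = sorted(t for t in remaining if set(deps[t]) <= done)
        if not ready:
            raise ValueError("cycle detected in task graph")
        waves.append(ready)
        done.update(ready)
        remaining -= set(ready)
    return waves

# Dependencies from the JWT-auth example session:
deps = {1: [], 2: [], 3: [1, 2], 4: [1, 3], 5: [3], 6: [4, 5]}
print(parallel_waves(deps))  # [[1, 2], [3], [4, 5], [6]]
```

Tasks 1 and 2 have no prerequisites, so they run concurrently; each later wave starts only when its dependencies complete.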

Provider Routing

Route each phase to the best AI agent. Claude for reasoning, Codex for code generation.

Simple Mode

High-throughput bulk execution with 16 parallel workers, auto-retry, and syntax validation.
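The execution pattern can be sketched with a standard worker pool plus per-task retry; the worker count matches the blurb, but the retry policy and internals here are illustrative, not Simple Mode's actual implementation.

```python
# Sketch: bulk execution with a 16-worker pool and per-task retry.
from concurrent.futures import ThreadPoolExecutor

def run_with_retry(task, attempts=3):
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == attempts:
                raise  # exhausted retries: surface the failure

def run_bulk(tasks, workers=16):
    # Each task runs on its own worker; order of results matches input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_with_retry, tasks))

results = run_bulk([lambda i=i: i * i for i in range(5)])
print(results)  # [0, 1, 4, 9, 16]
```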

Error Intelligence

Classifies errors (rate limit, context overflow, transient) and applies the right recovery strategy.
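A toy classifier showing the shape of this idea: match an error message to one of the recovery strategies the doc names. Real classification likely inspects structured provider errors rather than strings; the patterns below are assumptions.

```python
# Sketch: map an error message to a recovery strategy.
def classify(message):
    text = message.lower()
    if "rate limit" in text or "429" in text:
        return "backoff"   # wait, then retry with exponential backoff
    if "context" in text and ("overflow" in text or "too long" in text):
        return "split"     # decompose the task into smaller pieces
    if "timeout" in text or "connection" in text:
        return "retry"     # transient: retry as-is
    return "fallback"      # unknown: route to an alternate provider

print(classify("429 Too Many Requests"))  # backoff
```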

Convergence Detection

Monitors for plateaus, regressions, and quality deterioration. Automatically escalates or rolls back.
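One way to sketch this: track a quality score per iteration and compare against a short window. The window size and threshold here are invented for illustration; the orchestrator's actual heuristics may differ.

```python
# Sketch: classify a run's trajectory from its score history.
def convergence_state(scores, window=3, min_gain=0.01):
    if len(scores) < window + 1:
        return "improving"        # not enough history to judge
    recent, prior = scores[-1], scores[-1 - window]
    if recent < prior:
        return "regressing"       # quality dropped: candidate for rollback
    if recent - prior < min_gain:
        return "plateau"          # negligible gain: escalate or stop
    return "improving"

print(convergence_state([0.2, 0.5, 0.7, 0.70, 0.70, 0.70]))  # plateau
```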

Crash Recovery

SQLite-backed checkpoints. Resume any interrupted run from where it left off.
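The checkpoint-and-resume pattern can be sketched with the standard `sqlite3` module. The table and column names below are invented for illustration and are not Master Orchestrator's actual schema.

```python
# Sketch: SQLite-backed checkpoints so an interrupted run can resume.
import sqlite3

def open_store(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS checkpoints (task TEXT PRIMARY KEY, status TEXT)")
    return db

def mark_done(db, task):
    db.execute("INSERT OR REPLACE INTO checkpoints VALUES (?, 'done')", (task,))
    db.commit()  # flushed to disk, so a crash after this point loses nothing

def pending(db, all_tasks):
    done = {row[0] for row in db.execute("SELECT task FROM checkpoints WHERE status = 'done'")}
    return [t for t in all_tasks if t not in done]

db = open_store()
mark_done(db, "auth-middleware")
print(pending(db, ["auth-middleware", "token-utils"]))  # ['token-utils']
```

On resume, only the tasks absent from the checkpoint table are re-queued.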

Budget Control

Per-provider budget limits with accounting mode. Never exceed your spend ceiling.
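A minimal sketch of a per-provider spend ledger with a hard ceiling; the budget figures and interface are invented for illustration, not the tool's actual accounting API.

```python
# Sketch: refuse any charge that would push a provider past its limit.
class Budget:
    def __init__(self, limits):
        self.limits = limits                       # e.g. dollars per provider
        self.spent = {p: 0.0 for p in limits}

    def charge(self, provider, cost):
        if self.spent[provider] + cost > self.limits[provider]:
            raise RuntimeError(f"budget exceeded for {provider}")
        self.spent[provider] += cost

budget = Budget({"claude": 5.00, "codex": 10.00})
budget.charge("codex", 2.50)
print(budget.spent["codex"])  # 2.5
```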

Self-Improvement

The orchestrator can analyze its own runs and propose improvements to your workflow.

Extensible Config

TOML-based configuration for providers, routing, retry policies, and validation rules.
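To illustrate the shape such a file could take, here is a hypothetical TOML sketch; the section and key names are invented and are not Master Orchestrator's actual schema.

```toml
# Hypothetical config sketch: providers, routing, and retry policy.
[providers.claude]
role = "reasoning"

[providers.codex]
role = "codegen"

[routing]
decompose = "claude"
execute = "codex"
review = "claude"

[retry]
max_attempts = 3
backoff_seconds = 2
```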

Get Started in 30 Seconds

# Install
$ git clone https://github.com/amber132/Master-Orchestrator.git
$ cd Master-Orchestrator
$ pip install -e ".[dev]"

# Run your first orchestrated task
$ mo do "Add input validation to all POST endpoints"