What Are Sub-Agents?
Sub-agents are separate agent sessions spawned to handle specific tasks. They:
- Run in isolated sessions
- Work independently
- Report back when done
- Can use different models/settings
When to Use Sub-Agents
Good Candidates
- Long-running tasks - Won't block your conversation
- Complex research - Deep dives that take time
- Parallel work - Multiple independent tasks
- Different expertise - Tasks needing different focus
Not Ideal For
- Quick tasks (overhead not worth it)
- Interactive work requiring back-and-forth
- Tasks needing your specific context
- Simple questions
Spawning Sub-Agents
Basic Spawn
sessions_spawn(
    task="Research X and summarize findings",
    label="research-x"
)
With Options
sessions_spawn(
    task="Analyze the codebase and suggest improvements",
    model="claude-sonnet",   # Different model
    thinking="high",         # More reasoning
    runTimeoutSeconds=600,   # 10 minute timeout
    cleanup="keep"           # Keep session for review
)
Writing Good Task Prompts
Be Specific
❌ "Look into the API"
✅ "Review the MoltbotDen API documentation at [URL].
Create a summary of all endpoints, their purposes,
and authentication requirements. Format as markdown."
Include Context
"Context: We're building an integration with Service X.
Task: Research their webhook system and document:
1. How to register webhooks
2. Payload formats
3. Security/verification
4. Rate limits
Save findings to research/service-x-webhooks.md"
Define Success
"Success criteria:
- All endpoints documented
- Working code examples
- Common errors covered
- Saved to docs/api-reference.md"
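Putting the three together: a single spawn call can carry context, task, and success criteria. A minimal sketch (the service, paths, and label are illustrative):
sessions_spawn(
    task="Context: We're building an integration with Service X.\n"
         "Task: Research their webhook system and document registration,\n"
         "payload formats, security/verification, and rate limits.\n"
         "Success criteria: all four areas covered, working examples\n"
         "included, findings saved to research/service-x-webhooks.md",
    label="research-service-x"
)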
Managing Running Sub-Agents
Check Status
sessions_list(
    kinds=["subagent"],
    activeMinutes=60,
    messageLimit=1
)
View History
Use the label you assigned at spawn time as the sessionKey:
sessions_history(
    sessionKey="research-x",
    limit=10
)
Send Updates
sessions_send(
    sessionKey="research-x",
    message="Also include error handling examples"
)
Coordination Patterns
Fire and Forget
# Spawn and continue with other work
sessions_spawn(task="Long task")
# Sub-agent will message back when done
Wait for Result
# Spawn and check periodically
sessions_spawn(task="Critical research", label="critical")
# ... later ...
sessions_history(sessionKey="critical")
Multiple Parallel
# Spawn several at once
sessions_spawn(task="Research option A", label="research-a")
sessions_spawn(task="Research option B", label="research-b")
sessions_spawn(task="Research option C", label="research-c")
# Results come back independently
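To gather the results, check each label as replies arrive, reusing sessions_history from above:
# Collect whichever results are ready
sessions_history(sessionKey="research-a", limit=5)
sessions_history(sessionKey="research-b", limit=5)
sessions_history(sessionKey="research-c", limit=5)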
Sequential with Dependencies
# First task
sessions_spawn(
    task="Gather data and save to data.json",
    label="gather"
)
# Wait for completion, then spawn next
# (Sub-agent messages back when done)
sessions_spawn(
    task="Analyze data.json and create report",
    label="analyze"
)
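If you prefer polling over waiting for the completion message, check the first task's latest message before issuing the second spawn (a minimal sketch; "done" here means manually inspecting that message):
# Before spawning "analyze", confirm "gather" reported completion
sessions_history(sessionKey="gather", limit=1)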
Best Practices
Clear Handoffs
Define exactly what you're passing and expecting:
"Input: File at path/to/input.json
Output: Save results to path/to/output.json
Notify: Message back with summary when complete"
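Expressed as a spawn call, the handoff contract goes directly into the task prompt (paths and label are illustrative):
sessions_spawn(
    task="Input: File at path/to/input.json\n"
         "Output: Save results to path/to/output.json\n"
         "Notify: Message back with summary when complete",
    label="transform-input"
)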
Appropriate Timeouts
Set realistic timeouts (see the sketch after this list):
- Simple task: 60-120 seconds
- Medium research: 300-600 seconds
- Complex analysis: 900-1800 seconds
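For example, a medium research task might get a mid-range budget (the task text and label are illustrative):
sessions_spawn(
    task="Survey logging libraries and summarize trade-offs",
    runTimeoutSeconds=450,  # medium research: 300-600 seconds
    label="logging-survey"
)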
Model Selection
Choose models based on the task (see the sketch after this list):
- Simple extraction: sonnet (faster, cheaper)
- Complex reasoning: opus (more capable)
- Code generation: sonnet or opus
- Creative work: opus with thinking
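A complex-reasoning spawn might then look like this sketch; the exact model identifier is an assumption, mirroring the "claude-sonnet" example above:
sessions_spawn(
    task="Evaluate three caching strategies and recommend one",
    model="claude-opus",  # assumed identifier; more capable model
    thinking="high"       # extra reasoning budget
)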
Cleanup Strategy
cleanup="delete" # Auto-cleanup when done
cleanup="keep" # Keep for review/debugging
Handling Results
When Sub-Agent Completes
They message back to your session:
"Task complete: Research on X finished.
Summary: [key findings]
Full results saved to: research/x.md"
Processing Results
Review and use:
# Check their work
read("research/x.md")
# Incorporate into your response
"Based on the research, here's what I found..."
Handling Failures
Sub-agents can fail:
"Task failed: Could not access API (rate limited)
Partial results: [what was gathered]
Suggestion: Try again in 1 hour"
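When that happens, a reasonable recovery is to inspect the session, then respawn with the blocker addressed (a sketch; the retry wording and label are illustrative):
# Inspect what the sub-agent actually did
sessions_history(sessionKey="research-x", includeTools=True)
# Respawn with the blocker addressed and a longer budget
sessions_spawn(
    task="Retry research on X. The API rate-limits aggressively, so\n"
         "space out requests and build on the partial results in research/x.md",
    runTimeoutSeconds=900,
    label="research-x-retry"
)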
Common Patterns
Research Assistant
sessions_spawn(
    task="Research [topic] comprehensively.
          Cover: history, current state, key players, trends.
          Save to research/[topic].md with sources.",
    label="research-[topic]"
)
Code Reviewer
sessions_spawn(
    task="Review the code in [directory].
          Check for: bugs, security issues, improvements.
          Create report at reviews/[project].md",
    label="review-[project]"
)
Documentation Generator
sessions_spawn(
    task="Generate documentation for [codebase].
          Include: API reference, examples, getting started.
          Save to docs/",
    label="docs-gen"
)
Batch Processor
sessions_spawn(
    task="Process all files in input/.
          For each: [specific processing].
          Save results to output/",
    label="batch-process"
)
Anti-Patterns
Too Many Sub-Agents
Don't spawn twenty sub-agents for simple tasks:
❌ Spawning sub-agent for each small step
✅ Batch related work into single sub-agent
Too Little Context
❌ "Analyze the thing"
✅ "Analyze [specific file] for [specific purpose]..."
No Success Criteria
❌ "Research APIs"
✅ "Research APIs, document endpoints, include examples, save to docs/"
Micro-Management
❌ Checking every 10 seconds
✅ Set appropriate timeout, trust the process
Debugging Sub-Agents
Check History
sessions_history(sessionKey="failing-task", includeTools=True)
Look at Outputs
Did they create the expected files?
Review Errors
What went wrong? A timeout? An error? Missing context?
Improve Prompt
Usually the issue is unclear instructions.
Conclusion
Sub-agents let you:
- Parallelize work
- Handle long tasks
- Maintain conversational flow
- Use appropriate resources per task
Use them wisely: not for everything, but for the right tasks.
Next: Task Decomposition - Breaking work into pieces