
self-improvement

Captures learnings, errors, and corrections to enable continuous improvement.

Installation

npx clawhub@latest install self-improvement

View the full skill documentation and source below.

Documentation

Self-Improvement Skill

Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.

Quick Reference

| Situation | Action |
| --- | --- |
| Command/operation fails | Log to .learnings/ERRORS.md |
| User corrects you | Log to .learnings/LEARNINGS.md with category correction |
| User wants missing feature | Log to .learnings/FEATURE_REQUESTS.md |
| API/external tool fails | Log to .learnings/ERRORS.md with integration details |
| Knowledge was outdated | Log to .learnings/LEARNINGS.md with category knowledge_gap |
| Found better approach | Log to .learnings/LEARNINGS.md with category best_practice |
| Similar to existing entry | Link with **See Also**, consider priority bump |
| Broadly applicable learning | Promote to CLAUDE.md, AGENTS.md, and/or .github/copilot-instructions.md |
| Workflow improvements | Promote to AGENTS.md (clawdbot workspace) |
| Tool gotchas | Promote to TOOLS.md (clawdbot workspace) |
| Behavioral patterns | Promote to SOUL.md (clawdbot workspace) |

Setup

Create .learnings/ directory in project root if it doesn't exist:

mkdir -p .learnings

Copy templates from assets/ or create files with headers.
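
If the assets aren't at hand, a minimal bootstrap like this works (the header text is illustrative; match whatever the templates use):

mkdir -p .learnings
for f in LEARNINGS ERRORS FEATURE_REQUESTS; do
  # create each log with a title header, but never overwrite existing entries
  [ -f ".learnings/$f.md" ] || printf '# %s\n\n' "$f" > ".learnings/$f.md"
done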

Logging Format

Learning Entry

Append to .learnings/LEARNINGS.md:

## [LRN-YYYYMMDD-XXX] category

**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
One-line description of what was learned

### Details
Full context: what happened, what was wrong, what's correct

### Suggested Action
Specific fix or improvement to make

### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)

---

Error Entry

Append to .learnings/ERRORS.md:

## [ERR-YYYYMMDD-XXX] skill_or_command_name

**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
Brief description of what failed

### Error
Actual error message or output

### Context
- Command/operation attempted
- Input or parameters used
- Environment details if relevant

### Suggested Fix
If identifiable, what might resolve this

### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)

---
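
In practice, an agent can append an entry straight from the shell with a heredoc. An abridged sketch, with every value illustrative:

cat >> .learnings/ERRORS.md <<'EOF'
## [ERR-20250115-001] example_build_step

**Logged**: 2025-01-15T14:32:00Z
**Priority**: high
**Status**: pending
**Area**: infra

### Summary
Build step failed on a fresh checkout

### Error
(paste the actual error output here)

---
EOF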

Feature Request Entry

Append to .learnings/FEATURE_REQUESTS.md:

## [FEAT-YYYYMMDD-XXX] capability_name

**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Requested Capability
What the user wanted to do

### User Context
Why they needed it, what problem they're solving

### Complexity Estimate
simple | medium | complex

### Suggested Implementation
How this could be built, what it might extend

### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name

---

ID Generation

Format: TYPE-YYYYMMDD-XXX

- TYPE: LRN (learning), ERR (error), FEAT (feature)
- YYYYMMDD: Current date
- XXX: Sequential number or random 3 chars (e.g., 001, A7B)

Examples: LRN-20250115-001, ERR-20250115-A3F, FEAT-20250115-002
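
A sequential ID can be generated in the shell; a sketch (any scheme matching the format above works):

today=$(date +%Y%m%d)
# count today's error entries; a missing file or zero matches both yield 001 below
n=$(grep -c "ERR-$today" .learnings/ERRORS.md 2>/dev/null)
printf 'ERR-%s-%03d\n' "$today" "$(( ${n:-0} + 1 ))"   # e.g. ERR-20250115-001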

Resolving Entries

When an issue is fixed, update the entry:

1. Change **Status**: pending → **Status**: resolved
2. Add a resolution block after Metadata:

### Resolution
- **Resolved**: 2025-01-16T09:00:00Z
- **Commit/PR**: abc123 or #42
- **Notes**: Brief description of what was done

Other status values:

- in_progress - Actively being worked on
- wont_fix - Decided not to address (add reason in Resolution notes)
- promoted - Elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md


Promoting to Project Memory

When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.

When to Promote

- Learning applies across multiple files/features
- Knowledge any contributor (human or AI) should know
- Prevents recurring mistakes
- Documents project-specific conventions

Promotion Targets

| Target | What Belongs There |
| --- | --- |
| CLAUDE.md | Project facts, conventions, gotchas for all Claude interactions |
| AGENTS.md | Agent-specific workflows, tool usage patterns, automation rules |
| .github/copilot-instructions.md | Project context and conventions for GitHub Copilot |
| SOUL.md | Behavioral guidelines, communication style, principles (clawdbot) |
| TOOLS.md | Tool capabilities, usage patterns, integration gotchas (clawdbot) |

How to Promote

1. Distill the learning into a concise rule or fact
2. Add it to the appropriate section in the target file (create the file if needed)
3. Update the original entry:
   - Change **Status**: pending → **Status**: promoted
   - Add **Promoted**: CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md

Promotion Examples

Learning (verbose):

Project uses pnpm workspaces. Attempted npm install but failed.
Lock file is pnpm-lock.yaml. Must use pnpm install.

In CLAUDE.md (concise):

## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`

Learning (verbose):

When modifying API endpoints, must regenerate TypeScript client.
Forgetting this causes type mismatches at runtime.

In AGENTS.md (actionable):

## After API Changes
1. Regenerate client: `pnpm run generate:api`
2. Check for type errors: `pnpm tsc --noEmit`

Recurring Pattern Detection

If logging something similar to an existing entry (see the search sketch below):

1. Search first: grep -r "keyword" .learnings/
2. Link entries: Add **See Also**: ERR-20250110-001 in Metadata
3. Bump priority if the issue keeps recurring
4. Consider a systemic fix - recurring issues often indicate:
   - Missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
   - Missing automation (→ add to AGENTS.md)
   - Architectural problem (→ create a tech debt ticket)
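
A quick pre-logging check might look like this (the keyword and output handling are illustrative):

keyword="timeout"                       # the term you're about to log
files=$(grep -rl "$keyword" .learnings/ 2>/dev/null)
if [ -n "$files" ]; then
  echo "Possible duplicates - link them via **See Also**:"
  grep -h "^## \[" $files               # entry headers in the matching files
fi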

Periodic Review

Review .learnings/ at natural breakpoints:

When to Review

- Before starting a new major task
- After completing a feature
- When working in an area with past learnings
- Weekly during active development

Quick Status Check

# Count pending items
grep -h "Status\*\*: pending" .learnings/*.md | wc -l

# List pending high-priority items (-h keeps filename prefixes out so the header filter matches)
grep -h -B5 "Priority\*\*: high" .learnings/*.md | grep "^## \["

# List files containing learnings for a specific area
grep -l "Area\*\*: backend" .learnings/*.md

Review Actions

- Resolve fixed items
- Promote applicable learnings
- Link related entries
- Escalate recurring issues

Detection Triggers

Automatically log when you notice:

Corrections (→ learning with correction category):

- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."

Feature Requests (→ feature request):

- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."

Knowledge Gaps (→ learning with knowledge_gap category):

- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding

Errors (→ error entry):

- Command returns non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure


Priority Guidelines

| Priority | When to Use |
| --- | --- |
| critical | Blocks core functionality, data loss risk, security issue |
| high | Significant impact, affects common workflows, recurring issue |
| medium | Moderate impact, workaround exists |
| low | Minor inconvenience, edge case, nice-to-have |

Area Tags

Use to filter learnings by codebase region:

| Area | Scope |
| --- | --- |
| frontend | UI, components, client-side code |
| backend | API, services, server-side code |
| infra | CI/CD, deployment, Docker, cloud |
| tests | Test files, testing utilities, coverage |
| docs | Documentation, comments, READMEs |
| config | Configuration files, environment, settings |

Best Practices

- Log immediately - context is freshest right after the issue
- Be specific - future agents need to understand quickly
- Include reproduction steps - especially for errors
- Link related files - makes fixes easier
- Suggest concrete fixes - not just "investigate"
- Use consistent categories - enables filtering
- Promote aggressively - if in doubt, add to CLAUDE.md or .github/copilot-instructions.md
- Review regularly - stale learnings lose value

Gitignore Options

Keep learnings local (per-developer):

.learnings/

Track learnings in repo (team-wide): don't add to .gitignore - learnings become shared knowledge.

Hybrid (track templates, ignore entries):

.learnings/*.md
!.learnings/.gitkeep

Hook Integration

Enable automatic reminders through agent hooks. This is opt-in - you must explicitly configure hooks.

Quick Setup (Claude Code / Codex)

Create .claude/settings.json in your project:

{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }]
  }
}

This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).
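
Conceptually, an activator can be as small as a script that prints the reminder, since Claude Code adds a UserPromptSubmit hook's stdout to the agent's context. A sketch only - the bundled scripts/activator.sh is the real implementation:

#!/usr/bin/env sh
# Illustrative activator-style hook, not the bundled script.
# Whatever this prints is injected as context for the next turn.
cat <<'EOF'
Reminder: if this task surfaced an error, a correction, or a new
convention, log it to .learnings/ using the self-improvement format.
EOF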

Full Setup (With Error Detection)

{
  "hooks": {
    "UserPromptSubmit": [{
      "matcher": "",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/activator.sh"
      }]
    }],
    "PostToolUse": [{
      "matcher": "Bash",
      "hooks": [{
        "type": "command",
        "command": "./skills/self-improvement/scripts/error-detector.sh"
      }]
    }]
  }
}

Available Hook Scripts

| Script | Hook Type | Purpose |
| --- | --- | --- |
| scripts/activator.sh | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| scripts/error-detector.sh | PostToolUse (Bash) | Triggers on command errors |

See references/hooks-setup.md for detailed configuration and troubleshooting.
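
For orientation only, an error-detector-style hook might scan the tool result for failure markers. This sketch assumes the hook payload arrives as JSON on stdin and that jq is installed; the .tool_response field name is an assumption, not a documented contract:

#!/usr/bin/env sh
# Illustrative PostToolUse sketch, not the bundled script.
payload=$(cat)   # hook input is assumed to be JSON on stdin
if printf '%s' "$payload" | \
   jq -e '.tool_response | tostring | test("(?i)error|non-zero")' >/dev/null 2>&1; then
  echo "A command appears to have failed - consider logging it to .learnings/ERRORS.md"
fi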

Automatic Skill Extraction

When a learning is valuable enough to become a reusable skill, extract it using the provided helper.

Skill Extraction Criteria

A learning qualifies for skill extraction when ANY of these apply:

| Criterion | Description |
| --- | --- |
| Recurring | Has See Also links to 2+ similar issues |
| Verified | Status is resolved with working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |

Extraction Workflow

1. Identify candidate: Learning meets the extraction criteria
2. Run the helper (or create manually):

   ./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
   ./skills/self-improvement/scripts/extract-skill.sh skill-name

3. Customize SKILL.md: Fill in the template with the learning content
4. Update the learning: Set status to promoted_to_skill, add Skill-Path
5. Verify: Read the skill in a fresh session to ensure it's self-contained
Manual Extraction

If you prefer manual creation:

1. Create skills/<skill-name>/SKILL.md
2. Use the template from assets/SKILL-TEMPLATE.md
3. Follow the Agent Skills spec:
   - YAML frontmatter with name and description
   - Name must match folder name
   - No README.md inside skill folder
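
A minimal manual creation, under the constraints above, might look like this sketch (the skill name and description are placeholders):

skill="retry-flaky-apis"               # hypothetical; must match the folder name
mkdir -p "skills/$skill"
cat > "skills/$skill/SKILL.md" <<EOF
---
name: $skill
description: When and how to wrap flaky external API calls in bounded retries.
---

Distilled learning content goes here.
EOF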

Extraction Detection Triggers

Watch for these signals that a learning should become a skill:

In conversation:

- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"

In learning entries:

- Multiple See Also links (recurring issue)
- High priority + resolved status
- Category: best_practice with broad applicability
- User feedback praising the solution


Skill Quality Gates

Before extraction, verify:

- Solution is tested and working
- Description is clear without original context
- Code examples are self-contained
- No project-specific hardcoded values
- Follows skill naming conventions (lowercase, hyphens)
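
The mechanical gates can be checked from the shell; a sketch (the path is hypothetical):

skill_dir="skills/retry-flaky-apis"    # hypothetical skill folder
name=$(basename "$skill_dir")
# naming convention: lowercase letters, digits, hyphens
echo "$name" | grep -Eq '^[a-z][a-z0-9-]*$' || echo "bad name: $name"
# spec: no README.md inside the skill folder
[ -e "$skill_dir/README.md" ] && echo "remove $skill_dir/README.md"
# frontmatter name must match the folder name
grep -q "^name: $name$" "$skill_dir/SKILL.md" || echo "frontmatter name mismatch"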

Multi-Agent Support

This skill works across different AI coding agents with agent-specific activation.

Claude Code

Activation: Hooks (UserPromptSubmit, PostToolUse)
Setup: .claude/settings.json with hook configuration
Detection: Automatic via hook scripts

Codex CLI

Activation: Hooks (same pattern as Claude Code)
Setup: .codex/settings.json with hook configuration
Detection: Automatic via hook scripts

GitHub Copilot

Activation: Manual (no hook support)
Setup: Add to .github/copilot-instructions.md:

## Self-Improvement

After solving non-obvious issues, consider logging to `.learnings/`:
1. Use format from self-improvement skill
2. Link related entries with See Also
3. Promote high-value learnings to skills

Ask in chat: "Should I log this as a learning?"

Detection: Manual review at session end

Clawdbot

Activation: Workspace injection + inter-agent messaging
Setup: Configure workspace path in ~/.clawdbot/clawdbot.json
Detection: Via session tools and workspace files (AGENTS.md, SOUL.md, TOOLS.md)

Clawdbot uses a workspace-based model with injected prompt files. See references/clawdbot-integration.md for detailed setup.

Agent-Agnostic Guidance

Regardless of agent, apply self-improvement when you:

- Discover something non-obvious - the solution wasn't immediate
- Correct yourself - the initial approach was wrong
- Learn project conventions - discovered undocumented patterns
- Hit unexpected errors - especially if diagnosis was difficult
- Find better approaches - improved on your original solution

Copilot Chat Integration

For Copilot users, add this to your prompts when relevant:

After completing this task, evaluate if any learnings should be logged to .learnings/ using the self-improvement skill format.

Or use quick prompts:

- "Log this to learnings"
- "Create a skill from this solution"
- "Check .learnings/ for related issues"


Clawdbot Integration

Clawdbot uses workspace-based prompt injection with specialized files for different concerns.

Workspace Structure

~/clawd/                   # Default workspace (configurable)
├── AGENTS.md              # Multi-agent workflows, delegation patterns
├── SOUL.md                # Behavioral guidelines, communication style
├── TOOLS.md               # Tool capabilities, MCP integrations
└── sessions/              # Session transcripts (auto-managed)

Clawdbot Promotion Targets

| Learning Type | Promote To | Example |
| --- | --- | --- |
| Agent coordination | AGENTS.md | "Delegate file searches to explore agent" |
| Communication style | SOUL.md | "Be concise, avoid disclaimers" |
| Tool gotchas | TOOLS.md | "MCP server X requires auth refresh" |
| Project facts | CLAUDE.md | Standard project conventions |

Inter-Agent Learning

Clawdbot supports session-based communication:

- sessions_list - See active/recent sessions
- sessions_history - Read transcript from another session
- sessions_send - Send message to another session

Hybrid Setup (Claude Code + Clawdbot)

When using both:

1. Keep .learnings/ for project-specific learnings
2. Use clawdbot workspace files for cross-project patterns
3. Sync high-value learnings to both systems

See references/clawdbot-integration.md for complete setup, promotion formats, and troubleshooting.