
firecrawl-skills

A Firecrawl skill for OpenClaw agents.

Installation

npx clawhub@latest install firecrawl-skills

View the full skill documentation and source below.

Documentation

Firecrawl CLI

Use the firecrawl CLI to fetch and search the web. Firecrawl returns clean markdown optimized for LLM context windows, handles JavaScript rendering, bypasses common blocks, and provides structured data.

Installation

Check status, auth, and rate limits:

firecrawl --status

Output when ready:

🔥 firecrawl cli v1.0.2

  ● Authenticated via FIRECRAWL_API_KEY
  Concurrency: 0/100 jobs (parallel scrape limit)
  Credits: 500,000 remaining

• Concurrency: max parallel jobs. Run parallel operations close to this limit, but not above it.
• Credits: remaining API credits. Each scrape/crawl consumes credits.

If not installed: npm install -g firecrawl-cli

If the user is not logged in, always refer to the installation rules in rules/install.md for more information.

Authentication

If not authenticated, run:

firecrawl login --browser

The --browser flag automatically opens the browser for authentication without prompting.
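
For unattended setups, the status check can gate the login step. A minimal sketch, assuming firecrawl --status exits with a non-zero code when unauthenticated (verify this against your CLI version):

# Only open the browser login when the status check fails.
# Assumption: --status exits non-zero if not authenticated.
firecrawl --status || firecrawl login --browser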

Organization

Create a .firecrawl/ folder in the working directory (unless it already exists) to store results, and add .firecrawl/ to the .gitignore file if it is not already there (see the setup sketch after the examples below). Always use -o to write results directly to a file, which avoids flooding the context window:

# Search the web (most common operation)
firecrawl search "your query" -o .firecrawl/search-{query}.json

# Search with scraping enabled
firecrawl search "your query" --scrape -o .firecrawl/search-{query}-scraped.json

# Scrape a page
firecrawl scrape <url> -o .firecrawl/{site}-{path}.md

Examples:

.firecrawl/search-react_server_components.json
.firecrawl/search-ai_news-scraped.json
.firecrawl/docs.github.com-actions-overview.md
.firecrawl/firecrawl.dev.md
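
A minimal setup sketch for the folder and the .gitignore entry (assumes a POSIX shell and a git repository in the working directory):

# Create the results folder and ignore it in git (idempotent).
mkdir -p .firecrawl
grep -qxF '.firecrawl/' .gitignore 2>/dev/null || echo '.firecrawl/' >> .gitignore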

Commands

Search - Web search with optional scraping

# Basic search (human-readable output)
firecrawl search "your query" -o .firecrawl/search-query.txt

# JSON output (recommended for parsing)
firecrawl search "your query" -o .firecrawl/search-query.json --json

# Limit results
firecrawl search "AI news" --limit 10 -o .firecrawl/search-ai-news.json --json

# Search specific sources
firecrawl search "tech startups" --sources news -o .firecrawl/search-news.json --json
firecrawl search "landscapes" --sources images -o .firecrawl/search-images.json --json
firecrawl search "machine learning" --sources web,news,images -o .firecrawl/search-ml.json --json

# Filter by category (GitHub repos, research papers, PDFs)
firecrawl search "web scraping python" --categories github -o .firecrawl/search-github.json --json
firecrawl search "transformer architecture" --categories research -o .firecrawl/search-research.json --json

# Time-based search
firecrawl search "AI announcements" --tbs qdr:d -o .firecrawl/search-today.json --json  # Past day
firecrawl search "tech news" --tbs qdr:w -o .firecrawl/search-week.json --json          # Past week

# Location-based search
firecrawl search "restaurants" --location "San Francisco,California,United States" -o .firecrawl/search-sf.json --json
firecrawl search "local news" --country DE -o .firecrawl/search-germany.json --json

# Search AND scrape content from results
firecrawl search "firecrawl tutorials" --scrape -o .firecrawl/search-scraped.json --json
firecrawl search "API docs" --scrape --scrape-formats markdown,links -o .firecrawl/search-docs.json --json

Search Options:

Option                        Description
--limit <n>                   Maximum results (default: 5, max: 100)
--sources <list>              Comma-separated: web, images, news (default: web)
--categories <list>           Comma-separated: github, research, pdf
--tbs <filter>                Time filter: qdr:h (hour), qdr:d (day), qdr:w (week), qdr:m (month), qdr:y (year)
--location <string>           Geo-targeting (e.g., "Germany")
--country <code>              ISO country code (default: US)
--scrape                      Enable scraping of search results
--scrape-formats <formats>    Scrape formats when --scrape is enabled (default: markdown)
-o, --output <file>           Save to file
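
The {query} part of the filename convention replaces spaces with underscores, as in the examples above. A small sketch of deriving it (the tr call reflects that naming convention, not a CLI feature):

# Build the underscore slug for the output filename.
QUERY="react server components"
SLUG=$(printf '%s' "$QUERY" | tr ' ' '_')
firecrawl search "$QUERY" -o ".firecrawl/search-${SLUG}.json" --json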

Scrape - Single page content extraction

# Basic scrape (markdown output)
firecrawl scrape <url> -o .firecrawl/example.md

# Get raw HTML
firecrawl scrape <url> --html -o .firecrawl/example.html

# Multiple formats (JSON output)
firecrawl scrape <url> --format markdown,links -o .firecrawl/example.json

# Main content only (removes nav, footer, ads)
firecrawl scrape <url> --only-main-content -o .firecrawl/example.md

# Wait for JS to render
firecrawl scrape <url> --wait-for 3000 -o .firecrawl/spa.md

# Extract links only
firecrawl scrape <url> --format links -o .firecrawl/links.json

# Include/exclude specific HTML tags
firecrawl scrape <url> --include-tags article,main -o .firecrawl/article.md
firecrawl scrape <url> --exclude-tags nav,aside,.ad -o .firecrawl/clean.md

Scrape Options:

Option                        Description
-f, --format <formats>        Output format(s): markdown, html, rawHtml, links, screenshot, json
-H, --html                    Shortcut for --format html
--only-main-content           Extract main content only
--wait-for <ms>               Wait before scraping (for JS-rendered content)
--include-tags <tags>         Only include specific HTML tags
--exclude-tags <tags>         Exclude specific HTML tags
-o, --output <file>           Save to file
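
A common pattern is scraping with --format links and feeding the result into further work. A sketch, assuming the links end up under a data.links array in the JSON file (check the real structure with head first):

# Extract links from a page, then preview the first few (sketch).
firecrawl scrape <url> --format links -o .firecrawl/links.json
jq -r '.data.links[]' .firecrawl/links.json | head -5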

Crawl - Crawl an entire website

# Start a crawl (returns job ID)
firecrawl crawl <url>

# Wait for crawl to complete
firecrawl crawl <url> --wait

# With progress indicator
firecrawl crawl <url> --wait --progress

# Check crawl status
firecrawl crawl <job-id>

# Limit pages
firecrawl crawl <url> --limit 100 --max-depth 3

# Crawl blog section only
firecrawl crawl <url> --include-paths /blog,/posts

# Exclude admin pages
firecrawl crawl <url> --exclude-paths /admin,/login

# Crawl with rate limiting
firecrawl crawl <url> --delay 1000 --max-concurrency 2

# Save results
firecrawl crawl <url> --wait -o .firecrawl/crawl-results.json --pretty
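
When not using --wait, the returned job ID can be polled until the crawl finishes. A hedged sketch; extracting the ID assumes it is the last token of the start command's output, which should be verified against the actual CLI output:

# Start a crawl, capture the job ID, and poll until completion (sketch).
JOB_ID=$(firecrawl crawl <url> --limit 50 | awk 'END {print $NF}')
while ! firecrawl crawl "$JOB_ID" | grep -qi "completed"; do
  sleep 10   # assumption: status output mentions "completed" when done
done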

Crawl Options:

Option                        Description
--wait                        Wait for crawl to complete
--progress                    Show progress while waiting
--limit <n>                   Maximum pages to crawl
--max-depth <n>               Maximum crawl depth
--include-paths <paths>       Only crawl matching paths
--exclude-paths <paths>       Skip matching paths
--delay <ms>                  Delay between requests
--max-concurrency <n>         Max concurrent requests

Map - Discover all URLs on a site

# List all URLs (one per line)
firecrawl map <url> -o .firecrawl/urls.txt

# Output as JSON
firecrawl map <url> --json -o .firecrawl/urls.json

# Search for specific URLs
firecrawl map <url> --search "blog" -o .firecrawl/blog-urls.txt

# Limit results
firecrawl map <url> --limit 500 -o .firecrawl/urls.txt

# Include subdomains
firecrawl map <url> --include-subdomains -o .firecrawl/all-urls.txt

Map Options:

Option                        Description
--limit <n>                   Maximum URLs to discover
--search <query>              Filter URLs by search query
--sitemap <mode>              Sitemap handling: include, skip, or only
--include-subdomains          Include subdomains
--json                        Output as JSON
-o, --output <file>           Save to file
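
Since map writes one URL per line, its output plugs directly into the parallel-scrape pattern described under Parallelization below. For example (the basename naming is a sketch and can collide for similar URLs):

# Discover blog URLs, then scrape the first five in parallel.
firecrawl map <url> --search "blog" -o .firecrawl/blog-urls.txt
head -5 .firecrawl/blog-urls.txt | xargs -P 5 -I {} sh -c \
  'firecrawl scrape "$1" -o ".firecrawl/$(basename "$1").md"' _ {}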

Credit Usage

# Show credit usage
firecrawl credit-usage

# Output as JSON
firecrawl credit-usage --json --pretty

Reading Scraped Files

NEVER read entire firecrawl output files at once unless explicitly asked; they can run to 1000+ lines. Instead, use grep, head, or incremental reads:

# Check file size and preview structure
wc -l .firecrawl/file.md && head -50 .firecrawl/file.md

# Use grep to find specific content
grep -n "keyword" .firecrawl/file.md
grep -A 10 "## Section" .firecrawl/file.md
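
For incremental reads, standard range tools work on the saved files:

# Read a specific line range instead of the whole file.
sed -n '100,150p' .firecrawl/file.md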

Parallelization

Run multiple scrapes in parallel using & and wait:

# Parallel scraping (fast)
firecrawl scrape <url1> -o .firecrawl/1.md &
firecrawl scrape <url2> -o .firecrawl/2.md &
firecrawl scrape <url3> -o .firecrawl/3.md &
wait

For many URLs, use xargs with -P for parallel execution (note: md5 is the macOS command; on Linux, use md5sum | cut -d' ' -f1 instead):

cat urls.txt | xargs -P 10 -I {} sh -c 'firecrawl scrape "$1" -o ".firecrawl/$(echo "$1" | md5).md"' _ {}

Combining with Other Tools

# Extract URLs from search results
jq -r '.data.web[].url' .firecrawl/search-query.json

# Get titles from search results
jq -r '.data.web[] | "\(.title): \(.url)"' .firecrawl/search-query.json

# Count URLs from map
firecrawl map <url> | wc -l
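
These pieces compose into small pipelines. For example, a sketch that searches, extracts the result URLs with jq (using the .data.web[].url shape shown above), and scrapes each one; the basename naming is illustrative and can collide:

# Search, pull result URLs, and scrape each into .firecrawl/ (sketch).
firecrawl search "firecrawl tutorials" -o .firecrawl/search-tutorials.json --json
jq -r '.data.web[].url' .firecrawl/search-tutorials.json | head -3 |
  xargs -P 3 -I {} sh -c 'firecrawl scrape "$1" -o ".firecrawl/$(basename "$1").md"' _ {}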