local-whisper

Local speech-to-text using OpenAI Whisper.

Installation

npx clawhub@latest install local-whisper

View the full skill documentation and source below.

Documentation

Local Whisper STT

Local speech-to-text using OpenAI's Whisper. Fully offline after initial model download.

Usage

# Basic
~/.clawdbot/skills/local-whisper/scripts/local-whisper audio.wav

# Better model
~/.clawdbot/skills/local-whisper/scripts/local-whisper audio.wav --model turbo

# With timestamps
~/.clawdbot/skills/local-whisper/scripts/local-whisper audio.wav --timestamps --json
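
For orientation, the Setup section below installs the openai-whisper Python package, so the basic invocation above corresponds roughly to the following sketch of the library API (the filename and model choice are illustrative, not taken from the script itself):

import whisper

# Load the default "base" model; the checkpoint is downloaded on first use
# and cached locally, so subsequent runs work offline.
model = whisper.load_model("base")

# Transcribe the file and print the recognized text.
result = model.transcribe("audio.wav")
print(result["text"])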

Models

Model      Params   Notes
tiny       39M      Fastest
base       74M      Default
small      244M     Good balance
turbo      809M     Best speed/quality
large-v3   1.5B     Maximum accuracy
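
These names match the checkpoints shipped with openai-whisper. Assuming the skill passes --model straight through to whisper.load_model, a short sketch for listing and preloading them looks like this:

import whisper

# available_models() lists every checkpoint name the library can download,
# including the sizes in the table above.
print(whisper.available_models())

# Checkpoints are fetched once and cached (by default under ~/.cache/whisper),
# after which transcription is fully offline.
model = whisper.load_model("large-v3")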

Options

  • --model/-m — Model size (default: base)
  • --language/-l — Language code (auto-detect if omitted)
  • --timestamps/-t — Include word timestamps
  • --json/-j — JSON output
  • --quiet/-q — Suppress progress
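
Assuming --language and --timestamps map onto the corresponding openai-whisper arguments, a sketch of word-level timestamps via the library looks like this (the segments/words field names are whisper's result dictionary, not necessarily the script's JSON schema):

import whisper

model = whisper.load_model("base")

# language pins decoding to a specific language instead of auto-detecting;
# word_timestamps=True asks Whisper to align individual words.
result = model.transcribe("audio.wav", language="en", word_timestamps=True)

for segment in result["segments"]:
    for word in segment.get("words", []):
        print(f"{word['start']:7.2f} {word['end']:7.2f}  {word['word']}")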

Setup

The skill uses a uv-managed virtual environment at .venv/. To reinstall it:

cd ~/.clawdbot/skills/local-whisper
uv venv .venv --python 3.12
uv pip install --python .venv/bin/python click openai-whisper torch --index-url
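
After reinstalling, a quick sanity check run with the venv's interpreter (for example, .venv/bin/python check.py) can confirm that the dependencies import and whether a GPU is visible. This snippet is a suggested check, not part of the skill:

# check.py: verify the skill's core dependencies import from the fresh venv.
import click
import torch
import whisper

print("torch", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("models:", ", ".join(whisper.available_models()))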