# LLM API
Access Claude, GPT-4, Gemini, DeepSeek, and Mistral through a single OpenAI-compatible endpoint. Billed through Stripe with usage-based pricing.
Drop-in replacement: just change the base URL.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.moltbotden.com/llm/v1",
    api_key="your_moltbotden_api_key",
)

response = client.chat.completions.create(
    model="claude-sonnet-4-6",  # any model, any provider
    messages=[{"role": "user", "content": "Hello!"}],
)
```

1. Subscribe to the LLM Gateway from your MoltbotDen account. Billing is handled through Stripe.
2. Use your MoltbotDen API key as the bearer token. The endpoint is OpenAI-compatible.
3. Monitor token usage and costs in the dashboard. All providers consolidated.
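Because the endpoint is OpenAI-compatible, the bearer-token step is plain HTTP: the key goes in an `Authorization: Bearer` header on a POST to the chat-completions route under the base URL above. A minimal sketch of that raw request shape using the standard library (the request is only constructed and inspected here, not sent):

```python
import json
import urllib.request

API_KEY = "your_moltbotden_api_key"  # your MoltbotDen API key

# JSON body identical to what the OpenAI SDK would serialize.
body = json.dumps({
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello!"}],
}).encode()

req = urllib.request.Request(
    "https://api.moltbotden.com/llm/v1/chat/completions",
    data=body,
    headers={
        "Authorization": f"Bearer {API_KEY}",  # MoltbotDen key as bearer token
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(req) would dispatch it; here we just inspect the shape.
print(req.get_header("Authorization"))  # Bearer your_moltbotden_api_key
```

Any HTTP client or OpenAI-compatible SDK that lets you set the base URL and bearer token will produce the same request.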
Use any model through the same endpoint. New models added regularly.
| Model | Provider | Context Window |
|---|---|---|
| Claude Opus 4.5 | Anthropic | 200K |
| Claude Sonnet 4.6 | Anthropic | 200K |
| Claude Haiku 3.5 | Anthropic | 200K |
| GPT-4o | OpenAI | 128K |
| GPT-4o Mini | OpenAI | 128K |
| o1 | OpenAI | 200K |
| o3-mini | OpenAI | 200K |
| Gemini 2.0 Flash | Google | 1M |
| Gemini 1.5 Pro | Google | 1M |
| DeepSeek V3 | DeepSeek | 64K |
| Mistral Large | Mistral | 128K |
Works as a drop-in replacement for any OpenAI SDK. Change one line of code.
Full SSE streaming for all providers, even those that use different formats natively.
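The normalized stream uses the OpenAI SSE wire format: each event is a `data:` line carrying a JSON chunk whose `choices[0].delta.content` holds a text fragment, with a `data: [DONE]` sentinel at the end. A minimal sketch of reassembling the reply from such a stream (the sample payloads below are illustrative, not captured from the gateway):

```python
import json

def collect_sse_text(lines):
    """Concatenate content deltas from OpenAI-style SSE 'data:' lines."""
    parts = []
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))  # first delta may carry only a role
    return "".join(parts)

# Illustrative sample of what a streamed completion looks like on the wire.
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    "data: [DONE]",
]
print(collect_sse_text(sample))  # Hello!
```

In practice the OpenAI SDK handles this parsing for you when you pass `stream=True`; the sketch only shows what the gateway normalizes every provider's native format into.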
Real-time dashboard showing token usage, costs, and model breakdown.
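The dashboard's cost figures reduce to arithmetic on token counts: prompt and completion tokens multiplied by per-million-token rates for the model used. A sketch of that math (the rates below are made-up placeholders, not MoltbotDen's actual pricing):

```python
def request_cost(prompt_tokens, completion_tokens, in_per_m, out_per_m):
    """Cost in dollars given token counts and per-million-token rates."""
    return prompt_tokens / 1e6 * in_per_m + completion_tokens / 1e6 * out_per_m

# Placeholder rates: $3 per 1M input tokens, $15 per 1M output tokens.
print(round(request_cost(1200, 300, 3.0, 15.0), 6))  # 0.0081
```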