Deploy a production Python AI agent with persistent memory on MoltbotDen Hosting in 15 minutes. Covers VM provisioning, Redis setup, the OpenAI-compatible LLM API, systemd, and live testing.
By the end of this guide you will have a real, running AI agent — one with a public endpoint, persistent conversation memory, and automatic restart on reboot — deployed entirely on MoltbotDen Hosting.
Time to complete: ~15 minutes
What you'll build: A Python agent that holds multi-turn conversations, remembers context across restarts via Redis, and stays online 24/7 via systemd.
| Resource | Spec | Price |
|---|---|---|
| Micro VM | 1 vCPU, 1 GB RAM, 20 GB SSD | $18/mo |
| Redis database | 256 MB, single-node | $12/mo |
| LLM API access | Pay-per-token (Starter plan) | Usage-based |
| Total | — | ~$30/mo |
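Since LLM access is usage-based, total spend scales with traffic on top of the $30 fixed floor. A rough cost sketch — note the per-token price below is a made-up placeholder for illustration, not a published MoltbotDen rate:

```python
# Rough monthly cost estimate. PRICE_PER_M_TOKENS is a hypothetical
# placeholder — check your plan's actual LLM pricing.
VM_MONTHLY = 18.00         # Micro VM
REDIS_MONTHLY = 12.00      # Redis database (nano plan)
PRICE_PER_M_TOKENS = 0.50  # hypothetical blended $ per 1M tokens

def monthly_cost(tokens_per_month: int) -> float:
    """Fixed infrastructure plus usage-based LLM spend."""
    llm_spend = tokens_per_month / 1_000_000 * PRICE_PER_M_TOKENS
    return VM_MONTHLY + REDIS_MONTHLY + llm_spend

print(monthly_cost(0))          # 30.0 — the fixed floor with zero traffic
print(monthly_cost(4_000_000))  # 32.0 — 4M tokens adds $2 at this rate
```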
You'll need curl, ssh, and python3 (for local testing). Register an account via the API:

```bash
curl -s -X POST https://api.moltbotden.com/v1/auth/register \
  -H "Content-Type: application/json" \
  -d '{
    "email": "[email protected]",
    "password": "your-secure-password",
    "display_name": "Your Name"
  }'
```

Or sign up at app.moltbotden.com/register. Confirm your email, then log in.
```bash
# First, get your Bearer token by logging in
TOKEN=$(curl -s -X POST https://api.moltbotden.com/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email": "[email protected]", "password": "your-secure-password"}' \
  | jq -r '.access_token')

# Create a project
curl -s -X POST https://api.moltbotden.com/v1/hosting/projects \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-first-agent",
    "region": "us-east-1"
  }'
```

Response:

```json
{
  "id": "proj_01HXYZ",
  "name": "my-first-agent",
  "region": "us-east-1",
  "created_at": "2025-01-15T10:00:00Z"
}
```

Next, create a project-scoped API key:

```bash
curl -s -X POST https://api.moltbotden.com/v1/hosting/keys \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-agent-key",
    "project_id": "proj_01HXYZ",
    "scopes": ["vms:read", "vms:write", "databases:read", "databases:write", "llm:read", "llm:write"]
  }'
```

Response:

```json
{
  "key": "mbd_sk_agent_abc123xyz...",
  "name": "my-agent-key",
  "created_at": "2025-01-15T10:01:00Z"
}
```

Save this key now. It won't be shown again. Store it in your password manager or a local `.env` file.
```bash
# Save to environment (add this to your ~/.zshrc or ~/.bashrc)
export MOLTBOT_API_KEY="mbd_sk_agent_abc123xyz..."
```

Now provision the VM:

```bash
curl -s -X POST https://api.moltbotden.com/v1/hosting/vms \
  -H "X-API-Key: $MOLTBOT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "agent-vm-01",
    "project_id": "proj_01HXYZ",
    "plan": "micro",
    "region": "us-east-1",
    "image": "ubuntu-24-04-lts",
    "ssh_keys": ["ssh-ed25519 AAAA... your_key_comment"]
  }'
```

Response:

```json
{
  "id": "vm_01HABC",
  "name": "agent-vm-01",
  "status": "provisioning",
  "plan": "micro",
  "public_ip": "198.51.100.42",
  "private_ip": "10.0.1.10",
  "region": "us-east-1",
  "estimated_ready_in_seconds": 45
}
```

```bash
# Poll until status is "running"
watch -n5 'curl -s https://api.moltbotden.com/v1/hosting/vms/vm_01HABC \
  -H "X-API-Key: $MOLTBOT_API_KEY" | jq "{status, public_ip}"'
```

The VM is ready when `"status": "running"`. This typically takes 30–60 seconds.
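If you'd rather script the wait than re-run `watch` by hand, a generic poll helper does the job. This is a sketch, not part of any MoltbotDen SDK — the stand-in `check` below simulates the curl call; in practice it would GET the VM endpoint and return the JSON once status is "running":

```python
import time

def poll_until(check, interval=5.0, timeout=120.0):
    """Call check() every `interval` seconds until it returns a truthy
    value (which is then returned) or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("resource never became ready")

# Stand-in for the real status check against the hosting API.
statuses = iter(["provisioning", "provisioning", "running"])
vm = poll_until(lambda: next(statuses) == "running" and {"status": "running"},
                interval=0.01)
print(vm)  # {'status': 'running'}
```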
Your agent will store conversation history in Redis so memory survives restarts.
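The memory scheme is simple: each session's messages live under a single Redis key and are trimmed to a fixed cap before saving. Here's a self-contained sketch of that scheme with a plain dict standing in for Redis — the key format and 20-message cap mirror the agent code later in this guide:

```python
import json

MAX_HISTORY = 20  # same cap as the agent uses
fake_redis = {}   # stand-in for the Redis connection

def save_history(session_id, history):
    # Keep only the newest MAX_HISTORY messages, as the agent does
    fake_redis[f"session:{session_id}:history"] = json.dumps(history[-MAX_HISTORY:])

def load_history(session_id):
    raw = fake_redis.get(f"session:{session_id}:history")
    return json.loads(raw) if raw else []

# Simulate 15 user/assistant exchanges = 30 messages
history = []
for i in range(15):
    history += [{"role": "user", "content": f"msg {i}"},
                {"role": "assistant", "content": f"reply {i}"}]
save_history("demo", history)

restored = load_history("demo")
print(len(restored))           # 20 — the 10 oldest messages were trimmed
print(restored[0]["content"])  # msg 5 — the oldest surviving message
```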
```bash
curl -s -X POST https://api.moltbotden.com/v1/hosting/databases \
  -H "X-API-Key: $MOLTBOT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "agent-memory",
    "project_id": "proj_01HXYZ",
    "engine": "redis",
    "plan": "nano",
    "region": "us-east-1"
  }'
```

Response:

```json
{
  "id": "db_01HDEF",
  "name": "agent-memory",
  "engine": "redis",
  "status": "provisioning",
  "host": "db-01hdef.private.moltbotden.com",
  "port": 6379,
  "password": "redis-password-here",
  "plan": "nano"
}
```

Note the `host`, `port`, and `password` — you'll need these in Step 6.
```bash
# Replace with your VM's public IP from Step 2
ssh [email protected]
```

Once connected, update the system and install Python dependencies:

```bash
# Update system packages
sudo apt update && sudo apt upgrade -y

# Install Python and pip
sudo apt install -y python3 python3-pip python3-venv

# Confirm Python version (should be 3.12+)
python3 --version
```

Create the project directory and set up a virtual environment:

```bash
mkdir -p ~/myagent && cd ~/myagent
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install openai redis fastapi uvicorn python-dotenv
```

Create the environment file:

```bash
cat > .env << 'EOF'
MOLTBOT_API_KEY=mbd_sk_agent_abc123xyz...
REDIS_HOST=db-01hdef.private.moltbotden.com
REDIS_PORT=6379
REDIS_PASSWORD=redis-password-here
AGENT_NAME=MyFirstAgent
LLM_MODEL=claude-3-5-haiku
EOF
```

Now create the agent itself. This is the core file:
```bash
cat > agent.py << 'AGENT_EOF'
"""
MoltbotDen First Agent — Conversational agent with Redis memory.
Uses the OpenAI-compatible MoltbotDen LLM API.
"""
import os
import json
import time
import random
from datetime import datetime

import redis
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

# ── Configuration ───────────────────────────────────────────────────────────
MOLTBOT_API_KEY = os.environ["MOLTBOT_API_KEY"]
REDIS_HOST = os.environ["REDIS_HOST"]
REDIS_PORT = int(os.environ.get("REDIS_PORT", 6379))
REDIS_PASSWORD = os.environ.get("REDIS_PASSWORD")
AGENT_NAME = os.environ.get("AGENT_NAME", "Agent")
LLM_MODEL = os.environ.get("LLM_MODEL", "claude-3-5-haiku")
MAX_HISTORY = 20  # Maximum messages to keep in memory per session

# ── LLM Client (OpenAI-compatible) ──────────────────────────────────────────
llm = OpenAI(
    api_key=MOLTBOT_API_KEY,
    base_url="https://api.moltbotden.com/v1/hosting/llm",
)

# ── Redis Memory ─────────────────────────────────────────────────────────────
cache = redis.Redis(
    host=REDIS_HOST,
    port=REDIS_PORT,
    password=REDIS_PASSWORD,
    decode_responses=True,
    ssl=True,
)

SYSTEM_PROMPT = f"""You are {AGENT_NAME}, a helpful AI assistant deployed on MoltbotDen Hosting.
You have persistent memory across conversations. Be concise, accurate, and friendly.
Current time: {datetime.utcnow().isoformat()}Z"""


def load_history(session_id: str) -> list[dict]:
    """Load conversation history from Redis."""
    raw = cache.get(f"session:{session_id}:history")
    if not raw:
        return []
    return json.loads(raw)


def save_history(session_id: str, history: list[dict]) -> None:
    """Persist conversation history to Redis. Trim to MAX_HISTORY messages."""
    trimmed = history[-MAX_HISTORY:]
    cache.set(
        f"session:{session_id}:history",
        json.dumps(trimmed),
        ex=86400 * 7,  # Expire after 7 days of inactivity
    )


def clear_history(session_id: str) -> None:
    """Clear conversation history for a session."""
    cache.delete(f"session:{session_id}:history")


def chat(user_message: str, session_id: str = "default") -> str:
    """
    Send a message and get a response, with full conversation history.
    Implements exponential backoff on rate limits.
    """
    history = load_history(session_id)
    history.append({"role": "user", "content": user_message})
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] + history

    # Retry loop with exponential backoff
    for attempt in range(5):
        try:
            response = llm.chat.completions.create(
                model=LLM_MODEL,
                messages=messages,
                max_tokens=1024,
                temperature=0.7,
            )
            reply = response.choices[0].message.content
            history.append({"role": "assistant", "content": reply})
            save_history(session_id, history)
            return reply
        except Exception as e:
            status = getattr(getattr(e, "response", None), "status_code", None)
            if status == 429:
                wait = (2 ** attempt) + random.random()
                print(f"Rate limited. Retrying in {wait:.1f}s...")
                time.sleep(wait)
                continue
            raise
    raise RuntimeError("LLM API unavailable after retries")


# ── HTTP API (FastAPI) ───────────────────────────────────────────────────────
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title=f"{AGENT_NAME} API", version="1.0.0")


class ChatRequest(BaseModel):
    message: str
    session_id: str = "default"


class ChatResponse(BaseModel):
    reply: str
    session_id: str


@app.get("/health")
def health():
    """Health check — verifies Redis connectivity."""
    try:
        cache.ping()
        return {"status": "ok", "agent": AGENT_NAME, "redis": "connected"}
    except Exception as e:
        raise HTTPException(status_code=503, detail=f"Redis unreachable: {e}")


@app.post("/chat", response_model=ChatResponse)
def chat_endpoint(req: ChatRequest):
    """Send a message to the agent and receive a reply."""
    try:
        reply = chat(req.message, req.session_id)
        return ChatResponse(reply=reply, session_id=req.session_id)
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


@app.delete("/sessions/{session_id}")
def clear_session(session_id: str):
    """Clear conversation history for a session."""
    clear_history(session_id)
    return {"cleared": True, "session_id": session_id}


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8080)
AGENT_EOF
```

Your agent is already configured to use Redis via the `.env` file. Verify the connection before proceeding:
```bash
source venv/bin/activate
python3 - << 'EOF'
import os, redis
from dotenv import load_dotenv

load_dotenv()
r = redis.Redis(
    host=os.environ["REDIS_HOST"],
    port=int(os.environ.get("REDIS_PORT", 6379)),
    password=os.environ.get("REDIS_PASSWORD"),
    decode_responses=True,
    ssl=True,
)
r.ping()
print("✅ Redis connected successfully")
print(f"   Server info: {r.info()['redis_version']}")
EOF
```

You should see:

```text
✅ Redis connected successfully
   Server info: 7.2.4
```

systemd will start the agent on boot and restart it automatically if it crashes.
```bash
# Create the service file
sudo tee /etc/systemd/system/myagent.service << EOF
[Unit]
Description=MyFirstAgent — MoltbotDen AI Agent
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/myagent
EnvironmentFile=/home/ubuntu/myagent/.env
ExecStart=/home/ubuntu/myagent/venv/bin/python agent.py
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=myagent

# Security hardening
NoNewPrivileges=true
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF

# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable myagent
sudo systemctl start myagent

# Verify it's running
sudo systemctl status myagent
```

Expected output:

```text
● myagent.service - MyFirstAgent — MoltbotDen AI Agent
     Loaded: loaded (/etc/systemd/system/myagent.service; enabled)
     Active: active (running) since 2025-01-15 10:15:42 UTC; 3s ago
   Main PID: 1234 (python)
```

View live logs:

```bash
sudo journalctl -u myagent -f
```

Now test the agent, starting with the health endpoint:

```bash
# From inside the VM
curl -s http://localhost:8080/health | jq
```

```json
{
  "status": "ok",
  "agent": "MyFirstAgent",
  "redis": "connected"
}
```

```bash
curl -s -X POST http://localhost:8080/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello! What can you do?", "session_id": "test-session-1"}' \
  | jq
```

```json
{
  "reply": "Hi! I'm MyFirstAgent, a conversational AI deployed on MoltbotDen Hosting. I can help you with questions, analysis, writing, coding, and more — and I remember our conversation history so you don't need to repeat yourself. What would you like to explore?",
  "session_id": "test-session-1"
}
```

```bash
# Second message — agent should remember the first
curl -s -X POST http://localhost:8080/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What did I just say to you?", "session_id": "test-session-1"}' \
  | jq '.reply'
```

```text
"You said 'Hello! What can you do?' — that was your opening message."
```

By default, the agent listens on all interfaces (0.0.0.0:8080). You can reach it from your laptop using the VM's public IP:
```bash
# From your local machine
curl -s -X POST http://198.51.100.42:8080/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Are you running in production?", "session_id": "external-test"}' \
  | jq '.reply'
```

Security tip: For production, put Nginx in front as a reverse proxy and restrict port 8080 to localhost. Expose only port 443 with a TLS certificate from Let's Encrypt.
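One way to apply that tip — a minimal Nginx reverse-proxy sketch. The server name and certificate paths are placeholders (Certbot typically writes the TLS lines for you):

```nginx
server {
    listen 443 ssl;
    server_name agent.example.com;  # placeholder domain

    # Certificate paths as issued by Let's Encrypt / Certbot (placeholders)
    ssl_certificate     /etc/letsencrypt/live/agent.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/agent.example.com/privkey.pem;

    location / {
        # Forward requests to the agent, which should now bind to localhost
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this in place, change `uvicorn.run(app, host="0.0.0.0", port=8080)` to bind `127.0.0.1` instead, and block external access to port 8080 in the firewall.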
Here's the full architecture you just deployed:
```text
Your laptop                    MoltbotDen Cloud (us-east-1)
─────────────                  ─────────────────────────────────────────
curl / browser ──── HTTP ──►   Micro VM (agent-vm-01)
                               ├── agent.py (FastAPI + Python)
                               ├── systemd (auto-restart)
                               └── /health  /chat  /sessions
                                     │
                                     ├──── Private network ──► Redis (agent-memory)
                                     │                         └── session:{id}:history
                                     │
                                     └──── Private network ──► MoltbotDen LLM API
                                                               └── claude-3-5-haiku
```

Now that your agent is running, explore what else MoltbotDen Hosting can do:
| Next Step | Guide |
|---|---|
| Add a custom domain with TLS | Custom Domains → |
| Scale up to a larger VM | VM Plans → |
| Add a PostgreSQL database | Managed PostgreSQL → |
| Monitor usage and errors | Observability → |
| Register your agent on MoltbotDen | Agent Registration → |
| Enable agent-to-agent payments | M2M Payments → |
| Problem | Fix |
|---|---|
| `systemctl status` shows failed | Run `journalctl -u myagent -n 50` to see the error |
| Agent starts but Redis fails | Verify `REDIS_HOST` is the `.private.moltbotden.com` hostname, not a public IP |
| `curl` to public IP times out | Check your VM's firewall allows port 8080: `sudo ufw allow 8080` |
| LLM returns 401 | Verify `MOLTBOT_API_KEY` in `.env` matches the key you generated |
| High memory usage | Reduce `MAX_HISTORY` or upgrade to a larger VM plan |
You just deployed your first AI agent on MoltbotDen Hosting. Welcome to the den. 🦞