Building OpenClaw from Scratch in TypeScript
A full runnable TypeScript clone inspired by the OpenClaw architecture article: sessions, SOUL, tools, permissions, gateway, compaction, memory, queueing, heartbeats, and multi-agent routing.
Saurabh Prakash
This article is based on the article by Nader Dabit [1] and recreates a minimal version of OpenClaw in TypeScript. It starts from first principles: why chat-only assistants fail in real life, then walks through a sequence of practical upgrades.
First, let’s establish the problem
Browser chat apps are useful, but they’re limited:[1]
- Stateless: they forget previous context unless you re-send it.
- Passive: they only run when you open a tab.
- Isolated: they can’t act on your local environment by default.
- Single-channel: your life is in Telegram/Discord/Slack, not one chat window.
So the target is simple: an assistant that can remember, act, and meet us in the channels we already use.
The simplest possible bot (TypeScript)
We’ll build a real clone in one file. First, the project scaffold.
mkdir mini-openclaw-ts
cd mini-openclaw-ts
bun init -y
bun add @anthropic-ai/sdk dotenv express node-cron node-telegram-bot-api zod
bun add -d @types/express @types/node @types/node-telegram-bot-api typescript
Create .env:
ANTHROPIC_API_KEY=your_anthropic_key
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
PORT=5000
Create SOUL.md:
# Who You Are
**Name:** Jarvis
**Role:** Personal AI assistant
## Personality
- Be helpful and direct
- Skip fluff
- Be concise by default
## Boundaries
- Ask before risky external actions
- Keep private info private
## Memory
- Use long-term memory tools for stable user preferences and key facts
Goal: persistent sessions
Our minimal bot can reply, but it forgets everything between turns. Like the Python article, we fix that with JSONL transcripts so each message is one append-only line.[1][6] Now the assistant remembers, but it still sounds generic.
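Before moving on, here is the transcript format in isolation, as a minimal sketch assuming one file per session (the full clone below wraps the same idea in loadSession and appendMessage):
import { existsSync } from 'node:fs';
import { appendFile, readFile } from 'node:fs/promises';

type Turn = { role: 'user' | 'assistant'; content: unknown };

// Each turn is one JSON object on its own line, so saving a turn is a cheap append.
async function appendTurn(file: string, turn: Turn) {
  await appendFile(file, `${JSON.stringify(turn)}\n`, 'utf8');
}

// Rebuilding the conversation is a line-by-line parse; malformed lines are skipped.
async function loadTurns(file: string): Promise<Turn[]> {
  if (!existsSync(file)) return [];
  const raw = await readFile(file, 'utf8');
  return raw
    .split('\n')
    .filter(Boolean)
    .flatMap((line) => {
      try {
        return [JSON.parse(line) as Turn];
      } catch {
        return [];
      }
    });
}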
Goal: personality via SOUL.md
We inject SOUL.md as the system prompt on every model call, so behavior stays consistent.[1] Great. It has a voice now. Next problem: it still can’t do anything.
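The injection itself is tiny. A sketch, assuming the same Anthropic client and model used in the full file (the full file reads SOUL.md once at startup and reuses it):
import { readFile } from 'node:fs/promises';
import { Anthropic } from '@anthropic-ai/sdk';

const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

async function replyWithSoul(userText: string) {
  // SOUL.md rides along as the system prompt, so every reply keeps the same identity.
  const soul = await readFile('SOUL.md', 'utf8');
  const response = await client.messages.create({
    model: 'claude-sonnet-4-5',
    max_tokens: 1024,
    system: soul,
    messages: [{ role: 'user', content: userText }],
  });
  return response.content
    .filter((block) => block.type === 'text')
    .map((block) => block.text)
    .join('');
}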
Goal: tools + agent loop
We give the assistant tools (run_command, read_file, write_file, memory tools), then run an agent loop that feeds tool results back into the model.[2] That unlocks action, but it also introduces risk.
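Condensed to its skeleton, the loop looks like this. This is a sketch that leans on the client, tools catalog, and executeTool helper defined in the full file below:
type LoopMessage = { role: 'user' | 'assistant'; content: unknown };

async function agentLoop(messages: LoopMessage[]): Promise<LoopMessage[]> {
  for (let turn = 0; turn < 20; turn += 1) {
    const response = await client.messages.create({
      model: 'claude-sonnet-4-5',
      max_tokens: 4096,
      tools, // run_command, read_file, write_file, memory tools
      messages: messages as Anthropic.Messages.MessageParam[],
    });
    messages.push({ role: 'assistant', content: response.content });
    // No tool request means the model has produced its final answer.
    if (response.stop_reason !== 'tool_use') return messages;
    // Otherwise run each requested tool and feed the results back as tool_result blocks.
    const results: { type: 'tool_result'; tool_use_id: string; content: string }[] = [];
    for (const block of response.content) {
      if (block.type !== 'tool_use') continue;
      const output = await executeTool(block.name, (block.input ?? {}) as Record<string, unknown>);
      results.push({ type: 'tool_result', tool_use_id: block.id, content: String(output) });
    }
    messages.push({ role: 'user', content: results });
  }
  return messages;
}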
Goal: permission controls
Shell execution must be gated. We add safe commands, risky patterns, and remembered approvals.[1][7] Now local execution is safer. Next, we remove channel lock-in.
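At heart the gate is a small classifier. A sketch, with deny patterns checked before the allowlist so a pattern match always requires approval (the full file adds remembered approvals on disk and an interactive terminal prompt):
const SAFE = new Set(['ls', 'cat', 'date', 'pwd', 'echo', 'git']);
const DANGEROUS = [/\brm\b/i, /\bsudo\b/i, /curl\s+.*\|\s*sh/i];

// Deny patterns win, known-safe base commands pass, everything else asks first.
function classify(command: string): 'safe' | 'needs_approval' {
  const base = command.trim().split(/\s+/)[0] ?? '';
  if (DANGEROUS.some((pattern) => pattern.test(command))) return 'needs_approval';
  if (SAFE.has(base)) return 'safe';
  return 'needs_approval';
}

console.log(classify('ls -la'));        // safe
console.log(classify('sudo rm -rf /')); // needs_approval
console.log(classify('npm install x')); // needs_approval (unknown -> ask)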
Goal: gateway
One agent core, multiple interfaces: Telegram + HTTP share the same session store.[1][3][4] So far, so good. But long conversations eventually hit context limits.
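As a sketch, both transports derive a session key and call the same core. Here handleTurn is a stand-in for runAgentTurn in the full file:
import express from 'express';
import TelegramBot from 'node-telegram-bot-api';

// Stand-in for the shared agent core (runAgentTurn in the full file).
async function handleTurn(sessionKey: string, text: string): Promise<string> {
  return `(${sessionKey}) echo: ${text}`;
}

const app = express();
app.use(express.json());
app.post('/chat', async (req, res) => {
  const reply = await handleTurn(`agent:main:${req.body.user_id}`, req.body.message);
  res.json({ response: reply });
});
app.listen(5000);

const token = process.env.TELEGRAM_BOT_TOKEN;
if (token) {
  const bot = new TelegramBot(token, { polling: true });
  bot.on('message', async (msg) => {
    if (!msg.text) return;
    const reply = await handleTurn(`agent:main:${msg.from?.id ?? msg.chat.id}`, msg.text);
    await bot.sendMessage(msg.chat.id, reply);
  });
}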
Goal: context compaction
When sessions become too large, summarize older messages and keep recent context.[1] This keeps conversations alive longer, but session memory still resets with a new thread.
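Compaction in isolation, as a sketch, with the summarizer passed in as a callback (the full file asks the model for the summary):
type Msg = { role: 'user' | 'assistant'; content: unknown };

// ~4 characters per token is a rough but serviceable estimate for a size check.
function estimateTokens(messages: Msg[]) {
  return Math.floor(JSON.stringify(messages).length / 4);
}

async function compact(messages: Msg[], summarize: (older: Msg[]) => Promise<string>): Promise<Msg[]> {
  if (estimateTokens(messages) < 100_000) return messages; // still fits comfortably
  const split = Math.floor(messages.length / 2);
  const summary = await summarize(messages.slice(0, split));
  // The older half becomes one summary message; the recent half stays verbatim.
  return [{ role: 'user', content: `[Conversation summary]\n${summary}` }, ...messages.slice(split)];
}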
Goal: long-term memory
Session history is not enough. We add memory files that survive session resets.[1] Now memory persists. Next issue is concurrency.
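The memory layer as a sketch: one markdown file per key, searched by keyword overlap (the full file exposes these as the save_memory and memory_search tools):
import { existsSync } from 'node:fs';
import { mkdir, readdir, readFile, writeFile } from 'node:fs/promises';
import os from 'node:os';
import path from 'node:path';

const MEMORY_DIR = path.join(os.homedir(), '.mini-openclaw-ts', 'memory');

async function saveMemory(key: string, content: string) {
  await mkdir(MEMORY_DIR, { recursive: true });
  await writeFile(path.join(MEMORY_DIR, `${key}.md`), content, 'utf8');
}

// Naive keyword-overlap search; good enough for a handful of local memory files.
async function searchMemory(query: string): Promise<string[]> {
  if (!existsSync(MEMORY_DIR)) return [];
  const words = query.toLowerCase().split(/\s+/).filter(Boolean);
  const hits: string[] = [];
  for (const file of await readdir(MEMORY_DIR)) {
    const text = await readFile(path.join(MEMORY_DIR, file), 'utf8');
    if (words.some((word) => text.toLowerCase().includes(word))) hits.push(`${file}: ${text}`);
  }
  return hits;
}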
Goal: command queue
Concurrent requests for the same session can race. We serialize per-session work with a lock queue.[1] With race conditions handled, we can make the assistant proactive.
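The locking idea in isolation, as a sketch: chain each session's work onto a promise keyed by session key, so turns for the same session run in order while other sessions proceed in parallel (the full file's withSessionLock adds cleanup of finished queues):
const queues = new Map<string, Promise<unknown>>();

function withLock<T>(key: string, task: () => Promise<T>): Promise<T> {
  const prev = queues.get(key) ?? Promise.resolve();
  const run = prev.then(task, task); // start after the previous task settles, even on error
  queues.set(key, run.catch(() => undefined)); // keep the chain alive if the task throws
  return run;
}

// Alice's two turns run one after the other; Bob's turn is not blocked by Alice.
await Promise.all([
  withLock('agent:main:alice', async () => console.log('alice turn 1')),
  withLock('agent:main:alice', async () => console.log('alice turn 2')),
  withLock('agent:main:bob', async () => console.log('bob turn 1')),
]);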
Goal: cron heartbeats
The assistant should run scheduled jobs (daily check-ins) without user prompts.[1][5] One final upgrade: specialization.
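A heartbeat in isolation, as a sketch; sendToAgent is a hypothetical stand-in for runAgentTurn in the full file, and the dedicated session key keeps scheduled traffic out of user sessions:
import cron from 'node-cron';

// Hypothetical stand-in for the shared agent core.
async function sendToAgent(sessionKey: string, prompt: string): Promise<string> {
  return `(${sessionKey}) would ask the model: ${prompt}`;
}

// Every day at 07:30 the agent checks in without waiting for a user message.
cron.schedule('30 7 * * *', async () => {
  console.log(await sendToAgent('cron:morning-briefing', 'Give me a short morning check-in.'));
});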
Goal: multi-agent routing
One runtime can host multiple agents (for example, Jarvis + Scout) with separate session prefixes.[1] That gives us the same architecture arc as the original article—now in TypeScript.
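Routing as a sketch: a command prefix selects the agent, and each agent's session prefix keeps its transcripts separate:
type Agent = { name: string; sessionPrefix: string };

const agents: Record<string, Agent> = {
  main: { name: 'Jarvis', sessionPrefix: 'agent:main' },
  researcher: { name: 'Scout', sessionPrefix: 'agent:researcher' },
};

function route(message: string, userId: string) {
  const isResearch = message.startsWith('/research ');
  const agent = agents[isResearch ? 'researcher' : 'main'];
  const text = isResearch ? message.slice('/research '.length) : message;
  return { agent, text, sessionKey: `${agent.sessionPrefix}:${userId}` };
}

console.log(route('/research latest Bun release', 'demo-user'));
// -> Scout handles it under session key agent:researcher:demo-user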
Putting it all together: full runnable TypeScript clone
Save as src/mini-openclaw.ts.
import 'dotenv/config';
// Anthropic SDK for Messages API + tool use.
import { Anthropic } from '@anthropic-ai/sdk';
// Node core imports for subprocess execution, filesystem, paths, and REPL input.
import { exec as execCb } from 'node:child_process';
import { existsSync } from 'node:fs';
import { mkdir, readFile, writeFile, appendFile, readdir } from 'node:fs/promises';
import os from 'node:os';
import path from 'node:path';
import { promisify } from 'node:util';
import readline from 'node:readline/promises';
// Transport + scheduling + validation libraries.
import express from 'express';
import cron from 'node-cron';
import TelegramBot from 'node-telegram-bot-api';
import { z } from 'zod';
// Promisified exec so tool execution can use async/await.
const exec = promisify(execCb);
// Anthropic client reads API key from environment.
const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
// Global runtime config.
const MODEL = 'claude-sonnet-4-5';
const PORT = Number(process.env.PORT ?? 5000);
// Local workspace paths used by sessions, memory, and approval storage.
const WORKSPACE = path.join(os.homedir(), '.mini-openclaw-ts');
const SESSIONS_DIR = path.join(WORKSPACE, 'sessions');
const MEMORY_DIR = path.join(WORKSPACE, 'memory');
const APPROVALS_FILE = path.join(WORKSPACE, 'exec-approvals.json');
const SOUL_PATH = path.join(process.cwd(), 'SOUL.md');
// Message and agent types keep the clone explicit and readable.
type Role = 'user' | 'assistant';
type Message = { role: Role; content: unknown };
type ToolResultBlock = { type: 'tool_result'; tool_use_id: string; content: string };
type AgentConfig = {
id: string;
name: string;
model: string;
soul: string;
sessionPrefix: string;
};
type Safety = 'safe' | 'approved' | 'needs_approval';
// Commands that run without prompting (still keep this conservative).
const SAFE_COMMANDS = new Set([
'ls',
'cat',
'head',
'tail',
'wc',
'date',
'whoami',
'echo',
'pwd',
'which',
'git',
'node',
'bun',
]);
// Quick deny-patterns that should always require explicit approval.
const DANGEROUS_PATTERNS = [/\brm\b/i, /\bsudo\b/i, /\bchmod\b/i, /curl\s+.*\|\s*sh/i];
// Tool catalog exposed to the model. This mirrors the Python architecture.
const tools: Anthropic.Messages.Tool[] = [
{
name: 'run_command',
description: 'Run a shell command on the host machine',
input_schema: {
type: 'object',
properties: {
command: { type: 'string', description: 'The shell command to execute' },
},
required: ['command'],
},
},
{
name: 'read_file',
description: 'Read text from a file',
input_schema: {
type: 'object',
properties: {
path: { type: 'string', description: 'Path to a text file' },
},
required: ['path'],
},
},
{
name: 'write_file',
description: 'Write text to a file (creates parent directories if needed)',
input_schema: {
type: 'object',
properties: {
path: { type: 'string', description: 'Destination path' },
content: { type: 'string', description: 'Content to write' },
},
required: ['path', 'content'],
},
},
{
name: 'save_memory',
description: 'Save long-term memory for future sessions',
input_schema: {
type: 'object',
properties: {
key: { type: 'string', description: 'Memory key (example: user-preferences)' },
content: { type: 'string', description: 'Memory content' },
},
required: ['key', 'content'],
},
},
{
name: 'memory_search',
description: 'Search long-term memory by keyword overlap',
input_schema: {
type: 'object',
properties: {
query: { type: 'string', description: 'Search terms' },
},
required: ['query'],
},
},
{
name: 'web_search',
description: 'Placeholder web search tool (replace with real provider)',
input_schema: {
type: 'object',
properties: {
query: { type: 'string', description: 'Search query' },
},
required: ['query'],
},
},
];
// In-memory per-session queue used as a lock to avoid race conditions.
const sessionQueues = new Map<string, Promise<void>>();
// Serialize operations for the same session key while allowing other sessions to run.
async function withSessionLock<T>(sessionKey: string, fn: () => Promise<T>): Promise<T> {
const prev = sessionQueues.get(sessionKey) ?? Promise.resolve();
let release: (() => void) | null = null;
const next = new Promise<void>((resolve) => {
release = resolve;
});
const chained = prev.then(() => next);
sessionQueues.set(sessionKey, chained);
await prev;
try {
return await fn();
} finally {
release?.();
if (sessionQueues.get(sessionKey) === chained) {
sessionQueues.delete(sessionKey);
}
}
}
// Normalize keys so they are safe to use as filenames.
function safeKey(value: string) {
return value.replaceAll(':', '_').replaceAll('/', '_');
}
// Build the JSONL file path for one session.
function sessionPath(sessionKey: string) {
return path.join(SESSIONS_DIR, `${safeKey(sessionKey)}.jsonl`);
}
// Ensure base directories exist before any reads/writes.
async function ensureDirs() {
await mkdir(WORKSPACE, { recursive: true });
await mkdir(SESSIONS_DIR, { recursive: true });
await mkdir(MEMORY_DIR, { recursive: true });
}
// Load SOUL.md from project root; this becomes the system prompt.
async function loadSoul() {
if (!existsSync(SOUL_PATH)) {
throw new Error(`Missing SOUL.md at ${SOUL_PATH}`);
}
return readFile(SOUL_PATH, 'utf8');
}
// Read session transcript from JSONL and ignore malformed lines safely.
async function loadSession(sessionKey: string): Promise<Message[]> {
const file = sessionPath(sessionKey);
if (!existsSync(file)) return [];
const raw = await readFile(file, 'utf8');
return raw
.split('\n')
.filter(Boolean)
.map((line) => {
try {
return JSON.parse(line) as Message;
} catch {
return null;
}
})
.filter((m): m is Message => Boolean(m));
}
// Append one message as a single JSONL line; small appends keep the transcript crash-tolerant.
async function appendMessage(sessionKey: string, message: Message) {
await appendFile(sessionPath(sessionKey), `${JSON.stringify(message)}\n`, 'utf8');
}
// Rewrite full session file (used after compaction and other full-state updates).
async function saveSession(sessionKey: string, messages: Message[]) {
const body = messages.map((m) => JSON.stringify(m)).join('\n') + '\n';
await writeFile(sessionPath(sessionKey), body, 'utf8');
}
// Rough estimate: ~4 chars/token.
function estimateTokens(messages: Message[]) {
return Math.floor(JSON.stringify(messages).length / 4);
}
// Summarize older messages when transcript grows too large for safe model context.
async function compactSession(sessionKey: string, messages: Message[]): Promise<Message[]> {
if (estimateTokens(messages) < 100_000) return messages;
const split = Math.floor(messages.length / 2);
const oldMessages = messages.slice(0, split);
const recentMessages = messages.slice(split);
const summary = await client.messages.create({
model: MODEL,
max_tokens: 1800,
messages: [
{
role: 'user',
content:
'Summarize the conversation. Preserve user facts, decisions, and open tasks.\n\n' +
JSON.stringify(oldMessages),
},
],
});
const summaryText = summary.content
.filter((block) => block.type === 'text')
.map((block) => block.text)
.join('\n')
.trim();
const compacted: Message[] = [
{ role: 'user', content: `[Conversation summary]\n${summaryText}` },
...recentMessages,
];
await saveSession(sessionKey, compacted);
return compacted;
}
// Load command approvals from disk.
async function loadApprovals(): Promise<{ allowed: string[]; denied: string[] }> {
if (!existsSync(APPROVALS_FILE)) {
return { allowed: [], denied: [] };
}
return JSON.parse(await readFile(APPROVALS_FILE, 'utf8')) as {
allowed: string[];
denied: string[];
};
}
// Persist one approval decision so repeated commands don't re-prompt.
async function saveApproval(command: string, approved: boolean) {
const approvals = await loadApprovals();
const key = approved ? 'allowed' : 'denied';
if (!approvals[key].includes(command)) approvals[key].push(command);
await writeFile(APPROVALS_FILE, `${JSON.stringify(approvals, null, 2)}\n`, 'utf8');
}
// Determine whether a command is safe, already approved, or needs user approval.
async function checkCommandSafety(command: string): Promise<Safety> {
const base = command.trim().split(/\s+/)[0] ?? '';
if (SAFE_COMMANDS.has(base)) return 'safe';
const approvals = await loadApprovals();
if (approvals.allowed.includes(command)) return 'approved';
for (const pattern of DANGEROUS_PATTERNS) {
if (pattern.test(command)) return 'needs_approval';
}
return 'needs_approval';
}
// Prompt operator in interactive terminals before running risky commands.
async function promptApproval(command: string) {
if (!process.stdin.isTTY || !process.stdout.isTTY) return false;
const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
const answer = await rl.question(`\n⚠️ Approve command: ${command}\nAllow? (y/n): `);
rl.close();
return answer.trim().toLowerCase() === 'y';
}
// Execute one tool call from the model and return text that can be fed back into context.
async function executeTool(name: string, input: Record<string, unknown>) {
if (name === 'run_command') {
const command = String(input.command ?? '');
const safety = await checkCommandSafety(command);
if (safety === 'needs_approval') {
const ok = await promptApproval(command);
await saveApproval(command, ok);
if (!ok) return 'Permission denied by user.';
}
try {
const { stdout, stderr } = await exec(command, { timeout: 30_000 });
return (stdout + stderr).trim() || '(no output)';
} catch (error) {
const message = error instanceof Error ? error.message : 'Command failed';
return `Error: ${message}`;
}
}
if (name === 'read_file') {
const targetPath = String(input.path ?? '');
try {
return (await readFile(targetPath, 'utf8')).slice(0, 10_000);
} catch (error) {
return `Error: ${error instanceof Error ? error.message : 'read failed'}`;
}
}
if (name === 'write_file') {
const targetPath = String(input.path ?? '');
const content = String(input.content ?? '');
try {
await mkdir(path.dirname(targetPath), { recursive: true });
await writeFile(targetPath, content, 'utf8');
return `Wrote to ${targetPath}`;
} catch (error) {
return `Error: ${error instanceof Error ? error.message : 'write failed'}`;
}
}
if (name === 'save_memory') {
const key = String(input.key ?? 'memory-note');
const content = String(input.content ?? '');
const file = path.join(MEMORY_DIR, `${safeKey(key)}.md`);
await writeFile(file, content, 'utf8');
return `Saved to memory: ${key}`;
}
if (name === 'memory_search') {
const query = String(input.query ?? '').toLowerCase();
const words = query.split(/\s+/).filter(Boolean);
const files = existsSync(MEMORY_DIR) ? await readdir(MEMORY_DIR) : [];
const matches: string[] = [];
for (const file of files) {
if (!file.endsWith('.md')) continue;
const full = path.join(MEMORY_DIR, file);
const text = await readFile(full, 'utf8');
if (words.some((word) => text.toLowerCase().includes(word))) {
matches.push(`--- ${file} ---\n${text}`);
}
}
return matches.length ? matches.join('\n\n') : 'No matching memories found.';
}
if (name === 'web_search') {
return `Search results for: ${String(input.query ?? '')}`;
}
return `Unknown tool: ${name}`;
}
// One complete agent turn: load session, append user, call model, execute tools, repeat.
async function runAgentTurn(sessionKey: string, userText: string, agent: AgentConfig) {
return withSessionLock(sessionKey, async () => {
let messages = await loadSession(sessionKey);
messages = await compactSession(sessionKey, messages);
const userMessage: Message = { role: 'user', content: userText };
messages.push(userMessage);
await appendMessage(sessionKey, userMessage);
// Prevent infinite tool loops by bounding turns.
for (let turn = 0; turn < 20; turn += 1) {
const response = await client.messages.create({
model: agent.model,
max_tokens: 4096,
system: agent.soul,
tools,
messages: messages as Anthropic.Messages.MessageParam[],
});
const assistantMessage: Message = {
role: 'assistant',
content: response.content,
};
messages.push(assistantMessage);
await appendMessage(sessionKey, assistantMessage);
// If model is done, return final assistant text.
if (response.stop_reason === 'end_turn') {
return response.content
.filter((block) => block.type === 'text')
.map((block) => block.text)
.join('')
.trim();
}
// If model requested tools, execute all tool uses and append tool_result blocks.
if (response.stop_reason === 'tool_use') {
const toolResults: ToolResultBlock[] = [];
for (const block of response.content) {
if (block.type !== 'tool_use') continue;
const result = await executeTool(
block.name,
typeof block.input === 'object' && block.input ? (block.input as Record<string, unknown>) : {}
);
toolResults.push({
type: 'tool_result',
tool_use_id: block.id,
content: String(result),
});
}
const resultsMessage: Message = {
role: 'user',
content: toolResults,
};
messages.push(resultsMessage);
await appendMessage(sessionKey, resultsMessage);
}
}
return '(max turns reached)';
});
}
// Route messages to specialized agents using a simple command prefix.
function resolveAgent(messageText: string) {
if (messageText.startsWith('/research ')) {
return { agentId: 'researcher', text: messageText.slice('/research '.length) };
}
return { agentId: 'main', text: messageText };
}
// HTTP gateway: exposes /chat and forwards requests into the shared agent runtime.
async function startHttpGateway(agents: Record<string, AgentConfig>) {
const app = express();
app.use(express.json());
const bodySchema = z.object({ user_id: z.string().min(1), message: z.string().min(1) });
app.post('/chat', async (req, res) => {
const parsed = bodySchema.safeParse(req.body);
if (!parsed.success) {
return res.status(400).json({ error: parsed.error.flatten() });
}
const { user_id, message } = parsed.data;
const { agentId, text } = resolveAgent(message);
const agent = agents[agentId];
const sessionKey = `${agent.sessionPrefix}:${user_id}`;
try {
const response = await runAgentTurn(sessionKey, text, agent);
return res.json({ agent: agent.name, response });
} catch (error) {
return res.status(500).json({ error: error instanceof Error ? error.message : 'unknown error' });
}
});
app.listen(PORT, () => {
console.log(`HTTP gateway listening on http://127.0.0.1:${PORT}`);
});
}
// Telegram gateway: receives chat messages and forwards to the same runtime/session model.
async function startTelegramGateway(agents: Record<string, AgentConfig>) {
const token = process.env.TELEGRAM_BOT_TOKEN;
if (!token) {
console.log('TELEGRAM_BOT_TOKEN not set; Telegram gateway disabled.');
return;
}
const bot = new TelegramBot(token, { polling: true });
bot.on('message', async (msg) => {
if (!msg.text) return;
const userId = String(msg.from?.id ?? msg.chat.id);
const { agentId, text } = resolveAgent(msg.text);
const agent = agents[agentId];
const sessionKey = `${agent.sessionPrefix}:${userId}`;
try {
const response = await runAgentTurn(sessionKey, text, agent);
await bot.sendMessage(msg.chat.id, `[${agent.name}] ${response}`);
} catch (error) {
await bot.sendMessage(msg.chat.id, `Error: ${error instanceof Error ? error.message : 'unknown error'}`);
}
});
console.log('Telegram gateway running (polling).');
}
// Heartbeats: scheduled autonomous tasks with isolated session keys.
function startHeartbeats(agents: Record<string, AgentConfig>) {
cron.schedule('30 7 * * *', async () => {
const sessionKey = 'cron:morning-briefing';
const prompt = 'Good morning. Check today\'s date and share a short motivational quote.';
try {
const result = await runAgentTurn(sessionKey, prompt, agents.main);
console.log(`\n⏰ Heartbeat result:\n${result}\n`);
} catch (error) {
console.error('Heartbeat failed:', error);
}
});
console.log('Heartbeats scheduled (daily at 07:30).');
}
// Local REPL so you can test agent behavior without Telegram or HTTP.
async function startRepl(agents: Record<string, AgentConfig>) {
const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
let sessionKey = 'agent:main:repl';
console.log('\nMini OpenClaw (TypeScript)');
console.log('Commands: /new, /research <query>, /quit\n');
while (true) {
const input = (await rl.question('You: ')).trim();
if (!input) continue;
if (['/quit', '/exit', '/q'].includes(input.toLowerCase())) break;
// /new gives you a fresh main-agent session while preserving persisted memory files.
if (input.toLowerCase() === '/new') {
sessionKey = `agent:main:repl:${Date.now()}`;
console.log('Session reset.\n');
continue;
}
const { agentId, text } = resolveAgent(input);
const agent = agents[agentId];
const key = agentId === 'main' ? sessionKey : `${agent.sessionPrefix}:repl`;
const response = await runAgentTurn(key, text, agent);
console.log(`\n🤖 [${agent.name}] ${response}\n`);
}
rl.close();
}
// Runtime bootstrap: validates env, loads SOUL, defines agents, and starts all interfaces.
async function main() {
if (!process.env.ANTHROPIC_API_KEY) {
throw new Error('ANTHROPIC_API_KEY is required');
}
await ensureDirs();
const soul = await loadSoul();
const agents: Record<string, AgentConfig> = {
main: {
id: 'main',
name: 'Jarvis',
model: MODEL,
soul,
sessionPrefix: 'agent:main',
},
researcher: {
id: 'researcher',
name: 'Scout',
model: MODEL,
soul:
'You are Scout, a research specialist. Cite sources. Be thorough but concise. Save important findings with save_memory for other agents.',
sessionPrefix: 'agent:researcher',
},
};
await startHttpGateway(agents);
await startTelegramGateway(agents);
startHeartbeats(agents);
await startRepl(agents);
}
// Standard async entrypoint with fatal error handling.
main().catch((error) => {
console.error(error);
process.exit(1);
});
Run it
bun run src/mini-openclaw.ts
Quick HTTP test:
curl -X POST http://127.0.0.1:5000/chat \
-H "Content-Type: application/json" \
-d '{"user_id":"demo-user","message":"My name is Nader"}'
curl -X POST http://127.0.0.1:5000/chat \
-H "Content-Type: application/json" \
-d '{"user_id":"demo-user","message":"What is my name?"}'If both calls use the same user_id, the second response should reflect memory from the first.
What we built
Following the same progression as the Python version, this clone implements:[1]
- Stateless bot → persistent JSONL sessions
- Generic assistant → SOUL.md identity
- Chat-only bot → tool-calling agent loop
- Unsafe shell execution → permission controls
- One interface → Telegram + HTTP gateway
- Growing context → compaction
- Session-only history → long-term memory files
- Race conditions → per-session queue lock
- Reactive-only behavior → cron heartbeats
- One-size agent → multi-agent routing
Practical notes
- run_command is the highest-risk tool; keep the allowlist strict.[7][8]
- web_search is intentionally a stub; swap in a real provider for production.
- JSONL is simple and resilient for local transcripts.[6]
- In-process cron is great for a single host; use durable scheduling for distributed systems.[5]
References
[1]: Dabit3, “You Could’ve Invented OpenClaw” (source article) — Twitter
[2]: Anthropic API docs (messages + tool use) — docs
[6]: JSON Lines format — jsonlines.org
[7]: OWASP command injection guidance — cheat sheet