MimicLaw Architecture
Based on MimicLaw version 0.1.0 (commit 5bcb28a).
High-level data flow
User (Telegram / WebSocket / Serial CLI)
│
▼
┌──────────────────────────────────────────────────┐
│ ESP32-S3 (MimicLaw) │
│ │
│ [Input Channels — Core 0] │
│ ├─ Telegram Poller (long polling, 30s timeout) │
│ ├─ WebSocket Server (port 18789) │
│ └─ Serial CLI (USB, 115200 baud) │
│ │ │
│ ▼ │
│ ┌──────────────────────┐ │
│ │ Inbound Queue (8) │ FreeRTOS xQueue │
│ └──────────┬───────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────────┐ │
│ │ Agent Loop — Core 1 │ │
│ │ │ │
│ │ 1. Load session history (last 20) │ │
│ │ 2. Build system prompt │ │
│ │ SOUL.md + USER.md + MEMORY.md │ │
│ │ + recent daily notes (3 days) │ │
│ │ 3. ReAct loop (max 10 iterations): │ │
│ │ ├─ Call Claude API with tools │ │
│ │ ├─ If tool_use → execute tools │ │
│ │ └─ Repeat until end_turn │ │
│ │ 4. Save to session JSONL │ │
│ │ 5. Push response to outbound │ │
│ └──────────────┬───────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────┐ │
│ │ Outbound Queue (8) │ │
│ └──────────┬───────────┘ │
│ │ │
│ ▼ │
│ [Output Dispatch — Core 0] │
│ ├─ telegram.sendMessage (auto-split at 4096) │
│ └─ WebSocket.send │
└──────────────────────────────────────────────────┘
│
▼ HTTPS (via esp_tls / proxy tunnel)
Anthropic Claude API
Brave Search API
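The output dispatch step splits long replies at Telegram's 4096-character sendMessage limit. A minimal sketch of that boundary calculation (the helper name is an assumption, not the actual function in telegram_bot.c); it backs up over UTF-8 continuation bytes so a multi-byte character is never cut in half:

```c
#include <assert.h>
#include <string.h>

#define TG_MSG_LIMIT 4096

/* Returns the byte length of the next chunk to send, never splitting
 * inside a UTF-8 sequence (continuation bytes match 0b10xxxxxx). */
static size_t tg_next_chunk_len(const char *text, size_t remaining)
{
    if (remaining <= TG_MSG_LIMIT)
        return remaining;
    size_t len = TG_MSG_LIMIT;
    /* Step back while the split point lands on a continuation byte. */
    while (len > 0 && ((unsigned char)text[len] & 0xC0) == 0x80)
        len--;
    return len;
}
```

The caller would loop, sending `tg_next_chunk_len` bytes at a time and advancing the pointer until `remaining` reaches zero.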
Module map
main/
├── mimi.c # Entry point (app_main), orchestrates init
├── mimi_config.h # All compile-time constants and NVS keys
├── mimi_secrets.h # Build-time credentials (gitignored)
│
├── bus/ # Message bus — FreeRTOS queues
│ ├── message_bus.h # mimi_msg_t: channel, chat_id, content
│ └── message_bus.c # Inbound (8) + Outbound (8) queues
│
├── wifi/ # WiFi STA management
│ └── wifi_manager.c # Connect, exponential backoff, NVS creds
│
├── telegram/ # Telegram Bot API client
│ └── telegram_bot.c # Long polling, proxy support, msg splitting
│
├── llm/ # Anthropic Messages API client
│ └── llm_proxy.c # Non-streaming, PSRAM buffers, proxy path
│
├── agent/ # AI agent core
│ ├── agent_loop.c # ReAct loop with tool use (max 10 iter)
│ └── context_builder.c # System prompt: bootstrap + memory + tools
│
├── memory/ # Persistent storage
│ ├── memory_store.c # MEMORY.md + daily notes (SPIFFS)
│ └── session_mgr.c # JSONL sessions with ring buffer (20 msgs)
│
├── tools/ # Tool registry + built-in tools
│ ├── tool_registry.c # Register tools, build tools JSON schema
│ ├── tool_web_search.c # Brave Search API (direct + proxy)
│ ├── tool_get_time.c # HTTP Date header → system clock sync
│ └── tool_files.c # SPIFFS: read, write, edit, list_dir
│
├── proxy/ # HTTP CONNECT proxy tunneling
│ └── http_proxy.c # TCP → CONNECT → TLS over tunnel
│
├── gateway/ # WebSocket server
│ └── ws_server.c # Port 18789, JSON protocol, max 4 clients
│
├── cli/ # Serial console
│ └── serial_cli.c # esp_console REPL, 17 commands
│
└── ota/ # Over-the-air updates
└── ota_manager.c # esp_https_ota wrapper
Total: ~4,000 lines of C.
FreeRTOS task layout
Design principle: Core 0 handles all network I/O. Core 1 is dedicated to the agent loop. This prevents I/O latency from blocking agent processing and vice versa.
Flash partition layout
The flash layout is defined in partitions.csv for the 16 MB flash part.
RAM memory budget
Strategy: small, frequently accessed data stays in internal SRAM (512 KB); large buffers (32 KB and up) are allocated from PSRAM via heap_caps_calloc(1, size, MALLOC_CAP_SPIRAM).
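A sketch of that budget rule as an allocation wrapper. heap_caps_calloc and MALLOC_CAP_SPIRAM are real ESP-IDF APIs; the wrapper name, the threshold constant, and the host-build fallback to plain calloc (so the sketch compiles anywhere) are assumptions for illustration:

```c
#include <assert.h>
#include <stdlib.h>

#define PSRAM_THRESHOLD (32 * 1024)  /* 32 KB+ goes to PSRAM */

#ifdef ESP_PLATFORM
#include "esp_heap_caps.h"
#endif

/* Hypothetical helper: route big zero-initialized buffers to PSRAM,
 * small ones to internal SRAM (plain heap on a host build). */
static void *mimi_calloc(size_t size)
{
#ifdef ESP_PLATFORM
    if (size >= PSRAM_THRESHOLD)
        return heap_caps_calloc(1, size, MALLOC_CAP_SPIRAM);
#endif
    return calloc(1, size);
}
```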
SPIFFS storage layout
/spiffs/
├── config/
│ ├── SOUL.md # AI personality definition
│ └── USER.md # User profile
├── memory/
│ ├── MEMORY.md # Long-term persistent memory
│ └── daily/
│ ├── 2026-02-20.md # Daily notes (one file per day)
│ ├── 2026-02-21.md
│ └── ...
└── sessions/
├── tg_12345.jsonl # Per-chat session (Telegram)
├── tg_67890.jsonl
└── ws_3.jsonl # Per-client session (WebSocket)
Session format (JSONL — one JSON object per line):
{"role":"user","content":"Hello","ts":1738764800}
{"role":"assistant","content":"Hi there!","ts":1738764802}
Only user + assistant messages are saved. Intermediate tool-use steps are not persisted, keeping session files compact.
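A sketch of the writer side of this format (the function name and signature are assumptions; real code would also JSON-escape the content string):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Append one JSONL record; only "user" and "assistant" roles are
 * persisted, mirroring the rule that tool-use steps are dropped.
 * Returns 1 if a line was written, 0 if the role was filtered out. */
static int session_append(FILE *f, const char *role,
                          const char *content, long ts)
{
    if (strcmp(role, "user") != 0 && strcmp(role, "assistant") != 0)
        return 0;
    fprintf(f, "{\"role\":\"%s\",\"content\":\"%s\",\"ts\":%ld}\n",
            role, content, ts);
    return 1;
}
```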
Message bus
The message bus uses two FreeRTOS queues (depth 8 each) to decouple input channels from the agent loop:
typedef struct {
char channel[16]; // "telegram", "websocket", "cli"
char chat_id[32]; // Telegram chat ID or "ws_<fd>"
char *content; // Heap-allocated text (ownership transferred)
} mimi_msg_t;
Ownership model: The sender allocates content with strdup(). Queue takes ownership. The receiver must free() it after processing.
Claude API integration
MimicLaw calls the Anthropic Messages API directly (no LiteLLM abstraction):
Endpoint: POST https://api.anthropic.com/v1/messages
Key differences from OpenClaw:
- Non-streaming — receives full JSON response, no SSE
- PSRAM buffers — response body accumulated in PSRAM-backed buffer
- Proxy path — if configured, routes through HTTP CONNECT tunnel
- Tool use protocol — uses Anthropic's native
tool_use / tool_result blocks (same as OpenClaw)
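The shape of the request body can be sketched in C. The field names (model, max_tokens, system, messages, tools) follow the Anthropic Messages API; the helper name and signature are assumptions, and real code would build this with a JSON library and escape the strings:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format the non-streaming POST body for /v1/messages into buf.
 * messages_json and tools_json are assumed to be pre-built JSON
 * arrays. Returns the body length, or -1 on truncation. */
static int build_messages_body(char *buf, size_t cap,
                               const char *model, int max_tokens,
                               const char *system_prompt,
                               const char *messages_json,
                               const char *tools_json)
{
    int n = snprintf(buf, cap,
        "{\"model\":\"%s\",\"max_tokens\":%d,"
        "\"system\":\"%s\",\"messages\":%s,\"tools\":%s}",
        model, max_tokens, system_prompt, messages_json, tools_json);
    return (n > 0 && (size_t)n < cap) ? n : -1;
}
```

The returned length drives the Content-Length header; because the response is non-streaming, the reply is read into a single PSRAM-backed buffer and parsed whole.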
ReAct loop in agent_loop.c:
for each iteration (max 10):
1. Send "thinking..." indicator to user
2. Call llm_chat_tools(system_prompt, messages, tools_json)
3. If stop_reason == "end_turn" → done
4. If stop_reason == "tool_use":
a. Append assistant message (text + tool_use blocks)
b. Execute each tool via tool_registry
c. Append user message with tool_result blocks
d. Continue loop
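The control flow above can be sketched with the LLM call stubbed out (the names here are illustrative, not the actual agent_loop.c API):

```c
#include <assert.h>
#include <string.h>

#define MAX_ITERATIONS 10

/* Stub for llm_chat_tools(): returns the stop_reason for iteration i. */
typedef const char *(*llm_call_fn)(int iteration);

/* Run the ReAct loop; returns the number of API round-trips made. */
static int react_loop(llm_call_fn call_llm)
{
    for (int i = 0; i < MAX_ITERATIONS; i++) {
        const char *stop_reason = call_llm(i);         /* step 2 */
        if (strcmp(stop_reason, "end_turn") == 0)
            return i + 1;                              /* step 3: done */
        /* step 4 ("tool_use"): append the assistant message, execute
         * each tool via the registry, append tool_result blocks, and
         * go around again. */
    }
    return MAX_ITERATIONS;  /* iteration cap reached */
}
```

A conversation needing two tool calls thus costs three API round-trips; the cap bounds the worst case at ten.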
System prompt construction
Built by context_builder.c:
# MimiClaw
You are MimiClaw, a personal AI assistant running on an ESP32-S3 device.
[personality + behavior guidelines]
## Available Tools
[tool descriptions from registry]
## Memory
[instructions for proactive memory use via file tools]
## Personality
[SOUL.md content]
## User Info
[USER.md content]
## Long-term Memory
[MEMORY.md content]
## Recent Notes
[last 3 days of daily/*.md]
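The assembly step can be sketched as one format string that fixes the section order; the helper name, signature, and the literal memory-instruction line are assumptions, with section contents passed in by the caller:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Concatenate the documented sections in order into buf.
 * Returns the prompt length, or -1 on truncation. */
static int build_system_prompt(char *buf, size_t cap,
                               const char *tools, const char *soul,
                               const char *user, const char *memory,
                               const char *recent_notes)
{
    int n = snprintf(buf, cap,
        "# MimiClaw\n"
        "You are MimiClaw, a personal AI assistant running on an ESP32-S3 device.\n\n"
        "## Available Tools\n%s\n\n"
        "## Memory\nPersist important facts with write_file / edit_file.\n\n"
        "## Personality\n%s\n\n"
        "## User Info\n%s\n\n"
        "## Long-term Memory\n%s\n\n"
        "## Recent Notes\n%s\n",
        tools, soul, user, memory, recent_notes);
    return (n > 0 && (size_t)n < cap) ? n : -1;
}
```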
Key design: The system prompt instructs the agent to actively use write_file / edit_file tools to persist memory, implementing agent-driven memory persistence (same pattern as OpenClaw's memory system).
Startup sequence
app_main()
├── init_nvs()
├── init_spiffs() ← Mount /spiffs
├── message_bus_init() ← Create inbound + outbound queues
├── memory_store_init()
├── session_mgr_init()
├── wifi_manager_init()
├── http_proxy_init() ← Load proxy config
├── telegram_bot_init() ← Load bot token
├── llm_proxy_init() ← Load API key + model
├── tool_registry_init() ← Register tools, build JSON schema
├── agent_loop_init()
├── serial_cli_init() ← Start REPL (works without WiFi)
│
├── wifi_manager_start() ← Connect (NVS creds → build-time fallback)
│ └── wait_connected(30s)
│
└── [if WiFi connected]
├── telegram_bot_start() ← Launch tg_poll task (Core 0)
├── agent_loop_start() ← Launch agent_loop task (Core 1)
├── ws_server_start() ← Start httpd on port 18789
└── outbound_dispatch() ← Launch dispatch task (Core 0)
If WiFi connection fails, the serial CLI remains operational for diagnostics and reconfiguration.
Networking: proxy support
MimicLaw supports HTTP CONNECT proxy tunneling for networks where direct HTTPS access is restricted (common in China):
1. TCP connect to proxy server
2. Send: CONNECT api.telegram.org:443 HTTP/1.1
3. Receive: HTTP/1.1 200 Connection established
4. TLS handshake over the tunnel (esp_tls)
5. Send/receive HTTPS data through the tunnel
Used by all outbound HTTPS calls: Telegram API, Claude API, Brave Search API.
Compatible with Clash Verge, V2Ray, Squid, and other HTTP proxy servers.
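Steps 2 and 3 of the handshake can be sketched without the socket and esp_tls plumbing (helper names are assumptions; the status check accepts any 2xx, since some proxies phrase the 200 line differently):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Step 2: format the CONNECT request for the target host:port.
 * Returns the request length, or -1 on truncation. */
static int format_connect(char *buf, size_t cap,
                          const char *host, int port)
{
    int n = snprintf(buf, cap,
        "CONNECT %s:%d HTTP/1.1\r\n"
        "Host: %s:%d\r\n"
        "\r\n",
        host, port, host, port);
    return (n > 0 && (size_t)n < cap) ? n : -1;
}

/* Step 3: parse the proxy's status line; any 2xx means the tunnel is
 * open and the TLS handshake (step 4) can proceed over it. */
static int connect_established(const char *status_line)
{
    int code = 0;
    if (sscanf(status_line, "HTTP/%*d.%*d %d", &code) != 1)
        return 0;
    return code >= 200 && code < 300;
}
```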