MimicLaw vs OpenClaw

How does a $5 embedded agent compare to the full framework?

Feature parity

What's the same

| Feature | MimicLaw | OpenClaw |
|---|---|---|
| ReAct agent loop with tool use | agent_loop.c (max 10 iterations) | auto-reply/run.ts |
| Tool use protocol | Anthropic native tool_use/tool_result | Same |
| Memory system | MEMORY.md + daily notes (SPIFFS) | MEMORY.md + daily notes (filesystem) |
| Session persistence | JSONL per chat | JSONL per chat |
| Telegram channel | Long polling via HTTPS | Long polling via node-telegram-bot-api |
| Bootstrap files | SOUL.md, USER.md | SOUL.md, USER.md |
| Context builder | Loads memory + tools into system prompt | Same pattern |
| WebSocket gateway | JSON protocol (port 18789) | JSON protocol |

What's different

| Aspect | MimicLaw | OpenClaw |
|---|---|---|
| Runtime | Bare-metal C / FreeRTOS | Node.js / Python asyncio |
| LLM provider | Anthropic only (direct API) | LiteLLM (multi-provider) |
| API mode | Non-streaming JSON | Streaming SSE |
| RAM | 512 KB SRAM + 8 MB PSRAM | GBs |
| Storage | 12 MB SPIFFS (flat filesystem) | GBs disk (full filesystem) |
| Concurrency | FreeRTOS tasks + queues (dual-core) | async/await event loop |
| Message bus | FreeRTOS xQueue (depth 8) | Async queues |
| Config | Build-time secrets + NVS via CLI | YAML / .env files |
| CLI | esp_console (serial, 17 commands) | Terminal (stdin) |
| OTA | esp_https_ota (dual OTA slots) | Git pull / package manager |
| Binary size | ~1.5 MB firmware | ~100 MB (Node.js + deps) |
| Power | ~0.5 W (runs 24/7 on USB) | ~10-50 W (server) |

What's missing in MimicLaw

Not yet implemented (from TODO.md):

| Feature | Priority | Notes |
|---|---|---|
| Skills system | P1 | Pluggable capabilities (SKILL.md files) |
| Subagents | P0 | Background task spawning |
| Cron / heartbeat | P2 | Scheduled tasks |
| Media handling | P1 | Telegram photos, voice, files |
| Voice transcription | P2 | Whisper API integration |
| Multi-LLM support | P2 | OpenAI, Gemini, OpenRouter |
| Telegram allowlist | P1 | User authentication (allow_from) |
| Markdown → HTML | P1 | Better Telegram rendering |
| Streaming tokens | P2 | WebSocket per-token push |
| WhatsApp / Feishu | P2 | Additional channels |

Deliberately omitted (hardware limitations):

  • Local LLM inference (not enough RAM)
  • Audio I/O (no microphone/speaker on reference board)
  • Vision / camera input
  • Full POSIX filesystem (SPIFFS is flat, no directories)

Design trade-offs

Non-streaming API responses

MimicLaw uses non-streaming JSON responses from the Claude API instead of SSE streaming.

Why: Simpler parsing (single JSON object), lower memory overhead (no streaming state machine), easier error handling.

Trade-off: Users see no output until the full response arrives. For long responses, this means a noticeable wait.
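To illustrate why the single-object format is simpler, here is a hedged sketch. The helper name and the `strstr`-based scan are illustrative only; real firmware would use a proper JSON parser (e.g. cJSON) and handle escaped quotes. The point is that one complete object needs no SSE event parser or streaming state machine:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative only: pull the first "text" field out of a complete
 * (non-streaming) Messages API response body. A real implementation
 * would use a JSON parser and handle escaped quotes. */
static int extract_text(const char *body, char *out, size_t out_len) {
    const char *p = strstr(body, "\"text\":\"");
    if (!p) return -1;                     /* no text block in response */
    p += strlen("\"text\":\"");
    size_t i = 0;
    while (*p && *p != '"' && i + 1 < out_len)
        out[i++] = *p++;                   /* copy until closing quote */
    out[i] = '\0';
    return 0;
}
```

With streaming SSE, the same extraction would require buffering `content_block_delta` events and tracking parser state across chunks.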

JSONL instead of SQLite

Session history uses JSONL (one JSON object per line) instead of a database.

Why: Human-readable, no SQLite port needed (saves ~500 KB flash), append-only writes are flash-friendly.

Trade-off: No indexing, must scan entire file to load history. Mitigated by the ring buffer (only last 20 messages loaded).
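A host-side sketch of the pattern (function names and record shapes are illustrative; on-device the same calls go through SPIFFS paths): writes are pure appends, and loading the tail requires scanning the whole file because there is no index:

```c
#include <stdio.h>
#include <string.h>

#define LINE_MAX_LEN 256

/* Append one JSON object as a line (append-only: flash-friendly). */
static int jsonl_append(const char *path, const char *json) {
    FILE *f = fopen(path, "a");
    if (!f) return -1;
    fprintf(f, "%s\n", json);
    fclose(f);
    return 0;
}

/* Load up to the last n lines. Two passes over the whole file --
 * the cost of having no index. Returns the number of lines loaded. */
static int jsonl_load_last(const char *path, char out[][LINE_MAX_LEN], int n) {
    FILE *f = fopen(path, "r");
    if (!f) return 0;
    char line[LINE_MAX_LEN];
    long total = 0;
    while (fgets(line, sizeof line, f)) total++;   /* pass 1: count lines */
    rewind(f);
    long skip = total > n ? total - n : 0;         /* pass 2: skip to tail */
    int count = 0;
    while (fgets(line, sizeof line, f)) {
        if (skip-- > 0) continue;
        line[strcspn(line, "\n")] = '\0';
        strncpy(out[count], line, LINE_MAX_LEN - 1);
        out[count][LINE_MAX_LEN - 1] = '\0';
        count++;
    }
    fclose(f);
    return count;
}
```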

Direct Anthropic API instead of LiteLLM

MimicLaw calls the Anthropic Messages API directly with esp_http_client.

Why: LiteLLM requires a Python/Node.js runtime. Direct calls eliminate that dependency entirely.

Trade-off: Locked to Anthropic. Switching to OpenAI or Gemini requires code changes to the request/response format.
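A minimal sketch of what "locked to Anthropic" means in practice: the request body is serialized by hand into the Messages API shape (the helper name, model id, and single-turn payload are illustrative; the real firmware also serializes session history and tool schemas, JSON-escapes the text, and POSTs via esp_http_client with `x-api-key` and `anthropic-version` headers). Supporting another provider means changing this format:

```c
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Illustrative: serialize a single-turn Messages API request body.
 * user_text is assumed pre-escaped; a real build must JSON-escape it. */
static int build_request(char *buf, size_t len, const char *model,
                         int max_tokens, const char *user_text) {
    return snprintf(buf, len,
        "{\"model\":\"%s\",\"max_tokens\":%d,"
        "\"messages\":[{\"role\":\"user\",\"content\":\"%s\"}]}",
        model, max_tokens, user_text);
}
```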

HTTP CONNECT proxy instead of SOCKS5

Proxy support uses HTTP CONNECT tunneling.

Why: Simpler protocol (HTTP-based), widely supported by common proxy tools (Clash Verge, V2Ray, Squid).

Trade-off: Not compatible with SOCKS5-only proxies.
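The tunnel setup is small enough to sketch (helper names are illustrative): send one CONNECT request over the proxy socket, check the status line for 200, then run the normal TLS handshake through the now-open tunnel:

```c
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Build the CONNECT request that asks the proxy to open a raw TCP
 * tunnel to host:port. */
static int build_connect(char *buf, size_t len, const char *host, int port) {
    return snprintf(buf, len,
        "CONNECT %s:%d HTTP/1.1\r\n"
        "Host: %s:%d\r\n"
        "Proxy-Connection: keep-alive\r\n\r\n",
        host, port, host, port);
}

/* Check the proxy's status line; on success (e.g. "HTTP/1.1 200
 * Connection established") everything after this is the raw tunnel. */
static int connect_ok(const char *resp) {
    return strncmp(resp, "HTTP/1.", 7) == 0 && strstr(resp, " 200 ") != NULL;
}
```

SOCKS5, by contrast, needs its own binary handshake (version/method negotiation, address-type encoding), which is the code this design avoids carrying.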

Dual-core task separation

Core 0 handles all I/O (Telegram, WebSocket, serial). Core 1 runs the agent loop exclusively.

Why: Prevents network I/O latency from blocking agent processing. The agent can run a full ReAct iteration while Telegram polls in the background.

Trade-off: Slightly more complex task coordination (queues instead of direct calls).
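The pinning can be sketched with the ESP-IDF task API (task function names, stack sizes, priorities, and the message type are illustrative stand-ins, not MimicLaw's actual symbols):

```c
static QueueHandle_t inbox;

void app_tasks_init(void) {
    /* Queue connecting I/O tasks to the agent (depth 8, as above). */
    inbox = xQueueCreate(8, sizeof(bus_msg_t));

    /* Core 0: all network I/O. */
    xTaskCreatePinnedToCore(telegram_task, "telegram", 8192, NULL, 5, NULL, 0);
    xTaskCreatePinnedToCore(ws_server_task, "gateway", 8192, NULL, 5, NULL, 0);

    /* Core 1: the agent loop exclusively, fed through the queue
     * instead of direct calls from the I/O tasks. */
    xTaskCreatePinnedToCore(agent_task, "agent", 16384, NULL, 5, NULL, 1);
}
```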

20-message session ring buffer

Only the last 20 messages per conversation are loaded into the LLM context.

Why: Bounds memory usage. 20 messages at ~200 tokens each ≈ 4,000 tokens, well within Claude's context window.

Trade-off: Long conversations lose early context. Important information should be saved to MEMORY.md by the agent.
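The policy can be sketched as a fixed array with wraparound (`RING_N` and the flat message type are illustrative stand-ins for the actual session structures):

```c
#include <stdio.h>
#include <string.h>

#define RING_N  20    /* retained messages per conversation */
#define MSG_LEN 256

/* Fixed-size ring: holds at most the last RING_N messages. */
typedef struct {
    char slots[RING_N][MSG_LEN];
    int head;    /* next write position */
    int count;   /* valid slots, capped at RING_N */
} msg_ring_t;

static void ring_push(msg_ring_t *r, const char *msg) {
    strncpy(r->slots[r->head], msg, MSG_LEN - 1);
    r->slots[r->head][MSG_LEN - 1] = '\0';
    r->head = (r->head + 1) % RING_N;       /* oldest slot is overwritten */
    if (r->count < RING_N) r->count++;
}

/* i-th retained message, oldest first (i in [0, count)). */
static const char *ring_get(const msg_ring_t *r, int i) {
    int start = (r->head - r->count + RING_N) % RING_N;
    return r->slots[(start + i) % RING_N];
}
```

Memory use is fixed at compile time regardless of conversation length, which is the property that matters with 512 KB of SRAM.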

Architecture mapping

How MimicLaw modules map to OpenClaw modules:

| MimicLaw | OpenClaw | Notes |
|---|---|---|
| bus/message_bus.c | src/channels/ | FreeRTOS queues vs async channels |
| telegram/telegram_bot.c | extensions/telegram/ | Direct HTTP vs node-telegram-bot-api |
| llm/llm_proxy.c | src/agents/pi-embedded-runner/ | Direct Anthropic vs LiteLLM |
| agent/agent_loop.c | src/auto-reply/run.ts | Same ReAct pattern |
| agent/context_builder.c | src/agents/pi-embedded-runner/build-prompt.ts | Same structure |
| memory/memory_store.c | src/memory/ | SPIFFS vs filesystem |
| memory/session_mgr.c | src/sessions/ | JSONL on both |
| tools/tool_registry.c | src/tools/ | Same tool schema format |
| gateway/ws_server.c | src/gateway/ | Same JSON protocol |
| cli/serial_cli.c | — | Unique to embedded |
| proxy/http_proxy.c | — | Unique to embedded |
| ota/ota_manager.c | — | Unique to embedded |