MimicLaw vs OpenClaw
How does a $5 embedded agent compare to the full framework?
Feature parity
What's the same
What's different
What's missing in MimicLaw
Not yet implemented (from TODO.md):
Deliberately omitted (hardware limitations):
- Local LLM inference (not enough RAM)
- Audio I/O (no microphone/speaker on reference board)
- Vision / camera input
- Full POSIX filesystem (SPIFFS is flat, no directories)
Design trade-offs
Non-streaming API responses
MimicLaw uses non-streaming JSON responses from the Claude API instead of SSE streaming.
Why: Simpler parsing (single JSON object), lower memory overhead (no streaming state machine), easier error handling.
Trade-off: Users see no output until the full response arrives. For long responses, this means a noticeable wait.
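The non-streaming choice shows up in the request body: omitting "stream": true makes the Messages API return one complete JSON object. A minimal sketch of building that body — the helper name, model string, and token limit here are illustrative, not MimicLaw's actual code:

```c
#include <stdio.h>
#include <string.h>

/* Build a minimal Anthropic Messages API request body. Leaving out
 * "stream": true yields a single complete JSON response, which is what
 * the firmware parses. On-device this string becomes the POST payload
 * handed to esp_http_client. */
static int build_messages_body(char *out, size_t cap,
                               const char *model, int max_tokens,
                               const char *user_text)
{
    int n = snprintf(out, cap,
        "{\"model\":\"%s\",\"max_tokens\":%d,"
        "\"messages\":[{\"role\":\"user\",\"content\":\"%s\"}]}",
        model, max_tokens, user_text);
    return (n < 0 || (size_t)n >= cap) ? -1 : n;  /* -1 on truncation */
}
```

Note that user_text is inserted verbatim here; real code must JSON-escape it first.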
JSONL instead of SQLite
Session history uses JSONL (one JSON object per line) instead of a database.
Why: Human-readable, no SQLite port needed (saves ~500 KB flash), append-only writes are flash-friendly.
Trade-off: No indexing; loading history means scanning the entire file. Mitigated by the ring buffer (only the last 20 messages are loaded).
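The append-and-scan pattern can be sketched in portable C. This is a host-side illustration, not MimicLaw's actual code: the function names, the 512-byte line cap, and the file layout are assumptions; only the append-only write and last-20 retention mirror the design described above.

```c
#include <stdio.h>
#include <string.h>

#define HISTORY_DEPTH 20   /* matches the 20-message ring buffer */
#define MAX_LINE      512  /* assumed per-message size cap       */

/* Append one message as a single JSONL line. Append-only writes are
 * friendly to flash wear, unlike in-place database updates. */
static int history_append(const char *path, const char *json_line)
{
    FILE *f = fopen(path, "a");
    if (!f) return -1;
    fprintf(f, "%s\n", json_line);
    fclose(f);
    return 0;
}

/* Scan the whole file (no index), keeping only the last HISTORY_DEPTH
 * lines in a ring; returns the number of messages kept, oldest first. */
static int history_load_tail(const char *path,
                             char out[HISTORY_DEPTH][MAX_LINE])
{
    FILE *f = fopen(path, "r");
    if (!f) return 0;

    char line[MAX_LINE];
    int total = 0;
    while (fgets(line, sizeof line, f)) {
        line[strcspn(line, "\n")] = '\0';
        strncpy(out[total % HISTORY_DEPTH], line, MAX_LINE - 1);
        out[total % HISTORY_DEPTH][MAX_LINE - 1] = '\0';
        total++;
    }
    fclose(f);

    int n = total < HISTORY_DEPTH ? total : HISTORY_DEPTH;
    if (total > HISTORY_DEPTH) {
        /* Rotate so out[0] is the oldest retained message. */
        char tmp[HISTORY_DEPTH][MAX_LINE];
        for (int i = 0; i < HISTORY_DEPTH; i++)
            memcpy(tmp[i], out[(total + i) % HISTORY_DEPTH], MAX_LINE);
        memcpy(out, tmp, sizeof tmp);
    }
    return n;
}
```

The full-file scan is the trade-off in code: cost grows with file length, but the memory held at the end is bounded at HISTORY_DEPTH lines.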
Direct Anthropic API instead of LiteLLM
MimicLaw calls the Anthropic Messages API directly with esp_http_client.
Why: LiteLLM requires a Python/Node.js runtime. Direct calls eliminate that dependency entirely.
Trade-off: Locked to Anthropic. Switching to OpenAI or Gemini requires code changes to the request/response format.
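The lock-in lives in the response parsing: the assistant text sits in Anthropic's content-block format, which an OpenAI or Gemini response would not match. A deliberately naive sketch of pulling that field out — a real build would use a proper JSON parser (cJSON ships with ESP-IDF), and this strstr version ignores escaped quotes; the function name is illustrative:

```c
#include <stdio.h>
#include <string.h>

/* Extract the first "text" field from a complete (non-streaming)
 * Anthropic Messages API response. Sketch only: does not handle
 * escape sequences or multiple content blocks. */
static int extract_text(const char *resp, char *out, size_t cap)
{
    const char *p = strstr(resp, "\"text\":\"");
    if (!p) return -1;
    p += strlen("\"text\":\"");
    size_t i = 0;
    while (p[i] && p[i] != '"' && i + 1 < cap) {
        out[i] = p[i];
        i++;
    }
    out[i] = '\0';
    return (int)i;  /* length of extracted text */
}
```

Switching providers means rewriting both this extraction and the request body, which is exactly the trade-off named above.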
HTTP CONNECT proxy instead of SOCKS5
Proxy support uses HTTP CONNECT tunneling.
Why: Simpler protocol (HTTP-based), widely supported by common proxy tools (Clash Verge, V2Ray, Squid).
Trade-off: Not compatible with SOCKS5-only proxies.
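The protocol's simplicity is visible in the wire format: one plaintext request, one status line back, then the TLS handshake to the API host runs inside the tunnel. A sketch of the two string-handling halves (the helper names are illustrative; socket I/O is omitted):

```c
#include <stdio.h>
#include <string.h>

/* Form the CONNECT request that asks the proxy to open a TCP tunnel
 * to the target host. Everything after the proxy's "200" reply is the
 * raw TLS stream to that host. */
static int build_connect_request(char *out, size_t cap,
                                 const char *host, int port)
{
    int n = snprintf(out, cap,
                     "CONNECT %s:%d HTTP/1.1\r\n"
                     "Host: %s:%d\r\n"
                     "Proxy-Connection: keep-alive\r\n\r\n",
                     host, port, host, port);
    return (n < 0 || (size_t)n >= cap) ? -1 : n;
}

/* Parse the HTTP status code from the proxy's reply line; -1 on a
 * malformed reply. 200 means the tunnel is open. */
static int connect_status(const char *reply)
{
    int code;
    if (sscanf(reply, "HTTP/1.%*d %d", &code) != 1) return -1;
    return code;
}
```

A SOCKS5 client would instead need the binary greeting/auth/request exchange from RFC 1928, which is the protocol complexity this design avoids.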
Dual-core task separation
Core 0 handles all I/O (Telegram, WebSocket, serial). Core 1 runs the agent loop exclusively.
Why: Prevents network I/O latency from blocking agent processing. The agent can run a full ReAct iteration while Telegram polls in the background.
Trade-off: Slightly more complex task coordination (queues instead of direct calls).
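The "queues instead of direct calls" coordination can be modeled with a bounded FIFO. On-device this is a FreeRTOS queue (xQueueSend from the I/O tasks on core 0, a blocking xQueueReceive in the agent task on core 1); the portable sketch below models only the ordering and backpressure, with the names and sizes assumed for illustration:

```c
#include <string.h>

#define QUEUE_DEPTH 8    /* assumed depth of the core-0 -> core-1 queue */
#define MSG_LEN     128  /* assumed per-message size                    */

typedef struct {
    char items[QUEUE_DEPTH][MSG_LEN];
    int head, count;
} msg_queue_t;

/* I/O side (core 0): enqueue an inbound message. Returns -1 when full,
 * analogous to xQueueSend reporting errQUEUE_FULL. */
static int queue_send(msg_queue_t *q, const char *msg)
{
    if (q->count == QUEUE_DEPTH) return -1;
    int tail = (q->head + q->count) % QUEUE_DEPTH;
    strncpy(q->items[tail], msg, MSG_LEN - 1);
    q->items[tail][MSG_LEN - 1] = '\0';
    q->count++;
    return 0;
}

/* Agent side (core 1): dequeue the oldest message. Returns -1 when
 * empty; the real agent task would block in xQueueReceive instead. */
static int queue_receive(msg_queue_t *q, char *out)
{
    if (q->count == 0) return -1;
    strcpy(out, q->items[q->head]);
    q->head = (q->head + 1) % QUEUE_DEPTH;
    q->count--;
    return 0;
}
```

The indirection is the cost named above: the I/O task can never call into the agent directly, so every interaction becomes an enqueue on one core and a dequeue on the other.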
20-message session ring buffer
Only the last 20 messages per conversation are loaded into the LLM context.
Why: Bounds memory usage. 20 messages at ~200 tokens each ≈ 4,000 tokens, well within Claude's context window.
Trade-off: Long conversations lose early context. Important information should be saved to MEMORY.md by the agent.
Architecture mapping
How MimicLaw modules map to OpenClaw modules: