ai-memory speaks MCP — the open Model Context Protocol. Any compliant host autodiscovers all 43 tools. Plug in once, share memory across every AI you use.
Each card below shows the exact setup snippet for that client. Tier 1 (cyan) means tested every release with deep integration. Tier 2 (orange) means tested each release cycle. Tier 3 (purple) means it works via the MCP spec but is not formally tested per release.
Anthropic's official CLI. ai-memory runs as a stdio MCP subprocess; the memory_session_start hook fires at every conversation start, returning recent-context-aware memories.
# claude mcp add
claude mcp add ai-memory --scope user --transport stdio -- ai-memory mcp --tier autonomous
Setup guide →
VS Code-fork IDE with native MCP support; ai-memory is autodiscovered. Add it via Settings > Tools & MCP, then restart to see the green-dot status indicator.
# ~/.cursor/mcp.json
{
  "mcpServers": {
    "ai-memory": {
      "command": "ai-memory",
      "args": ["mcp", "--tier", "semantic"]
    }
  }
}
First-party fork of xAI's Grok CLI, maintained by AlphaOne, with native ai-memory autoload. Memory feels native; no extra config.
# Built-in. Just install grok-cli; ai-memory tools appear.
grok --memory-tier autonomous
OpenAI's developer CLI. TOML config uses the underscored mcp_servers key. Use /mcp in the TUI to view server status.
# ~/.codex/config.toml (or %APPDATA% on Windows)
[mcp_servers.ai_memory]
command = "ai-memory"
args = ["mcp", "--tier", "semantic"]
startup_timeout_sec = 10
Codeium's agentic IDE. Configure under Settings > Cascade > MCP Servers. ${env:VAR} interpolation is supported; note the 100-tool limit.
# ~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "ai-memory": {
      "command": "ai-memory",
      "args": ["mcp", "--tier", "smart"]
    }
  }
}
VS Code / JetBrains agent. MCP is available only in agent mode. Sibling configs in the project-level .continue/mcpServers/ directory are auto-detected.
# .continue/mcpServers/ai-memory.json
{
  "name": "ai-memory",
  "command": "ai-memory",
  "args": ["mcp"]
}
Google's CLI. Use hyphens (not underscores) in server names. Tools are auto-prefixed as mcp_memory_*. Env vars matching *TOKEN*/*SECRET* are sanitized unless explicitly declared.
# ~/.config/gemini/settings.json
{
  "mcpServers": {
    "ai-memory": {
      "command": "ai-memory",
      "args": ["mcp", "--tier", "semantic"],
      "trust": true
    }
  }
}
AlphaOne's 24/7 multi-machine agent runner. Stacks on top of ai-memory for "behaviorally autonomous" deployments; cross-host federation is native.
# openclaw.toml
[memory]
backend = "ai-memory"
endpoint = "https://memory.local:9077"
mtls_cert = "/etc/openclaw/peer.pem"
Agent framework with auto-generated skills. Uses ai-memory as its long-term store; skills can reference memories by ID across sessions.
# hermes.yml
memory:
driver: ai-memory-mcp
tier: autonomous
Direct xAI API integration via the HTTP daemon. The Authorization header passes through; per-agent_id quotas are enforced server-side (v0.7).
# Reach the HTTP daemon directly
curl https://memory.local:9077/api/v1/recall \
  -H "x-api-key: $MEMORY_KEY" \
  -H "x-agent-id: ai:grok@host:pid-1"
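The same recall call can be sketched in Python. This is an illustration only: the daemon host, key value, and request body echo the curl example above, and the request is built but not sent, since the daemon may not be running where you read this.

```python
import json
import urllib.request

def build_recall_request(context: str, api_key: str, agent_id: str) -> urllib.request.Request:
    """Build (but do not send) a POST to the ai-memory recall endpoint."""
    body = json.dumps({"context": context}).encode()
    return urllib.request.Request(
        "https://memory.local:9077/api/v1/recall",  # host from the curl example above
        data=body,
        headers={
            "x-api-key": api_key,        # maps to -H "x-api-key: $MEMORY_KEY"
            "x-agent-id": agent_id,      # maps to -H "x-agent-id: ..."
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_recall_request("recent deploy notes", "sk-example", "ai:grok@host:pid-1")
# urllib.request.urlopen(req) would send it; omitted here deliberately.
```

Swap in requests or any HTTP client you prefer; the daemon only cares about the two x-* headers and a JSON body.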
Local LLM hosts. ai-memory uses Ollama itself for the smart and autonomous tiers; the same Ollama instance can also serve as the agent host.
# Smart tier requires Ollama running with gemma4:e2b
ollama pull gemma4:e2b
ai-memory mcp --tier smart
No native MCP support, but ai-memory's HTTP daemon is still reachable. Use a custom GPT with an Action pointing at /api/v1. Local-only by default; a tunnel is optional.
# Custom GPT Action OpenAPI spec points to:
servers:
  - url: https://your-tunnel.example.com/api/v1
Any host implementing the Model Context Protocol v2026-04 or later auto-discovers the full ai-memory tool set. No special integration is required.
# Standard MCP launcher signature ai-memory mcp [--tier <keyword|semantic|smart|autonomous>]
Roll-your-own agents talk to the HTTP daemon (42 endpoints), or implement an MCP host shim using the official spec libraries.
# Python example
import requests

r = requests.post(
    "http://localhost:9077/api/v1/recall",
    json={"context": "...query..."},
)
MCP is open, vendor-neutral, and transport-agnostic. By implementing MCP once, ai-memory automatically works with every AI host that implements MCP, including future ones we haven't heard of yet. The per-AI integration cost is zero.
1. Host launches ai-memory mcp --tier semantic as a stdio subprocess
2. Host sends initialize with protocolVersion + clientInfo
3. Host calls tools/list
4. Host calls tools/call memory_store ...
No custom integration code per host. The protocol IS the integration.
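That handshake can be sketched as the JSON-RPC 2.0 messages a host writes, one per line, to the server subprocess's stdin. The clientInfo values, capabilities, and memory_store arguments here are illustrative assumptions; the method names come from the MCP spec and the version string from this page.

```python
import json

# Messages an MCP host sends after launching `ai-memory mcp --tier semantic`.
initialize = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2026-04",  # version named on this page; match your host
        "clientInfo": {"name": "my-host", "version": "0.1"},  # hypothetical host
        "capabilities": {},
    },
}
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
call_store = {
    "jsonrpc": "2.0", "id": 3, "method": "tools/call",
    "params": {"name": "memory_store",
               "arguments": {"content": "example memory"}},  # illustrative args
}

# Newline-delimited framing, as used by the MCP stdio transport.
wire = "\n".join(json.dumps(m) for m in (initialize, list_tools, call_store))
```

Every MCP host speaks exactly this sequence, which is why no per-host glue code is needed: the server replies to the same three methods regardless of who launched it.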