Every AI. Every IDE.

ai-memory speaks MCP — the open Model Context Protocol. Any compliant host autodiscovers all 43 tools. Plug in once, share memory across every AI you use.

14 clients tested · 3 tiers (first-class · supported · works-via-MCP) · All MCP-compliant hosts work
Tested · Verified

14 AI clients in production use.

Each card below shows the exact setup snippet for that client. Tier 1 (cyan) means tested every release with deep integration. Tier 2 (orange) means tested each release cycle. Tier 3 (purple) means it works via the MCP spec but is not formally tested per release.

3 · Tier 1 · first-class
8 · Tier 2 · supported
3 · Tier 3 · works-via-MCP
43 · Tools auto-discovered
🦾
Claude Code ▸ TIER 1 · DEEP INTEGRATION

Anthropic's official CLI. ai-memory runs as a stdio MCP subprocess; the memory_session_start hook fires at every conversation start, returning memories relevant to the recent context.

# claude mcp add
claude mcp add ai-memory --scope user --transport stdio -- ai-memory mcp --tier autonomous
stdio · session_start hook · all 43 tools
Setup guide →
📝
Cursor ▸ TIER 1 · DEEP INTEGRATION

VS Code-fork IDE with native MCP support; ai-memory is autodiscovered. Add it via Settings > Tools & MCP, then restart; a green status dot confirms the server is connected.

# ~/.cursor/mcp.json
{
  "mcpServers": {
    "ai-memory": {
      "command": "ai-memory",
      "args": ["mcp", "--tier", "semantic"]
    }
  }
}
~40-tool limit (we use 43) · envFile support
🦊
Grok CLI (AlphaOne fork) ▸ TIER 1 · DEEP INTEGRATION

First-party fork of xAI's Grok CLI, maintained by AlphaOne. ai-memory autoloads out of the box; memory feels native, with no extra config.

# Built-in. Just install grok-cli; ai-memory tools appear.
grok --memory-tier autonomous
auto-detected · no config needed
🤖
Codex (OpenAI) ▸ TIER 2 · TESTED

OpenAI's developer CLI. TOML config with an underscored mcp_servers key; use /mcp in the TUI to view server status.

# ~/.codex/config.toml (or %APPDATA% on Windows)
[mcp_servers.ai_memory]
command = "ai-memory"
args = ["mcp", "--tier", "semantic"]
startup_timeout_sec = 10
WSL: set CODEX_HOME · enabled_tools allowlist
🌬️
Windsurf ▸ TIER 2 · TESTED

Codeium's agentic IDE. Configure under Settings > Cascade > MCP Servers. ${env:VAR} interpolation supported; 100-tool limit.

# ~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "ai-memory": {
      "command": "ai-memory",
      "args": ["mcp", "--tier", "smart"]
    }
  }
}
⚙️
Continue.dev ▸ TIER 2 · TESTED

VS Code / JetBrains agent; MCP works only in agent mode. Configs live in the project-level .continue/mcpServers/ directory; sibling Claude/Cursor configs are auto-detected.

# .continue/mcpServers/ai-memory.json
{
  "name": "ai-memory",
  "command": "ai-memory",
  "args": ["mcp"]
}
auto-detects Claude/Cursor configs
💎
Gemini CLI ▸ TIER 2 · TESTED

Google's CLI. Use hyphens (not underscores) in server names; tools are auto-prefixed as mcp_memory_*. Env vars matching *TOKEN*/*SECRET* are sanitized unless explicitly declared.

# ~/.config/gemini/settings.json
{
  "mcpServers": {
    "ai-memory": {
      "command": "ai-memory",
      "args": ["mcp", "--tier", "semantic"],
      "trust": true
    }
  }
}
🐢
OpenClaw ▸ TIER 2 · TESTED

AlphaOne's 24/7 multi-machine agent runner. Stacks on ai-memory for "behaviorally autonomous" deployments; cross-host federation is native.

# openclaw.toml
[memory]
backend = "ai-memory"
endpoint = "https://memory.local:9077"
mtls_cert = "/etc/openclaw/peer.pem"
Hermes Agent (Nous) ▸ TIER 2 · TESTED

Agent framework with auto-generated skills. Uses ai-memory as its long-term store; skills can reference memories by ID across sessions.

# hermes.yml
memory:
  driver: ai-memory-mcp
  tier: autonomous
🚀
Grok API (xAI) ▸ TIER 2 · TESTED

Direct xAI API integration via the HTTP daemon. Authorization header passes through; per-agent_id quotas enforced server-side (v0.7).

# Reach the HTTP daemon directly
curl https://memory.local:9077/api/v1/recall \
  -H "x-api-key: $MEMORY_KEY" \
  -H "x-agent-id: ai:grok@host:pid-1"
🦙
Llama / Ollama ▸ TIER 2 · TESTED

Local LLM hosts. ai-memory uses Ollama itself for the smart/autonomous tiers; the same Ollama instance can also be the agent host.

# Smart tier requires Ollama running with gemma4:e2b
ollama pull gemma4:e2b
ai-memory mcp --tier smart
🌐
ChatGPT (web) ▸ TIER 3 · VIA HTTP API

No native MCP, but ai-memory's HTTP daemon is reachable. Use a custom GPT with an Action pointing at /api/v1. The daemon is local-only by default; a tunnel exposes it to ChatGPT.

# Custom GPT Action OpenAPI spec points to:
servers:
  - url: https://your-tunnel.example.com/api/v1
🔌
Any MCP-compliant host ▸ TIER 3 · SPEC-CONFORMANT

Any host implementing the Model Context Protocol v2026-04 or later auto-discovers all 43 ai-memory tools. No special integration required.

# Standard MCP launcher signature
ai-memory mcp [--tier <keyword|semantic|smart|autonomous>]
🧱
Custom agents (Python/JS/Go) ▸ TIER 3 · DIY

Roll-your-own agents talk to the HTTP daemon (42 endpoints), or implement an MCP host shim using the official spec libraries; sketches of both follow.

# Python example: recall against the HTTP daemon
import requests

r = requests.post(
    "http://localhost:9077/api/v1/recall",
    json={"context": "...query..."},
)
print(r.json())
Why MCP wins

One protocol. Every AI host. Forever.

MCP is open, vendor-neutral, transport-agnostic. By implementing MCP once, ai-memory automatically works with every AI host that implements MCP — including future ones we haven't heard of yet. The per-AI integration cost is zero.

The MCP autodiscovery handshake

  1. Host launches ai-memory mcp --tier semantic as a stdio subprocess
  2. Host sends JSON-RPC initialize with protocolVersion + clientInfo
  3. ai-memory responds with capabilities + serverInfo
  4. Host sends tools/list
  5. ai-memory responds with all 43 tool definitions (name, description, JSON Schema)
  6. Host registers the tools with its model and presents them to the user as available actions
  7. User says "remember that I prefer..." → host calls tools/call with memory_store ...
  8. ai-memory does the work, returns the result
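
The same exchange, sketched from the host side in Python. It assumes the stdio transport's newline-delimited JSON-RPC framing; the demo-host client name and the memory_store argument shape are illustrative, not ai-memory's documented schema.

# Hypothetical host-side walkthrough of the handshake
import json
import subprocess

# Step 1: launch ai-memory as a stdio subprocess
proc = subprocess.Popen(["ai-memory", "mcp", "--tier", "semantic"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

def send(msg):
    proc.stdin.write(json.dumps(msg) + "\n")  # one JSON-RPC message per line
    proc.stdin.flush()

def recv():
    return json.loads(proc.stdout.readline())

# Steps 2-3: initialize exchange (protocolVersion per the spec text above)
send({"jsonrpc": "2.0", "id": 1, "method": "initialize",
      "params": {"protocolVersion": "2026-04", "capabilities": {},
                 "clientInfo": {"name": "demo-host", "version": "0.0.1"}}})
print(recv()["result"]["serverInfo"])
send({"jsonrpc": "2.0", "method": "notifications/initialized"})

# Steps 4-5: discover the tool definitions
send({"jsonrpc": "2.0", "id": 2, "method": "tools/list", "params": {}})
print(len(recv()["result"]["tools"]), "tools discovered")  # expect 43

# Steps 7-8: invoke a tool (argument shape is illustrative)
send({"jsonrpc": "2.0", "id": 3, "method": "tools/call",
      "params": {"name": "memory_store", "arguments": {"content": "prefers dark mode"}}})
print(recv()["result"])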

No custom integration code per-host. The protocol IS the integration.