The first multi-tenant tier. One ai-memory process, but a swarm of agents — typically 10 concurrent — each writing into its own namespace, recalling with scope visibility filters, and gated by per-namespace governance with a pending-approval queue.
This is the canonical shape for a workstation running a planner, a coder, a reviewer, and a handful of skills as separate Claude Code agents — all sharing the same memory store but each siloed by namespace.
Every agent occupies a position in the namespace tree, typically agents/<role> or org/team/role. Memories are written under that namespace, and the indexed scope_idx generated column captures the agent's visibility scope (private / team / unit / org / collective).
When agent agents/planner calls memory_recall and passes as_agent=agents/planner, compute_visibility_prefixes() (src/db.rs:26-38) walks the namespace ancestors and returns [agents/planner, agents/, '']. The recall SQL then WHEREs on scope_idx IN (...) — the planner sees its own private memories plus anything scoped team or collective that an ancestor namespace publishes.
This is the same machinery T1 has — but at T2 it's actually doing work.
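The ancestor walk can be sketched as follows. This is an illustrative re-implementation based only on the behavior described above (own namespace, each parent prefix with its trailing slash, then the empty collective prefix), not the actual src/db.rs:26-38 code:

```rust
// Sketch of the visibility-prefix walk: each parent prefix keeps its
// trailing '/', and the empty string stands for the collective root.
fn compute_visibility_prefixes(namespace: &str) -> Vec<String> {
    let mut prefixes = vec![namespace.to_string()];
    let mut end = namespace.len();
    while let Some(idx) = namespace[..end].rfind('/') {
        prefixes.push(namespace[..idx + 1].to_string()); // parent, trailing '/' kept
        end = idx;
    }
    prefixes.push(String::new()); // root / collective scope
    prefixes
}

fn main() {
    let p = compute_visibility_prefixes("agents/planner");
    assert_eq!(p, vec!["agents/planner", "agents/", ""]);
    let q = compute_visibility_prefixes("org/team/role");
    assert_eq!(q, vec!["org/team/role", "org/team/", "org/", ""]);
    println!("{p:?}");
}
```

The recall SQL then matches scope_idx against exactly this list, which is why the planner sees its own private memories plus anything an ancestor namespace publishes.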
Set a policy with memory_namespace_set_standard:
```bash
ai-memory namespace set-standard agents/secops --governance '{
  "write": "approve",
  "promote": "owner",
  "delete": "approve"
}'
```
Now any agent writing into agents/secops triggers governance::check(), which returns Pending(action_id). The write isn't committed; it's parked in pending_actions. The owning operator (or a designated approver agent) calls memory_pending_list, inspects the diff, and then calls either memory_pending_approve or memory_pending_reject. Approved writes get committed; rejected ones are dropped with an audit row.
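The park/approve/reject lifecycle can be sketched as a tiny in-memory queue. `Decision`, `PendingQueue`, and the string-valued policies here are illustrative stand-ins for governance::check() and the pending_actions table, not the real API:

```rust
#[derive(Debug, PartialEq)]
enum Decision {
    Commit,       // policy allows the write immediately
    Pending(u64), // "approve" policy: parked with an action id
}

struct PendingQueue {
    next_id: u64,
    parked: Vec<(u64, String)>, // (action_id, serialized write)
    audit: Vec<String>,
}

impl PendingQueue {
    fn new() -> Self {
        Self { next_id: 1, parked: Vec::new(), audit: Vec::new() }
    }

    // Stand-in for governance::check(): park the write when policy is "approve".
    fn check(&mut self, write_policy: &str, write: &str) -> Decision {
        if write_policy == "approve" {
            let id = self.next_id;
            self.next_id += 1;
            self.parked.push((id, write.to_string()));
            Decision::Pending(id)
        } else {
            Decision::Commit
        }
    }

    // Approve: remove from the queue, record an audit row, hand the write back.
    fn approve(&mut self, id: u64) -> Option<String> {
        let pos = self.parked.iter().position(|(i, _)| *i == id)?;
        let (_, write) = self.parked.remove(pos);
        self.audit.push(format!("approved:{id}"));
        Some(write)
    }

    // Reject: drop the write but keep the audit row.
    fn reject(&mut self, id: u64) {
        if let Some(pos) = self.parked.iter().position(|(i, _)| *i == id) {
            self.parked.remove(pos);
            self.audit.push(format!("rejected:{id}"));
        }
    }
}

fn main() {
    let mut q = PendingQueue::new();
    assert_eq!(q.check("any", "note"), Decision::Commit);
    let Decision::Pending(id) = q.check("approve", "secops note") else { unreachable!() };
    assert_eq!(q.approve(id).as_deref(), Some("secops note"));
    assert_eq!(q.audit, vec!["approved:1"]);
}
```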
Per-namespace policy means a strict-write namespace can sit next to a permissive one. The hierarchy lets you set a default at org/ and override at org/team/.
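Hierarchical resolution (a default at org/, an override deeper) amounts to a walk up the namespace tree until a policy is found. The map-based lookup and policy strings below are assumptions for illustration, not the namespace_meta schema:

```rust
use std::collections::HashMap;

// Resolve a namespace's write policy by walking toward the root;
// the first ancestor with an explicit policy wins.
fn resolve_write_policy<'a>(policies: &HashMap<&str, &'a str>, namespace: &str) -> &'a str {
    let mut ns = namespace;
    loop {
        if let Some(&p) = policies.get(ns) {
            return p;
        }
        match ns.rfind('/') {
            Some(idx) => ns = &ns[..idx], // drop the last segment, ask the parent
            None => return "any",         // nothing set anywhere: permissive default
        }
    }
}

fn main() {
    let mut policies = HashMap::new();
    policies.insert("org", "approve");      // strict default for everything under org
    policies.insert("org/research", "any"); // permissive override for one team
    assert_eq!(resolve_write_policy(&policies, "org/secops/alerts"), "approve");
    assert_eq!(resolve_write_policy(&policies, "org/research/notes"), "any");
    assert_eq!(resolve_write_policy(&policies, "scratch"), "any");
}
```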
Before an agent invokes a skill, it can ask: "is auto-tag actually wired here?"
```console
$ ai-memory capabilities --json
{
  "version": "0.6.3",
  "tier": "semantic",
  "features": {
    "hybrid_recall": true,
    "auto_tagging": false,
    "contradiction_analysis": false,
    "approval_workflow": true,
    "kg_temporal": true
  },
  "permissions": ["read", "write", "promote", "approve"],
  "hooks": ["pre_store", "post_recall"],
  "approval": {"queue_depth": 7, "policies": 3}
}
```
Capabilities v2 ships in v0.6.3 — agents discover what's available at runtime instead of hard-coding assumptions.
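A caller can gate a skill invocation on that payload rather than assuming the feature exists. The `Capabilities` struct and `plan_skill` helper below are hypothetical, mirroring only the features shown in the JSON above:

```rust
// Hypothetical model of the capabilities payload: only the field we gate on.
struct Capabilities {
    auto_tagging: bool,
}

// Decide whether to invoke a skill based on discovered features.
fn plan_skill(caps: &Capabilities, skill: &str) -> &'static str {
    if skill == "auto-tag" && !caps.auto_tagging {
        return "skip: auto_tagging is false on this tier";
    }
    "invoke"
}

fn main() {
    // Mirrors the payload above, where auto_tagging is false.
    let caps = Capabilities { auto_tagging: false };
    assert_eq!(plan_skill(&caps, "auto-tag"), "skip: auto_tagging is false on this tier");
    println!("auto-tag decision: {}", plan_skill(&caps, "auto-tag"));
}
```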
```bash
# Run as a long-lived HTTP daemon on the workstation
ai-memory --db /var/lib/ai-memory/store.db serve \
  --bind 127.0.0.1:9077 \
  --tier semantic
```

Each agent is a separate MCP client (or HTTP client) pointing at the same store. In .mcp.json for each agent's Claude Code config:

```json
{
  "mcpServers": {
    "memory": {
      "command": "curl",
      "args": ["-s", "-X", "POST", "http://127.0.0.1:9077/api/v1/mcp"],
      "env": {"AI_MEMORY_AGENT_ID": "agents/planner"}
    }
  }
}
```

Agents pass as_agent=agents/<role> on every recall. Per-namespace policies are set once, enforced forever:

```bash
ai-memory namespace set-standard agents/secops \
  --governance '{"write":"approve","delete":"approve"}'
```
GovernancePolicy (src/models.rs:405-409) is per namespace and hierarchical with parent inheritance; it supports any / registered / owner / approve for each of write / promote / delete. Namespace metadata lives in namespace_meta and is the source of truth for the gate. Agents register via mcp__memory__memory_agent_register.

Auto-tagging, contradiction detection, query expansion, and consolidation all run through the same scope visibility: a skill at skills/auto-tag with team scope only sees memories its team published. Skills can be subject to the same governance policies as humans-in-the-loop.

memory_links.signature (schema v15) is populated with the claimed identity as of v0.6.3; v0.7 will sign with the agent's keypair and verify on read.

| Dimension | T2 ceiling | When it bites |
|---|---|---|
| Concurrent writers | ~10–20 before mutex contention shows up | Bulk imports starve real-time agents |
| Total memories | ~10⁶ before HNSW RAM cost is noticeable | Vector index lives in-process |
| Write throughput | ~500–2,000 writes/sec (single SQLite writer) | Bulk operations should chunk |
| Recall p95 | sub-10ms at 10⁵ memories with HNSW | Scales to 10⁶ with care |
| Network exposure | loopback by default, mTLS available | Set up TLS before binding non-loopback |
| Cross-machine sharing | none | Walk to Tier 3 |
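On the bulk-import row above: splitting a large import into bounded transactions keeps the single SQLite writer from starving real-time agents, since the writer lock is released between chunks. The `import_chunked` helper and chunk size are illustrative, not ai-memory API:

```rust
// Sketch: commit a bulk import in fixed-size batches so each transaction
// stays short and real-time writers get a turn between chunks.
fn import_chunked<F: FnMut(&[&str])>(memories: &[&str], chunk_size: usize, mut commit: F) -> usize {
    let mut batches = 0;
    for chunk in memories.chunks(chunk_size) {
        commit(chunk); // one transaction per chunk in a real importer
        batches += 1;
    }
    batches
}

fn main() {
    let items: Vec<&str> = vec!["memory"; 2500];
    let mut committed = 0;
    let batches = import_chunked(&items, 1000, |chunk| committed += chunk.len());
    assert_eq!((batches, committed), (3, 2500));
}
```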
- src/db.rs:26-38 — compute_visibility_prefixes() (scope visibility)
- src/db.rs:91-104 — visibility_clause() (SQL filter)
- src/db.rs:309-330 — namespace_meta table for per-namespace policy
- src/models.rs:351-365 — PendingAction struct
- src/models.rs:405-409 — GovernancePolicy
- src/mcp.rs:119 — as_agent parameter on recall
- docs/AI_DEVELOPER_GOVERNANCE.md — governance contract
- docs/CHANGELOG.md — v0.6.2 (governance GA), v0.6.3 (taxonomy + capabilities v2)