Same binary.
Every scale.

From a solo developer's laptop to a federation of state-government data centers — the same Rust binary runs the same way. Below is the value story for every audience size, with deployment patterns, ROI math, and procurement-ready specs.

👤
01
SOLO · INDIVIDUAL

The solo developer.

One person. One laptop. Multiple AIs. Memory that vanishes every conversation.

1 user · 1 box · < 5 GB RAM

▼ The Pain

"Claude forgets between sessions. Cursor doesn't know what ChatGPT learned. I re-paste the same project context 30 times a day. My most expensive cognitive asset — the context I've built with these tools — evaporates every 4 hours."

▲ The Fix

One brew install ai-memory. Three lines in your MCP config. Every AI you use plugs into the same memory. Persistent across reboots, machines, AI vendors. Local-first — your data never leaves your laptop unless you explicitly federate.

▼ The ROI (estimated, single user)

  • ~30 conversations/day × 365 days = ~11,000 cold-start contexts/year
  • 5-10 minutes/day saved on re-explaining project state to AIs
  • $0 SaaS spend — zero per-month memory subscription
  • 100% data ownership — no vendor lock-in, no privacy spillover

▼ Deployment Pattern

brew install ai-memory
# Add ai-memory to Claude / Cursor / Codex MCP config
# That's it.

# Optional: bump to autonomous tier for self-curating memory
ollama pull gemma4:e2b
ai-memory mcp --tier autonomous
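The "three lines" of MCP config might look like the following sketch, using the standard `mcpServers` block that MCP-compliant hosts such as Claude Desktop read. The exact key names your host expects may differ; the `ai-memory mcp` command is the one shown above.

```json
{
  "mcpServers": {
    "ai-memory": {
      "command": "ai-memory",
      "args": ["mcp"]
    }
  }
}
```

Once the host restarts, it autodiscovers the ai-memory tools over MCP — no further wiring.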
👥
02
SCALE · STARTUP

The 2–25 person team.

A small group of humans + a growing fleet of AI sessions. Tribal knowledge dies when someone leaves.

2-25 users · 1-3 nodes · shared engineering memory

▼ The Pain

"Senior engineer leaves. Six months of context goes with them. New hires re-discover landmines we already paid to find. Each engineer's Claude has different memory. The AI knows the codebase but can't share what it learned."

▲ The Fix

Federated 3-node ai-memory cluster on your VPC. Per-namespace shared memory: onboarding/, decisions/, infra/. Every team member's AI plugs in and shares understanding. mTLS handshake on every cross-node call. Zero per-seat cost.

▼ The ROI

  • 40-60% faster onboarding — a new hire's AI walks into a project conversation already in progress
  • ~$0 / seat / month vs $20-50/seat for SaaS memory products at this size
  • Decision audit trail — every "we decided X because Y" memory survives staff turnover
  • SOC2-friendly from day 1 — local data, audit logs, governance hooks

▼ Architecture

  • 3 nodes — mTLS-federated VPC cluster, Linux x64
  • W=2 / N=3 — quorum writes; survives the loss of any single node
  • tier=smart — Ollama gemma4:e2b for auto-tagging + contradiction detection
  • webhook — SIEM → Slack on contradiction-detected
  • backup — daily VACUUM INTO + S3 sync of *.db.gz
🏢
03
SCALE · MID-MARKET

The 25–500 person org.

Multi-team. Multi-namespace. Procurement caught on. Compliance is calling.

25-500 users · 5-15 nodes · per-team isolation

▼ The Pain

"AI usage exploded across teams. Each one signed up for a different SaaS memory product. Different vendors. Different DPAs. Different security postures. Audit asks 'where does our data live?' and we don't have one answer."

▲ The Fix

One ai-memory deployment per business unit. Per-team namespace. Governance hooks intercept destructive ops. Webhook integration ships to corporate SIEM. Single source of truth for data residency: "on this VPC, that's it."

▼ The ROI

  • Centralized memory governance — one DPA, not 7
  • Per-team compliance scope — finance memories never reach engineering
  • SaaS spend elimination: typical $30K-200K/year on per-team memory products → $0
  • Faster security review: Apache 2.0 OSS source, single Rust binary, no SBOM mystery

▼ Architecture

  • per-BU — each business unit gets its own 3-5 node cluster
  • SIEM — webhook → Splunk / Datadog / Sumo on every write
  • RBAC — per-namespace governance (Allow/Deny/Pending)
  • IAM — mTLS client certs from corp PKI
  • backup — snapshot on every change of state, retained 90 days
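A SIEM webhook event for a governed write might look like the following. The field names here are illustrative assumptions, not the documented schema — the point is that every write emits a structured event your Splunk/Datadog pipeline can index.

```json
{
  "event": "memory.write",
  "namespace": "finance/decisions",
  "node": "bu-finance-01",
  "actor": "mcp:claude",
  "governance": "Allow",
  "timestamp": "2026-01-15T09:30:00Z"
}
```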
🏛️
04
SCALE · LARGE ENTERPRISE

The 500–50,000 corporation.

Multi-region. Multi-jurisdiction. Multi-language compliance. Air-gapped business units.

500-50K users · 20-100 nodes · multi-region federation

▼ The Pain

"Each BU wants AI. Each lawyer wants air-gapped. Each auditor wants logs. Each region has different data-residency rules. Our AI vendors keep getting acquired. Every approved tool gets deprecated 18 months later."

▲ The Fix

Per-BU federated cluster. Cross-region quorum-aware sync. PII redaction hooks. Backup + restore + retention. Air-gap operable for sensitive BUs. Vendor-acquisition immune — Apache 2.0 OSS, you have the source.

▼ The ROI

  • Air-gap deployments for sensitive BUs without losing AI capability
  • Per-BU policy enforcement via governance + hooks; one platform, N policies
  • Hooks for DLP integration — your existing data-loss-prevention pipeline plugs in
  • Vendor-risk-zero — Apache 2.0, full source, single binary, no SaaS dependency
  • FIPS path via v0.7 attested identity (ed25519 → HSM-backed)

▼ Architecture

  • multi-region — AWS / Azure / GCP / on-prem; cross-region mTLS
  • air-gap — no outbound calls; internal-only mirror of crates.io for updates
  • DLP hooks — pre_store hook redacts PII before persist
  • audit — sidechain transcripts (v0.7); every memory keeps its full conversation history
  • DR — cross-region quorum + nightly backup → Glacier
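A minimal pre_store redaction hook could be a plain filter on the memory text. This is a sketch: the hook contract assumed here (text on stdin, redacted text on stdout) is illustrative, and a real DLP integration would call your dedicated scanner rather than a regex.

```shell
#!/bin/sh
# Hypothetical pre_store hook: mask US-SSN-shaped values before the
# record is persisted. Exit status 0 lets the (redacted) write proceed.
sed -E 's/[0-9]{3}-[0-9]{2}-[0-9]{4}/[REDACTED-SSN]/g'
```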
🇺🇸
05
SCALE · GOVERNMENT

Federal · State · Local · Municipal.

AI mandate is real. Cloud ban is real. Foreign vendor scrutiny is real. Audit trail is mandatory.

Sovereign · Air-gapped · FIPS / IL5 path

▼ The Pain

"AI is a White House priority. Cloud is forbidden by policy. Most vendors are foreign. Every commercial product wants telemetry. Every offering needs FedRAMP authorization. We're caught between the modernization mandate and the security mandate."

▲ The Fix

Apache 2.0 OSS. Single Rust binary. Zero outbound calls. Public source code, auditable from git clone. Hardware-attested keys (v0.7 → HSM, TPM, Secure Enclave). FedRAMP / IL5 path via the AgenticMem Sovereign tier. Domestic supplier (US-based AlphaOne LLC) for procurement.

▼ The ROI

  • 100% air-gap operable — no exceptions, no special "offline mode" — just doesn't make outbound calls
  • Public source-code audit — your security team reads the entire codebase in a week
  • No vendor lock-in — single Rust binary, single SQLite file, copy is backup
  • FedRAMP / IL5 path via AgenticMem Sovereign commercial tier (Q3 2026 GA)
  • Domestic supplier — US-incorporated, no foreign-ownership concerns
  • SBOM clean — Cargo.lock enumerates every dependency and version; cargo audit runs in CI

▼ Deployment Pattern

  • air-gap — build from source on the secure subnet; no internet
  • SQLCipher — encryption at rest via --features sqlcipher
  • FIPS — v0.7 attested identity for the FIPS 140-3 path
  • HSM — hardware key custody via the AgenticMem Attest commercial tier
  • audit — append-only signed event log; no operator can rewrite history
  • domestic — AlphaOne LLC (US, Delaware) is the sole responsible party
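On the secure subnet, the build-from-source path is standard Cargo. The repository URL below is a placeholder for your internal mirror; --features sqlcipher is the encryption-at-rest flag cited above.

```shell
# Inside the air gap: sources arrive via the internal mirror; nothing leaves.
git clone https://internal-mirror.example/ai-memory.git
cd ai-memory
cargo build --release --features sqlcipher   # encryption at rest (SQLCipher)
install -m 0755 target/release/ai-memory /usr/local/bin/ai-memory
```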
Universal

What every audience gets, automatically.

Some properties of ai-memory are non-negotiable benefits — they apply at every scale, with no upgrade or paid tier required.

Local-first

SQLite file on disk. WAL-mode for concurrent reads. cp is a backup. Your data never leaves your hardware unless you explicitly turn on federation.

Sub-100ms session-start

42ms p95 measured. Published budget. CI guard fails any PR that breaks it. Latency contract enforced, not aspirational.

Apache 2.0 OSS

Commercial-friendly license. Patent grant included. You can read every line, fork it, sell what you build on top.

93.84% test coverage

1,886 lib tests + 49+ integration tests (v0.6.3.1; +281 net from v0.6.3 baseline). Zero ignored. CI matrix: macOS / Ubuntu / Windows. Every PR runs the full matrix.

Single binary

No Python install. No Node toolchain. No Docker required. Just one ~30MB Rust binary. Five distribution targets (macOS x64/arm, Linux x64/arm, Windows x64).

MCP-native

Open Model Context Protocol. Every MCP-compliant host autodiscovers all 26 ai-memory tools. Future AI hosts work for free.

Reversibility

Soft-delete to archive tier. Restore anytime. Hard delete only via explicit purge with governance approval.

Self-curating

Curator daemon auto-tags, detects contradictions, consolidates similar memories. Your memory store stays tidy without manual hygiene.

Auditable

Pending actions queue. Webhook event stream. (v0.7) Ed25519 signatures on every write. Compliance officers see what they need.

Pick your scale. Same binary.

Whether you're a solo developer or a federal agency, the install path is the same.

Install Guide · GitHub · At a Glance