Built on open source.

ai-memory is Apache 2.0, but it stands on a much taller stack of open-source contributions. This page is the acknowledgement — every model, every crate, every project whose maintainers chose to ship code free for others to build on. Without these, ai-memory does not exist.

▸ Spotlight

Thank you, Google — for open-sourcing Gemma 4.

The entire autonomous tier of ai-memory — auto-tagging, auto-consolidation, query expansion, contradiction detection, memory reflection, session-start summaries — runs on Google's Gemma 4 family. Effective 2B (~1 GB Q4) drives the Smart tier; Effective 4B (~2.3 GB Q4) drives the Autonomous tier. Both ship under an open weights license. Without Gemma 4, every autonomous feature would require a paid hosted API and would send your memory contents to a third-party service. Because Gemma is open, ai-memory's autonomous tier is local, free, and private.

The decision to ship Gemma 4 open is not free for Google — it represents real engineering investment, training cost, and ongoing maintenance shared with the community at no charge. The local-first agent ecosystem is materially larger because of that decision. ai-memory is grateful, and so are its operators.

→ ai.google.dev/gemma

Models

The neural networks ai-memory uses.

Every model is loaded locally — no inference call leaves the machine unless the operator explicitly configures a remote endpoint. License is shown for each.

▸ LLMs
Gemma 4 Effective 2B
Google
Gemma Terms — open weights
Smart tier LLM. ~1 GB at Q4 quantization. Drives auto-tag, consolidate, expand-query, contradiction-detect.
Gemma 4 Effective 4B
Google
Gemma Terms — open weights
Autonomous tier LLM. ~2.3 GB at Q4 quantization. Stronger reasoning for memory_reflection + session_start.
▸ Embeddings
all-MiniLM-L6-v2
Microsoft / Hugging Face
Apache 2.0
384-dim sentence embedding. Default for the Semantic tier. ~90 MB. Fast, broadly competent.
nomic-embed-text-v1.5
Nomic AI
Apache 2.0
768-dim sentence embedding. Default for Smart + Autonomous tiers. ~270 MB. Excellent semantic recall on long-form memories.
▸ Reranker
cross-encoder/ms-marco-MiniLM-L-6-v2
Microsoft / Hugging Face / sentence-transformers community
Apache 2.0
Cross-encoder reranker. Scores top-K recall candidates against the query for precision. Powers the Autonomous-tier memory_reflection feature.
Runtimes

The systems that run the models.

Ollama
Ollama Inc.
MIT
Local LLM serving. ai-memory talks to Ollama over HTTP for every Gemma 4 inference. Made running open LLMs locally a one-line install.
Candle
Hugging Face
MIT / Apache 2.0
Pure-Rust ML framework. ai-memory uses Candle to run MiniLM and nomic-embed locally without Python — keeps the daemon a single static binary.
tokenizers
Hugging Face
Apache 2.0
High-performance tokenizer for the embedding pipeline. Same tokenization the source models were trained with.
hf-hub
Hugging Face
Apache 2.0
Rust client for downloading models from the Hugging Face Hub on first start.
Storage + indexing

The bits that hold the memories.

SQLite
D. Richard Hipp + the SQLite team
Public Domain
The bedrock storage engine. Battle-tested for decades, deployed literally everywhere — phones, browsers, planes, ai-memory. A gift to humanity.
FTS5
SQLite team
Public Domain
SQLite full-text search extension. Powers ai-memory's keyword-tier recall. Comes for free with SQLite.
SQLCipher
Zetetic LLC
BSD-3-Clause
Drop-in SQLite replacement that adds AES-256 transparent encryption. Makes at-rest encryption a one-PRAGMA configuration.
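The "one PRAGMA" in question looks roughly like this — a sketch only; the passphrase is a placeholder and the query is illustrative, not ai-memory's actual schema:

```sql
-- SQLCipher: supply the key immediately after opening the connection.
-- Every page read or written after this PRAGMA is AES-256 encrypted.
PRAGMA key = 'correct horse battery staple';

-- Ordinary SQL from here on; the encryption is transparent.
SELECT count(*) FROM sqlite_master;
```

Issuing the wrong key doesn't error on the PRAGMA itself — the first real query fails instead, because the pages don't decrypt to valid SQLite.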
instant-distance
instant-distance contributors
MIT / Apache 2.0
Pure-Rust HNSW (Hierarchical Navigable Small World) implementation. Powers ai-memory's vector index. No FFI to native code, no Python deps.
rusqlite
rusqlite contributors
MIT
Rust bindings to SQLite. Every db::* function in ai-memory ultimately goes through rusqlite.
sqlx
launchbadge
MIT / Apache 2.0
Async SQL for Rust. Used by the SAL Postgres adapter (--features sal).
Web framework + transport

HTTP, async, TLS.

tokio
tokio contributors
MIT
Async runtime for Rust. Every async fn in ai-memory runs on tokio.
axum
tokio-rs / axum contributors
MIT
HTTP framework. Powers ai-memory's REST API surface. Ergonomic, type-safe, fast.
reqwest
seanmonstar
MIT / Apache 2.0
HTTP client. Federation peer calls, Ollama RPC, webhook dispatch — all reqwest.
rustls
rustls contributors
Apache 2.0 / ISC / MIT
Rust-native TLS. ai-memory uses rustls (not OpenSSL) for federation mTLS. Smaller dependency footprint, memory-safe.
tower / tower-http
tower-rs
MIT
Middleware layer for axum. CORS, tracing, request limits.
Crypto + serialization

The primitives that keep things honest.

hmac + sha2
RustCrypto
MIT / Apache 2.0
HMAC-SHA256 implementation. Powers webhook signing.
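For the curious, the signing scheme is the standard one — here sketched with openssl rather than the RustCrypto crates; the header name, secret, and payload are illustrative, not ai-memory's actual wire format:

```shell
# Hypothetical sketch of HMAC-SHA256 webhook signing and verification.
secret='whsec_example'
body='{"event":"memory.created","id":"123"}'

# Sender side: sign the raw request body with the shared secret.
sig=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "X-Signature: sha256=$sig"

# Receiver side: recompute over the received body and compare.
# (Real code should use a constant-time comparison.)
expected=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
[ "$sig" = "$expected" ] && echo "signature ok"
```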
serde + serde_json
dtolnay + serde contributors
MIT / Apache 2.0
Serialization framework. Every JSON payload into or out of ai-memory passes through serde.
uuid
uuid-rs
Apache 2.0 / MIT
UUID generation. Every memory id is a uuid_v4 from this crate.
chrono
chrono-rs
MIT / Apache 2.0
Date/time handling. Every created_at, expires_at, updated_at goes through chrono.
Observability + DX

Logs, traces, errors, ergonomics.

tracing + tracing-subscriber
tokio-rs
MIT
Structured logging. Every info!, warn!, error! in ai-memory goes through tracing. EnvFilter for runtime control.
anyhow + thiserror
dtolnay
MIT / Apache 2.0
Error handling. Result<T> is anyhow. Boundary errors are thiserror-derived enums.
clap
clap-rs
MIT / Apache 2.0
CLI argument parser. Every ai-memory subcommand is a clap derive.
The Rust language itself

Without Rust, none of this.

ai-memory is written in Rust because Rust gives memory safety, fearless concurrency, and zero-cost abstractions in one language. Every guarantee of no use-after-free, no data races, and no buffer overflows traces back to the Rust compiler doing its job. Thanks to the Rust team, the Rust Foundation, and every contributor whose work made cargo build reliable.

Rust is licensed Apache 2.0 / MIT. The full Cargo.lock dependency graph is shipped with every release as the SBOM (CycloneDX format). Operators who need the complete attribution list can extract it from the SBOM.
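Any CycloneDX-aware tool can pull the attribution list out; a minimal sketch with jq, using a tiny inline stand-in document (the SBOM file name shipped with a real release is an assumption here):

```shell
# Illustration: extract (name, license) pairs from a CycloneDX JSON SBOM.
# The inline document below is a stand-in; point jq at the SBOM from a release.
cat > sbom.cdx.json <<'EOF'
{"components":[
  {"name":"tokio","licenses":[{"license":{"id":"MIT"}}]},
  {"name":"serde","licenses":[{"license":{"id":"Apache-2.0"}}]}
]}
EOF

jq -r '.components[] | [.name, (.licenses[0].license.id // "NOASSERTION")] | @tsv' sbom.cdx.json
# tokio	MIT
# serde	Apache-2.0
```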