ai-memory uses the tracing crate with an EnvFilter-driven subscriber, with 176 log call sites across 11 modules. Every error path is named; every fanout failure is logged with the peer id; every governance verdict is loud. AI clients can grep the canonical lines below to reconstruct what the daemon was doing.
The daemon installs a tracing_subscriber::fmt layer with the standard EnvFilter chain. Default directives keep ai_memory + tower_http at info while honoring any RUST_LOG override.
To turn the volume up:
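Set RUST_LOG with more verbose directives; the subscriber honors it over the defaults. Below is a minimal sketch of the chain plus an example override, with the default directives assumed from the description above rather than copied from the source:

```rust
// Sketch of the subscriber chain; the default directives are assumed from
// the prose above, not copied from ai-memory's init code.
use tracing_subscriber::EnvFilter;

fn init_tracing() {
    // Honor any RUST_LOG override, else fall back to the documented defaults.
    let filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("ai_memory=info,tower_http=info"));

    tracing_subscriber::fmt().with_env_filter(filter).init();
}

// Raising verbosity is then an environment override, for example:
//   RUST_LOG=ai_memory=debug,tower_http=debug <daemon binary>
```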
Counts pulled from grep -rnE 'tracing::error!|warn!|info!|debug!' src/. Handlers and federation dominate — they are where user-visible behavior happens.
Every HTTP/MCP write goes through one of these handlers. Errors get an error!("handler error: {e}") and the response code; governance verdicts get an error!("governance error: {e}").
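A minimal sketch of that handler-side shape; WriteError, respond, and the status codes are illustrative stand-ins rather than the daemon's actual types, and only the log phrases follow the call sites described above:

```rust
// Illustrative only: WriteError, respond, and the status codes are stand-ins;
// the log phrases mirror the handler call sites described above.
use tracing::error;

enum WriteError {
    Validation(String),
    Governance(String),
}

fn respond(result: Result<(), WriteError>) -> u16 {
    match result {
        Ok(()) => 200,
        // Governance verdicts get their own canonical line.
        Err(WriteError::Governance(e)) => {
            error!("governance error: {e}");
            403
        }
        // Everything else bubbles into the single "handler error" phrase;
        // the HTTP status carries the same condition back to the client.
        Err(WriteError::Validation(e)) => {
            error!("handler error: validation failed: {e}");
            400
        }
    }
}
```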
Quorum fanout (W of N) is the noisiest module. Every peer success is silent; every peer failure logs warn!("federation: peer {peer_id} failed for {id}: {reason}") so the operator can see where the network dropped traffic.
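A sketch of that fanout loop; Peer and push_to_peer are placeholders for the real federation transport, and only the warn! line mirrors the pattern quoted above:

```rust
// Sketch only: Peer and push_to_peer stand in for the real transport;
// the warn! line mirrors the fanout pattern quoted above.
use tracing::warn;

struct Peer {
    id: String,
}

async fn push_to_peer(_peer: &Peer, _memory_id: &str) -> Result<(), String> {
    Ok(()) // placeholder transport call
}

async fn fanout(peers: &[Peer], id: &str) -> usize {
    let mut acks = 0;
    for peer in peers {
        match push_to_peer(peer, id).await {
            Ok(()) => acks += 1, // peer success stays silent
            Err(reason) => {
                let peer_id = &peer.id;
                // One line per missing peer, so the operator can see exactly
                // where the network dropped traffic.
                warn!("federation: peer {peer_id} failed for {id}: {reason}");
            }
        }
    }
    acks // the caller compares acks against W to decide quorum
}
```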
Bootstrap, config, listener init, signal handling. Mostly info! startup banners + warn-level config fallback notices.
MCP protocol layer. Logs malformed JSON-RPC, unknown tool names, and method-not-found responses for protocol-level debugging.
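A rough illustration of the dispatch-side checks; the warn! level, message text, and the use of raw serde_json values here are assumptions:

```rust
// Rough illustration: the level, message text, and raw serde_json handling
// are assumptions; the real MCP layer has its own typed requests.
use serde_json::Value;
use tracing::warn;

fn dispatch(request: &Value) {
    match request.get("method").and_then(Value::as_str) {
        Some("tools/call") => { /* route to the named tool's handler */ }
        Some(other) => warn!("method not found: {other}"),
        None => warn!("malformed JSON-RPC request: missing method"),
    }
}
```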
SQLite-specific issues: migration logs, FTS5 index rebuild events, vacuum completion. Most db::* calls are silent on success.
Webhook dispatch. Logs SSRF guard rejections (W11 hardening), URL parse failures, and HTTP-client build/send errors so a misconfigured webhook doesn't disappear silently.
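A sketch of the guard's shape; the function name and exact messages are illustrative, with the rejection reasons taken from the ones documented further below:

```rust
// Sketch only: the function name and exact messages are illustrative; the
// rejection reasons follow the ones documented for the SSRF guard.
use tracing::warn;
use url::Url;

fn webhook_url_allowed(raw: &str) -> bool {
    let url = match Url::parse(raw) {
        Ok(u) => u,
        Err(e) => {
            warn!("webhook URL parse failed: {raw}: {e}");
            return false;
        }
    };
    if !matches!(url.scheme(), "http" | "https") {
        warn!("SSRF guard rejected webhook URL {url}: scheme not http/https");
        return false;
    }
    // The private-network and loopback-resolution checks follow the same
    // shape: reject, log the reason, and skip the dispatch.
    true
}
```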
Cycle-driven curator (consolidation, taxonomy refresh). Logs when a cycle starts, ends, or skips because of a recent run.
Ollama bridge — logs info!("Pulling Ollama embedding model …") on first run and the success line when the pull completes.
SAL Postgres adapter (--features sal). Logs adapter-bound issues distinct from SQLite.
Cross-encoder reranker. Logs lock-poisoning and tokenizer-fallback events.
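A sketch of surviving a poisoned lock while making it visible; the state type, message text, and warn! level are assumptions:

```rust
// Sketch only: the state type, message text, and warn! level are assumptions;
// the point is recovering the poisoned lock instead of failing the request.
use std::sync::Mutex;
use tracing::warn;

fn with_reranker<T>(state: &Mutex<T>, f: impl FnOnce(&mut T)) {
    let mut guard = state.lock().unwrap_or_else(|poisoned| {
        // A panic in an earlier caller poisoned the lock; log it and keep going.
        warn!("reranker lock poisoned, recovering");
        poisoned.into_inner()
    });
    f(&mut guard);
}
```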
Agent-id resolution chain. Logs once when the precedence chain falls through to anonymous.
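A sketch of a log-once fall-through; the precedence sources, message, level, and the once-per-process reading of "logs once" are all assumptions:

```rust
// Sketch only: the sources, message, level, and once-per-process semantics
// are assumptions; the shape is a precedence chain ending at "anonymous".
use std::sync::Once;
use tracing::warn;

static ANON_NOTICE: Once = Once::new();

fn resolve_agent_id(header: Option<&str>, config_default: Option<&str>) -> String {
    header
        .or(config_default)
        .map(str::to_owned)
        .unwrap_or_else(|| {
            ANON_NOTICE.call_once(|| {
                warn!("agent id resolution fell through to anonymous");
            });
            "anonymous".to_owned()
        })
}
```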
The daemon does not emit debug! or trace! at the call sites — those are reserved for ad-hoc instrumentation under RUST_LOG. Production output is the three levels below.
Bootstrap milestones, model pulls completing, catchup cycle summary. Quiet by default — doesn't drown the operator.
```
Pulling Ollama embedding model 'nomic-embed-text'...
Embedding model 'nomic-embed-text' pulled successfully
catchup: pulled 142 memories from 3 peers in 2.1s
```

Daemon kept running, request still succeeded (or fell back). Operator should look but no incident yet.
```
embedding generation failed: connection refused
recall: embedder query failed, falling back to keyword-only: timeout
SSRF guard rejected webhook URL https://10.0.0.1: private network
federation: peer node-3 failed for abc-123: connect timeout
register_agent fanout error (local committed): node-2 502
```

The HTTP/MCP response is also an error; the log carries the same condition the client gets back, so server-side reconstruction is possible from logs alone.
```
handler error: validation failed: title cannot be empty
governance error: agent not registered
execute pending error: target memory not found
detect_contradictions list error: db locked
export error: tar archive write failed
```

When an AI client is reading logs to figure out what happened, these are the lines to look for. Each pattern matches an exact tracing::*! call site in the source.
handler error: {e}. The {e} is an anyhow chain; validators, db, governance, and federation all bubble into this single phrase. The HTTP response is 4xx/5xx; the log carries the full reason.
governance error: {e}. A Pending verdict is silent; only Deny and replay failures log. Look for these when an AI client says "the write was rejected and I don't know why."
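A sketch of that verdict handling under a hypothetical Verdict enum; the daemon's governance types differ, but the silent/loud split follows the description above:

```rust
// Hypothetical Verdict enum for illustration; only the silent/loud split
// follows the documented behavior.
use tracing::error;

enum Verdict {
    Approve,
    Pending, // queued for later replay
    Deny(String),
}

fn log_verdict(verdict: &Verdict) {
    match verdict {
        Verdict::Approve | Verdict::Pending => { /* intentionally silent */ }
        Verdict::Deny(reason) => {
            // The same condition the client receives in its error response.
            error!("governance error: {reason}");
        }
    }
}
```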
federation: peer {peer_id} failed for {memory_id}: {reason}. Quorum still completes if W of N succeed; this line tells you which N-W peers were missing.
fanout error (local committed), emitted for register_agent, link, and consolidate. The local row is durable; sync_pull from those peers will reconcile.
SSRF guard rejected webhook URL {url}: {reason}. Reasons include "private network," "loopback resolved," "scheme not http/https." Two SSRF defects were closed in commit 9eeb453 for v0.6.3.
unparseable memory from peer, logged when a peer ships malformed data; those rows are skipped, not retried.
execute pending error: {reason}, logged when a pending write was approved but replay couldn't complete (deleted target, namespace gone, validator now rejects). Status remains approved; the row is loud in logs but the user-visible state is unchanged.
webhook POST to {url} failed: {error}. Subscription-side; not retried at the daemon (the recipient is responsible for its own retry policy).
v0.7 brings the Hook Pipeline (arch-spec §2) and Sidechain Transcripts (§6). Both extend the trace surface — every hook fire becomes a span, every transcript line is durable. Once those land:
info_span!("hook", event=..., subscriber=...) wrapping each hook callback so the trace tree shows which hook ran wheretracing-opentelemetry bridge so traces can ship to a collectorhooks.registered_count, transcripts.total_count, etc. so AI clients can answer "what's enforced here?" honestly