A single entry point for everyone who asks "is this ai-memory release tested?" Aggregates the four-phase ship-gate (release testing) and the 42-scenario ai2ai-gate (A2A integration) into per-release evidence pages with cross-references to the underlying run artifacts.
The gate repos do the work — provision DigitalOcean infrastructure, run scenarios, capture artifacts. This hub aggregates and presents the results so a release evidence page is one click from the ai-memory homepage.
Four-phase release testing harness on DigitalOcean. Each phase has its own script and result format.
4-node DigitalOcean harness exercising ai-memory through IronClaw / Hermes / OpenClaw agents communicating agent-to-agent. The umbrella keeps the specification (testbook, scenario contracts, v1-GA criteria); per-release execution lives in ai-memory-a2a-v<version> repos.
Per-release evidence pages aggregating the gates above + cross-references. No tests run here — it's the presentation layer.
Each release gets one evidence page with the gate-by-gate results and the verdict. The current in-flight release is at the top. The pattern going forward is one ai-memory-a2a-v<version> repo per release; the umbrella ai-memory-ai2ai-gate stays the spec.
Per-release A2A campaign on a 4-node DigitalOcean mesh. Subject under test: ai-memory v0.6.3.1 (tag pinned 2026-04-30, schema v19). Phase ladder 0→5: pre-flight, substrate cert (S1–S24), AI orchestration, autonomous NHI playbook (4 scenarios × 4 arms × n=3 = 48 runs), meta-analysis, verdict. S23 (#507 tilde-expansion) and S24 (#318 MCP stdio fanout) are expected-RED on v0.6.3.1; expected GREEN on Patch 2. Findings funnel into Patch 2 umbrella #511.
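The autonomous-playbook arithmetic above (4 scenarios × 4 arms × n=3 = 48 runs) can be sketched as a run matrix. The scenario and arm labels below are hypothetical placeholders; only the counts come from the campaign plan:

```python
from itertools import product

# Hypothetical labels -- the real campaign defines its own scenario and arm
# names; only the counts (4 scenarios x 4 arms x n=3) come from the plan.
scenarios = [f"scenario-{i}" for i in range(1, 5)]   # 4 playbook scenarios
arms = [f"arm-{a}" for a in "ABCD"]                  # 4 arms per scenario
repeats = 3                                          # n=3 repetitions each

runs = [(s, a, r)
        for s, a in product(scenarios, arms)
        for r in range(1, repeats + 1)]

print(len(runs))  # 48 runs total
```

Enumerating the matrix up front also gives the orchestrator a stable run ID per (scenario, arm, repeat) triple to key artifacts against.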
Hierarchy + KG + Capabilities v2 + 93.08% coverage. Ship-gate 4 phases pass · a2a-gate 48 scenarios pass · all 5 distribution channels live (GitHub Release · Homebrew tap · ghcr.io · Fedora COPR · crates.io). view evidence →
A2A-CERTIFIED. 214 passing scenarios across 6 cells. Federation fanout correctness + S40 catchup hardening. a2a-gate runs →
SAL Postgres adapter + 5-item pre-tag SAL blocker punchlist closed (#293). a2a-gate runs →
16+ ship-gate soak runs (soak-v0.6.0-r1 .. r16+). World-class documentation sprint, SAL track-B PR1. ship-gate runs →
Existing harnesses already support per-run isolation. We can fan a release campaign across 17 agents at peak, coordinated through ai-memory itself — eating our own dogfood. Two pages explain the architecture:
What collapses inside each stage. Sequential vs parallel time math, line by line. Constraints, costs ($5-15/campaign), and the two execution options (sequential A vs orchestrated B). Net savings: ~9.5h per campaign.
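The sequential-vs-parallel time math reduces to a sum-vs-critical-path comparison per stage. The stage names and minute values below are illustrative placeholders, not the actual line items from the savings page:

```python
# Illustrative stage timings in minutes -- placeholder numbers, not the
# actual line items from the campaign savings breakdown.
stages = {
    "provision":     [10],              # serial setup, no fan-out
    "scenario runs": [45, 45, 45, 45],  # four runs that can fan out
    "meta-analysis": [15],              # serial wrap-up
}

# Sequential wall-clock: every run back to back.
sequential_min = sum(sum(times) for times in stages.values())
# Orchestrated wall-clock: each stage costs only its longest member.
parallel_min = sum(max(times) for times in stages.values())

print(sequential_min, parallel_min, sequential_min - parallel_min)
```

With these placeholder numbers the fan-out stage collapses from 180 to 45 minutes; the real campaign applies the same sum-vs-max arithmetic to its own stage list.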
The architecture that makes the parallelism work. ai-memory itself as the coordination bus. Workers fan out across DigitalOcean droplets + GitHub Actions runners. Orchestrator drains memory_inbox, updates evidence in real time. Reusable for v0.7+.
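The drain loop above can be sketched with a plain queue. The message shape and `drain` helper are invented for illustration; the real coordination bus is ai-memory itself, whose API this does not reproduce:

```python
from collections import deque

# Stand-in for the memory_inbox drain loop. The dict message shape and the
# worker names are hypothetical; the real bus is ai-memory itself.
def drain(inbox: deque, evidence: dict) -> dict:
    """Fold queued worker results into the evidence-page state."""
    while inbox:
        msg = inbox.popleft()
        evidence[msg["scenario"]] = msg["status"]
    return evidence

inbox = deque([
    {"worker": "droplet-1",    "scenario": "S1", "status": "pass"},
    {"worker": "gha-runner-2", "scenario": "S2", "status": "fail"},
])
state = drain(inbox, {})
print(state)
```

Because each message is keyed by scenario, re-delivered results simply overwrite the same slot, which keeps the evidence page idempotent under retries.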
A release tag must pass both gates before it ships. Ship-gate runs first; a regression in any phase blocks the release. a2a-gate runs only after ship-gate phases 1–4 are green; full certification requires the matrix cells the operator selects (the full v0.6.2 certification used 6 cells).
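The ordering rule above can be expressed as a single predicate. The function name, phase keys, and cell names are hypothetical; the logic (ship-gate fully green before a2a results count, then every operator-selected cell green) follows the policy stated here:

```python
# Hypothetical predicate; the phase keys and cell names are placeholders.
def release_ready(ship_gate: dict, a2a_cells: dict, required_cells: set) -> bool:
    """Ship-gate phases 1-4 must all be green before a2a results count;
    certification then needs every operator-selected matrix cell green."""
    phases = ("phase1", "phase2", "phase3", "phase4")
    if not all(ship_gate.get(p, False) for p in phases):
        return False  # any ship-gate regression blocks the release outright
    return all(a2a_cells.get(c, False) for c in required_cells)

ship = {"phase1": True, "phase2": True, "phase3": True, "phase4": True}
cells = {f"cell-{i}": True for i in range(1, 7)}  # e.g. six selected cells
print(release_ready(ship, cells, set(cells)))  # True
```

Note that a missing cell counts as not-green, so an operator widening the required set can never accidentally loosen the gate.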