
Campaign a2a-hermes-v3r21-mtls-release-v0.6.2 FAIL

Agent group: hermes (homogeneous)
ai-memory ref: release/v0.6.2
Completed at: 2026-04-23T02:23:05Z
Overall pass: false
Skipped reports: 1

Infrastructure

Provider: digitalocean
Region: nyc3
Droplet size: s-2vcpu-4gb
Topology: 4-node federation mesh (W=2/N=4)
Scenarios started: 2026-04-23T02:08:54Z
Scenarios ended: 2026-04-23T02:23:04Z
Dispatched by: alphaonedev
Harness SHA: 4db02086d905
Workflow run: https://github.com/alphaonedev/ai-memory-ai2ai-gate/actions/runs/24812179796
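The W=2/N=4 topology above means a write is acknowledged once two of the four nodes hold it. A quick sketch of the standard quorum-overlap rule (the harness's exact consistency model is an assumption; the explicit "settle" pauses in the scenario logs below are consistent with single-node reads not being covered by the write quorum):

```python
# Quorum arithmetic for the W=2/N=4 mesh above. This is generic quorum
# reasoning, not a confirmed description of ai-memory's replication model.
N = 4  # nodes in the federation mesh
W = 2  # write quorum: a write is acknowledged once 2 replicas hold it

def read_is_guaranteed_fresh(R: int, W: int, N: int) -> bool:
    """Strong consistency requires the read and write quorums to overlap."""
    return R + W > N

# With W=2 and a single-node read (R=1), 1 + 2 = 3 <= 4: a read may miss a
# just-acknowledged write, hence the settle pauses before cross-node checks.
print(read_is_guaranteed_fresh(1, W, N))  # False
print(read_is_guaranteed_fresh(3, W, N))  # True
```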

Node roster

# | Role | Agent ID | Public IP | Private IP
1 | agent | ai:alice | 159.65.190.243 | 10.11.2.4
2 | agent | ai:bob | 64.225.15.73 | 10.11.2.2
3 | agent | ai:charlie | 104.248.1.2 | 10.11.2.3
4 | memory-only | (none) | 134.122.14.200 | 10.11.2.5

Baseline attestation BASELINE OK

Per the authoritative baseline spec, every agent node must emit a self-attestation before any scenario is permitted to run. This run's attestation:

Spec version: 1.4.0 — see authoritative baseline.

Node | Agent | Type | Framework | Config SHA | Result
node-1 | ai:alice | hermes | Hermes Agent v0.10.0 (2026.4.16) | fa358f9a9059 | PASS
node-2 | ai:bob | hermes | Hermes Agent v0.10.0 (2026.4.16) | 21635cf63640 | PASS
node-3 | ai:charlie | hermes | Hermes Agent v0.10.0 (2026.4.16) | ce52d772ef5a | PASS

(Per-check columns from the original table: Authentic, MCP ai-memory, xAI cfg, xAI default, Agent ID, Federation, UFW off, iptables, dead-man, F1 xAI, F2a substrate, F2b agent (non-gating). Each node's individual results appear in a2a-baseline.json below.)
a2a-baseline.json
{
	"baseline_pass": true,
	"per_node": [
		{
			"spec_version": "1.4.0",
			"agent_type": "hermes",
			"agent_id": "ai:alice",
			"node_index": "1",
			"framework_version": "Hermes Agent v0.10.0 (2026.4.16)",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "https://10.11.2.2:9077,https://10.11.2.3:9077,https://10.11.2.5:9077",
			"config_file_sha256": "fa358f9a90597243fb96224babd541399bd7b1e972f364605308ab1e2d9dd2c7",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "I'msorry,butImustdeclinerequeststhatappear",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "f4ea2d23-8c28-4eef-90c9-ad297cfc6169",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "118bfade-e6cf-4e92-a6c9-14bedb10b25f",
				"agent_canary_response_head": "Traceback (most recent call last):   File \"/usr/local/bin/hermes\", line 11, in <module>     main()   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 8819, in main     args.func(args)   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 1145, in cmd_chat     from cli import main as cli_main   File \"/root/.hermes/hermes-agent/cli.py\", line 43, in <module>     from prompt_toolkit.history import FileHistory ModuleNotFoundError: No module named 'prompt_toolkit' ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.11.2.2:9077:OK,10.11.2.3:9077:OK,10.11.2.5:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "mtls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "f4ea2d23-8c28-4eef-90c9-ad297cfc6169",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "hermes",
			"agent_id": "ai:bob",
			"node_index": "2",
			"framework_version": "Hermes Agent v0.10.0 (2026.4.16)",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "https://10.11.2.4:9077,https://10.11.2.3:9077,https://10.11.2.5:9077",
			"config_file_sha256": "21635cf6364057fd2a004d28aac89abf8438671d85f9fd2ed1e654d812d23ff1",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "d89a9437-cafb-430d-b52e-674bf0c4d0c6",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "9dcbe926-7882-40a7-8ec6-4e14a52cb1de",
				"agent_canary_response_head": "Traceback (most recent call last):   File \"/usr/local/bin/hermes\", line 11, in <module>     main()   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 8819, in main     args.func(args)   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 1145, in cmd_chat     from cli import main as cli_main   File \"/root/.hermes/hermes-agent/cli.py\", line 43, in <module>     from prompt_toolkit.history import FileHistory ModuleNotFoundError: No module named 'prompt_toolkit' ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.11.2.4:9077:OK,10.11.2.3:9077:OK,10.11.2.5:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "mtls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "d89a9437-cafb-430d-b52e-674bf0c4d0c6",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "hermes",
			"agent_id": "ai:charlie",
			"node_index": "3",
			"framework_version": "Hermes Agent v0.10.0 (2026.4.16)",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "https://10.11.2.4:9077,https://10.11.2.2:9077,https://10.11.2.5:9077",
			"config_file_sha256": "ce52d772ef5a00968db29fb80eea7a14206b0a258a00ff2165db725405474618",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "78118071-2a1a-4e40-a7df-288c09b31fe4",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "56e00212-c69d-4c2e-b8bf-735a4d6bd445",
				"agent_canary_response_head": "Traceback (most recent call last):   File \"/usr/local/bin/hermes\", line 11, in <module>     main()   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 8819, in main     args.func(args)   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 1145, in cmd_chat     from cli import main as cli_main   File \"/root/.hermes/hermes-agent/cli.py\", line 43, in <module>     from prompt_toolkit.history import FileHistory ModuleNotFoundError: No module named 'prompt_toolkit' ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.11.2.4:9077:OK,10.11.2.2:9077:OK,10.11.2.5:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "mtls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "78118071-2a1a-4e40-a7df-288c09b31fe4",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		}
	]
}

raw file
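Per the probe notes in the attestation above, deterministic probes gate baseline_pass and the aggregator ANDs verdicts across nodes. A minimal sketch of that aggregation, using the field names from a2a-baseline.json (illustrative only, not the harness's actual code):

```python
import json

def aggregate_baseline(doc: dict) -> bool:
    """AND per-node verdicts into a campaign-level baseline verdict.

    Mirrors the shape of a2a-baseline.json: every node must report
    baseline_pass, and every negative invariant must be true (i.e. every
    alternative A2A channel confirmed OFF).
    """
    ok = True
    for node in doc["per_node"]:
        ok &= node["baseline_pass"]
        ok &= all(v for k, v in node["negative_invariants"].items()
                  if not k.startswith("_"))
    return ok

doc = json.loads("""{"per_node": [
  {"baseline_pass": true,
   "negative_invariants": {"_description": "x", "a2a_protocol_off": true}}
]}""")
print(aggregate_baseline(doc))  # True
```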

F3 — peer A2A via shared memory F3 OK

Workflow-level probe answering "can agents communicate through ai-memory?". Writer ai:alice posted canary UUID 0e1c3a34-2fee-4816-ae75-f83e7709abef to namespace _baseline_peer_canary via node-1's local ai-memory serve HTTP. After W=2 fanout settle, probe confirmed the canary on each of the 3 peer nodes via their local GET /api/v1/memories.

f3-peer-a2a.json
{
	"probe": "F3",
	"name": "peer-a2a-via-shared-memory",
	"description": "Writer agent posts a canary via local ai-memory HTTP on node-1; verifies the row propagates to the 3 peer nodes (W=2/N=4 quorum) before scenarios run.",
	"canary_uuid": "0e1c3a34-2fee-4816-ae75-f83e7709abef",
	"canary_namespace": "_baseline_peer_canary",
	"writer_agent": "ai:alice",
	"pass": true
}

raw file
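The F3 probe logic described above can be sketched as follows; `verify_peer_canary` and `fetch` are hypothetical names, with `fetch(peer, namespace)` standing in for a GET against that peer's local /api/v1/memories:

```python
from typing import Callable, Iterable

def verify_peer_canary(peers: Iterable[str],
                       fetch: Callable[[str, str], list],
                       canary_uuid: str,
                       namespace: str) -> bool:
    """Confirm the writer's canary row is visible on every peer node.

    `fetch(peer, namespace)` should return the memory rows that peer serves
    for `namespace` (in the real probe, via its local HTTP memories endpoint).
    """
    return all(
        any(row.get("content") == canary_uuid for row in fetch(peer, namespace))
        for peer in peers
    )

# Stubbed example: every peer already holds the canary row.
store = {"10.11.2.2": [{"content": "cafe-uuid"}],
         "10.11.2.3": [{"content": "cafe-uuid"}],
         "10.11.2.5": [{"content": "cafe-uuid"}]}
fetch = lambda peer, ns: store[peer]
print(verify_peer_canary(store, fetch, "cafe-uuid", "_baseline_peer_canary"))  # True
```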

Run focus

mTLS v0.6.2 fails semantic search, bulk ops, delta sync, and notifications.

What this campaign tested: 35 scenarios covering basic sharing, semantic/keyword search, bulk imports, delta sync, notifications, rules, linking, and recovery, all under mTLS transport in a 4-node federation exercising semantic-tier primitives.

What it demonstrated: Reliable basic memory sharing and replication, but failures in semantic recall, bulk fanout, delta synchronization, inbox delivery, pending approvals, and rule inheritance.

AI NHI analysis · Claude Opus 4.7

mTLS v0.6.2 fails semantic search, bulk ops, delta sync, and notifications.

FAIL — 6/35 scenarios red, 1 skipped (S23).

For three audiences

Non-technical end users

In this secure setup, AI agents mostly shared memories reliably across the group, including during network issues. However, they failed to properly find memories through searches, handle large batches of information, or deliver some messages and updates over time.

C-level decision makers

High-risk posture: regressions in bulk processing and delta sync undermine production readiness for high-volume use cases, and customer claims of seamless agent memory sharing are not yet viable. Failures increased versus prior runs, likely due to mTLS overhead or fanout bugs.

Engineers & architects

Failures:
- S18: semantic query missed alice's and bob's memories (probable embedding or indexing issue)
- S32: notify not delivered to bob's inbox
- S34: approve returned HTTP 403 and reject HTTP 404 (auth misconfiguration); charlie missed the approved row
- S35: parent/child rule layering broken
- S39: delta returned 0/6 expected markers; sync incomplete
- S40: bulk fanout stuck at 200/500 rows per node, likely a replication throttle

Primitives impacted: semantic search, bulk insert, delta API, notify, pending queue, rules. S23 was skipped due to an unparseable report.

What changes going into the next campaign

Prioritize fixing bulk fanout (S40) with increased replication timeouts and debug tracing before next campaign.

Tests performed in this run

Every scenario that produced a JSON report in this campaign, in testbook order. Click a row's scenario ID to jump to its full report below. See the "Every test performed" page for the authoritative catalog.

ID | Title | Result | Reason
S1 | Per-agent write + read (MCP stdio) | PASS |
S1b | Per-agent write + read (HTTP) | PASS |
S2 | Shared-context handoff | PASS |
S4 | Federation-aware concurrent writes | PASS |
S5 | Consolidation + curation | PASS |
S6 | Contradiction detection | PASS |
S9 | Mutation round-trip | PASS |
S10 | Deletion propagation | PASS |
S11 | Link integrity | PASS |
S12 | Agent registration | PASS |
S13 | Concurrent write contention | PASS |
S14 | Partition tolerance | PASS |
S15 | Read-your-writes | PASS |
S16 | Tier promotion | PASS |
S17 | Stats consistency | PASS |
S18 | Semantic query expansion | FAIL | semantic query did not surface alice's memory; semantic query did not surface bob's memory
S20 | mTLS happy-path | PASS |
S21 | Anonymous client rejected | PASS |
S22 | Identity spoofing resistance | PASS |
S23 | Malicious content fuzz | SKIP | report unparseable
S24 | Byzantine peer | PASS |
S25 | Clock skew tolerance | PASS |
S28 | memory_search keyword | PASS |
S29 | memory_archive lifecycle | PASS |
S30 | memory_capabilities handshake | PASS |
S31 | memory_gc quiescence | PASS |
S32 | memory_inbox + notify | FAIL | bob's inbox did not deliver alice's notify
S33 | memory_subscribe pub/sub | PASS |
S34 | memory_pending governance | FAIL | approve returned HTTP 403; reject returned HTTP 404; charlie did not see approved row
S35 | memory_namespace standards | FAIL | parent rule not layered into child's standard view; child rule missing from standard view
S36 | memory_session_start | PASS |
S37 | memory_get_links bidirectional | PASS |
S38 | /export + /import | PASS |
S39 | /sync/since delta | FAIL | delta returned 0/6 expected markers — delta-sync incomplete
S40 | /memories/bulk | FAIL | node-2 saw 200/500 bulk rows after fanout; node-3 saw 200/500 bulk rows after fanout; node-4 saw 200/500 bulk rows after fanout
S41 | /metrics Prometheus | PASS |
S42 | /namespaces enumeration | PASS |
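The campaign verdict follows mechanically from this table: any FAIL reddens the run, and skips are tracked separately. A sketch of that rollup (an assumption about the harness's logic, abbreviated to a handful of passing scenarios):

```python
# FAIL/SKIP results taken from the table above; PASS rows abbreviated.
results = {"S18": "FAIL", "S23": "SKIP", "S32": "FAIL", "S34": "FAIL",
           "S35": "FAIL", "S39": "FAIL", "S40": "FAIL"}
for sid in ("S1", "S1b", "S2", "S4", "S5"):  # ... and the other green rows
    results[sid] = "PASS"

failed = sorted(s for s, r in results.items() if r == "FAIL")
skipped = sorted(s for s, r in results.items() if r == "SKIP")
overall_pass = not failed  # a single red scenario fails the campaign

print(len(failed), len(skipped), overall_pass)  # 6 1 False
```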

Scenario 1 — Per-agent write + read (MCP stdio) PASS

scenario-1.json (report)
{
	"agent_group": "hermes",
	"expected_per_reader": 20,
	"pass": true,
	"per_agent": {
		"ai:alice": {
			"recall": 20
		},
		"ai:bob": {
			"recall": 20
		},
		"ai:charlie": {
			"recall": 20
		}
	},
	"per_namespace_node4": {
		"scenario1-ai:alice": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1-ai:bob": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1-ai:charlie": {
			"count": 10,
			"wrong_agent_id": 0
		}
	},
	"reasons": [],
	"scenario": "1",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-1.log (console trace)
phase A: each agent writes 10 memories via MCP
  ai:alice on 159.65.190.243
  ai:bob on 64.225.15.73
  ai:charlie on 104.248.1.2
settle 15s for W=2/N=4 convergence
phase B: each agent counts rows in the OTHER two namespaces
  ai:alice recalled 20 rows from the other two namespaces
  ai:bob recalled 20 rows from the other two namespaces
  ai:charlie recalled 20 rows from the other two namespaces
phase C: cross-cluster identity check on node-4
  ns=scenario1-ai:alice count=10 wrong_agent_id=0
  ns=scenario1-ai:bob count=10 wrong_agent_id=0
  ns=scenario1-ai:charlie count=10 wrong_agent_id=0

raw file

Scenario 1b — Per-agent write + read (HTTP) PASS

scenario-1b.json (report)
{
	"agent_group": "hermes",
	"expected_per_reader": 20,
	"pass": true,
	"path": "serve-http",
	"per_agent": {
		"ai:alice": {
			"recall": 20
		},
		"ai:bob": {
			"recall": 20
		},
		"ai:charlie": {
			"recall": 20
		}
	},
	"per_namespace_node4": {
		"scenario1b-ai:alice": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1b-ai:bob": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1b-ai:charlie": {
			"count": 10,
			"wrong_agent_id": 0
		}
	},
	"reasons": [],
	"scenario": "1b",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-1b.log (console trace)
phase A: each agent POSTs 10 memories to local serve
  ai:alice on 159.65.190.243
  ai:bob on 64.225.15.73
  ai:charlie on 104.248.1.2
settle 15s for W=2/N=4 convergence
phase B: count rows in other two namespaces via local serve HTTP
  ai:alice sees 20 rows from the other two namespaces
  ai:bob sees 20 rows from the other two namespaces
  ai:charlie sees 20 rows from the other two namespaces
phase C: cross-cluster identity check on node-4
  ns=scenario1b-ai:alice count=10 wrong_agent_id=0
  ns=scenario1b-ai:bob count=10 wrong_agent_id=0
  ns=scenario1b-ai:charlie count=10 wrong_agent_id=0

raw file

Scenario 2 — Shared-context handoff PASS

scenario-2.json (report)
{
	"ack_uuid": "a-b9fd4543a3c4422eba048579ffb6c0fb",
	"agent_group": "hermes",
	"handoff_uuid": "h-97cea220436048d2a0b1510ba403ad63",
	"pass": true,
	"path": "serve-http",
	"per_agent": {
		"ai:alice": {
			"sees_ack": 1
		},
		"ai:bob": {
			"sees_handoff": 1
		}
	},
	"reasons": [],
	"scenario": "2",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-2.log (console trace)
phase A: ai:alice writes handoff to ai:bob (uuid=h-97cea220436048d2a0b1510ba403ad63)
settle 8s for quorum fanout
phase B: ai:bob reads handoff on node-2
  ai:bob sees 1 handoff memories from ai:alice
phase C: ai:bob writes acknowledgement (uuid=a-b9fd4543a3c4422eba048579ffb6c0fb)
settle 8s for reverse-direction fanout
phase D: ai:alice reads ack on node-1
  ai:alice sees 1 ack memories from ai:bob

raw file

Scenario 4 — Federation-aware concurrent writes PASS

scenario-4.json (report)
{
	"agent_group": "hermes",
	"expected_per_agent": 30,
	"pass": true,
	"per_agent": {
		"ai:alice": {
			"count": 30,
			"wrong_agent_id": 0
		},
		"ai:bob": {
			"count": 30,
			"wrong_agent_id": 0
		},
		"ai:charlie": {
			"count": 30,
			"wrong_agent_id": 0
		}
	},
	"reasons": [],
	"scenario": "4",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-4.log (console trace)
phase A: launching concurrent 30-row bursts from 3 agents
  ai:alice burst ok=30/30
  ai:bob burst ok=30/30
  ai:charlie burst ok=30/30
settle 20s for W=2 fanout convergence
phase B: querying node-4 aggregator for per-agent counts
  ai:alice: count=30 (expected 30) wrong_agent_id=0
  ai:bob: count=30 (expected 30) wrong_agent_id=0
  ai:charlie: count=30 (expected 30) wrong_agent_id=0

raw file

Scenario 5 — Consolidation + curation PASS

scenario-5.json (report)
{
	"agent_group": "hermes",
	"consolidate_http_code": 201,
	"consolidated_from_agents": [
		"ai:charlie",
		"ai:bob",
		"ai:alice"
	],
	"consolidated_id": "4cedfc1c-3bc1-4316-b573-77c46eec62ac",
	"pass": true,
	"reasons": [],
	"scenario": "5",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-5.log (console trace)
phase A: each agent writes 3 related memories
  ai:alice on 159.65.190.243
  ai:bob on 64.225.15.73
  ai:charlie on 104.248.1.2
settle 8s for quorum fanout
phase B: collect source ids on node-1, then trigger consolidate
  source ids (count=9): ['1046faf7-e41d-4858-a4e8-b619434000e6', '9fb786da-f3ee-4612-8c6c-90ad4b8e6ef7', '3853fb0e-fbec-4668-8583-04bad635af68', '1db95808-4cda-49cd-8a88-13b5e38bf211', '68fee138-d975-4c06-9cbc-b4029bf29755']...
  consolidate HTTP 201, consolidated_id=4cedfc1c-3bc1-4316-b573-77c46eec62ac
settle 10s for consolidation fanout
phase C: verifying consolidated_from_agents on node-4
  consolidated_from_agents=['ai:charlie', 'ai:bob', 'ai:alice']

raw file

Scenario 6 — Contradiction detection PASS

scenario-6.json (report)
{
	"agent_group": "hermes",
	"alice_id": "7d67c8cc-54fc-4ad3-b236-e2b8a39ca7d4",
	"bob_id": "4b08bbb8-001b-4f4a-9da9-676908b65461",
	"charlie_sees_both_memories": true,
	"charlie_sees_contradicts_link": true,
	"detect_http_code": 200,
	"pass": true,
	"reasons": [],
	"scenario": "6",
	"skipped": false,
	"tls_mode": "mtls",
	"topic": "sky-color-b1b95994"
}

raw file

scenario-6.log (console trace)
alice writes claim: "sky-color-b1b95994 is blue" on node-1
bob writes contradicting claim: "sky-color-b1b95994 is red" on node-2
  alice.id=7d67c8cc-54fc-4ad3-b236-e2b8a39ca7d4 bob.id=4b08bbb8-001b-4f4a-9da9-676908b65461
settle 10s for quorum fanout + contradiction indexing
charlie queries /api/v1/contradictions on node-3
  HTTP 200
  sees both memories: True; sees contradicts link: True

raw file

Scenario 9 — Mutation round-trip PASS

scenario-9.json (report)
{
	"agent_group": "hermes",
	"charlie_view": {
		"agent_id": "ai:alice",
		"content": "v2-03722f51d3f54206b950f26fafa89d72"
	},
	"m1_id": "a72c4460-467f-408e-bb3a-864d6ec6a845",
	"pass": true,
	"put_http_code": 200,
	"reasons": [],
	"scenario": "9",
	"skipped": false,
	"tls_mode": "mtls",
	"v1_uuid": "v1-e6926015b8af4773b955ca1fe74d2e10",
	"v2_uuid": "v2-03722f51d3f54206b950f26fafa89d72"
}

raw file

scenario-9.log (console trace)
alice writes M1 content=v1-e6926015b8af4773b955ca1fe74d2e10 on node-1
  M1 id=a72c4460-467f-408e-bb3a-864d6ec6a845
settle 5s for initial replication
bob updates M1 content=v2-03722f51d3f54206b950f26fafa89d72 on node-2 via PUT
  PUT returned HTTP 200
settle 8s for update fanout
charlie reads M1 on node-3 and checks content + provenance
  charlie sees content="v2-03722f51d3f54206b950f26fafa89d72" agent_id="ai:alice"

raw file

Scenario 10 — Deletion propagation PASS

scenario-10.json (report)
{
	"agent_group": "hermes",
	"delete_http_code": 200,
	"m1_id": "53735150-8077-453e-88ce-e0b2ac405023",
	"pass": true,
	"post_delete_hits": {
		"node-2": 0,
		"node-3": 0,
		"node-4": 0
	},
	"post_delete_still_visible_peers": 0,
	"pre_delete_visible_peers": 3,
	"reasons": [],
	"scenario": "10",
	"skipped": false,
	"tls_mode": "mtls",
	"uuid": "d-bb25ba79645443458d3249bc5e83cf37"
}

raw file

scenario-10.log (console trace)
alice writes M1 content=d-bb25ba79645443458d3249bc5e83cf37 on node-1
  created memory id=53735150-8077-453e-88ce-e0b2ac405023
settle 8s for pre-delete fanout
pre-delete: verifying M1 is visible on all peers
  pre-delete node-2 sees 1
  pre-delete node-3 sees 1
  pre-delete node-4 sees 1
alice deletes M1 on node-1
  DELETE returned HTTP 200
settle 15s for tombstone propagation
post-delete: verifying M1 is GONE from all peers
  post-delete node-2 sees 0 (expected 0)
  post-delete node-3 sees 0 (expected 0)
  post-delete node-4 sees 0 (expected 0)

raw file

Scenario 11 — Link integrity PASS

scenario-11.json (report)
{
	"agent_group": "hermes",
	"charlie_sees_link": 1,
	"link_http_code": 201,
	"m1_id": "08ca13e6-a669-4083-aef6-be8efe6f8099",
	"m2_id": "7196b79e-3a62-4154-91a1-7f77e22d3c76",
	"pass": true,
	"reasons": [],
	"relation": "related_to",
	"scenario": "11",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-11.log (console trace)
alice writes M1 on node-1
bob writes M2 on node-2
  M1=08ca13e6-a669-4083-aef6-be8efe6f8099 M2=7196b79e-3a62-4154-91a1-7f77e22d3c76
settle 5s for pre-link replication
alice links M1 -> M2 with relation=related_to
  link POST returned HTTP 201
settle 8s for link fanout
charlie queries links of M1 on node-3
  charlie sees M1->M2 link: 1 (expected >=1)

raw file

Scenario 12 — Agent registration PASS

scenario-12.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"peers_see": {
		"node_2": 1,
		"node_3": 1,
		"node_4": 1
	},
	"reasons": [],
	"register_http_code": 201,
	"registered_agent": "ai:dave-probe-916704a4",
	"scenario": "12",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-12.log (console trace)
alice registers new agent ai:dave-probe-916704a4 on node-1
  POST /api/v1/agents returned HTTP 201
settle 10s for agent-list fanout
  node-2 sees ai:dave-probe-916704a4: 1 (expected >=1)
  node-3 sees ai:dave-probe-916704a4: 1 (expected >=1)
  node-4 sees ai:dave-probe-916704a4: 1 (expected >=1)

raw file

Scenario 13 — Concurrent write contention PASS

scenario-13.json (report)
{
	"agent_group": "hermes",
	"m1_id": "c7d80d45-30eb-4407-980f-f958defb137b",
	"pass": true,
	"peer_view": {
		"node_1": "vb-f890e5f288804d0991f593725feaa5b9",
		"node_2": "vb-f890e5f288804d0991f593725feaa5b9",
		"node_3": "vb-f890e5f288804d0991f593725feaa5b9",
		"node_4": "vb-f890e5f288804d0991f593725feaa5b9"
	},
	"reasons": [],
	"scenario": "13",
	"skipped": false,
	"submitted": {
		"v0": "v0-8c58b4076ee249c1a9f4a5c2ba6ca737",
		"vA_alice": "va-2a1cb7fe81014d3c8d8b13eccaca2e4e",
		"vB_bob": "vb-f890e5f288804d0991f593725feaa5b9"
	},
	"tls_mode": "mtls"
}

raw file

scenario-13.log (console trace)
alice writes M1 content=v0-8c58b4076ee249c1a9f4a5c2ba6ca737 on node-1
  M1 id=c7d80d45-30eb-4407-980f-f958defb137b
settle 5s for initial replication
alice + bob issue concurrent PUTs (vA=va-2a1cb7fe81014d3c8d8b13eccaca2e4e from alice, vB=vb-f890e5f288804d0991f593725feaa5b9 from bob)
  concurrent PUT results: [(0, {'body': {'access_count': 0, 'confidence': 1.0, 'content': 'va-2a1cb7fe81014d3c8d8b13eccaca2e4e', 'created_at': '2026-04-23T02:14:19.301340761+00:00', 'expires_at': '2026-04-30T02:14:19.301340761+00:00', 'id': 'c7d80d45-30eb-4407-980f-f958defb137b', 'metadata': {'agent_id': 'ai:alice', 'scenario': '13'}, 'namespace': 'scenario13-contention', 'priority': 5, 'source': 'api', 'tags': [], 'tier': 'mid', 'title': 'm1', 'updated_at': '2026-04-23T02:14:25.079141661+00:00'}, 'http_code': 200}), (0, {'body': {'access_count': 0, 'confidence': 1.0, 'content': 'vb-f890e5f288804d0991f593725feaa5b9', 'created_at': '2026-04-23T02:14:19.301340761+00:00', 'expires_at': '2026-04-30T02:14:19.301340761+00:00', 'id': 'c7d80d45-30eb-4407-980f-f958defb137b', 'metadata': {'agent_id': 'ai:alice', 'scenario': '13'}, 'namespace': 'scenario13-contention', 'priority': 5, 'source': 'api', 'tags': [], 'tier': 'mid', 'title': 'm1', 'updated_at': '2026-04-23T02:14:25.165985045+00:00'}, 'http_code': 200})]
settle 10s for quorum convergence
  node-1 sees content=vb-f890e5f288804d0991f593725feaa5b9
  node-2 sees content=vb-f890e5f288804d0991f593725feaa5b9
  node-3 sees content=vb-f890e5f288804d0991f593725feaa5b9
  node-4 sees content=vb-f890e5f288804d0991f593725feaa5b9

raw file
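All four nodes converged on bob's vB above, and the two concurrent PUT responses differ only in updated_at, with bob's later. That is consistent with last-writer-wins resolution on updated_at; the following is an inference from the log, not a confirmed description of ai-memory's conflict handling:

```python
from datetime import datetime

def lww_winner(versions: list[dict]) -> dict:
    """Pick the surviving version under last-writer-wins on updated_at."""
    return max(versions, key=lambda v: datetime.fromisoformat(v["updated_at"]))

# Timestamps modeled on the scenario-13 PUT responses (truncated to
# microseconds; the raw log carries nanosecond precision).
versions = [
    {"content": "vA", "updated_at": "2026-04-23T02:14:25.079141+00:00"},
    {"content": "vB", "updated_at": "2026-04-23T02:14:25.165985+00:00"},
]
print(lww_winner(versions)["content"])  # vB
```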

Scenario 14 — Partition tolerance PASS

scenario-14.json (report)
{
	"agent_group": "hermes",
	"expected_post_recovery": 20,
	"node3_saw": 20,
	"partition_target": "node-3",
	"pass": true,
	"reasons": [],
	"scenario": "14",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-14.log (console trace)
suspending ai-memory on node-3 (SIGSTOP)
  !! ssh timeout (15s): root@104.248.1.2 pgrep -f 'ai-memory serve' | xargs -r kill -STOP
settle 2s for process-suspend observe
writing 10 memories each from alice + bob during node-3 outage
resuming ai-memory on node-3 (SIGCONT)
settle 20s for post-partition catchup
checking node-3 caught up
  node-3 sees 20 memories in scenario14-partition (expected 20)

raw file

Scenario 15 — Read-your-writes PASS

scenario-15.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"reasons": [],
	"scenario": "15",
	"skipped": false,
	"tls_mode": "mtls",
	"uuid": "ryw-dffbdf2a2a054826963a74209f1591d3",
	"writer_sees_own_write": 1
}

raw file

scenario-15.log (console trace)
alice writes + immediately reads M1 on node-1 (uuid=ryw-dffbdf2a2a054826963a74209f1591d3)
  alice sees 1 (expected 1) immediately after write

raw file

Scenario 16 — Tier promotion PASS

scenario-16.json (report)
{
	"agent_group": "hermes",
	"bob_sees_tier": "long",
	"m1_id": "41c07a7f-4926-4140-91f6-29d4bfdb3c13",
	"pass": true,
	"promote_http_code": 200,
	"reasons": [],
	"scenario": "16",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-16.log (console trace)
alice writes M1 tier=short on node-1
  M1 id=41c07a7f-4926-4140-91f6-29d4bfdb3c13
settle 5s for pre-promote replication
alice promotes M1 to tier=long
  promote returned HTTP 200
settle 8s for promotion fanout
  bob sees tier=long (expected long)

raw file

Scenario 17 — Stats consistency PASS

scenario-17.json (report)
{
	"agent_group": "hermes",
	"expected_count": 15,
	"pass": true,
	"per_peer": {
		"node_1": 15,
		"node_2": 15,
		"node_3": 15,
		"node_4": 15
	},
	"reasons": [],
	"scenario": "17",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-17.log (console trace)
phase A: each of 3 agents writes 5 memories to scenario17-stats
  ai:alice on 159.65.190.243
  ai:bob on 64.225.15.73
  ai:charlie on 104.248.1.2
settle 15s for W=2 fanout
phase B: querying count on every peer
  node-1 count=15 (expected 15)
  node-2 count=15 (expected 15)
  node-3 count=15 (expected 15)
  node-4 count=15 (expected 15)

raw file

Scenario 18 — Semantic query expansion FAIL

Reasons: semantic query did not surface alice's memory | semantic query did not surface bob's memory

scenario-18.json (report)
{
	"agent_group": "hermes",
	"pass": false,
	"query": "morning outdoor exercise routine",
	"reason": "semantic query did not surface alice's memory; semantic query did not surface bob's memory",
	"reasons": [
		"semantic query did not surface alice's memory",
		"semantic query did not surface bob's memory"
	],
	"scenario": "18",
	"skipped": false,
	"tls_mode": "mtls",
	"writers": [
		{
			"agent": "ai:alice",
			"marker": "alice-sunrise-f321edfc",
			"seen_by_charlie": 0
		},
		{
			"agent": "ai:bob",
			"marker": "bob-daybreak-b94f3eb2",
			"seen_by_charlie": 0
		}
	]
}

raw file

scenario-18.log (console trace)
alice writes A on node-1
bob writes B on node-2
settle 15s for fanout + index rebuild
charlie queries on node-3 with semantically-related prompt
  charlie sees alice's memory: 0 (expected >=1)
  charlie sees bob's memory: 0 (expected >=1)

raw file

Scenario 20 — mTLS happy-path PASS

scenario-20.json (report)
{
	"agent_group": "hermes",
	"marker": "mtls-b75a1e2b7cb844b3929c231fa085d234",
	"pass": true,
	"peers_see": {
		"node_2": 1,
		"node_3": 1
	},
	"reasons": [],
	"scenario": "20",
	"skipped": false,
	"tls_mode": "mtls",
	"write_http_code": 201
}

raw file

scenario-20.log (console trace)
alice writes HTTPS + client cert on node-1
  write returned HTTP 201
settle 12s for W=2/N=4 quorum
  node-2 sees marker: 1
  node-3 sees marker: 1

raw file

Scenario 21 — Anonymous client rejected PASS

scenario-21.json (report)
{
	"agent_group": "hermes",
	"anonymous_probe": {
		"curl_message": "OpenSSL SSL_read: OpenSSL/3.0.13: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0",
		"http_code": "curl: (56) OpenSSL SSL_read: OpenSSL/3.0.13: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0\n000"
	},
	"namespace_count_after_attempt": 0,
	"pass": true,
	"reasons": [],
	"scenario": "21",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-21.log (console trace)
attempting anonymous HTTPS POST to node-1 (must be rejected)
  anonymous probe result: code=curl: (56) OpenSSL SSL_read: OpenSSL/3.0.13: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0
000 msg=OpenSSL SSL_read: OpenSSL/3.0.13: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0
settle 3s for let any leak land before checking namespace
  post-probe count for namespace=scenario21: 0 (must be 0)

raw file
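The `http_code` field in the report above shows curl's stderr and its `-w '%{http_code}'` sentinel concatenated into one string. Capturing the two streams separately keeps the artifact clean and makes the pass condition explicit: a TLS-layer rejection means a non-zero curl exit and the `000` sentinel, with no HTTP status ever issued. A sketch, with flag choices assumed rather than taken from the harness:

```python
import subprocess

def anonymous_probe(url: str, timeout: int = 10):
    """POST without a client certificate; return (exit, code, stderr).

    Under enforced mTLS the handshake aborts before HTTP, so curl
    exits non-zero (56 in this run) and -w '%{http_code}' prints
    '000'. capture_output keeps stdout and stderr separate.
    """
    proc = subprocess.run(
        ["curl", "-sS", "-o", "/dev/null", "-w", "%{http_code}",
         "-X", "POST", url],
        capture_output=True, text=True, timeout=timeout,
    )
    return proc.returncode, proc.stdout.strip(), proc.stderr.strip()

def is_tls_rejection(exit_code: int, http_code: str) -> bool:
    """True when the request died in the TLS layer (no HTTP status)."""
    return exit_code != 0 and http_code == "000"
```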

Scenario 22 — Identity spoofing resistance PASS

scenario-22.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"reasons": [],
	"scenario": "22",
	"skipped": false,
	"tests": {
		"body_vs_header_conflict": {
			"acceptable": [
				"ai:body-wins",
				"ai:attacker"
			],
			"stored_agent_id": "ai:attacker"
		},
		"header_only": {
			"expected": "ai:alice",
			"stored_agent_id": "ai:alice"
		}
	},
	"tls_mode": "mtls"
}

raw file

scenario-22.log (console trace)
test 1: header-only X-Agent-Id=ai:alice
settle 2s for read-settle
  stored metadata.agent_id for header-only write: ai:alice (expected ai:alice)
test 2: body.metadata.agent_id=ai:body-wins vs X-Agent-Id=ai:attacker
settle 2s for read-settle
  stored metadata.agent_id for body+header conflict: ai:attacker

raw file
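Note that the report lists both `ai:body-wins` and `ai:attacker` as acceptable outcomes for the conflict test: the assertion is not "header wins" but "no third, unauthenticated value ever gets stored". A minimal classifier for that check (hypothetical helper, not harness code):

```python
def check_identity_precedence(stored: str, header_id: str, body_id: str) -> str:
    """Classify which identity source the server honored.

    Either precedence is acceptable for this scenario; only a value
    matching neither the header nor the body is a spoofing failure.
    """
    if stored == header_id:
        return "header-wins"
    if stored == body_id:
        return "body-wins"
    raise AssertionError(f"server stored unexpected agent_id: {stored!r}")
```

In this run the server stored the `X-Agent-Id` value (`ai:attacker`), i.e. header-wins precedence.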

Scenario 23 — Malicious content fuzz UNKNOWN

scenario-23.json (report)

(empty: no report was written; the scenario aborted with the traceback below)

raw file

scenario-23.log (console trace)
payload sql: 61 bytes
payload html: 66 bytes
payload oversize: 1048576 bytes
Traceback (most recent call last):
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/scenarios/23_malicious_content_fuzz.py", line 106, in <module>
    main()
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/scenarios/23_malicious_content_fuzz.py", line 49, in main
    rc, write_doc = h.write_memory(
                    ^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 202, in write_memory
    return self.http_on(node_ip, "POST", "/api/v1/memories",
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 157, in http_on
    result = self.ssh_exec(node_ip, remote_cmd, timeout=timeout)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 103, in ssh_exec
    return self._run(cmd, timeout=timeout, stdin=stdin)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 87, in _run
    return subprocess.run(
           ^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 548, in run
    with Popen(*popenargs, **kwargs) as process:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/lib/python3.12/subprocess.py", line 1955, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
OSError: [Errno 7] Argument list too long: 'ssh'

raw file
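`OSError: [Errno 7] Argument list too long` is Linux's E2BIG: the 1 MiB oversize payload was embedded directly in the ssh argv, exceeding the kernel's cap on combined argv and environment size. A common fix is to stream the body over stdin so the command line stays small; a hedged sketch (the curl flags and helper names are illustrative, not the harness's actual `http_on` internals):

```python
import subprocess

def remote_curl(node_ip: str, path: str) -> str:
    """Remote command that reads the POST body from stdin (-d @-),
    keeping the ssh argv tiny regardless of payload size."""
    return ("curl -sS -X POST -H 'Content-Type: application/json' "
            f"-d @- 'https://{node_ip}:9077{path}'")

def http_post_via_stdin(node_ip: str, path: str, payload: bytes,
                        timeout: int = 60):
    """POST a large body through ssh without placing it in argv.

    subprocess.run(input=...) pipes the payload to the remote curl's
    stdin, so even a 1 MiB fuzz body never touches the argv limit.
    """
    proc = subprocess.run(["ssh", f"root@{node_ip}", remote_curl(node_ip, path)],
                          input=payload, timeout=timeout, capture_output=True)
    return proc.returncode, proc.stdout
```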

Scenario 24 — Byzantine peer PASS

scenario-24.json (report)
{
	"agent_group": "hermes",
	"byzantine_marker": "bz-b850215d981746ca88ac489247b1d1b0",
	"pass": true,
	"reasons": [],
	"scenario": "24",
	"skipped": false,
	"stored_metadata_agent_id": "REJECTED_BY_SERVER",
	"sync_push_http_code": "422",
	"tls_mode": "mtls"
}

raw file

scenario-24.log (console trace)
node-2 sends sync_push to node-3 claiming sender_agent_id=ai:alice
  sync_push returned HTTP 422
settle 5s for server-side sync apply
  node-3 stored metadata.agent_id=ABSENT (declared: ai:alice)
  sync_push rejected HTTP 422 — stricter-than-spec, acceptable

raw file

Scenario 25 — Clock skew tolerance PASS

scenario-25.json (report)
{
	"agent_group": "hermes",
	"clock_offset_seconds": 300,
	"marker": "ck-0618cf30d1424949b2aa9242df3f032e",
	"pass": true,
	"reasons": [],
	"scenario": "25",
	"seen_on": {
		"node_1": 1,
		"node_3": 1
	},
	"skipped": false,
	"target_node": "node-3",
	"tls_mode": "mtls"
}

raw file

scenario-25.log (console trace)
shifting node-3 clock +300s (NTP disabled for the duration)
  node-3 now reports: Thu Apr 23 02:21:57 UTC 2026
alice writes on node-1 (normal clock); waiting for quorum fanout to skewed node-3
settle 15s for skewed-peer convergence
  node-3 (+300s clock) sees marker: 1 (expected >=1)
  node-1 sees marker: 1 (expected >=1)
reverting node-3 clock

raw file

Scenario 28 — memory_search keyword PASS

scenario-28.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"peer_hits": {
		"node_2": 1,
		"node_3": 1
	},
	"reasons": [],
	"scenario": "28",
	"skipped": false,
	"tls_mode": "mtls",
	"token": "kwsearch44930bf4c3"
}

raw file

scenario-28.log (console trace)
alice writes a row containing unique token=kwsearch44930bf4c3
settle 8s for search index populate + fanout
bob + charlie call /api/v1/search with the exact token
  node-2 keyword search returned 1 hits
  node-3 keyword search returned 1 hits

raw file

Scenario 29 — memory_archive lifecycle PASS

scenario-29.json (report)
{
	"agent_group": "hermes",
	"archive_http_code": 200,
	"bob_sees_archived": true,
	"m1_id": "53b7b325-4e5b-4667-a349-cabc8b0c3351",
	"node4_active_rows": 1,
	"pass": true,
	"reasons": [],
	"restore_http_code": 200,
	"scenario": "29",
	"skipped": false,
	"stats_shape_ok": true,
	"tls_mode": "mtls"
}

raw file

scenario-29.log (console trace)
alice writes M1 on node-1
  M1 id=53b7b325-4e5b-4667-a349-cabc8b0c3351
settle 5s for pre-archive replication
alice archives M1 via POST /api/v1/archive (ai-memory-mcp PR #361)
  archive (POST) returned HTTP 200
settle 5s for archive propagation
bob queries /api/v1/archive on node-2
  bob sees M1 in archive: True
charlie restores M1 via /api/v1/archive/{id}/restore on node-3
  restore returned HTTP 200
settle 5s for restore propagation
node-4 aggregator: M1 must be active again
  node-4 active rows matching marker: 1
fetch /api/v1/archive/stats on node-4

raw file

Scenario 30 — memory_capabilities handshake PASS

scenario-30.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"peer_views": {
		"node_1": {
			"_path": "/api/v1/capabilities",
			"features": {
				"auto_consolidation": false,
				"auto_tagging": false,
				"contradiction_analysis": false,
				"cross_encoder_reranking": false,
				"embedder_loaded": true,
				"hybrid_recall": true,
				"keyword_search": true,
				"memory_reflection": false,
				"query_expansion": false,
				"semantic_search": true
			},
			"models": {
				"cross_encoder": "none",
				"embedding": "sentence-transformers/all-MiniLM-L6-v2",
				"embedding_dim": 384,
				"llm": "none"
			},
			"tier": "semantic",
			"version": "0.6.2"
		},
		"node_2": {
			"_path": "/api/v1/capabilities",
			"features": {
				"auto_consolidation": false,
				"auto_tagging": false,
				"contradiction_analysis": false,
				"cross_encoder_reranking": false,
				"embedder_loaded": true,
				"hybrid_recall": true,
				"keyword_search": true,
				"memory_reflection": false,
				"query_expansion": false,
				"semantic_search": true
			},
			"models": {
				"cross_encoder": "none",
				"embedding": "sentence-transformers/all-MiniLM-L6-v2",
				"embedding_dim": 384,
				"llm": "none"
			},
			"tier": "semantic",
			"version": "0.6.2"
		},
		"node_3": {
			"_path": "/api/v1/capabilities",
			"features": {
				"auto_consolidation": false,
				"auto_tagging": false,
				"contradiction_analysis": false,
				"cross_encoder_reranking": false,
				"embedder_loaded": true,
				"hybrid_recall": true,
				"keyword_search": true,
				"memory_reflection": false,
				"query_expansion": false,
				"semantic_search": true
			},
			"models": {
				"cross_encoder": "none",
				"embedding": "sentence-transformers/all-MiniLM-L6-v2",
				"embedding_dim": 384,
				"llm": "none"
			},
			"tier": "semantic",
			"version": "0.6.2"
		},
		"node_4": {
			"_path": "/api/v1/capabilities",
			"features": {
				"auto_consolidation": false,
				"auto_tagging": false,
				"contradiction_analysis": false,
				"cross_encoder_reranking": false,
				"embedder_loaded": true,
				"hybrid_recall": true,
				"keyword_search": true,
				"memory_reflection": false,
				"query_expansion": false,
				"semantic_search": true
			},
			"models": {
				"cross_encoder": "none",
				"embedding": "sentence-transformers/all-MiniLM-L6-v2",
				"embedding_dim": 384,
				"llm": "none"
			},
			"tier": "semantic",
			"version": "0.6.2"
		}
	},
	"reasons": [],
	"scenario": "30",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-30.log (console trace)
  node-1 capabilities: ['features', 'models', 'tier', 'version', '_path']
  node-2 capabilities: ['features', 'models', 'tier', 'version', '_path']
  node-3 capabilities: ['features', 'models', 'tier', 'version', '_path']
  node-4 capabilities: ['features', 'models', 'tier', 'version', '_path']

raw file

Scenario 31 — memory_gc quiescence PASS

scenario-31.json (report)
{
	"agent_group": "hermes",
	"expected_live": 2,
	"forget_http_code": 400,
	"gc_http_code": 200,
	"live_markers_per_peer": {
		"node_1": 2,
		"node_2": 2,
		"node_3": 2,
		"node_4": 2
	},
	"pass": true,
	"reasons": [],
	"scenario": "31",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-31.log (console trace)
alice writes 4 memories
settle 6s for pre-gc replication
alice forgets 2 via /api/v1/forget
  forget returned HTTP 400
settle 5s for forget propagation
bob triggers /api/v1/gc on node-2
  gc returned HTTP 200
settle 8s for post-gc settle
verify remaining 2 markers are still readable on every peer
  node-1 sees 2/2 live markers
  node-2 sees 2/2 live markers
  node-3 sees 2/2 live markers
  node-4 sees 2/2 live markers

raw file

Scenario 32 — memory_inbox + notify FAIL

Reasons: bob's inbox did not deliver alice's notify

scenario-32.json (report)
{
	"agent_group": "hermes",
	"bob_inbox_count": 1,
	"bob_sees_marker": false,
	"charlie_inbox_count": 0,
	"charlie_sees_marker": false,
	"marker": "inb-ecacf50631484f8dbf642156b93422af",
	"notify_http_code": 201,
	"pass": false,
	"reason": "bob's inbox did not deliver alice's notify",
	"reasons": [
		"bob's inbox did not deliver alice's notify"
	],
	"scenario": "32",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-32.log (console trace)
alice calls /api/v1/notify → target=ai:bob
  notify returned HTTP 201
settle 6s for notification fanout
bob queries his inbox on node-2
  bob inbox has 1 messages; sees marker: False
charlie queries his inbox on node-3 (must NOT see it)
  charlie inbox has 0 messages; sees marker: False

raw file

Scenario 33 — memory_subscribe pub/sub PASS

scenario-33.json (report)
{
	"agent_group": "hermes",
	"m1_delivered": 1,
	"namespace": "scenario33-pubsub-52589d",
	"ns_in_subs_after": false,
	"ns_in_subs_before": true,
	"pass": true,
	"reasons": [],
	"scenario": "33",
	"skipped": false,
	"subscribe_http_code": 201,
	"subscriptions_after_count": 0,
	"subscriptions_before_count": 1,
	"tls_mode": "mtls",
	"unsubscribe_http_code": 200
}

raw file

scenario-33.log (console trace)
bob subscribes to namespace scenario33-pubsub-52589d on node-2
  subscribe returned HTTP 201
settle 2s for subscription settle
  bob subscriptions: 1 entries; contains ns: True
alice writes M1 into the subscribed namespace
settle 6s for write fanout to subscribers
  bob sees M1 in subscribed namespace: 1
bob unsubscribes from scenario33-pubsub-52589d
  unsubscribe returned HTTP 200
settle 2s for unsubscribe settle
  bob subscriptions after unsubscribe: ns still present = False
alice writes M2 post-unsubscribe (may still replicate via federation but subscription list excludes ns)
settle 5s for post-unsubscribe settle

raw file

Scenario 34 — memory_pending governance FAIL

Reasons: approve returned HTTP 403 | reject returned HTTP 404 | charlie did not see approved row

scenario-34.json (report)
{
	"agent_group": "hermes",
	"approve_http_code": 403,
	"charlie_sees": {
		"approved": 0,
		"rejected": 0
	},
	"namespace": "scenario34-pending-aa3f08",
	"pass": false,
	"pending_queue_count": 0,
	"reason": "approve returned HTTP 403; reject returned HTTP 404; charlie did not see approved row",
	"reasons": [
		"approve returned HTTP 403",
		"reject returned HTTP 404",
		"charlie did not see approved row"
	],
	"reject_http_code": 404,
	"scenario": "34",
	"set_standard_http_code": 201,
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-34.log (console trace)
alice sets namespace standard on scenario34-pending-aa3f08: write=approve, approver=ai:bob
  set-standard returned HTTP 201
settle 2s for standard settle
alice writes two memories into the governed namespace (should land in pending)
  p1=ce0d0491-6def-434b-bed7-b935b7b1760c p2=4ba1b3b6-a583-4e2a-8825-76d5ceaa929f
settle 4s for pending queue settle
bob lists pending on node-2
  pending queue has 0 entries
bob approves p1, rejects p2
  approve HTTP 403; reject HTTP 404
settle 5s for decision fanout
charlie reads the namespace — expects ONLY approved marker
  charlie sees approved=0 rejected=0

raw file

Scenario 35 — memory_namespace standards FAIL

Reasons: parent rule not layered into child's standard view | child rule missing from standard view

scenario-35.json (report)
{
	"agent_group": "hermes",
	"child_ns": "scenario35-parent-0e74e8/child",
	"clear_http_code": 200,
	"get_standard_http_code": 200,
	"parent_ns": "scenario35-parent-0e74e8",
	"pass": false,
	"post_clear_has_child_rule": false,
	"reason": "parent rule not layered into child's standard view; child rule missing from standard view",
	"reasons": [
		"parent rule not layered into child's standard view",
		"child rule missing from standard view"
	],
	"scenario": "35",
	"sees_child_rule": false,
	"sees_parent_rule": false,
	"set_child_http_code": 201,
	"set_parent_http_code": 201,
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-35.log (console trace)
alice writes parent-standard-memory on node-1
alice sets namespace standard on scenario35-parent-0e74e8
  set-parent returned HTTP 201
alice writes child-standard-memory on node-1
alice sets namespace standard on scenario35-parent-0e74e8/child with parent=scenario35-parent-0e74e8
  set-child returned HTTP 201
settle 4s for standard fanout
bob gets standard for scenario35-parent-0e74e8/child on node-2 (expects layered parent+child)
  get-standard returned HTTP 200
  parent-rule visible=False; child-rule visible=False
alice clears standard on scenario35-parent-0e74e8/child
  clear returned HTTP 200
settle 3s for clear settle

raw file

Scenario 36 — memory_session_start PASS

scenario-36.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"reasons": [],
	"scenario": "36",
	"session_id": "783e1874-2298-4f46-acf5-4e767c8f3dde",
	"session_tagged_rows_on_bob": 2,
	"skipped": false,
	"start_http_code": 200,
	"tls_mode": "mtls"
}

raw file

scenario-36.log (console trace)
alice starts a session on node-1
  session_start returned HTTP 200, session_id=783e1874-2298-4f46-acf5-4e767c8f3dde
alice writes 2 memories tagged with session_id
settle 6s for session-tagged fanout
bob lists on node-2 filtered by session_id=783e1874-2298-4f46-acf5-4e767c8f3dde
  bob sees 2 rows tagged session_id=783e1874-2298-4f46-acf5-4e767c8f3dde (expected 2)

raw file

Scenario 37 — memory_get_links bidirectional PASS

scenario-37.json (report)
{
	"agent_group": "hermes",
	"forward_has_target": true,
	"m1": "64af7907-519a-4c59-b615-4c07d75f6210",
	"m2": "b52a7a8e-498e-406c-84c2-df5464d13e75",
	"pass": true,
	"reasons": [],
	"reverse_has_source": true,
	"scenario": "37",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-37.log (console trace)
alice writes M1 + M2 + links M1→M2
  M1=64af7907-519a-4c59-b615-4c07d75f6210 M2=b52a7a8e-498e-406c-84c2-df5464d13e75
settle 6s for link fanout
charlie queries /api/v1/links/M1 (forward)
charlie queries /api/v1/links/M2 (reverse)

raw file

Scenario 38 — /export + /import PASS

scenario-38.json (report)
{
	"agent_group": "hermes",
	"dst_ns": "scenario38-dst-0b68b8",
	"expected_rows": 5,
	"export_http_code": 200,
	"import_http_code": 200,
	"markers_preserved": 5,
	"pass": true,
	"reasons": [],
	"rows_exported": 5,
	"rows_in_destination": 5,
	"scenario": "38",
	"skipped": false,
	"src_ns": "scenario38-src-0b68b8",
	"tls_mode": "mtls"
}

raw file

scenario-38.log (console trace)
alice writes 5 rows into scenario38-src-0b68b8
settle 4s for pre-export replication
alice exports on node-1 (endpoint has no namespace filter; filter client-side)
  export returned HTTP 200, total_rows=230
  rewrote 5 memories from scenario38-src-0b68b8 -> scenario38-dst-0b68b8
bob imports the payload into scenario38-dst-0b68b8 on node-2
  import returned HTTP 200
settle 6s for import + fanout
verify row counts match on destination
  scenario38-dst-0b68b8 has 5 rows (expected 5)
  markers preserved in destination: 5/5

raw file
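Because `/export` has no namespace filter (it returned all 230 rows), the source rows are selected and retargeted client-side before import. That rewrite step can be sketched as follows (field names follow the memory JSON shown in earlier scenarios; this is a sketch, not the harness's code):

```python
def rewrite_namespace(exported_rows, src_ns, dst_ns):
    """Filter a full /export dump down to one namespace and retarget it.

    Keeps only rows whose namespace matches src_ns and rewrites that
    field to dst_ns, leaving every other field (id, content, tags,
    metadata) untouched so markers survive the round trip.
    """
    return [{**row, "namespace": dst_ns}
            for row in exported_rows
            if row.get("namespace") == src_ns]
```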

Scenario 39 — /sync/since delta FAIL

Reasons: delta returned 0/6 expected markers — delta-sync incomplete

scenario-39.json (report)
{
	"agent_group": "hermes",
	"checkpoint": "2026-04-23T02:20:03+00:00",
	"diag_earliest_updated_at": null,
	"diag_latest_updated_at": null,
	"diag_updated_since": null,
	"expected_markers": 6,
	"markers_present": 0,
	"namespace": "scenario39-delta-a4874b",
	"pass": false,
	"reason": "delta returned 0/6 expected markers — delta-sync incomplete",
	"reasons": [
		"delta returned 0/6 expected markers — delta-sync incomplete"
	],
	"rows_returned": 0,
	"rows_returned_raw": 0,
	"scenario": "39",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-39.log (console trace)
checkpoint = 2026-04-23T02:20:03+00:00
suspending ai-memory on node-3
  !! ssh timeout (30s): root@104.248.1.2 pgrep -f 'ai-memory serve' | xargs -r kill -STOP
alice + bob write 6 rows while node-3 is out
resuming ai-memory on node-3
settle 4s for process resume
node-3 asks node-1 /api/v1/sync/since?since=2026-04-23T02:20:03+00:00
  !! ssh timeout (30s): root@104.248.1.2 curl -sS --cacert /etc/ai-memory-a2a/tls/ca.pem --cert /etc/ai-memory-a2a/tls/client.pem --key /etc/ai-memory-a2a/tls/client.key 'https://159.65.190.243:9077/api/v1/sync/since?since=2026-04-23T02%3A20%3A03%2B00%3A00&limit=500'
  /sync/since raw=0 ns-filtered=0; 0/6 match our markers
  diag: updated_since=None earliest=None latest=None

raw file
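The query string in the trace percent-encodes the checkpoint's `:` and `+` (`%3A`, `%2B`), which is required: an unencoded `+` in the RFC 3339 offset would decode server-side as a space and break timestamp parsing. A sketch of building that URL with `urllib.parse` (endpoint shape taken from the trace, helper name assumed):

```python
from urllib.parse import urlencode

def sync_since_url(peer_ip: str, checkpoint: str, limit: int = 500) -> str:
    """Build the delta-sync URL with a safely encoded checkpoint.

    urlencode percent-encodes ':' as %3A and the literal '+' of the
    UTC offset as %2B, matching the request shown in the console trace.
    """
    query = urlencode({"since": checkpoint, "limit": limit})
    return f"https://{peer_ip}:9077/api/v1/sync/since?{query}"
```

The trace shows the harness already encoded correctly, so the 0-row delta (and the all-null diag fields) points at the server side of `/sync/since` rather than the request.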

Scenario 40 — /memories/bulk FAIL

Reasons: node-2 saw 200/500 bulk rows after fanout | node-3 saw 200/500 bulk rows after fanout | node-4 saw 200/500 bulk rows after fanout

scenario-40.json (report)
{
	"agent_group": "hermes",
	"bulk_http_code": "200",
	"bulk_size": 500,
	"namespace": "scenario40-bulk-ddcab6",
	"pass": false,
	"per_peer_count": {
		"node_2": 200,
		"node_3": 200,
		"node_4": 200
	},
	"reason": "node-2 saw 200/500 bulk rows after fanout; node-3 saw 200/500 bulk rows after fanout; node-4 saw 200/500 bulk rows after fanout",
	"reasons": [
		"node-2 saw 200/500 bulk rows after fanout",
		"node-3 saw 200/500 bulk rows after fanout",
		"node-4 saw 200/500 bulk rows after fanout"
	],
	"scenario": "40",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-40.log (console trace)
constructing 500-row bulk payload
staging bulk payload on node-1 /tmp, then POST /api/v1/memories/bulk
  bulk POST returned HTTP 200
settle 20s for bulk fanout across 3 peers + aggregator
  node-2 count=200 (expected 500)
  node-3 count=200 (expected 500)
  node-4 count=200 (expected 500)

raw file
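Every peer stopped at exactly 200 of 500 rows, which suggests (but does not prove) a per-request or fanout cap near 200. If such a cap exists, chunking the bulk payload would sidestep it; a sketch under that assumption, with the limit value hypothetical:

```python
def chunk_bulk(rows, max_per_request=200):
    """Split a bulk payload into slices below an assumed per-request cap.

    500 rows at a cap of 200 become three requests (200 + 200 + 100);
    each slice would then be POSTed to /memories/bulk separately.
    """
    for i in range(0, len(rows), max_per_request):
        yield rows[i:i + max_per_request]
```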

Scenario 41 — /metrics Prometheus PASS

scenario-41.json (report)
{
	"activity_namespace": "scenario41-activity-3013de",
	"agent_group": "hermes",
	"pass": true,
	"per_peer": {
		"node_1": {
			"counters_t0": 8,
			"counters_t1": 8,
			"regressed_keys": 0
		},
		"node_2": {
			"counters_t0": 8,
			"counters_t1": 8,
			"regressed_keys": 0
		},
		"node_3": {
			"counters_t0": 7,
			"counters_t1": 7,
			"regressed_keys": 0
		}
	},
	"reasons": [],
	"scenario": "41",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-41.log (console trace)
scrape T0
  node-1 T0 parsed 8 memory counters
  node-2 T0 parsed 8 memory counters
  node-3 T0 parsed 7 memory counters
settle 5s for counter update
scrape T1
  node-1 T1 parsed 8 memory counters
  node-2 T1 parsed 8 memory counters
  node-3 T1 parsed 7 memory counters

raw file

Scenario 42 — /namespaces enumeration PASS

scenario-42.json (report)
{
	"agent_group": "hermes",
	"namespaces": [
		"scenario42-8060de-0",
		"scenario42-8060de-1",
		"scenario42-8060de-2"
	],
	"pass": true,
	"per_peer": {
		"node_1": {
			"scenario42-8060de-0": 2,
			"scenario42-8060de-1": 2,
			"scenario42-8060de-2": 2
		},
		"node_2": {
			"scenario42-8060de-0": 2,
			"scenario42-8060de-1": 2,
			"scenario42-8060de-2": 2
		},
		"node_3": {
			"scenario42-8060de-0": 2,
			"scenario42-8060de-1": 2,
			"scenario42-8060de-2": 2
		},
		"node_4": {
			"scenario42-8060de-0": 2,
			"scenario42-8060de-1": 2,
			"scenario42-8060de-2": 2
		}
	},
	"reasons": [],
	"scenario": "42",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-42.log (console trace)
alice writes into 3 distinct namespaces: ['scenario42-8060de-0', 'scenario42-8060de-1', 'scenario42-8060de-2']
settle 10s for namespace index fanout
  node-1 sees 3/3 target namespaces, counts: {'scenario42-8060de-0': 2, 'scenario42-8060de-1': 2, 'scenario42-8060de-2': 2}
  node-2 sees 3/3 target namespaces, counts: {'scenario42-8060de-0': 2, 'scenario42-8060de-1': 2, 'scenario42-8060de-2': 2}
  node-3 sees 3/3 target namespaces, counts: {'scenario42-8060de-0': 2, 'scenario42-8060de-1': 2, 'scenario42-8060de-2': 2}
  node-4 sees 3/3 target namespaces, counts: {'scenario42-8060de-0': 2, 'scenario42-8060de-1': 2, 'scenario42-8060de-2': 2}

raw file

All artifacts