
Campaign a2a-hermes-v0.6.2-patch2-r23c-mtls FAIL

Agent group
hermes (homogeneous)
ai-memory ref
release/v0.6.2
Completed at
2026-04-23T18:03:50Z
Overall pass
false
Skipped reports
1

Infrastructure

Provider
digitalocean
Region
nyc3
Droplet size
s-2vcpu-4gb
Topology
4-node federation mesh (W=2/N=4)
Scenarios started
2026-04-23T17:51:30Z
Scenarios ended
2026-04-23T18:03:50Z
Dispatched by
alphaonedev
Harness SHA
8715b5d46730
Workflow run
https://github.com/alphaonedev/ai-memory-ai2ai-gate/actions/runs/24849119434

Node roster

#  Role         Agent ID    Public IP        Private IP
1  agent        ai:alice    134.209.160.158  10.11.2.5
2  agent        ai:bob      159.203.123.161  10.11.2.2
3  agent        ai:charlie  45.55.168.255    10.11.2.4
4  memory-only  -           138.197.84.28    10.11.2.3

Baseline attestation BASELINE OK

Per the authoritative baseline spec, every agent node must emit a self-attestation before any scenario is permitted to run. This run's attestation:

Spec version: 1.4.0 — see authoritative baseline.
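The gating logic implied by the attestation fields can be sketched as follows. This is a minimal sketch, not the harness's actual code; the field names follow a2a-baseline.json below, and F6/F7 are treated as gating because this run's tls_mode is mtls.

```python
# Minimal sketch of per-node baseline gating, assuming the field names used
# in a2a-baseline.json. F2b is advisory (non-gating); F6/F7 gate here
# because tls_mode is "mtls" in this run.
GATING_PROBES = [
    "substrate_http_canary_f2a",
    "mesh_connectivity_f4",
    "ai_memory_mcp_stdio_f5",
    "tls_handshake_f6",
    "mtls_enforcement_f7",
]

def node_baseline_pass(node: dict) -> bool:
    """AND of config attestation, negative invariants, and gating probes."""
    config_ok = all(node["config_attestation"].values())
    invariants_ok = all(
        v for k, v in node["negative_invariants"].items()
        if not k.startswith("_")  # skip the _description annotation
    )
    probes = node["functional_probes"]
    probes_ok = all(probes.get(p) is True for p in GATING_PROBES)
    return config_ok and invariants_ok and probes_ok
```

An aggregator would then AND node_baseline_pass across all agent nodes to produce the top-level baseline_pass.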

Node    Agent       Type    Framework                         Config SHA    Pass
node-1  ai:alice    hermes  Hermes Agent v0.10.0 (2026.4.16)  fa358f9a9059  PASS
node-2  ai:bob      hermes  Hermes Agent v0.10.0 (2026.4.16)  21635cf63640  PASS
node-3  ai:charlie  hermes  Hermes Agent v0.10.0 (2026.4.16)  ce52d772ef5a  PASS

Per-check columns (Authentic, MCP ai-memory, xAI cfg, xAI default, Agent ID, Federation, UFW off, iptables, dead-man, F1 xAI, F2a substrate, F2b agent (non-gating)) are recorded per node in a2a-baseline.json below.
a2a-baseline.json
{
	"baseline_pass": true,
	"per_node": [
		{
			"spec_version": "1.4.0",
			"agent_type": "hermes",
			"agent_id": "ai:alice",
			"node_index": "1",
			"framework_version": "Hermes Agent v0.10.0 (2026.4.16)",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "https://10.11.2.2:9077,https://10.11.2.4:9077,https://10.11.2.3:9077",
			"config_file_sha256": "fa358f9a90597243fb96224babd541399bd7b1e972f364605308ab1e2d9dd2c7",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "401f21a7-0f62-4cfd-a4af-f32917eef38b",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "96766d6b-5408-4a55-b8b3-b54c2fb39fcd",
				"agent_canary_response_head": "Traceback (most recent call last):   File \"/usr/local/bin/hermes\", line 11, in <module>     main()   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 8859, in main     args.func(args)   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 1159, in cmd_chat     from cli import main as cli_main   File \"/root/.hermes/hermes-agent/cli.py\", line 43, in <module>     from prompt_toolkit.history import FileHistory ModuleNotFoundError: No module named 'prompt_toolkit' ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.11.2.2:9077:OK,10.11.2.4:9077:OK,10.11.2.3:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "mtls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "401f21a7-0f62-4cfd-a4af-f32917eef38b",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "hermes",
			"agent_id": "ai:bob",
			"node_index": "2",
			"framework_version": "Hermes Agent v0.10.0 (2026.4.16)",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "https://10.11.2.5:9077,https://10.11.2.4:9077,https://10.11.2.3:9077",
			"config_file_sha256": "21635cf6364057fd2a004d28aac89abf8438671d85f9fd2ed1e654d812d23ff1",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "Icannotcomplywiththisrequestasitappearsto",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "b14d0bd6-66b5-486e-a04f-dfff43139200",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "aec20b2f-b602-43e7-9ed1-a3b308e45cdc",
				"agent_canary_response_head": "Traceback (most recent call last):   File \"/usr/local/bin/hermes\", line 11, in <module>     main()   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 8859, in main     args.func(args)   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 1159, in cmd_chat     from cli import main as cli_main   File \"/root/.hermes/hermes-agent/cli.py\", line 43, in <module>     from prompt_toolkit.history import FileHistory ModuleNotFoundError: No module named 'prompt_toolkit' ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.11.2.5:9077:OK,10.11.2.4:9077:OK,10.11.2.3:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "mtls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "b14d0bd6-66b5-486e-a04f-dfff43139200",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "hermes",
			"agent_id": "ai:charlie",
			"node_index": "3",
			"framework_version": "Hermes Agent v0.10.0 (2026.4.16)",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "https://10.11.2.5:9077,https://10.11.2.2:9077,https://10.11.2.3:9077",
			"config_file_sha256": "ce52d772ef5a00968db29fb80eea7a14206b0a258a00ff2165db725405474618",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "ca066ca5-1a21-4e3d-8d1c-61b7983157f5",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "0f43c6c2-411e-4e5f-b903-ae498daa1b8b",
				"agent_canary_response_head": "Traceback (most recent call last):   File \"/usr/local/bin/hermes\", line 11, in <module>     main()   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 8859, in main     args.func(args)   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 1159, in cmd_chat     from cli import main as cli_main   File \"/root/.hermes/hermes-agent/cli.py\", line 43, in <module>     from prompt_toolkit.history import FileHistory ModuleNotFoundError: No module named 'prompt_toolkit' ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.11.2.5:9077:OK,10.11.2.2:9077:OK,10.11.2.3:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "mtls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "ca066ca5-1a21-4e3d-8d1c-61b7983157f5",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		}
	]
}

raw file

F3 — peer A2A via shared memory F3 OK

Workflow-level probe answering "can agents communicate through ai-memory?". Writer ai:alice posted canary UUID ea725a57-ec94-41f6-afb3-21a474cde31e to namespace _baseline_peer_canary via node-1's local ai-memory serve HTTP. After the W=2 fanout settled, the probe confirmed the canary on each of the 3 peer nodes via their local GET /api/v1/memories.

f3-peer-a2a.json
{
	"probe": "F3",
	"name": "peer-a2a-via-shared-memory",
	"description": "Writer agent posts a canary via local ai-memory HTTP on node-1; verifies the row propagates to the 3 peer nodes (W=2/N=4 quorum) before scenarios run.",
	"canary_uuid": "ea725a57-ec94-41f6-afb3-21a474cde31e",
	"canary_namespace": "_baseline_peer_canary",
	"writer_agent": "ai:alice",
	"pass": true
}

raw file
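The probe's shape can be sketched roughly like this. Assumptions not confirmed by the report: a POST /api/v1/memories write endpoint symmetric to the documented GET, a JSON-list response shape, and an mTLS-capable SSL context supplied by the caller.

```python
# Rough sketch of the F3 peer-A2A probe: write a canary on the writer node,
# wait for W=2/N=4 fanout, then look for it on every peer. The POST payload
# shape and TLS setup are assumptions; only GET /api/v1/memories is documented.
import json
import ssl
import time
import urllib.request

def rows_contain_canary(rows: list, canary: str) -> bool:
    """True iff any returned memory row carries the canary string."""
    return any(canary in str(row.get("content", "")) for row in rows)

def f3_probe(writer_url: str, peer_urls: list, namespace: str, canary: str,
             ctx: ssl.SSLContext, settle_s: float = 15.0) -> bool:
    body = json.dumps({"namespace": namespace, "content": canary}).encode()
    req = urllib.request.Request(
        f"{writer_url}/api/v1/memories", data=body,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, context=ctx)
    time.sleep(settle_s)  # let the W=2 quorum fanout settle before reading peers
    ok = True
    for peer in peer_urls:
        with urllib.request.urlopen(
                f"{peer}/api/v1/memories?namespace={namespace}",
                context=ctx) as resp:
            rows = json.loads(resp.read())
        ok = ok and rows_contain_canary(rows, canary)
    return ok
```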

Run focus

mTLS federation was stable except for semantic-search and delta-sync failures.

What this campaign tested: Exercised memory sharing, semantic recall, delta-sync, linking, versioning, bulk ops, and auth in a 4-node mTLS federation over HTTP transport with semantic primitives.

What it demonstrated: Proved reliable core propagation and mTLS security but exposed gaps in semantic query accuracy and delta-sync completeness.

AI NHI analysis · Claude Opus 4.7

mTLS federation was stable except for semantic-search and delta-sync failures.

PARTIAL — 34/37 scenarios passed, 2 failed, 1 skipped.

For three audiences

Non-technical end users

In this secure setup, AI agents mostly shared memories reliably across the network. However, searching for similar ideas sometimes missed key memories, and syncing recent changes didn't capture everything as expected. Overall, basic sharing works well, but advanced search needs improvement.

C-level decision makers

Risk remains low for basic memory federation under mTLS, but the semantic-search and delta-sync failures block production readiness for those features. Customer-facing claims of reliable semantic recall or efficient syncing are not yet viable. This run shows a regression in delta-sync compared to prior patches, with new semantic issues emerging.

Engineers & architects

Scenario 18 failed because semantic queries missed both Alice's and Bob's markers, likely from embedding-model inconsistencies or index corruption (probe F# semantic-recall). Scenario 39's delta-sync returned 0/6 expected markers, pointing to possible quorum read failures or timestamp-filtering bugs (probe S# delta-incomplete). mTLS auth primitives held in S20-25, and no other failures occurred, suggesting the issues are isolated to the semantic tier.

What changes going into the next campaign

Debug and patch delta-sync timestamp handling before re-running S39.
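Before the re-run, the S39 symptom can be re-checked by hand along these lines. This is a sketch: the /sync/since path comes from the testbook, but the "ts" query-parameter name and the JSON-list response shape are assumptions for illustration.

```python
# Hand re-check for the S39 symptom: pull a delta and diff expected markers.
# Only the /sync/since path is documented; the "ts" parameter name and the
# JSON-list response shape are assumptions.
import json
import urllib.request

def missing_markers(delta_rows: list, expected: set) -> set:
    """Expected markers that do not appear anywhere in the delta rows."""
    seen = {m for row in delta_rows for m in expected
            if m in str(row.get("content", ""))}
    return expected - seen

def fetch_delta(node_url: str, since_iso: str) -> list:
    url = f"{node_url}/sync/since?ts={since_iso}"  # parameter name assumed
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())
```

A healthy delta leaves missing_markers(...) empty; this run's S39 report implies all 6 expected markers would come back missing.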

Tests performed in this run

Every scenario that produced a JSON report in this campaign, in testbook order. Click a row's scenario id to jump to its full report below. See the Every test performed page for the authoritative catalog.

ID   Title                               Result   Reason
S1   Per-agent write + read (MCP stdio)  PASS
S1b  Per-agent write + read (HTTP)       PASS
S2   Shared-context handoff              PASS
S4   Federation-aware concurrent writes  PASS
S5   Consolidation + curation            PASS
S6   Contradiction detection             PASS
S9   Mutation round-trip                 PASS
S10  Deletion propagation                PASS
S11  Link integrity                      PASS
S12  Agent registration                  PASS
S13  Concurrent write contention         PASS
S14  Partition tolerance                 PASS
S15  Read-your-writes                    PASS
S16  Tier promotion                      PASS
S17  Stats consistency                   PASS
S18  Semantic query expansion            FAIL     semantic query did not surface alice's memory; semantic query did not surface bob's memory
S20  mTLS happy-path                     PASS
S21  Anonymous client rejected           PASS
S22  Identity spoofing resistance        PASS
S23  Malicious content fuzz              SKIPPED
S24  Byzantine peer                      PASS
S25  Clock skew tolerance                PASS
S28  memory_search keyword               PASS
S29  memory_archive lifecycle            PASS
S30  memory_capabilities handshake       PASS
S31  memory_gc quiescence                PASS
S32  memory_inbox + notify               PASS
S33  memory_subscribe pub/sub            PASS
S34  memory_pending governance           PASS
S35  memory_namespace standards          PASS
S36  memory_session_start                PASS
S37  memory_get_links bidirectional      PASS
S38  /export + /import                   PASS
S39  /sync/since delta                   FAIL     delta returned 0/6 expected markers — delta-sync incomplete
S40  /memories/bulk                      PASS
S41  /metrics Prometheus                 PASS
S42  /namespaces enumeration             PASS

Scenario 1 — Per-agent write + read (MCP stdio) PASS

scenario-1.json (report)
{
	"agent_group": "hermes",
	"expected_per_reader": 20,
	"pass": true,
	"per_agent": {
		"ai:alice": {
			"recall": 20
		},
		"ai:bob": {
			"recall": 20
		},
		"ai:charlie": {
			"recall": 20
		}
	},
	"per_namespace_node4": {
		"scenario1-ai:alice": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1-ai:bob": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1-ai:charlie": {
			"count": 10,
			"wrong_agent_id": 0
		}
	},
	"reasons": [],
	"scenario": "1",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-1.log (console trace)
phase A: each agent writes 10 memories via MCP
  ai:alice on 134.209.160.158
  ai:bob on 159.203.123.161
  ai:charlie on 45.55.168.255
settle 15s for W=2/N=4 convergence
phase B: each agent counts rows in the OTHER two namespaces
  ai:alice recalled 20 rows from the other two namespaces
  ai:bob recalled 20 rows from the other two namespaces
  ai:charlie recalled 20 rows from the other two namespaces
phase C: cross-cluster identity check on node-4
  ns=scenario1-ai:alice count=10 wrong_agent_id=0
  ns=scenario1-ai:bob count=10 wrong_agent_id=0
  ns=scenario1-ai:charlie count=10 wrong_agent_id=0

raw file

Scenario 1b — Per-agent write + read (HTTP) PASS

scenario-1b.json (report)
{
	"agent_group": "hermes",
	"expected_per_reader": 20,
	"pass": true,
	"path": "serve-http",
	"per_agent": {
		"ai:alice": {
			"recall": 20
		},
		"ai:bob": {
			"recall": 20
		},
		"ai:charlie": {
			"recall": 20
		}
	},
	"per_namespace_node4": {
		"scenario1b-ai:alice": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1b-ai:bob": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1b-ai:charlie": {
			"count": 10,
			"wrong_agent_id": 0
		}
	},
	"reasons": [],
	"scenario": "1b",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-1b.log (console trace)
phase A: each agent POSTs 10 memories to local serve
  ai:alice on 134.209.160.158
  ai:bob on 159.203.123.161
  ai:charlie on 45.55.168.255
settle 15s for W=2/N=4 convergence
phase B: count rows in other two namespaces via local serve HTTP
  ai:alice sees 20 rows from the other two namespaces
  ai:bob sees 20 rows from the other two namespaces
  ai:charlie sees 20 rows from the other two namespaces
phase C: cross-cluster identity check on node-4
  ns=scenario1b-ai:alice count=10 wrong_agent_id=0
  ns=scenario1b-ai:bob count=10 wrong_agent_id=0
  ns=scenario1b-ai:charlie count=10 wrong_agent_id=0

raw file

Scenario 2 — Shared-context handoff PASS

scenario-2.json (report)
{
	"ack_uuid": "a-68bcdb077a4d4895a0143bb8fa783924",
	"agent_group": "hermes",
	"handoff_uuid": "h-2d1f5578c9dd474882b44cd3008f6b23",
	"pass": true,
	"path": "serve-http",
	"per_agent": {
		"ai:alice": {
			"sees_ack": 1
		},
		"ai:bob": {
			"sees_handoff": 1
		}
	},
	"reasons": [],
	"scenario": "2",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-2.log (console trace)
phase A: ai:alice writes handoff to ai:bob (uuid=h-2d1f5578c9dd474882b44cd3008f6b23)
settle 8s for quorum fanout
phase B: ai:bob reads handoff on node-2
  ai:bob sees 1 handoff memories from ai:alice
phase C: ai:bob writes acknowledgement (uuid=a-68bcdb077a4d4895a0143bb8fa783924)
settle 8s for reverse-direction fanout
phase D: ai:alice reads ack on node-1
  ai:alice sees 1 ack memories from ai:bob

raw file

Scenario 4 — Federation-aware concurrent writes PASS

scenario-4.json (report)
{
	"agent_group": "hermes",
	"expected_per_agent": 30,
	"pass": true,
	"per_agent": {
		"ai:alice": {
			"count": 30,
			"wrong_agent_id": 0
		},
		"ai:bob": {
			"count": 30,
			"wrong_agent_id": 0
		},
		"ai:charlie": {
			"count": 30,
			"wrong_agent_id": 0
		}
	},
	"reasons": [],
	"scenario": "4",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-4.log (console trace)
phase A: launching concurrent 30-row bursts from 3 agents
  ai:alice burst ok=30/30
  ai:bob burst ok=30/30
  ai:charlie burst ok=30/30
settle 20s for W=2 fanout convergence
phase B: querying node-4 aggregator for per-agent counts
  ai:alice: count=30 (expected 30) wrong_agent_id=0
  ai:bob: count=30 (expected 30) wrong_agent_id=0
  ai:charlie: count=30 (expected 30) wrong_agent_id=0

raw file

Scenario 5 — Consolidation + curation PASS

scenario-5.json (report)
{
	"agent_group": "hermes",
	"consolidate_http_code": 201,
	"consolidated_from_agents": [
		"ai:charlie",
		"ai:bob",
		"ai:alice"
	],
	"consolidated_id": "d1ee37d3-1ed4-44d7-8551-d308608cd819",
	"pass": true,
	"reasons": [],
	"scenario": "5",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-5.log (console trace)
phase A: each agent writes 3 related memories
  ai:alice on 134.209.160.158
  ai:bob on 159.203.123.161
  ai:charlie on 45.55.168.255
settle 8s for quorum fanout
phase B: collect source ids on node-1, then trigger consolidate
  source ids (count=9): ['3bc9a62f-a51c-4d1a-86de-7dad40b3204b', '13055578-f441-45a6-887e-5e056c67939c', 'aba193e2-802c-4983-b892-a0d3a3e8f10a', '6ef48e71-bba3-4847-9394-8b5261bea657', '1ae57c80-f691-4d9f-8788-257d9b61ca66']...
  consolidate HTTP 201, consolidated_id=d1ee37d3-1ed4-44d7-8551-d308608cd819
settle 10s for consolidation fanout
phase C: verifying consolidated_from_agents on node-4
  consolidated_from_agents=['ai:charlie', 'ai:bob', 'ai:alice']

raw file

Scenario 6 — Contradiction detection PASS

scenario-6.json (report)
{
	"agent_group": "hermes",
	"alice_id": "1d6e2fb2-c87e-4031-b9f5-1ad6712d9f5c",
	"bob_id": "e7856701-ec6f-4e66-abc0-12cb235ac6a6",
	"charlie_sees_both_memories": true,
	"charlie_sees_contradicts_link": true,
	"detect_http_code": 200,
	"pass": true,
	"reasons": [],
	"scenario": "6",
	"skipped": false,
	"tls_mode": "mtls",
	"topic": "sky-color-cbcc3de7"
}

raw file

scenario-6.log (console trace)
alice writes claim: "sky-color-cbcc3de7 is blue" on node-1
bob writes contradicting claim: "sky-color-cbcc3de7 is red" on node-2
  alice.id=1d6e2fb2-c87e-4031-b9f5-1ad6712d9f5c bob.id=e7856701-ec6f-4e66-abc0-12cb235ac6a6
settle 10s for quorum fanout + contradiction indexing
charlie queries /api/v1/contradictions on node-3
  HTTP 200
  sees both memories: True; sees contradicts link: True

raw file

Scenario 9 — Mutation round-trip PASS

scenario-9.json (report)
{
	"agent_group": "hermes",
	"charlie_view": {
		"agent_id": "ai:alice",
		"content": "v2-03c49987b8ea4153b9376482e8562cd9"
	},
	"m1_id": "df1dfe9a-57bb-4ca7-a6d5-35de52afd978",
	"pass": true,
	"put_http_code": 200,
	"reasons": [],
	"scenario": "9",
	"skipped": false,
	"tls_mode": "mtls",
	"v1_uuid": "v1-d121f7317720415ea2f240d64a4ef40c",
	"v2_uuid": "v2-03c49987b8ea4153b9376482e8562cd9"
}

raw file

scenario-9.log (console trace)
alice writes M1 content=v1-d121f7317720415ea2f240d64a4ef40c on node-1
  M1 id=df1dfe9a-57bb-4ca7-a6d5-35de52afd978
settle 5s for initial replication
bob updates M1 content=v2-03c49987b8ea4153b9376482e8562cd9 on node-2 via PUT
  PUT returned HTTP 200
settle 8s for update fanout
charlie reads M1 on node-3 and checks content + provenance
  charlie sees content="v2-03c49987b8ea4153b9376482e8562cd9" agent_id="ai:alice"

raw file

Scenario 10 — Deletion propagation PASS

scenario-10.json (report)
{
	"agent_group": "hermes",
	"delete_http_code": 200,
	"m1_id": "500c5a53-5e37-42ec-8256-7a456b30971b",
	"pass": true,
	"post_delete_hits": {
		"node-2": 0,
		"node-3": 0,
		"node-4": 0
	},
	"post_delete_still_visible_peers": 0,
	"pre_delete_visible_peers": 3,
	"reasons": [],
	"scenario": "10",
	"skipped": false,
	"tls_mode": "mtls",
	"uuid": "d-79bd589ef5964f079eefb71de2d98af4"
}

raw file

scenario-10.log (console trace)
alice writes M1 content=d-79bd589ef5964f079eefb71de2d98af4 on node-1
  created memory id=500c5a53-5e37-42ec-8256-7a456b30971b
settle 8s for pre-delete fanout
pre-delete: verifying M1 is visible on all peers
  pre-delete node-2 sees 1
  pre-delete node-3 sees 1
  pre-delete node-4 sees 1
alice deletes M1 on node-1
  DELETE returned HTTP 200
settle 15s for tombstone propagation
post-delete: verifying M1 is GONE from all peers
  post-delete node-2 sees 0 (expected 0)
  post-delete node-3 sees 0 (expected 0)
  post-delete node-4 sees 0 (expected 0)

raw file

Scenario 11 — Link integrity PASS

scenario-11.json (report)
{
	"agent_group": "hermes",
	"charlie_sees_link": 1,
	"link_http_code": 201,
	"m1_id": "401422bb-fd4c-4dcb-8a16-d72d689884ed",
	"m2_id": "286a5fe7-bf36-46ba-94fd-8068698477b4",
	"pass": true,
	"reasons": [],
	"relation": "related_to",
	"scenario": "11",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-11.log (console trace)
alice writes M1 on node-1
bob writes M2 on node-2
  M1=401422bb-fd4c-4dcb-8a16-d72d689884ed M2=286a5fe7-bf36-46ba-94fd-8068698477b4
settle 5s for pre-link replication
alice links M1 -> M2 with relation=related_to
  link POST returned HTTP 201
settle 8s for link fanout
charlie queries links of M1 on node-3
  charlie sees M1->M2 link: 1 (expected >=1)

raw file

Scenario 12 — Agent registration PASS

scenario-12.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"peers_see": {
		"node_2": 1,
		"node_3": 1,
		"node_4": 1
	},
	"reasons": [],
	"register_http_code": 201,
	"registered_agent": "ai:dave-probe-4c7719e2",
	"scenario": "12",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-12.log (console trace)
alice registers new agent ai:dave-probe-4c7719e2 on node-1
  POST /api/v1/agents returned HTTP 201
settle 10s for agent-list fanout
  node-2 sees ai:dave-probe-4c7719e2: 1 (expected >=1)
  node-3 sees ai:dave-probe-4c7719e2: 1 (expected >=1)
  node-4 sees ai:dave-probe-4c7719e2: 1 (expected >=1)

raw file

Scenario 13 — Concurrent write contention PASS

scenario-13.json (report)
{
	"agent_group": "hermes",
	"m1_id": "39d2d0f6-8726-4c5e-8e4f-d7b6b8b602ed",
	"pass": true,
	"peer_view": {
		"node_1": "vb-bd7585ecf89844729657f811ae5aa5b6",
		"node_2": "vb-bd7585ecf89844729657f811ae5aa5b6",
		"node_3": "vb-bd7585ecf89844729657f811ae5aa5b6",
		"node_4": "vb-bd7585ecf89844729657f811ae5aa5b6"
	},
	"reasons": [],
	"scenario": "13",
	"skipped": false,
	"submitted": {
		"v0": "v0-ee7fe4ff2d6c4d45b2b06d67dff1981b",
		"vA_alice": "va-d34beaae08e3488087b4ab47293fc796",
		"vB_bob": "vb-bd7585ecf89844729657f811ae5aa5b6"
	},
	"tls_mode": "mtls"
}

raw file

scenario-13.log (console trace)
alice writes M1 content=v0-ee7fe4ff2d6c4d45b2b06d67dff1981b on node-1
  M1 id=39d2d0f6-8726-4c5e-8e4f-d7b6b8b602ed
settle 5s for initial replication
alice + bob issue concurrent PUTs (vA=va-d34beaae08e3488087b4ab47293fc796 from alice, vB=vb-bd7585ecf89844729657f811ae5aa5b6 from bob)
  concurrent PUT results: [(0, {'body': {'access_count': 0, 'confidence': 1.0, 'content': 'va-d34beaae08e3488087b4ab47293fc796', 'created_at': '2026-04-23T17:56:03.734264377+00:00', 'expires_at': '2026-04-30T17:56:03.734264377+00:00', 'id': '39d2d0f6-8726-4c5e-8e4f-d7b6b8b602ed', 'metadata': {'agent_id': 'ai:alice', 'scenario': '13'}, 'namespace': 'scenario13-contention', 'priority': 5, 'source': 'api', 'tags': [], 'tier': 'mid', 'title': 'm1', 'updated_at': '2026-04-23T17:56:09.228480185+00:00'}, 'http_code': 200}), (0, {'body': {'access_count': 0, 'confidence': 1.0, 'content': 'vb-bd7585ecf89844729657f811ae5aa5b6', 'created_at': '2026-04-23T17:56:03.734264377+00:00', 'expires_at': '2026-04-30T17:56:03.734264377+00:00', 'id': '39d2d0f6-8726-4c5e-8e4f-d7b6b8b602ed', 'metadata': {'agent_id': 'ai:alice', 'scenario': '13'}, 'namespace': 'scenario13-contention', 'priority': 5, 'source': 'api', 'tags': [], 'tier': 'mid', 'title': 'm1', 'updated_at': '2026-04-23T17:56:09.254900772+00:00'}, 'http_code': 200})]
settle 10s for quorum convergence
  node-1 sees content=vb-bd7585ecf89844729657f811ae5aa5b6
  node-2 sees content=vb-bd7585ecf89844729657f811ae5aa5b6
  node-3 sees content=vb-bd7585ecf89844729657f811ae5aa5b6
  node-4 sees content=vb-bd7585ecf89844729657f811ae5aa5b6

raw file

Scenario 14 — Partition tolerance PASS

scenario-14.json (report)
{
	"agent_group": "hermes",
	"expected_post_recovery": 20,
	"node3_saw": 20,
	"partition_target": "node-3",
	"pass": true,
	"reasons": [],
	"scenario": "14",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-14.log (console trace)
suspending ai-memory on node-3 (SIGSTOP)
  !! ssh timeout (15s): root@45.55.168.255 pgrep -f 'ai-memory serve' | xargs -r kill -STOP
settle 2s for process-suspend observe
writing 10 memories each from alice + bob during node-3 outage
resuming ai-memory on node-3 (SIGCONT)
settle 20s for post-partition catchup
checking node-3 caught up
  node-3 sees 20 memories in scenario14-partition (expected 20)

raw file

Scenario 15 — Read-your-writes PASS

scenario-15.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"reasons": [],
	"scenario": "15",
	"skipped": false,
	"tls_mode": "mtls",
	"uuid": "ryw-2c6eaa16f7cb4e44ad4e3c242c79494f",
	"writer_sees_own_write": 1
}

raw file

scenario-15.log (console trace)
alice writes + immediately reads M1 on node-1 (uuid=ryw-2c6eaa16f7cb4e44ad4e3c242c79494f)
  alice sees 1 (expected 1) immediately after write

raw file

Scenario 16 — Tier promotion PASS

scenario-16.json (report)
{
	"agent_group": "hermes",
	"bob_sees_tier": "long",
	"m1_id": "c832f0d3-893a-440d-9f99-ef36303d1705",
	"pass": true,
	"promote_http_code": 200,
	"reasons": [],
	"scenario": "16",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-16.log (console trace)
alice writes M1 tier=short on node-1
  M1 id=c832f0d3-893a-440d-9f99-ef36303d1705
settle 5s for pre-promote replication
alice promotes M1 to tier=long
  promote returned HTTP 200
settle 8s for promotion fanout
  bob sees tier=long (expected long)

raw file

Scenario 17 — Stats consistency PASS

scenario-17.json (report)
{
	"agent_group": "hermes",
	"expected_count": 15,
	"pass": true,
	"per_peer": {
		"node_1": 15,
		"node_2": 15,
		"node_3": 15,
		"node_4": 15
	},
	"reasons": [],
	"scenario": "17",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-17.log (console trace)
phase A: each of 3 agents writes 5 memories to scenario17-stats
  ai:alice on 134.209.160.158
  ai:bob on 159.203.123.161
  ai:charlie on 45.55.168.255
settle 15s for W=2 fanout
phase B: querying count on every peer
  node-1 count=15 (expected 15)
  node-2 count=15 (expected 15)
  node-3 count=15 (expected 15)
  node-4 count=15 (expected 15)

raw file

Scenario 18 — Semantic query expansion FAIL

Reasons: semantic query did not surface alice's memory | semantic query did not surface bob's memory

scenario-18.json (report)
{
	"agent_group": "hermes",
	"pass": false,
	"query": "morning outdoor exercise routine",
	"reason": "semantic query did not surface alice's memory; semantic query did not surface bob's memory",
	"reasons": [
		"semantic query did not surface alice's memory",
		"semantic query did not surface bob's memory"
	],
	"scenario": "18",
	"skipped": false,
	"tls_mode": "mtls",
	"writers": [
		{
			"agent": "ai:alice",
			"marker": "alice-sunrise-b48bf242",
			"seen_by_charlie": 0
		},
		{
			"agent": "ai:bob",
			"marker": "bob-daybreak-89d46a26",
			"seen_by_charlie": 0
		}
	]
}

raw file

scenario-18.log (console trace)
alice writes A on node-1
bob writes B on node-2
settle 15s for fanout + index rebuild
charlie queries on node-3 with semantically-related prompt
  charlie sees alice's memory: 0 (expected >=1)
  charlie sees bob's memory: 0 (expected >=1)

raw file

Scenario 20 — mTLS happy-path PASS

scenario-20.json (report)
{
	"agent_group": "hermes",
	"marker": "mtls-a3ef78d878534e0b85fe2dc43ef84dd3",
	"pass": true,
	"peers_see": {
		"node_2": 1,
		"node_3": 1
	},
	"reasons": [],
	"scenario": "20",
	"skipped": false,
	"tls_mode": "mtls",
	"write_http_code": 201
}

raw file

scenario-20.log (console trace)
alice writes HTTPS + client cert on node-1
  write returned HTTP 201
settle 12s for W=2/N=4 quorum
  node-2 sees marker: 1
  node-3 sees marker: 1

raw file

Scenario 21 — Anonymous client rejected PASS

scenario-21.json (report)
{
	"agent_group": "hermes",
	"anonymous_probe": {
		"curl_message": "OpenSSL SSL_read: OpenSSL/3.0.13: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0",
		"http_code": "curl: (56) OpenSSL SSL_read: OpenSSL/3.0.13: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0\n000"
	},
	"namespace_count_after_attempt": 0,
	"pass": true,
	"reasons": [],
	"scenario": "21",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-21.log (console trace)
attempting anonymous HTTPS POST to node-1 (must be rejected)
  anonymous probe result: code=curl: (56) OpenSSL SSL_read: OpenSSL/3.0.13: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0
000 msg=OpenSSL SSL_read: OpenSSL/3.0.13: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0
settle 3s to let any leak land before checking namespace
  post-probe count for namespace=scenario21: 0 (must be 0)

raw file
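The anonymous_probe fields above pack curl's TLS error together with the "000" sentinel that "-w '%{http_code}'" apparently emits when no HTTP response arrives. A small illustrative parser for that combined output (the helper name is hypothetical, not part of the harness):

```python
def parse_probe(raw: str) -> tuple[str, str]:
    """Split combined curl output into (http_code, error_message).

    When the TLS handshake fails before any HTTP exchange, curl's
    -w '%{http_code}' still prints '000' on the last line; everything
    before it is the transport-level error text.
    """
    lines = raw.strip().splitlines()
    code = lines[-1].strip()
    msg = "\n".join(lines[:-1]).removeprefix("curl: (56) ")
    return code, msg

raw = ("curl: (56) OpenSSL SSL_read: OpenSSL/3.0.13: error:0A00045C:SSL routines::"
       "tlsv13 alert certificate required, errno 0\n000")
code, msg = parse_probe(raw)
assert code == "000"
assert "certificate required" in msg
```

Code "000" plus a "certificate required" alert is exactly the outcome the scenario wants: the server refused the handshake, so no anonymous write could have landed.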

Scenario 22 — Identity spoofing resistance PASS

scenario-22.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"reasons": [],
	"scenario": "22",
	"skipped": false,
	"tests": {
		"body_vs_header_conflict": {
			"acceptable": [
				"ai:body-wins",
				"ai:attacker"
			],
			"stored_agent_id": "ai:attacker"
		},
		"header_only": {
			"expected": "ai:alice",
			"stored_agent_id": "ai:alice"
		}
	},
	"tls_mode": "mtls"
}

raw file

scenario-22.log (console trace)
test 1: header-only X-Agent-Id=ai:alice
settle 2s for read-settle
  stored metadata.agent_id for header-only write: ai:alice (expected ai:alice)
test 2: body.metadata.agent_id=ai:body-wins vs X-Agent-Id=ai:attacker
settle 2s for read-settle
  stored metadata.agent_id for body+header conflict: ai:attacker

raw file
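Scenario 22's gate accepts either identity for the body-vs-header conflict while requiring an exact match for the header-only write. A sketch of that acceptance logic, reconstructed from the report fields (function and dictionary shapes are illustrative, not the harness's actual API):

```python
def check_spoof_results(header_only: dict, conflict: dict) -> list[str]:
    """Return failure reasons for the two identity-spoofing probes."""
    reasons = []
    # Probe 1: server must stamp exactly the X-Agent-Id it was given.
    if header_only["stored_agent_id"] != header_only["expected"]:
        reasons.append("header-only write did not preserve X-Agent-Id")
    # Probe 2: either precedence rule (body wins or header wins) is acceptable,
    # as long as the stored identity is one of the two declared values.
    if conflict["stored_agent_id"] not in conflict["acceptable"]:
        reasons.append("conflict write stored an identity outside the acceptable set")
    return reasons

reasons = check_spoof_results(
    {"expected": "ai:alice", "stored_agent_id": "ai:alice"},
    {"acceptable": ["ai:body-wins", "ai:attacker"], "stored_agent_id": "ai:attacker"},
)
assert reasons == []
```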

Scenario 23 — Malicious content fuzz UNKNOWN

scenario-23.json (report)
(empty — the scenario crashed before writing a report; see the console trace below)

raw file

scenario-23.log (console trace)
payload sql: 61 bytes
payload html: 66 bytes
payload oversize: 1048576 bytes
Traceback (most recent call last):
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/scenarios/23_malicious_content_fuzz.py", line 106, in <module>
    main()
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/scenarios/23_malicious_content_fuzz.py", line 49, in main
    rc, write_doc = h.write_memory(
                    ^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 202, in write_memory
    return self.http_on(node_ip, "POST", "/api/v1/memories",
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 157, in http_on
    result = self.ssh_exec(node_ip, remote_cmd, timeout=timeout)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 103, in ssh_exec
    return self._run(cmd, timeout=timeout, stdin=stdin)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 87, in _run
    return subprocess.run(
           ^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 548, in run
    with Popen(*popenargs, **kwargs) as process:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/lib/python3.12/subprocess.py", line 1955, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
OSError: [Errno 7] Argument list too long: 'ssh'

raw file
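The traceback ends in OSError: [Errno 7] Argument list too long: the 1,048,576-byte oversize payload is evidently being placed on the ssh command line, which exceeds the kernel's argv/environment limit. One plausible fix, sketched here with illustrative names (not the harness's actual API), is to stream the body over ssh's stdin and have curl read it with "-d @-":

```python
import subprocess

def ssh_post_via_stdin(host: str, url: str, payload: bytes) -> str:
    """POST a large body by piping it through ssh stdin instead of argv.

    argv has a hard kernel-imposed size limit; stdin does not, so
    arbitrarily large payloads can be streamed to the remote curl.
    """
    cmd = ["ssh", host,
           f"curl -sS -X POST -H 'Content-Type: application/json' -d @- '{url}'"]
    return subprocess.run(cmd, input=payload,
                          capture_output=True, check=True).stdout.decode()

# Local demonstration of the same pattern, using 'cat' in place of ssh+curl:
big = b"x" * 1_048_576
out = subprocess.run(["cat"], input=big, capture_output=True, check=True).stdout
assert len(out) == 1_048_576
```

Because the crash happens client-side in the harness, the scenario never exercised the server's oversize handling, hence the UNKNOWN verdict and the skipped-reports count of 1.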

Scenario 24 — Byzantine peer PASS

scenario-24.json (report)
{
	"agent_group": "hermes",
	"byzantine_marker": "bz-46f787d6cd6c4950ba37046321efd904",
	"pass": true,
	"reasons": [],
	"scenario": "24",
	"skipped": false,
	"stored_metadata_agent_id": "REJECTED_BY_SERVER",
	"sync_push_http_code": "422",
	"tls_mode": "mtls"
}

raw file

scenario-24.log (console trace)
node-2 sends sync_push to node-3 claiming sender_agent_id=ai:alice
  sync_push returned HTTP 422
settle 5s for server-side sync apply
  node-3 stored metadata.agent_id=ABSENT (declared: ai:alice)
  sync_push rejected HTTP 422 — stricter-than-spec, acceptable

raw file

Scenario 25 — Clock skew tolerance PASS

scenario-25.json (report)
{
	"agent_group": "hermes",
	"clock_offset_seconds": 300,
	"marker": "ck-ae157477f088450887b394362bc891e5",
	"pass": true,
	"reasons": [],
	"scenario": "25",
	"seen_on": {
		"node_1": 1,
		"node_3": 1
	},
	"skipped": false,
	"target_node": "node-3",
	"tls_mode": "mtls"
}

raw file

scenario-25.log (console trace)
shifting node-3 clock +300s (NTP disabled for the duration)
  node-3 now reports: Thu Apr 23 18:03:22 UTC 2026
alice writes on node-1 (normal clock); waiting for quorum fanout to skewed node-3
settle 15s for skewed-peer convergence
  node-3 (+300s clock) sees marker: 1 (expected >=1)
  node-1 sees marker: 1 (expected >=1)
reverting node-3 clock

raw file

Scenario 28 — memory_search keyword PASS

scenario-28.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"peer_hits": {
		"node_2": 1,
		"node_3": 1
	},
	"reasons": [],
	"scenario": "28",
	"skipped": false,
	"tls_mode": "mtls",
	"token": "kwsearchd3a0f463e6"
}

raw file

scenario-28.log (console trace)
alice writes a row containing unique token=kwsearchd3a0f463e6
settle 8s for search index populate + fanout
bob + charlie call /api/v1/search with the exact token
  node-2 keyword search returned 1 hit
  node-3 keyword search returned 1 hit

raw file

Scenario 29 — memory_archive lifecycle PASS

scenario-29.json (report)
{
	"agent_group": "hermes",
	"archive_http_code": 200,
	"bob_sees_archived": true,
	"m1_id": "fcd52dfb-ac1f-42e3-8146-9da504d88529",
	"node4_active_rows": 1,
	"pass": true,
	"reasons": [],
	"restore_http_code": 200,
	"scenario": "29",
	"skipped": false,
	"stats_shape_ok": true,
	"tls_mode": "mtls"
}

raw file

scenario-29.log (console trace)
alice writes M1 on node-1
  M1 id=fcd52dfb-ac1f-42e3-8146-9da504d88529
settle 5s for pre-archive replication
alice archives M1 via POST /api/v1/archive (ai-memory-mcp PR #361)
  archive (POST) returned HTTP 200
settle 5s for archive propagation
bob queries /api/v1/archive on node-2
  bob sees M1 in archive: True
charlie restores M1 via /api/v1/archive/{id}/restore on node-3
  restore returned HTTP 200
settle 5s for restore propagation
node-4 aggregator: M1 must be active again
  node-4 active rows matching marker: 1
fetch /api/v1/archive/stats on node-4

raw file

Scenario 30 — memory_capabilities handshake PASS

scenario-30.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"peer_views": {
		"node_1": {
			"_path": "/api/v1/capabilities",
			"features": {
				"auto_consolidation": false,
				"auto_tagging": false,
				"contradiction_analysis": false,
				"cross_encoder_reranking": false,
				"embedder_loaded": true,
				"hybrid_recall": true,
				"keyword_search": true,
				"memory_reflection": false,
				"query_expansion": false,
				"semantic_search": true
			},
			"models": {
				"cross_encoder": "none",
				"embedding": "sentence-transformers/all-MiniLM-L6-v2",
				"embedding_dim": 384,
				"llm": "none"
			},
			"tier": "semantic",
			"version": "0.6.2"
		},
		"node_2": {
			"_path": "/api/v1/capabilities",
			"features": {
				"auto_consolidation": false,
				"auto_tagging": false,
				"contradiction_analysis": false,
				"cross_encoder_reranking": false,
				"embedder_loaded": true,
				"hybrid_recall": true,
				"keyword_search": true,
				"memory_reflection": false,
				"query_expansion": false,
				"semantic_search": true
			},
			"models": {
				"cross_encoder": "none",
				"embedding": "sentence-transformers/all-MiniLM-L6-v2",
				"embedding_dim": 384,
				"llm": "none"
			},
			"tier": "semantic",
			"version": "0.6.2"
		},
		"node_3": {
			"_path": "/api/v1/capabilities",
			"features": {
				"auto_consolidation": false,
				"auto_tagging": false,
				"contradiction_analysis": false,
				"cross_encoder_reranking": false,
				"embedder_loaded": true,
				"hybrid_recall": true,
				"keyword_search": true,
				"memory_reflection": false,
				"query_expansion": false,
				"semantic_search": true
			},
			"models": {
				"cross_encoder": "none",
				"embedding": "sentence-transformers/all-MiniLM-L6-v2",
				"embedding_dim": 384,
				"llm": "none"
			},
			"tier": "semantic",
			"version": "0.6.2"
		},
		"node_4": {
			"_path": "/api/v1/capabilities",
			"features": {
				"auto_consolidation": false,
				"auto_tagging": false,
				"contradiction_analysis": false,
				"cross_encoder_reranking": false,
				"embedder_loaded": true,
				"hybrid_recall": true,
				"keyword_search": true,
				"memory_reflection": false,
				"query_expansion": false,
				"semantic_search": true
			},
			"models": {
				"cross_encoder": "none",
				"embedding": "sentence-transformers/all-MiniLM-L6-v2",
				"embedding_dim": 384,
				"llm": "none"
			},
			"tier": "semantic",
			"version": "0.6.2"
		}
	},
	"reasons": [],
	"scenario": "30",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-30.log (console trace)
  node-1 capabilities: ['features', 'models', 'tier', 'version', '_path']
  node-2 capabilities: ['features', 'models', 'tier', 'version', '_path']
  node-3 capabilities: ['features', 'models', 'tier', 'version', '_path']
  node-4 capabilities: ['features', 'models', 'tier', 'version', '_path']

raw file

Scenario 31 — memory_gc quiescence PASS

scenario-31.json (report)
{
	"agent_group": "hermes",
	"expected_live": 2,
	"forget_http_code": 400,
	"gc_http_code": 200,
	"live_markers_per_peer": {
		"node_1": 2,
		"node_2": 2,
		"node_3": 2,
		"node_4": 2
	},
	"pass": true,
	"reasons": [],
	"scenario": "31",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-31.log (console trace)
alice writes 4 memories
settle 6s for pre-gc replication
alice forgets 2 via /api/v1/forget
  forget returned HTTP 400
settle 5s for forget propagation
bob triggers /api/v1/gc on node-2
  gc returned HTTP 200
settle 8s for post-gc settle
verify remaining 2 markers are still readable on every peer
  node-1 sees 2/2 live markers
  node-2 sees 2/2 live markers
  node-3 sees 2/2 live markers
  node-4 sees 2/2 live markers

raw file

Scenario 32 — memory_inbox + notify PASS

scenario-32.json (report)
{
	"agent_group": "hermes",
	"bob_inbox_count": 1,
	"bob_sees_marker": true,
	"charlie_inbox_count": 0,
	"charlie_sees_marker": false,
	"marker": "inb-86210d6fb07e4110ad6903f68b2fc2d0",
	"notify_http_code": 201,
	"pass": true,
	"reasons": [],
	"scenario": "32",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-32.log (console trace)
alice calls /api/v1/notify → target=ai:bob
  notify returned HTTP 201
settle 6s for notification fanout
bob queries his inbox on node-2
  bob inbox has 1 message; sees marker: True
charlie queries his inbox on node-3 (must NOT see it)
  charlie inbox has 0 messages; sees marker: False

raw file

Scenario 33 — memory_subscribe pub/sub PASS

scenario-33.json (report)
{
	"agent_group": "hermes",
	"m1_delivered": 1,
	"namespace": "scenario33-pubsub-0ccbbc",
	"ns_in_subs_after": false,
	"ns_in_subs_before": true,
	"pass": true,
	"reasons": [],
	"scenario": "33",
	"skipped": false,
	"subscribe_http_code": 201,
	"subscriptions_after_count": 0,
	"subscriptions_before_count": 1,
	"tls_mode": "mtls",
	"unsubscribe_http_code": 200
}

raw file

scenario-33.log (console trace)
bob subscribes to namespace scenario33-pubsub-0ccbbc on node-2
  subscribe returned HTTP 201
settle 2s for subscription settle
  bob subscriptions: 1 entries; contains ns: True
alice writes M1 into the subscribed namespace
settle 6s for write fanout to subscribers
  bob sees M1 in subscribed namespace: 1
bob unsubscribes from scenario33-pubsub-0ccbbc
  unsubscribe returned HTTP 200
settle 2s for unsubscribe settle
  bob subscriptions after unsubscribe: ns still present = False
alice writes M2 post-unsubscribe (may still replicate via federation but subscription list excludes ns)
settle 5s for post-unsubscribe settle

raw file

Scenario 34 — memory_pending governance PASS

scenario-34.json (report)
{
	"agent_group": "hermes",
	"approve_http_code": 200,
	"charlie_sees": {
		"approved": 1,
		"rejected": 0
	},
	"namespace": "scenario34-pending-d4bb04",
	"pass": true,
	"pending_queue_count": 2,
	"reasons": [],
	"reject_http_code": 200,
	"scenario": "34",
	"set_standard_http_code": 201,
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-34.log (console trace)
alice sets namespace standard on scenario34-pending-d4bb04: write=approve, approver=ai:bob
  set-standard returned HTTP 201
settle 2s for standard settle
alice writes two memories into the governed namespace (should land in pending)
  p1=a17fa069-c3b1-4d33-b989-fa1319a3aa25 p2=d71ac45b-5d61-457e-9739-ca104b1cf7cc
settle 4s for pending queue settle
bob lists pending on node-2
  pending queue has 2 entries
bob approves p1, rejects p2
  approve HTTP 200; reject HTTP 200
settle 5s for decision fanout
charlie reads the namespace — expects ONLY approved marker
  charlie sees approved=1 rejected=0

raw file

Scenario 35 — memory_namespace standards PASS

scenario-35.json (report)
{
	"agent_group": "hermes",
	"child_ns": "scenario35-parent-367929/child",
	"clear_http_code": 200,
	"get_standard_http_code": 200,
	"parent_ns": "scenario35-parent-367929",
	"pass": true,
	"post_clear_has_child_rule": false,
	"reasons": [],
	"scenario": "35",
	"sees_child_rule": true,
	"sees_parent_rule": true,
	"set_child_http_code": 201,
	"set_parent_http_code": 201,
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-35.log (console trace)
alice writes parent-standard-memory on node-1
alice sets namespace standard on scenario35-parent-367929
  set-parent returned HTTP 201
alice writes child-standard-memory on node-1
alice sets namespace standard on scenario35-parent-367929/child with parent=scenario35-parent-367929
  set-child returned HTTP 201
settle 4s for standard fanout
bob gets standard for scenario35-parent-367929/child on node-2 (expects layered parent+child)
  get-standard returned HTTP 200
  parent-rule visible=True; child-rule visible=True
alice clears standard on scenario35-parent-367929/child
  clear returned HTTP 200
settle 3s for clear settle

raw file

Scenario 36 — memory_session_start PASS

scenario-36.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"reasons": [],
	"scenario": "36",
	"session_id": "30321d29-a55d-4f88-a3e4-bb6167dda0f3",
	"session_tagged_rows_on_bob": 2,
	"skipped": false,
	"start_http_code": 200,
	"tls_mode": "mtls"
}

raw file

scenario-36.log (console trace)
alice starts a session on node-1
  session_start returned HTTP 200, session_id=30321d29-a55d-4f88-a3e4-bb6167dda0f3
alice writes 2 memories tagged with session_id
settle 6s for session-tagged fanout
bob lists on node-2 filtered by session_id=30321d29-a55d-4f88-a3e4-bb6167dda0f3
  bob sees 2 rows tagged session_id=30321d29-a55d-4f88-a3e4-bb6167dda0f3 (expected 2)

raw file

Scenario 37 — memory_get_links bidirectional PASS

scenario-37.json (report)
{
	"agent_group": "hermes",
	"forward_has_target": true,
	"m1": "1c8661db-bfe8-4496-acbc-bc8b155137e0",
	"m2": "a2462cd2-8ca4-47aa-a2ac-6f3edd0b9da1",
	"pass": true,
	"reasons": [],
	"reverse_has_source": true,
	"scenario": "37",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-37.log (console trace)
alice writes M1 + M2 + links M1→M2
  M1=1c8661db-bfe8-4496-acbc-bc8b155137e0 M2=a2462cd2-8ca4-47aa-a2ac-6f3edd0b9da1
settle 6s for link fanout
charlie queries /api/v1/links/M1 (forward)
charlie queries /api/v1/links/M2 (reverse)

raw file

Scenario 38 — /export + /import PASS

scenario-38.json (report)
{
	"agent_group": "hermes",
	"dst_ns": "scenario38-dst-47e771",
	"expected_rows": 5,
	"export_http_code": 200,
	"import_http_code": 200,
	"markers_preserved": 5,
	"pass": true,
	"reasons": [],
	"rows_exported": 5,
	"rows_in_destination": 5,
	"scenario": "38",
	"skipped": false,
	"src_ns": "scenario38-src-47e771",
	"tls_mode": "mtls"
}

raw file

scenario-38.log (console trace)
alice writes 5 rows into scenario38-src-47e771
settle 4s for pre-export replication
alice exports on node-1 (endpoint has no namespace filter; filter client-side)
  export returned HTTP 200, total_rows=231
  rewrote 5 memories from scenario38-src-47e771 -> scenario38-dst-47e771
bob imports the payload into scenario38-dst-47e771 on node-2
  import returned HTTP 200
settle 6s for import + fanout
verify row counts match on destination
  scenario38-dst-47e771 has 5 rows (expected 5)
  markers preserved in destination: 5/5

raw file
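Because the export endpoint has no namespace filter, the harness narrows the 231 exported rows to the 5 source-namespace rows client-side and retargets them before import. A minimal sketch of that filter-and-rewrite step, with assumed row field names:

```python
def filter_and_rewrite(rows: list[dict], src_ns: str, dst_ns: str) -> list[dict]:
    """Keep only rows from src_ns and retarget them to dst_ns for import."""
    out = []
    for row in rows:
        if row.get("namespace") == src_ns:
            # Copy rather than mutate, so the original export stays intact.
            out.append({**row, "namespace": dst_ns})
    return out

export = [
    {"namespace": "scenario38-src-47e771", "content": "a"},
    {"namespace": "unrelated", "content": "b"},
]
rewritten = filter_and_rewrite(export, "scenario38-src-47e771", "scenario38-dst-47e771")
assert len(rewritten) == 1
assert rewritten[0]["namespace"] == "scenario38-dst-47e771"
```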

Scenario 39 — /sync/since delta FAIL

Reasons: delta returned 0/6 expected markers — delta-sync incomplete

scenario-39.json (report)
{
	"agent_group": "hermes",
	"checkpoint": "2026-04-23T18:01:06+00:00",
	"diag_earliest_updated_at": null,
	"diag_latest_updated_at": null,
	"diag_updated_since": null,
	"expected_markers": 6,
	"markers_present": 0,
	"namespace": "scenario39-delta-e9490c",
	"pass": false,
	"reason": "delta returned 0/6 expected markers — delta-sync incomplete",
	"reasons": [
		"delta returned 0/6 expected markers — delta-sync incomplete"
	],
	"rows_returned": 0,
	"rows_returned_raw": 0,
	"scenario": "39",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-39.log (console trace)
checkpoint = 2026-04-23T18:01:06+00:00
suspending ai-memory on node-3
  !! ssh timeout (30s): root@45.55.168.255 pgrep -f 'ai-memory serve' | xargs -r kill -STOP
alice + bob write 6 rows while node-3 is out
resuming ai-memory on node-3
settle 4s for process resume
node-3 asks node-1 /api/v1/sync/since?since=2026-04-23T18:01:06+00:00
  !! ssh timeout (30s): root@45.55.168.255 curl -sS --cacert /etc/ai-memory-a2a/tls/ca.pem --cert /etc/ai-memory-a2a/tls/client.pem --key /etc/ai-memory-a2a/tls/client.key 'https://134.209.160.158:9077/api/v1/sync/since?since=2026-04-23T18%3A01%3A06%2B00%3A00&limit=500'
  /sync/since raw=0 ns-filtered=0; 0/6 match our markers
  diag: updated_since=None earliest=None latest=None

raw file
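The delta query in this trace carries the checkpoint percent-encoded ("%3A" for ":" and, crucially, "%2B" for "+"). A sketch of how that URL is assembled with the standard library; the host and port are taken from this trace. An unencoded "+" would be decoded server-side as a space, corrupting the timezone offset, so the encoding step matters:

```python
from urllib.parse import urlencode

checkpoint = "2026-04-23T18:01:06+00:00"

# urlencode percent-escapes ':' and '+' so the ISO-8601 offset survives
# the round trip through the query string.
query = urlencode({"since": checkpoint, "limit": 500})
url = f"https://134.209.160.158:9077/api/v1/sync/since?{query}"

assert "since=2026-04-23T18%3A01%3A06%2B00%3A00" in url
assert url.endswith("limit=500")
```

The trace shows the request itself hit an ssh timeout and all diag fields came back null, so the 0/6 result may reflect the transport failure as much as the delta-sync logic; the report's stated reason ("delta-sync incomplete") is the harness's verdict, not a root-cause diagnosis.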

Scenario 40 — /memories/bulk PASS

scenario-40.json (report)
{
	"agent_group": "hermes",
	"bulk_http_code": "200",
	"bulk_size": 500,
	"namespace": "scenario40-bulk-e520f9",
	"pass": true,
	"per_peer_count": {
		"node_2": 500,
		"node_3": 500,
		"node_4": 500
	},
	"reasons": [],
	"scenario": "40",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-40.log (console trace)
constructing 500-row bulk payload
staging bulk payload on node-1 /tmp, then POST /api/v1/memories/bulk
  bulk POST returned HTTP 200
settle 20s for bulk fanout across 3 peers + aggregator
  node-2 count=500 (expected 500)
  node-3 count=500 (expected 500)
  node-4 count=500 (expected 500)

raw file
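Staging the 500-row bulk document can be sketched as follows; the wrapper key "memories" and the row field names are assumptions for illustration, not the documented /api/v1/memories/bulk schema:

```python
import json
import uuid

def build_bulk_payload(namespace: str, n: int = 500) -> bytes:
    """Construct an n-row bulk document with a unique marker per row."""
    rows = [
        {
            "namespace": namespace,
            "content": f"bulk row {i}",
            "metadata": {"marker": uuid.uuid4().hex},
        }
        for i in range(n)
    ]
    return json.dumps({"memories": rows}).encode()

payload = build_bulk_payload("scenario40-bulk-e520f9")
assert len(json.loads(payload)["memories"]) == 500
```

Staging the document in /tmp on node-1 (as the trace does) sidesteps the argv-size limit that crashed scenario 23.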

Scenario 41 — /metrics Prometheus PASS

scenario-41.json (report)
{
	"activity_namespace": "scenario41-activity-798053",
	"agent_group": "hermes",
	"pass": true,
	"per_peer": {
		"node_1": {
			"counters_t0": 8,
			"counters_t1": 8,
			"regressed_keys": 0
		},
		"node_2": {
			"counters_t0": 8,
			"counters_t1": 8,
			"regressed_keys": 0
		},
		"node_3": {
			"counters_t0": 7,
			"counters_t1": 7,
			"regressed_keys": 0
		}
	},
	"reasons": [],
	"scenario": "41",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-41.log (console trace)
scrape T0
  node-1 T0 parsed 8 memory counters
  node-2 T0 parsed 8 memory counters
  node-3 T0 parsed 7 memory counters
settle 5s for counter update
scrape T1
  node-1 T1 parsed 8 memory counters
  node-2 T1 parsed 8 memory counters
  node-3 T1 parsed 7 memory counters

raw file
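The two-scrape check above (counters_t0 vs counters_t1 with regressed_keys) amounts to: parse each Prometheus exposition page into name→value pairs, then flag any monotonic counter that decreased between scrapes. A hedged sketch with illustrative metric names:

```python
def parse_counters(text: str) -> dict[str, float]:
    """Parse bare 'name value' sample lines from a Prometheus exposition page."""
    counters = {}
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip HELP/TYPE comments and blanks
        name, _, value = line.partition(" ")
        try:
            counters[name] = float(value)
        except ValueError:
            pass  # ignore malformed sample lines
    return counters

def regressed_keys(t0: dict[str, float], t1: dict[str, float]) -> list[str]:
    """Counters are monotonic: any key that decreased between scrapes regressed."""
    return [k for k in t0 if k in t1 and t1[k] < t0[k]]

t0 = parse_counters("# HELP demo\nmemory_writes_total 10\nmemory_reads_total 4")
t1 = parse_counters("memory_writes_total 12\nmemory_reads_total 4")
assert regressed_keys(t0, t1) == []
```

Note this sketch ignores labeled samples like name{label="x"}; the real exposition format allows them, and a production parser would treat each labeled series as its own key.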

Scenario 42 — /namespaces enumeration PASS

scenario-42.json (report)
{
	"agent_group": "hermes",
	"namespaces": [
		"scenario42-ab7e64-0",
		"scenario42-ab7e64-1",
		"scenario42-ab7e64-2"
	],
	"pass": true,
	"per_peer": {
		"node_1": {
			"scenario42-ab7e64-0": 2,
			"scenario42-ab7e64-1": 2,
			"scenario42-ab7e64-2": 2
		},
		"node_2": {
			"scenario42-ab7e64-0": 2,
			"scenario42-ab7e64-1": 2,
			"scenario42-ab7e64-2": 2
		},
		"node_3": {
			"scenario42-ab7e64-0": 2,
			"scenario42-ab7e64-1": 2,
			"scenario42-ab7e64-2": 2
		},
		"node_4": {
			"scenario42-ab7e64-0": 2,
			"scenario42-ab7e64-1": 2,
			"scenario42-ab7e64-2": 2
		}
	},
	"reasons": [],
	"scenario": "42",
	"skipped": false,
	"tls_mode": "mtls"
}

raw file

scenario-42.log (console trace)
alice writes into 3 distinct namespaces: ['scenario42-ab7e64-0', 'scenario42-ab7e64-1', 'scenario42-ab7e64-2']
settle 10s for namespace index fanout
  node-1 sees 3/3 target namespaces, counts: {'scenario42-ab7e64-0': 2, 'scenario42-ab7e64-1': 2, 'scenario42-ab7e64-2': 2}
  node-2 sees 3/3 target namespaces, counts: {'scenario42-ab7e64-0': 2, 'scenario42-ab7e64-1': 2, 'scenario42-ab7e64-2': 2}
  node-3 sees 3/3 target namespaces, counts: {'scenario42-ab7e64-0': 2, 'scenario42-ab7e64-1': 2, 'scenario42-ab7e64-2': 2}
  node-4 sees 3/3 target namespaces, counts: {'scenario42-ab7e64-0': 2, 'scenario42-ab7e64-1': 2, 'scenario42-ab7e64-2': 2}

raw file

All artifacts