Campaign a2a-hermes-v3r17-tls-release-v0.6.2 FAIL

Agent group
hermes (homogeneous)
ai-memory ref
release/v0.6.2
Completed at
2026-04-22T22:49:22Z
Overall pass
false
Skipped reports
1

Infrastructure

Provider
digitalocean
Region
nyc3
Droplet size
s-2vcpu-4gb
Topology
4-node federation mesh (W=2/N=4)
Scenarios started
2026-04-22T22:37:58Z
Scenarios ended
2026-04-22T22:49:22Z
Dispatched by
alphaonedev
Harness SHA
dba76cc25ac1
Workflow run
https://github.com/alphaonedev/ai-memory-ai2ai-gate/actions/runs/24805364978

Node roster

#  Role         Agent ID    Public IP       Private IP
1  agent        ai:alice    142.93.70.84    10.11.1.2
2  agent        ai:bob      159.89.187.191  10.11.1.3
3  agent        ai:charlie  157.245.1.95    10.11.1.5
4  memory-only  -           68.183.147.19   10.11.1.4

Baseline attestation BASELINE OK

Per the authoritative baseline spec, every agent node must emit a self-attestation before any scenario is permitted to run. This run's attestation:

Spec version: 1.4.0 — see authoritative baseline.

Node    Agent       Type & framework                          Config SHA    Pass
node-1  ai:alice    hermes Hermes Agent v0.10.0 (2026.4.16)   fa358f9a9059  PASS
node-2  ai:bob      hermes Hermes Agent v0.10.0 (2026.4.16)   21635cf63640  PASS
node-3  ai:charlie  hermes Hermes Agent v0.10.0 (2026.4.16)   ce52d772ef5a  PASS

All gating checks (authentic framework, MCP ai-memory registered, xAI config and default provider, agent ID stamped, federation live, UFW off, iptables flushed, dead-man switch, F1 xAI, F2a substrate) passed on every node; the non-gating F2b agent canary failed on all three nodes (see a2a-baseline.json below).
a2a-baseline.json
{
	"baseline_pass": true,
	"per_node": [
		{
			"spec_version": "1.4.0",
			"agent_type": "hermes",
			"agent_id": "ai:alice",
			"node_index": "1",
			"framework_version": "Hermes Agent v0.10.0 (2026.4.16)",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "https://10.11.1.3:9077,https://10.11.1.5:9077,https://10.11.1.4:9077",
			"config_file_sha256": "fa358f9a90597243fb96224babd541399bd7b1e972f364605308ab1e2d9dd2c7",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "8c34901c-6119-4cd8-8dda-8792544d3d9c",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "7e27a485-0fc3-4686-9e07-2ea77a8c5dee",
				"agent_canary_response_head": "Traceback (most recent call last):   File \"/usr/local/bin/hermes\", line 11, in <module>     main()   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 8813, in main     args.func(args)   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 1145, in cmd_chat     from cli import main as cli_main   File \"/root/.hermes/hermes-agent/cli.py\", line 43, in <module>     from prompt_toolkit.history import FileHistory ModuleNotFoundError: No module named 'prompt_toolkit' ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.11.1.3:9077:OK,10.11.1.5:9077:OK,10.11.1.4:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "tls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "8c34901c-6119-4cd8-8dda-8792544d3d9c",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "hermes",
			"agent_id": "ai:bob",
			"node_index": "2",
			"framework_version": "Hermes Agent v0.10.0 (2026.4.16)",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "https://10.11.1.2:9077,https://10.11.1.5:9077,https://10.11.1.4:9077",
			"config_file_sha256": "21635cf6364057fd2a004d28aac89abf8438671d85f9fd2ed1e654d812d23ff1",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "7837c857-32a1-4fba-afab-dc547adcf112",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "9ae958dd-9635-4b1e-985a-ac85f762bb10",
				"agent_canary_response_head": "Traceback (most recent call last):   File \"/usr/local/bin/hermes\", line 11, in <module>     main()   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 8813, in main     args.func(args)   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 1145, in cmd_chat     from cli import main as cli_main   File \"/root/.hermes/hermes-agent/cli.py\", line 43, in <module>     from prompt_toolkit.history import FileHistory ModuleNotFoundError: No module named 'prompt_toolkit' ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.11.1.2:9077:OK,10.11.1.5:9077:OK,10.11.1.4:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "tls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "7837c857-32a1-4fba-afab-dc547adcf112",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "hermes",
			"agent_id": "ai:charlie",
			"node_index": "3",
			"framework_version": "Hermes Agent v0.10.0 (2026.4.16)",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "https://10.11.1.2:9077,https://10.11.1.3:9077,https://10.11.1.4:9077",
			"config_file_sha256": "ce52d772ef5a00968db29fb80eea7a14206b0a258a00ff2165db725405474618",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "f404d7ae-739f-4999-8beb-8692bd09095c",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "45d377b2-2585-4749-a532-87a85842c225",
				"agent_canary_response_head": "Traceback (most recent call last):   File \"/usr/local/bin/hermes\", line 11, in <module>     main()   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 8813, in main     args.func(args)   File \"/root/.hermes/hermes-agent/hermes_cli/main.py\", line 1145, in cmd_chat     from cli import main as cli_main   File \"/root/.hermes/hermes-agent/cli.py\", line 43, in <module>     from prompt_toolkit.history import FileHistory ModuleNotFoundError: No module named 'prompt_toolkit' ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.11.1.2:9077:OK,10.11.1.3:9077:OK,10.11.1.4:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "tls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "f404d7ae-739f-4999-8beb-8692bd09095c",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		}
	]
}

raw file
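The F5 probe described in the attestation above (spawn the ai-memory stdio MCP subprocess, then verify that initialize and tools/list expose the memory tools) can be sketched as a minimal client. The `ai-memory mcp` invocation, the newline-delimited JSON-RPC framing, and the protocol version string are assumptions for illustration, not the harness's actual code:

```python
import json
import subprocess

# F5 gates on these three tools appearing in the tools/list response.
REQUIRED = {"memory_store", "memory_recall", "memory_list"}

def required_tools_present(tool_names):
    # True when every gated tool name is present in the listing.
    return REQUIRED.issubset(set(tool_names))

def probe_stdio_mcp(cmd=("ai-memory", "mcp")):  # hypothetical invocation
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)

    def rpc(id_, method, params=None):
        # One newline-delimited JSON-RPC 2.0 request, one response line back.
        proc.stdin.write(json.dumps({"jsonrpc": "2.0", "id": id_,
                                     "method": method,
                                     "params": params or {}}) + "\n")
        proc.stdin.flush()
        return json.loads(proc.stdout.readline())

    init = rpc(1, "initialize", {"protocolVersion": "2025-03-26",
                                 "capabilities": {},
                                 "clientInfo": {"name": "f5-probe",
                                                "version": "0"}})
    tools = rpc(2, "tools/list")
    names = [t["name"] for t in tools.get("result", {}).get("tools", [])]
    proc.terminate()
    return "result" in init and required_tools_present(names)
```

Because both calls are deterministic (no LLM in the loop), a failure here isolates the MCP wiring itself, which is why F5 gates baseline_pass.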

F3 — peer A2A via shared memory F3 OK

Workflow-level probe answering "can agents communicate through ai-memory?". Writer ai:alice posted canary UUID eefba0bf-56de-4319-a0af-5aca9b9bf301 to namespace _baseline_peer_canary via node-1's local ai-memory serve HTTP. After W=2 fanout settle, probe confirmed the canary on each of the 3 peer nodes via their local GET /api/v1/memories.

f3-peer-a2a.json
{
	"probe": "F3",
	"name": "peer-a2a-via-shared-memory",
	"description": "Writer agent posts a canary via local ai-memory HTTP on node-1; verifies the row propagates to the 3 peer nodes (W=2/N=4 quorum) before scenarios run.",
	"canary_uuid": "eefba0bf-56de-4319-a0af-5aca9b9bf301",
	"canary_namespace": "_baseline_peer_canary",
	"writer_agent": "ai:alice",
	"pass": true
}

raw file
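The read side of this F3-style check can be sketched as follows, assuming the GET /api/v1/memories read path shown in this report; the query-string shape, response format, and TLS context handling are assumptions:

```python
import json
import time
import urllib.request

def canary_visible(rows, canary_uuid):
    # A peer "sees" the canary when any returned memory row carries its UUID.
    return any(canary_uuid in json.dumps(row) for row in rows)

def check_peers(peers, namespace, canary_uuid, settle_s=15, tls_context=None):
    """Poll each peer's local read API after the W=2 fanout settle window.

    peers: base URLs such as "https://10.11.1.3:9077" (from the mesh detail).
    Returns (all_ok, per_peer) so a single lagging node is identifiable.
    """
    time.sleep(settle_s)  # allow quorum fanout to converge, as the harness does
    per_peer = {}
    for peer in peers:
        url = f"{peer}/api/v1/memories?namespace={namespace}"
        with urllib.request.urlopen(url, context=tls_context) as resp:
            rows = json.load(resp)
        per_peer[peer] = canary_visible(rows, canary_uuid)
    return all(per_peer.values()), per_peer
```

The harness's actual probe additionally writes the canary through node-1's serve HTTP first; only the verification pattern is shown here.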

Run focus

Regressions in memory sharing and advanced features under TLS.

What this campaign tested: memory sharing, linking, deletion, semantic/keyword search, bulk operations, namespaces, pub/sub, approvals, archiving, and federation resilience in a 4-node TLS mesh, covering the HTTP transport, the multi-cluster framework, and primitives such as recall, link, and notify.

What it demonstrated: reliable basic sharing on some HTTP paths, but failures in core recall, identity checks, search, archiving, notifications, pub/sub, approvals, namespace inheritance, sessions, delta sync, and bulk operations, indicating incomplete federation support and missing or misrouted endpoints.

AI NHI analysis · Claude Opus 4.7

FAIL — 21/34 scenarios passed, 13 failed, 1 skipped, 1 unparseable.

For three audiences

Non-technical end users

The test checked if AI agents can reliably share and access each other's memories across a network. While some simple sharing worked, many times agents couldn't see or find shared information, and advanced tools like searching or notifications failed. Overall, the system doesn't yet allow agents to dependably exchange memories in all situations.

C-level decision makers

High risk posture due to failures in fundamental sharing and key features, rendering the system not production-ready and customer claims about multi-agent memory unreliable. Viability for external demos is low without fixes. Versus prior runs, this TLS-mode campaign exposes new regressions likely from release v0.6.2 or mesh config changes.

Engineers & architects

Core failures in S1 (MCP recall with identity mismatches), S12 (agent registration visibility), S18 (semantic query misses), S28 (keyword search misses on nodes 2-3), S29 (archive/restore HTTP errors), S30 (capabilities endpoint unexposed), S32 (notify delivery), S33 (pubsub subscribe/unsubscribe), S34 (approval/reject mechanics), S35 (namespace rule inheritance), S36 (session start), S39 (delta sync incomplete), S40 (bulk fanout); impacted primitives include recall, search, archive, notify, subscribe, approve, namespace ops, sessions, delta, bulk. Probable root causes: federation sync lags in TLS mesh, missing endpoint implementations (e.g., 404/405 errors), or v0.6.2 bugs; note S20 skipped (mTLS-only) and S23 unparseable.
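A triage pass over the failing routes can separate missing endpoints (404) from method mismatches (405) and permission problems (403) before the next campaign. The path spellings below are read off the failure reasons above and are assumptions about the actual route table:

```python
import urllib.error
import urllib.request

# Hypothetical route list derived from the S29-S36 failure reasons.
ENDPOINTS = [
    ("POST", "/api/v1/archive"),        # S29: archive POST returned 405
    ("GET",  "/api/v1/capabilities"),   # S30: no capabilities response
    ("POST", "/api/v1/notify"),         # S32: notify returned 404
    ("POST", "/api/v1/subscribe"),      # S33: subscribe returned 404
    ("POST", "/api/v1/session/start"),  # S36: session_start returned 404
]

def classify(code):
    # Rough mapping of the observed statuses to likely causes.
    mapping = {404: "route missing", 405: "method not allowed",
               403: "auth/permission", 422: "payload rejected"}
    if code in mapping:
        return mapping[code]
    return "ok" if code and 200 <= code < 300 else "unknown"

def triage(base):
    """Hit each route on one node and record the status code."""
    results = {}
    for method, path in ENDPOINTS:
        req = urllib.request.Request(
            base + path, method=method,
            data=b"{}" if method == "POST" else None,
            headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req) as resp:
                results[path] = classify(resp.status)
        except urllib.error.HTTPError as e:
            results[path] = classify(e.code)
        except urllib.error.URLError:
            results[path] = "unreachable"
    return results
```

Run against one agent node and the aggregator, the output distinguishes "v0.6.2 never shipped the route" from "route exists but the harness calls it wrong".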

What changes going into the next campaign

Rerun under mTLS mode to enable S20 and diagnose if stricter TLS resolves identity and sync issues.

Tests performed in this run

Every scenario that produced a JSON report in this campaign, in testbook order. Click a row's scenario id to jump to its full report below. See the Every test performed page for the authoritative catalog.

ID | Title | Result | Reason
S1 | Per-agent write + read (MCP stdio) | FAIL | ai:alice recalled 0 < 20 via MCP; ai:bob recalled 0 < 20 via MCP; ai:charlie recalled 0 < 20 via MCP; cross-cluster identity check failed — see per_ns
S1b | Per-agent write + read (HTTP) | PASS
S2 | Shared-context handoff | PASS
S4 | Federation-aware concurrent writes | PASS
S5 | Consolidation + curation | PASS
S6 | Contradiction detection | PASS
S9 | Mutation round-trip | PASS
S10 | Deletion propagation | PASS
S11 | Link integrity | PASS
S12 | Agent registration | FAIL | node-3 did not see registered agent ai:dave-probe-ebe855c2; node-4 did not see registered agent ai:dave-probe-ebe855c2
S13 | Concurrent write contention | PASS
S14 | Partition tolerance | PASS
S15 | Read-your-writes | PASS
S16 | Tier promotion | PASS
S17 | Stats consistency | PASS
S18 | Semantic query expansion | FAIL | semantic query did not surface alice's memory; semantic query did not surface bob's memory
S20 | mTLS happy-path | SKIP | scenario 20 only runs under tls_mode=mtls (actual: tls)
S22 | Identity spoofing resistance | PASS
S23 | Malicious content fuzz | UNPARSEABLE | report unparseable
S24 | Byzantine peer | PASS
S25 | Clock skew tolerance | PASS
S28 | memory_search keyword | FAIL | node-2 did not find the unique token via /search; node-3 did not find the unique token via /search
S29 | memory_archive lifecycle | FAIL | archive POST returned HTTP 405; bob did not see M1 in /api/v1/archive; restore returned HTTP 404
S30 | memory_capabilities handshake | FAIL | no peer returned a capabilities response — endpoint may not be exposed
S31 | memory_gc quiescence | PASS
S32 | memory_inbox + notify | FAIL | notify returned HTTP 404; bob's inbox did not deliver alice's notify
S33 | memory_subscribe pub/sub | FAIL | subscribe returned HTTP 404; bob's subscription list did not include the subscribed namespace; unsubscribe returned HTTP 404
S34 | memory_pending governance | FAIL | set-standard returned HTTP 405; approve returned HTTP 403; reject returned HTTP 404; charlie saw rejected row — reject didn't prevent publication
S35 | memory_namespace standards | FAIL | set-parent returned HTTP 405; set-child returned HTTP 405; clear-standard returned HTTP 405; parent rule not layered into child's standard view; child rule missing from standard view
S36 | memory_session_start | FAIL | session_start returned HTTP 404
S37 | memory_get_links bidirectional | PASS
S38 | /export + /import | PASS
S39 | /sync/since delta | FAIL | delta returned 0/6 expected markers — delta-sync incomplete
S40 | /memories/bulk | FAIL | bulk returned HTTP 422; node-2 saw 0/500 bulk rows after fanout; node-3 saw 0/500 bulk rows after fanout; node-4 saw 0/500 bulk rows after fanout
S41 | /metrics Prometheus | PASS
S42 | /namespaces enumeration | PASS

Scenario 1 — Per-agent write + read (MCP stdio) FAIL

Reasons: ai:alice recalled 0 < 20 via MCP | ai:bob recalled 0 < 20 via MCP | ai:charlie recalled 0 < 20 via MCP | cross-cluster identity check failed — see per_ns

scenario-1.json (report)
{
	"agent_group": "hermes",
	"expected_per_reader": 20,
	"pass": false,
	"per_agent": {
		"ai:alice": {
			"recall": 0
		},
		"ai:bob": {
			"recall": 0
		},
		"ai:charlie": {
			"recall": 0
		}
	},
	"per_namespace_node4": {
		"scenario1-ai:alice": {
			"count": 0,
			"wrong_agent_id": 0
		},
		"scenario1-ai:bob": {
			"count": 0,
			"wrong_agent_id": 0
		},
		"scenario1-ai:charlie": {
			"count": 0,
			"wrong_agent_id": 0
		}
	},
	"reason": "ai:alice recalled 0 < 20 via MCP; ai:bob recalled 0 < 20 via MCP; ai:charlie recalled 0 < 20 via MCP; cross-cluster identity check failed — see per_ns",
	"reasons": [
		"ai:alice recalled 0 < 20 via MCP",
		"ai:bob recalled 0 < 20 via MCP",
		"ai:charlie recalled 0 < 20 via MCP",
		"cross-cluster identity check failed — see per_ns"
	],
	"scenario": "1",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-1.log (console trace)
phase A: each agent writes 10 memories via MCP
  ai:alice on 142.93.70.84
  !! drive_agent store failed for ai:alice i=1: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:alice i=2: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:alice i=3: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:alice i=4: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:alice i=5: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:alice i=6: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:alice i=7: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:alice i=8: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:alice i=9: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:alice i=10: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  ai:bob on 159.89.187.191
  !! drive_agent store failed for ai:bob i=1: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:bob i=2: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:bob i=3: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:bob i=4: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:bob i=5: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:bob i=6: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:bob i=7: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:bob i=8: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:bob i=9: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:bob i=10: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  ai:charlie on 157.245.1.95
  !! drive_agent store failed for ai:charlie i=1: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:charlie i=2: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:charlie i=3: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:charlie i=4: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:charlie i=5: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:charlie i=6: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:charlie i=7: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:charlie i=8: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:charlie i=9: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
  !! drive_agent store failed for ai:charlie i=10: Traceback (most recent call last):
  File "/usr/local/bin/hermes", line 11, in <module>
    main()
  File "/root/.hermes/hermes-agent/hermes_cli/main.py", line 8813, in main
    args.func(args)
  File
settle 15s for W=2/N=4 convergence
phase B: each agent counts rows in the OTHER two namespaces
  ai:alice recalled 0 rows from the other two namespaces
  ai:bob recalled 0 rows from the other two namespaces
  ai:charlie recalled 0 rows from the other two namespaces
phase C: cross-cluster identity check on node-4
  ns=scenario1-ai:alice count=0 wrong_agent_id=0
  !! expected 10 rows, got 0
  ns=scenario1-ai:bob count=0 wrong_agent_id=0
  !! expected 10 rows, got 0
  ns=scenario1-ai:charlie count=0 wrong_agent_id=0
  !! expected 10 rows, got 0

raw file
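Every store attempt in this log dies at the same point, ModuleNotFoundError: No module named 'prompt_toolkit', before the agent CLI ever reaches ai-memory, which matches F2b failing in the baseline while the deterministic F2a path passed. A minimal preflight that would have surfaced this per node before phase A (the module list is illustrative):

```python
import importlib.util

def missing_deps(modules=("prompt_toolkit",)):
    # Return the subset of modules that cannot be resolved by this
    # interpreter, without importing them (no side effects).
    return [m for m in modules if importlib.util.find_spec(m) is None]

if __name__ == "__main__":
    missing = missing_deps()
    if missing:
        # Same failure class as the tracebacks above, caught cheaply.
        raise SystemExit(f"agent CLI deps missing: {missing}")
```

Run inside the same virtualenv the hermes CLI uses; an empty result means the import chain at cli.py line 43 would have succeeded.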

Scenario 1b — Per-agent write + read (HTTP) PASS

scenario-1b.json (report)
{
	"agent_group": "hermes",
	"expected_per_reader": 20,
	"pass": true,
	"path": "serve-http",
	"per_agent": {
		"ai:alice": {
			"recall": 20
		},
		"ai:bob": {
			"recall": 20
		},
		"ai:charlie": {
			"recall": 20
		}
	},
	"per_namespace_node4": {
		"scenario1b-ai:alice": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1b-ai:bob": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1b-ai:charlie": {
			"count": 10,
			"wrong_agent_id": 0
		}
	},
	"reasons": [],
	"scenario": "1b",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-1b.log (console trace)
phase A: each agent POSTs 10 memories to local serve
  ai:alice on 142.93.70.84
  ai:bob on 159.89.187.191
  ai:charlie on 157.245.1.95
settle 15s for W=2/N=4 convergence
phase B: count rows in other two namespaces via local serve HTTP
  ai:alice sees 20 rows from the other two namespaces
  ai:bob sees 20 rows from the other two namespaces
  ai:charlie sees 20 rows from the other two namespaces
phase C: cross-cluster identity check on node-4
  ns=scenario1b-ai:alice count=10 wrong_agent_id=0
  ns=scenario1b-ai:bob count=10 wrong_agent_id=0
  ns=scenario1b-ai:charlie count=10 wrong_agent_id=0

raw file

Scenario 2 — Shared-context handoff PASS

scenario-2.json (report)
{
	"ack_uuid": "a-ed098d317f444c4cbee6d3c999f078c3",
	"agent_group": "hermes",
	"handoff_uuid": "h-a8561667dd954318a09f5dbd48105f68",
	"pass": true,
	"path": "serve-http",
	"per_agent": {
		"ai:alice": {
			"sees_ack": 1
		},
		"ai:bob": {
			"sees_handoff": 1
		}
	},
	"reasons": [],
	"scenario": "2",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-2.log (console trace)
phase A: ai:alice writes handoff to ai:bob (uuid=h-a8561667dd954318a09f5dbd48105f68)
settle 8s for quorum fanout
phase B: ai:bob reads handoff on node-2
  ai:bob sees 1 handoff memories from ai:alice
phase C: ai:bob writes acknowledgement (uuid=a-ed098d317f444c4cbee6d3c999f078c3)
settle 8s for reverse-direction fanout
phase D: ai:alice reads ack on node-1
  ai:alice sees 1 ack memories from ai:bob

raw file
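The four-phase handoff/ack choreography above can be sketched against a stand-in store. The helper names and row shapes are illustrative; only the pattern, with shared memory rows as the sole channel between agents, mirrors the scenario:

```python
import uuid

class MemoryStore:
    """Stand-in for a node's local ai-memory HTTP API (not the real client)."""
    def __init__(self):
        self.rows = []
    def store(self, namespace, content, agent_id):
        self.rows.append({"namespace": namespace, "content": content,
                          "agent_id": agent_id})
    def recall(self, namespace, needle):
        return [r for r in self.rows
                if r["namespace"] == namespace and needle in r["content"]]

def handoff_roundtrip(store):
    h = f"h-{uuid.uuid4().hex}"
    store.store("s2-handoff", f"handoff {h} for ai:bob", "ai:alice")  # phase A
    if not store.recall("s2-handoff", h):                             # phase B
        return False                                                  # (bob reads after fanout settle)
    a = f"a-{uuid.uuid4().hex}"
    store.store("s2-handoff", f"ack {a} of {h}", "ai:bob")            # phase C
    return bool(store.recall("s2-handoff", f"ack {a}"))               # phase D (alice reads ack)
```

In the real run each phase hits a different node's serve HTTP with an 8 s settle between directions; the single in-process store here collapses that fanout for clarity.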

Scenario 4 — Federation-aware concurrent writes PASS

scenario-4.json (report)
{
	"agent_group": "hermes",
	"expected_per_agent": 30,
	"pass": true,
	"per_agent": {
		"ai:alice": {
			"count": 30,
			"wrong_agent_id": 0
		},
		"ai:bob": {
			"count": 30,
			"wrong_agent_id": 0
		},
		"ai:charlie": {
			"count": 30,
			"wrong_agent_id": 0
		}
	},
	"reasons": [],
	"scenario": "4",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-4.log (console trace)
phase A: launching concurrent 30-row bursts from 3 agents
  ai:alice burst ok=30/30
  ai:bob burst ok=30/30
  ai:charlie burst ok=30/30
settle 20s for W=2 fanout convergence
phase B: querying node-4 aggregator for per-agent counts
  ai:alice: count=30 (expected 30) wrong_agent_id=0
  ai:bob: count=30 (expected 30) wrong_agent_id=0
  ai:charlie: count=30 (expected 30) wrong_agent_id=0

raw file

Scenario 5 — Consolidation + curation PASS

scenario-5.json (report)
{
	"agent_group": "hermes",
	"consolidate_http_code": 201,
	"consolidated_from_agents": [
		"ai:charlie",
		"ai:bob",
		"ai:alice"
	],
	"consolidated_id": "d26efbfe-5e79-43f5-b457-031b1018346f",
	"pass": true,
	"reasons": [],
	"scenario": "5",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-5.log (console trace)
phase A: each agent writes 3 related memories
  ai:alice on 142.93.70.84
  ai:bob on 159.89.187.191
  ai:charlie on 157.245.1.95
settle 8s for quorum fanout
phase B: collect source ids on node-1, then trigger consolidate
  source ids (count=9): ['facbc59f-735b-49bd-9a81-20eb8bbe22b8', 'cc2a80c0-9651-47d6-abd7-40f4192e6234', 'd97e3263-bf77-44e8-969c-2404a165a2ef', '84da5a32-929c-4cae-9667-8c61c434af1f', 'b26eb2d5-78c5-4212-8a72-b59917502da5']...
  consolidate HTTP 201, consolidated_id=d26efbfe-5e79-43f5-b457-031b1018346f
settle 10s for consolidation fanout
phase C: verifying consolidated_from_agents on node-4
  consolidated_from_agents=['ai:charlie', 'ai:bob', 'ai:alice']

raw file

Scenario 6 — Contradiction detection PASS

scenario-6.json (report)
{
	"agent_group": "hermes",
	"alice_id": "299bbcc5-8229-4719-b12b-e80e46dc5afa",
	"bob_id": "9f8f9abb-a67a-4ba2-a674-2980c1330e94",
	"charlie_sees_both_memories": true,
	"charlie_sees_contradicts_link": true,
	"detect_http_code": 200,
	"pass": true,
	"reasons": [],
	"scenario": "6",
	"skipped": false,
	"tls_mode": "tls",
	"topic": "sky-color-88920254"
}

raw file

scenario-6.log (console trace)
alice writes claim: "sky-color-88920254 is blue" on node-1
bob writes contradicting claim: "sky-color-88920254 is red" on node-2
  alice.id=299bbcc5-8229-4719-b12b-e80e46dc5afa bob.id=9f8f9abb-a67a-4ba2-a674-2980c1330e94
settle 10s for quorum fanout + contradiction indexing
charlie queries /api/v1/contradictions on node-3
  HTTP 200
  sees both memories: True; sees contradicts link: True

raw file

Scenario 9 — Mutation round-trip PASS

scenario-9.json (report)
{
	"agent_group": "hermes",
	"charlie_view": {
		"agent_id": "ai:alice",
		"content": "v2-eb6feea472d94cd2b3375e4cc37f6d77"
	},
	"m1_id": "f70d5e1e-303d-4c89-bf8e-d26dad5d85f7",
	"pass": true,
	"put_http_code": 200,
	"reasons": [],
	"scenario": "9",
	"skipped": false,
	"tls_mode": "tls",
	"v1_uuid": "v1-2ee0e2752d40470d9731a2b4c1afd1d0",
	"v2_uuid": "v2-eb6feea472d94cd2b3375e4cc37f6d77"
}

raw file

scenario-9.log (console trace)
alice writes M1 content=v1-2ee0e2752d40470d9731a2b4c1afd1d0 on node-1
  M1 id=f70d5e1e-303d-4c89-bf8e-d26dad5d85f7
settle 5s for initial replication
bob updates M1 content=v2-eb6feea472d94cd2b3375e4cc37f6d77 on node-2 via PUT
  PUT returned HTTP 200
settle 8s for update fanout
charlie reads M1 on node-3 and checks content + provenance
  charlie sees content="v2-eb6feea472d94cd2b3375e4cc37f6d77" agent_id="ai:alice"

raw file

Scenario 10 — Deletion propagation PASS

scenario-10.json (report)
{
	"agent_group": "hermes",
	"delete_http_code": 200,
	"m1_id": "e2d97f00-c60c-4f63-9bee-e226f691aa6b",
	"pass": true,
	"post_delete_hits": {
		"node-2": 0,
		"node-3": 0,
		"node-4": 0
	},
	"post_delete_still_visible_peers": 0,
	"pre_delete_visible_peers": 3,
	"reasons": [],
	"scenario": "10",
	"skipped": false,
	"tls_mode": "tls",
	"uuid": "d-98bdfcf1f73a421fab234a1b76503cd6"
}

raw file

scenario-10.log (console trace)
alice writes M1 content=d-98bdfcf1f73a421fab234a1b76503cd6 on node-1
  created memory id=e2d97f00-c60c-4f63-9bee-e226f691aa6b
settle 8s for pre-delete fanout
pre-delete: verifying M1 is visible on all peers
  pre-delete node-2 sees 1
  pre-delete node-3 sees 1
  pre-delete node-4 sees 1
alice deletes M1 on node-1
  DELETE returned HTTP 200
settle 15s for tombstone propagation
post-delete: verifying M1 is GONE from all peers
  post-delete node-2 sees 0 (expected 0)
  post-delete node-3 sees 0 (expected 0)
  post-delete node-4 sees 0 (expected 0)

raw file

Scenario 11 — Link integrity PASS

scenario-11.json (report)
{
	"agent_group": "hermes",
	"charlie_sees_link": 1,
	"link_http_code": 201,
	"m1_id": "5a57512a-71a7-42b0-9e49-9d5effae9629",
	"m2_id": "a8e02305-62a9-40d6-813f-636db98c2d09",
	"pass": true,
	"reasons": [],
	"relation": "related_to",
	"scenario": "11",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-11.log (console trace)
alice writes M1 on node-1
bob writes M2 on node-2
  M1=5a57512a-71a7-42b0-9e49-9d5effae9629 M2=a8e02305-62a9-40d6-813f-636db98c2d09
settle 5s for pre-link replication
alice links M1 -> M2 with relation=related_to
  link POST returned HTTP 201
settle 8s for link fanout
charlie queries links of M1 on node-3
  charlie sees M1->M2 link: 1 (expected >=1)

raw file

Scenario 12 — Agent registration FAIL

Reasons: node-3 did not see registered agent ai:dave-probe-ebe855c2 | node-4 did not see registered agent ai:dave-probe-ebe855c2

scenario-12.json (report)
{
	"agent_group": "hermes",
	"pass": false,
	"peers_see": {
		"node_2": 1,
		"node_3": 0,
		"node_4": 0
	},
	"reason": "node-3 did not see registered agent ai:dave-probe-ebe855c2; node-4 did not see registered agent ai:dave-probe-ebe855c2",
	"reasons": [
		"node-3 did not see registered agent ai:dave-probe-ebe855c2",
		"node-4 did not see registered agent ai:dave-probe-ebe855c2"
	],
	"register_http_code": 201,
	"registered_agent": "ai:dave-probe-ebe855c2",
	"scenario": "12",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-12.log (console trace)
alice registers new agent ai:dave-probe-ebe855c2 on node-1
  POST /api/v1/agents returned HTTP 201
settle 10s for agent-list fanout
  node-2 sees ai:dave-probe-ebe855c2: 1 (expected >=1)
  node-3 sees ai:dave-probe-ebe855c2: 0 (expected >=1)
  node-4 sees ai:dave-probe-ebe855c2: 0 (expected >=1)

raw file

Scenario 13 — Concurrent write contention PASS

scenario-13.json (report)
{
	"agent_group": "hermes",
	"m1_id": "fd716e62-da51-4155-9e25-9457e1af193b",
	"pass": true,
	"peer_view": {
		"node_1": "va-250dae4dfa934a47818a14c79c24b46a",
		"node_2": "va-250dae4dfa934a47818a14c79c24b46a",
		"node_3": "va-250dae4dfa934a47818a14c79c24b46a",
		"node_4": "va-250dae4dfa934a47818a14c79c24b46a"
	},
	"reasons": [],
	"scenario": "13",
	"skipped": false,
	"submitted": {
		"v0": "v0-91c458a08d1c49278b5636393729b354",
		"vA_alice": "va-250dae4dfa934a47818a14c79c24b46a",
		"vB_bob": "vb-e036da11271840c39984b9feafc98510"
	},
	"tls_mode": "tls"
}

raw file

scenario-13.log (console trace)
alice writes M1 content=v0-91c458a08d1c49278b5636393729b354 on node-1
  M1 id=fd716e62-da51-4155-9e25-9457e1af193b
settle 5s for initial replication
alice + bob issue concurrent PUTs (vA=va-250dae4dfa934a47818a14c79c24b46a from alice, vB=vb-e036da11271840c39984b9feafc98510 from bob)
  concurrent PUT results: [(0, {'body': {'access_count': 0, 'confidence': 1.0, 'content': 'va-250dae4dfa934a47818a14c79c24b46a', 'created_at': '2026-04-22T22:42:34.638818332+00:00', 'expires_at': '2026-04-29T22:42:34.638818332+00:00', 'id': 'fd716e62-da51-4155-9e25-9457e1af193b', 'metadata': {'agent_id': 'ai:alice', 'scenario': '13'}, 'namespace': 'scenario13-contention', 'priority': 5, 'source': 'api', 'tags': [], 'tier': 'mid', 'title': 'm1', 'updated_at': '2026-04-22T22:42:40.256829892+00:00'}, 'http_code': 200}), (0, {'body': {'access_count': 0, 'confidence': 1.0, 'content': 'vb-e036da11271840c39984b9feafc98510', 'created_at': '2026-04-22T22:42:34.638818332+00:00', 'expires_at': '2026-04-29T22:42:34.638818332+00:00', 'id': 'fd716e62-da51-4155-9e25-9457e1af193b', 'metadata': {'agent_id': 'ai:alice', 'scenario': '13'}, 'namespace': 'scenario13-contention', 'priority': 5, 'source': 'api', 'tags': [], 'tier': 'mid', 'title': 'm1', 'updated_at': '2026-04-22T22:42:40.142705494+00:00'}, 'http_code': 200})]
settle 10s for quorum convergence
  node-1 sees content=va-250dae4dfa934a47818a14c79c24b46a
  node-2 sees content=va-250dae4dfa934a47818a14c79c24b46a
  node-3 sees content=va-250dae4dfa934a47818a14c79c24b46a
  node-4 sees content=va-250dae4dfa934a47818a14c79c24b46a

raw file
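Both concurrent PUTs returned HTTP 200 and every node converged on vA, whose server-assigned `updated_at` (…40.256…) is later than vB's (…40.142…). That is consistent with last-writer-wins on `updated_at`, though the server's actual tie-break rule is not verified by this run. A minimal sketch of that resolution, using field names taken from the response bodies above:

```python
# Hedged sketch: last-writer-wins keyed on `updated_at` (field name from
# the PUT response bodies above; the server's real tie-break is unconfirmed).
# Equal-precision RFC 3339 timestamps compare correctly as plain strings.
def resolve_lww(versions):
    return max(versions, key=lambda v: v["updated_at"])

a = {"content": "vA", "updated_at": "2026-04-22T22:42:40.256829892+00:00"}
b = {"content": "vB", "updated_at": "2026-04-22T22:42:40.142705494+00:00"}
winner = resolve_lww([a, b])  # vA: later updated_at
```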

Scenario 14 — Partition tolerance PASS

scenario-14.json (report)
{
	"agent_group": "hermes",
	"expected_post_recovery": 20,
	"node3_saw": 20,
	"partition_target": "node-3",
	"pass": true,
	"reasons": [],
	"scenario": "14",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-14.log (console trace)
suspending ai-memory on node-3 (SIGSTOP)
  !! ssh timeout (15s): root@157.245.1.95 pgrep -f 'ai-memory serve' | xargs -r kill -STOP
settle 2s for process-suspend observe
writing 10 memories each from alice + bob during node-3 outage
resuming ai-memory on node-3 (SIGCONT)
settle 20s for post-partition catchup
checking node-3 caught up
  node-3 sees 20 memories in scenario14-partition (expected 20)

raw file
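The "partition" here freezes the serve process rather than cutting the network; despite the logged ssh timeout the SIGSTOP evidently landed, since node-3 caught up after SIGCONT. A hedged local sketch of the same suspend/resume pattern (names local to this example, not harness APIs), demonstrated on a throwaway `sleep` child on POSIX:

```python
import os
import signal
import subprocess

# Hedged sketch of the suspend/resume partition trick: freeze a process
# with SIGSTOP, let peers keep writing, then SIGCONT it and let
# anti-entropy catch the node up. Demonstrated on a local child process.
proc = subprocess.Popen(["sleep", "30"])
os.kill(proc.pid, signal.SIGSTOP)   # "partition": process frozen, sockets idle
# ... writes land on other nodes while this one is out ...
os.kill(proc.pid, signal.SIGCONT)   # "heal": process resumes and catches up
proc.terminate()                    # cleanup for this demo only
proc.wait()
```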

Scenario 15 — Read-your-writes PASS

scenario-15.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"reasons": [],
	"scenario": "15",
	"skipped": false,
	"tls_mode": "tls",
	"uuid": "ryw-59443ca0c60c4cf892be4ec33771ba2e",
	"writer_sees_own_write": 1
}

raw file

scenario-15.log (console trace)
alice writes + immediately reads M1 on node-1 (uuid=ryw-59443ca0c60c4cf892be4ec33771ba2e)
  alice sees 1 (expected 1) immediately after write

raw file

Scenario 16 — Tier promotion PASS

scenario-16.json (report)
{
	"agent_group": "hermes",
	"bob_sees_tier": "long",
	"m1_id": "6aca95ba-4076-41bf-b164-161f7757a051",
	"pass": true,
	"promote_http_code": 200,
	"reasons": [],
	"scenario": "16",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-16.log (console trace)
alice writes M1 tier=short on node-1
  M1 id=6aca95ba-4076-41bf-b164-161f7757a051
settle 5s for pre-promote replication
alice promotes M1 to tier=long
  promote returned HTTP 200
settle 8s for promotion fanout
  bob sees tier=long (expected long)

raw file

Scenario 17 — Stats consistency PASS

scenario-17.json (report)
{
	"agent_group": "hermes",
	"expected_count": 15,
	"pass": true,
	"per_peer": {
		"node_1": 15,
		"node_2": 15,
		"node_3": 15,
		"node_4": 15
	},
	"reasons": [],
	"scenario": "17",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-17.log (console trace)
phase A: each of 3 agents writes 5 memories to scenario17-stats
  ai:alice on 142.93.70.84
  ai:bob on 159.89.187.191
  ai:charlie on 157.245.1.95
settle 15s for W=2 fanout
phase B: querying count on every peer
  node-1 count=15 (expected 15)
  node-2 count=15 (expected 15)
  node-3 count=15 (expected 15)
  node-4 count=15 (expected 15)

raw file

Scenario 18 — Semantic query expansion FAIL

Reasons: semantic query did not surface alice's memory | semantic query did not surface bob's memory

scenario-18.json (report)
{
	"agent_group": "hermes",
	"pass": false,
	"query": "morning outdoor exercise routine",
	"reason": "semantic query did not surface alice's memory; semantic query did not surface bob's memory",
	"reasons": [
		"semantic query did not surface alice's memory",
		"semantic query did not surface bob's memory"
	],
	"scenario": "18",
	"skipped": false,
	"tls_mode": "tls",
	"writers": [
		{
			"agent": "ai:alice",
			"marker": "alice-sunrise-fda9b62e",
			"seen_by_charlie": 0
		},
		{
			"agent": "ai:bob",
			"marker": "bob-daybreak-670540ff",
			"seen_by_charlie": 0
		}
	]
}

raw file

scenario-18.log (console trace)
alice writes A on node-1
bob writes B on node-2
settle 15s for fanout + index rebuild
charlie queries on node-3 with semantically-related prompt
  charlie sees alice's memory: 0 (expected >=1)
  charlie sees bob's memory: 0 (expected >=1)

raw file

Scenario 20 — mTLS happy-path UNKNOWN

scenario-20.json (report)
{
	"agent_group": "hermes",
	"pass": null,
	"reason": "scenario 20 only runs under tls_mode=mtls (actual: tls)",
	"scenario": "20",
	"skipped": true,
	"tls_mode": "tls"
}

raw file

scenario-20.log (console trace)
skipped — scenario 20 only runs under tls_mode=mtls (actual: tls)

raw file

Scenario 22 — Identity spoofing resistance PASS

scenario-22.json (report)
{
	"agent_group": "hermes",
	"pass": true,
	"reasons": [],
	"scenario": "22",
	"skipped": false,
	"tests": {
		"body_vs_header_conflict": {
			"acceptable": [
				"ai:body-wins",
				"ai:attacker"
			],
			"stored_agent_id": "ai:attacker"
		},
		"header_only": {
			"expected": "ai:alice",
			"stored_agent_id": "ai:alice"
		}
	},
	"tls_mode": "tls"
}

raw file

scenario-22.log (console trace)
test 1: header-only X-Agent-Id=ai:alice
settle 2s for read-settle
  stored metadata.agent_id for header-only write: ai:alice (expected ai:alice)
test 2: body.metadata.agent_id=ai:body-wins vs X-Agent-Id=ai:attacker
settle 2s for read-settle
  stored metadata.agent_id for body+header conflict: ai:attacker

raw file

Scenario 23 — Malicious content fuzz UNKNOWN

scenario-23.json (report)

(empty — the harness crashed before writing a report; see the console trace below)

raw file

scenario-23.log (console trace)
payload sql: 61 bytes
payload html: 66 bytes
payload oversize: 1048576 bytes
Traceback (most recent call last):
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/scenarios/23_malicious_content_fuzz.py", line 106, in <module>
    main()
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/scenarios/23_malicious_content_fuzz.py", line 49, in main
    rc, write_doc = h.write_memory(
                    ^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 202, in write_memory
    return self.http_on(node_ip, "POST", "/api/v1/memories",
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 157, in http_on
    result = self.ssh_exec(node_ip, remote_cmd, timeout=timeout)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 103, in ssh_exec
    return self._run(cmd, timeout=timeout, stdin=stdin)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 87, in _run
    return subprocess.run(
           ^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 548, in run
    with Popen(*popenargs, **kwargs) as process:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/lib/python3.12/subprocess.py", line 1955, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
OSError: [Errno 7] Argument list too long: 'ssh'

raw file
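`OSError: [Errno 7] Argument list too long` means the 1,048,576-byte oversize payload was inlined into the ssh command line, which exceeds the kernel's ARG_MAX limit at `exec` time. A hedged sketch of one fix (function names hypothetical, not the harness's API): stream the body over ssh's stdin into curl's `--data-binary @-` so nothing payload-sized ever reaches argv:

```python
import subprocess

# Hedged sketch (hypothetical helpers): large POST bodies travel over the
# ssh pipe to curl's `--data-binary @-`, never through the command line,
# sidestepping the kernel ARG_MAX limit that produced Errno 7 above.
def build_cmd(host: str, url: str) -> list:
    return ["ssh", host, f"curl -sS -X POST --data-binary @- '{url}'"]

def post_large_payload(host: str, url: str, payload: bytes):
    # payload is passed via stdin, not argv
    return subprocess.run(build_cmd(host, url), input=payload,
                          capture_output=True)
```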

Scenario 24 — Byzantine peer PASS

scenario-24.json (report)
{
	"agent_group": "hermes",
	"byzantine_marker": "bz-a9a3cf40365649fbb0c6024df9a0090a",
	"pass": true,
	"reasons": [],
	"scenario": "24",
	"skipped": false,
	"stored_metadata_agent_id": "REJECTED_BY_SERVER",
	"sync_push_http_code": "422",
	"tls_mode": "tls"
}

raw file

scenario-24.log (console trace)
node-2 sends sync_push to node-3 claiming sender_agent_id=ai:alice
  sync_push returned HTTP 422
settle 5s for server-side sync apply
  node-3 stored metadata.agent_id=ABSENT (declared: ai:alice)
  sync_push rejected HTTP 422 — stricter-than-spec, acceptable

raw file

Scenario 25 — Clock skew tolerance PASS

scenario-25.json (report)
{
	"agent_group": "hermes",
	"clock_offset_seconds": 300,
	"marker": "ck-d78f4e7e0b864929ad91cf715c3210c6",
	"pass": true,
	"reasons": [],
	"scenario": "25",
	"seen_on": {
		"node_1": 1,
		"node_3": 1
	},
	"skipped": false,
	"target_node": "node-3",
	"tls_mode": "tls"
}

raw file

scenario-25.log (console trace)
shifting node-3 clock +300s (NTP disabled for the duration)
  node-3 now reports: Wed Apr 22 22:49:55 UTC 2026
alice writes on node-1 (normal clock); waiting for quorum fanout to skewed node-3
settle 15s for skewed-peer convergence
  node-3 (+300s clock) sees marker: 1 (expected >=1)
  node-1 sees marker: 1 (expected >=1)
reverting node-3 clock

raw file

Scenario 28 — memory_search keyword FAIL

Reasons: node-2 did not find the unique token via /search | node-3 did not find the unique token via /search

scenario-28.json (report)
{
	"agent_group": "hermes",
	"pass": false,
	"peer_hits": {
		"node_2": 0,
		"node_3": 0
	},
	"reason": "node-2 did not find the unique token via /search; node-3 did not find the unique token via /search",
	"reasons": [
		"node-2 did not find the unique token via /search",
		"node-3 did not find the unique token via /search"
	],
	"scenario": "28",
	"skipped": false,
	"tls_mode": "tls",
	"token": "kwsearch-977f779667"
}

raw file

scenario-28.log (console trace)
alice writes a row containing unique token=kwsearch-977f779667
settle 8s for search index populate + fanout
bob + charlie call /api/v1/search with the exact token
  node-2 keyword search returned 0 hits
  node-3 keyword search returned 0 hits

raw file

Scenario 29 — memory_archive lifecycle FAIL

Reasons: archive POST returned HTTP 405 | bob did not see M1 in /api/v1/archive | restore returned HTTP 404

scenario-29.json (report)
{
	"agent_group": "hermes",
	"archive_http_code": 405,
	"bob_sees_archived": false,
	"m1_id": "620d3c1e-cc22-4f22-9c3e-314796391443",
	"node4_active_rows": 1,
	"pass": false,
	"reason": "archive POST returned HTTP 405; bob did not see M1 in /api/v1/archive; restore returned HTTP 404",
	"reasons": [
		"archive POST returned HTTP 405",
		"bob did not see M1 in /api/v1/archive",
		"restore returned HTTP 404"
	],
	"restore_http_code": 404,
	"scenario": "29",
	"skipped": false,
	"stats_shape_ok": true,
	"tls_mode": "tls"
}

raw file

scenario-29.log (console trace)
alice writes M1 on node-1
  M1 id=620d3c1e-cc22-4f22-9c3e-314796391443
settle 5s for pre-archive replication
alice archives M1 via DELETE /api/v1/memories/{id} (soft-delete → archive)
  archive returned HTTP 405
settle 5s for archive propagation
bob queries /api/v1/archive on node-2
  bob sees M1 in archive: False
charlie restores M1 via /api/v1/archive/{id}/restore on node-3
  restore returned HTTP 404
settle 5s for restore propagation
node-4 aggregator: M1 must be active again
  node-4 active rows matching marker: 1
fetch /api/v1/archive/stats on node-4

raw file

Scenario 30 — memory_capabilities handshake FAIL

Reasons: no peer returned a capabilities response — endpoint may not be exposed

scenario-30.json (report)
{
	"agent_group": "hermes",
	"pass": false,
	"peer_views": {
		"node_1": null,
		"node_2": null,
		"node_3": null,
		"node_4": null
	},
	"reason": "no peer returned a capabilities response — endpoint may not be exposed",
	"reasons": [
		"no peer returned a capabilities response — endpoint may not be exposed"
	],
	"scenario": "30",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-30.log (console trace)
  node-1 capabilities: []
  node-2 capabilities: []
  node-3 capabilities: []
  node-4 capabilities: []

raw file

Scenario 31 — memory_gc quiescence PASS

scenario-31.json (report)
{
	"agent_group": "hermes",
	"expected_live": 2,
	"forget_http_code": 400,
	"gc_http_code": 200,
	"live_markers_per_peer": {
		"node_1": 2,
		"node_2": 2,
		"node_3": 2,
		"node_4": 2
	},
	"pass": true,
	"reasons": [],
	"scenario": "31",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-31.log (console trace)
alice writes 4 memories
settle 6s for pre-gc replication
alice forgets 2 via /api/v1/forget
  forget returned HTTP 400
settle 5s for forget propagation
bob triggers /api/v1/gc on node-2
  gc returned HTTP 200
settle 8s for post-gc settle
verify remaining 2 markers are still readable on every peer
  node-1 sees 2/2 live markers
  node-2 sees 2/2 live markers
  node-3 sees 2/2 live markers
  node-4 sees 2/2 live markers

raw file

Scenario 32 — memory_inbox + notify FAIL

Reasons: notify returned HTTP 404 | bob's inbox did not deliver alice's notify

scenario-32.json (report)
{
	"agent_group": "hermes",
	"bob_inbox_count": 0,
	"bob_sees_marker": false,
	"charlie_inbox_count": 0,
	"charlie_sees_marker": false,
	"marker": "inb-8cb3b93402504a0ea23d0cdf864c672a",
	"notify_http_code": 404,
	"pass": false,
	"reason": "notify returned HTTP 404; bob's inbox did not deliver alice's notify",
	"reasons": [
		"notify returned HTTP 404",
		"bob's inbox did not deliver alice's notify"
	],
	"scenario": "32",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-32.log (console trace)
alice calls /api/v1/notify → target=ai:bob
  notify returned HTTP 404
settle 6s for notification fanout
bob queries his inbox on node-2
  bob inbox has 0 messages; sees marker: False
charlie queries his inbox on node-3 (must NOT see it)
  charlie inbox has 0 messages; sees marker: False

raw file

Scenario 33 — memory_subscribe pub/sub FAIL

Reasons: subscribe returned HTTP 404 | bob's subscription list did not include the subscribed namespace | unsubscribe returned HTTP 404

scenario-33.json (report)
{
	"agent_group": "hermes",
	"m1_delivered": 1,
	"namespace": "scenario33-pubsub-36499e",
	"ns_in_subs_after": false,
	"ns_in_subs_before": false,
	"pass": false,
	"reason": "subscribe returned HTTP 404; bob's subscription list did not include the subscribed namespace; unsubscribe returned HTTP 404",
	"reasons": [
		"subscribe returned HTTP 404",
		"bob's subscription list did not include the subscribed namespace",
		"unsubscribe returned HTTP 404"
	],
	"scenario": "33",
	"skipped": false,
	"subscribe_http_code": 404,
	"subscriptions_after_count": 0,
	"subscriptions_before_count": 0,
	"tls_mode": "tls",
	"unsubscribe_http_code": 404
}

raw file

scenario-33.log (console trace)
bob subscribes to namespace scenario33-pubsub-36499e on node-2
  subscribe returned HTTP 404
settle 2s for subscription settle
  bob subscriptions: 0 entries; contains ns: False
alice writes M1 into the subscribed namespace
settle 6s for write fanout to subscribers
  bob sees M1 in subscribed namespace: 1
bob unsubscribes from scenario33-pubsub-36499e
  unsubscribe returned HTTP 404
settle 2s for unsubscribe settle
  bob subscriptions after unsubscribe: ns still present = False
alice writes M2 post-unsubscribe (may still replicate via federation but subscription list excludes ns)
settle 5s for post-unsubscribe settle

raw file

Scenario 34 — memory_pending governance FAIL

Reasons: set-standard returned HTTP 405 | approve returned HTTP 403 | reject returned HTTP 404 | charlie saw rejected row — reject didn't prevent publication

scenario-34.json (report)
{
	"agent_group": "hermes",
	"approve_http_code": 403,
	"charlie_sees": {
		"approved": 1,
		"rejected": 1
	},
	"namespace": "scenario34-pending-ff28e0",
	"pass": false,
	"pending_queue_count": 0,
	"reason": "set-standard returned HTTP 405; approve returned HTTP 403; reject returned HTTP 404; charlie saw rejected row — reject didn't prevent publication",
	"reasons": [
		"set-standard returned HTTP 405",
		"approve returned HTTP 403",
		"reject returned HTTP 404",
		"charlie saw rejected row — reject didn't prevent publication"
	],
	"reject_http_code": 404,
	"scenario": "34",
	"set_standard_http_code": 405,
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-34.log (console trace)
alice sets namespace standard on scenario34-pending-ff28e0: write=approve, approver=ai:bob
  set-standard returned HTTP 405
settle 2s for standard settle
alice writes two memories into the governed namespace (should land in pending)
  p1=c830d115-3750-463d-abcb-d50a4d126000 p2=1eb5bbd3-f62f-43f3-a876-9581d40abbed
settle 4s for pending queue settle
bob lists pending on node-2
  pending queue has 0 entries
bob approves p1, rejects p2
  approve HTTP 403; reject HTTP 404
settle 5s for decision fanout
charlie reads the namespace — expects ONLY approved marker
  charlie sees approved=1 rejected=1

raw file

Scenario 35 — memory_namespace standards FAIL

Reasons: set-parent returned HTTP 405 | set-child returned HTTP 405 | clear-standard returned HTTP 405 | parent rule not layered into child's standard view | child rule missing from standard view

scenario-35.json (report)
{
	"agent_group": "hermes",
	"child_ns": "scenario35-parent-9943b9/child",
	"clear_http_code": 405,
	"get_standard_http_code": 200,
	"parent_ns": "scenario35-parent-9943b9",
	"pass": false,
	"post_clear_has_child_rule": false,
	"reason": "set-parent returned HTTP 405; set-child returned HTTP 405; clear-standard returned HTTP 405; parent rule not layered into child's standard view; child rule missing from standard view",
	"reasons": [
		"set-parent returned HTTP 405",
		"set-child returned HTTP 405",
		"clear-standard returned HTTP 405",
		"parent rule not layered into child's standard view",
		"child rule missing from standard view"
	],
	"scenario": "35",
	"sees_child_rule": false,
	"sees_parent_rule": false,
	"set_child_http_code": 405,
	"set_parent_http_code": 405,
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-35.log (console trace)
alice writes parent-standard-memory on node-1
alice sets namespace standard on scenario35-parent-9943b9
  set-parent returned HTTP 405
alice writes child-standard-memory on node-1
alice sets namespace standard on scenario35-parent-9943b9/child with parent=scenario35-parent-9943b9
  set-child returned HTTP 405
settle 4s for standard fanout
bob gets standard for scenario35-parent-9943b9/child on node-2 (expects layered parent+child)
  get-standard returned HTTP 200
  parent-rule visible=False; child-rule visible=False
alice clears standard on scenario35-parent-9943b9/child
  clear returned HTTP 405
settle 3s for clear settle

raw file

Scenario 36 — memory_session_start FAIL

Reasons: session_start returned HTTP 404

scenario-36.json (report)
{
	"agent_group": "hermes",
	"pass": false,
	"reason": "session_start returned HTTP 404",
	"reasons": [
		"session_start returned HTTP 404"
	],
	"scenario": "36",
	"skipped": false,
	"start_http_code": 404,
	"tls_mode": "tls"
}

raw file

scenario-36.log (console trace)
alice starts a session on node-1
  session_start returned HTTP 404, session_id=

raw file

Scenario 37 — memory_get_links bidirectional PASS

scenario-37.json (report)
{
	"agent_group": "hermes",
	"forward_has_target": true,
	"m1": "cf9f3855-0686-4dcf-bd33-a166477380db",
	"m2": "61d1d9d0-4f50-418a-bbfd-373b60240b3d",
	"pass": true,
	"reasons": [],
	"reverse_has_source": true,
	"scenario": "37",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-37.log (console trace)
alice writes M1 + M2 + links M1→M2
  M1=cf9f3855-0686-4dcf-bd33-a166477380db M2=61d1d9d0-4f50-418a-bbfd-373b60240b3d
settle 6s for link fanout
charlie queries /api/v1/links/M1 (forward)
charlie queries /api/v1/links/M2 (reverse)

raw file

Scenario 38 — /export + /import PASS

scenario-38.json (report)
{
	"agent_group": "hermes",
	"dst_ns": "scenario38-dst-a1c1ad",
	"expected_rows": 5,
	"export_http_code": 200,
	"import_http_code": 200,
	"markers_preserved": 5,
	"pass": true,
	"reasons": [],
	"rows_exported": 5,
	"rows_in_destination": 5,
	"scenario": "38",
	"skipped": false,
	"src_ns": "scenario38-src-a1c1ad",
	"tls_mode": "tls"
}

raw file

scenario-38.log (console trace)
alice writes 5 rows into scenario38-src-a1c1ad
settle 4s for pre-export replication
alice exports on node-1 (endpoint has no namespace filter; filter client-side)
  export returned HTTP 200, total_rows=197
  rewrote 5 memories from scenario38-src-a1c1ad -> scenario38-dst-a1c1ad
bob imports the payload into scenario38-dst-a1c1ad on node-2
  import returned HTTP 200
settle 6s for import + fanout
verify row counts match on destination
  scenario38-dst-a1c1ad has 5 rows (expected 5)
  markers preserved in destination: 5/5

raw file
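Because the export endpoint has no namespace filter, the harness filters and retargets the payload client-side before re-importing, as the trace notes ("rewrote 5 memories"). A minimal sketch of that step, with the row shape assumed from this page's reports:

```python
# Hedged sketch of the client-side export rewrite described in the trace
# (row shape assumed): keep only rows from the source namespace and
# retarget them at the destination namespace before POSTing to /import.
def rewrite_namespace(exported_rows, src_ns, dst_ns):
    return [{**row, "namespace": dst_ns}
            for row in exported_rows
            if row.get("namespace") == src_ns]
```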

Scenario 39 — /sync/since delta FAIL

Reasons: delta returned 0/6 expected markers — delta-sync incomplete

scenario-39.json (report)
{
	"agent_group": "hermes",
	"checkpoint_ms": 1776898055239,
	"expected_markers": 6,
	"markers_present": 0,
	"namespace": "scenario39-delta-6601cb",
	"pass": false,
	"reason": "delta returned 0/6 expected markers — delta-sync incomplete",
	"reasons": [
		"delta returned 0/6 expected markers — delta-sync incomplete"
	],
	"rows_returned": 0,
	"scenario": "39",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-39.log (console trace)
checkpoint = 1776898055239
suspending ai-memory on node-3
  !! ssh timeout (15s): root@157.245.1.95 pgrep -f 'ai-memory serve' | xargs -r kill -STOP
alice + bob write 6 rows while node-3 is out
resuming ai-memory on node-3
settle 4s for process resume
node-3 asks node-1 /api/v1/sync/since?after=1776898055239
  !! ssh timeout (30s): root@157.245.1.95 curl -sS --cacert /etc/ai-memory-a2a/tls/ca.pem 'https://142.93.70.84:9077/api/v1/sync/since?after=1776898055239&namespace=scenario39-delta-6601cb'
  /sync/since returned 0 rows; 0/6 match our markers

raw file

Scenario 40 — /memories/bulk FAIL

Reasons: bulk returned HTTP 422 | node-2 saw 0/500 bulk rows after fanout | node-3 saw 0/500 bulk rows after fanout | node-4 saw 0/500 bulk rows after fanout

scenario-40.json (report)
{
	"agent_group": "hermes",
	"bulk_http_code": "422",
	"bulk_size": 500,
	"namespace": "scenario40-bulk-f867e3",
	"pass": false,
	"per_peer_count": {
		"node_2": 0,
		"node_3": 0,
		"node_4": 0
	},
	"reason": "bulk returned HTTP 422; node-2 saw 0/500 bulk rows after fanout; node-3 saw 0/500 bulk rows after fanout; node-4 saw 0/500 bulk rows after fanout",
	"reasons": [
		"bulk returned HTTP 422",
		"node-2 saw 0/500 bulk rows after fanout",
		"node-3 saw 0/500 bulk rows after fanout",
		"node-4 saw 0/500 bulk rows after fanout"
	],
	"scenario": "40",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-40.log (console trace)
constructing 500-row bulk payload
staging bulk payload on node-1 /tmp, then POST /api/v1/memories/bulk
  bulk POST returned HTTP 422
settle 20s for bulk fanout across 3 peers + aggregator
  node-2 count=0 (expected 500)
  node-3 count=0 (expected 500)
  node-4 count=0 (expected 500)

raw file

Scenario 41 — /metrics Prometheus PASS

scenario-41.json (report)
{
	"activity_namespace": "scenario41-activity-936f30",
	"agent_group": "hermes",
	"pass": true,
	"per_peer": {
		"node_1": {
			"counters_t0": 8,
			"counters_t1": 8,
			"regressed_keys": 0
		},
		"node_2": {
			"counters_t0": 8,
			"counters_t1": 8,
			"regressed_keys": 0
		},
		"node_3": {
			"counters_t0": 7,
			"counters_t1": 7,
			"regressed_keys": 0
		}
	},
	"reasons": [],
	"scenario": "41",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-41.log (console trace)
scrape T0
  node-1 T0 parsed 8 memory counters
  node-2 T0 parsed 8 memory counters
  node-3 T0 parsed 7 memory counters
settle 5s for counter update
scrape T1
  node-1 T1 parsed 8 memory counters
  node-2 T1 parsed 8 memory counters
  node-3 T1 parsed 7 memory counters

raw file
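The pass criterion here is counter monotonicity between the two scrapes (`regressed_keys` = 0 on every node). A hedged sketch of that check, assuming each scrape is already parsed into a name-to-value dict; note a process restart between scrapes would legitimately reset counters and needs separate handling:

```python
# Hedged sketch of the monotonicity check behind `regressed_keys`:
# Prometheus counters scraped at T0 and T1 may only grow, so any key
# present in both scrapes whose T1 value is lower than T0 is a regression.
def regressed_keys(t0: dict, t1: dict) -> list:
    return [k for k in sorted(t0) if k in t1 and t1[k] < t0[k]]
```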

Scenario 42 — /namespaces enumeration PASS

scenario-42.json (report)
{
	"agent_group": "hermes",
	"namespaces": [
		"scenario42-33ec94-0",
		"scenario42-33ec94-1",
		"scenario42-33ec94-2"
	],
	"pass": true,
	"per_peer": {
		"node_1": {
			"scenario42-33ec94-0": 2,
			"scenario42-33ec94-1": 2,
			"scenario42-33ec94-2": 2
		},
		"node_2": {
			"scenario42-33ec94-0": 2,
			"scenario42-33ec94-1": 2,
			"scenario42-33ec94-2": 2
		},
		"node_3": {
			"scenario42-33ec94-0": 2,
			"scenario42-33ec94-1": 2,
			"scenario42-33ec94-2": 2
		},
		"node_4": {
			"scenario42-33ec94-0": 2,
			"scenario42-33ec94-1": 2,
			"scenario42-33ec94-2": 2
		}
	},
	"reasons": [],
	"scenario": "42",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-42.log (console trace)
alice writes into 3 distinct namespaces: ['scenario42-33ec94-0', 'scenario42-33ec94-1', 'scenario42-33ec94-2']
settle 10s for namespace index fanout
  node-1 sees 3/3 target namespaces, counts: {'scenario42-33ec94-0': 2, 'scenario42-33ec94-1': 2, 'scenario42-33ec94-2': 2}
  node-2 sees 3/3 target namespaces, counts: {'scenario42-33ec94-0': 2, 'scenario42-33ec94-1': 2, 'scenario42-33ec94-2': 2}
  node-3 sees 3/3 target namespaces, counts: {'scenario42-33ec94-0': 2, 'scenario42-33ec94-1': 2, 'scenario42-33ec94-2': 2}
  node-4 sees 3/3 target namespaces, counts: {'scenario42-33ec94-0': 2, 'scenario42-33ec94-1': 2, 'scenario42-33ec94-2': 2}

raw file

All artifacts