
Campaign a2a-hermes-v0.6.3.1-r3 FAIL

Agent group: hermes (homogeneous)
ai-memory ref: v0.6.3.1
Completed at: 2026-05-02T17:28:41Z
Overall pass: false
Skipped reports: 0

Infrastructure

Provider: digitalocean
Region: nyc3
Droplet size: s-2vcpu-4gb
Topology: 4-node federation mesh (W=2/N=4)
Scenarios started:
Scenarios ended:
Dispatched by: alphaonedev
Harness SHA: 5ee79f673ff4
Workflow run: https://github.com/alphaonedev/ai-memory-a2a-v0.6.3.1/actions/runs/25257431036

Node roster

| # | Role        | Agent ID   | Public IP       | Private IP |
|---|-------------|------------|-----------------|------------|
| 1 | agent       | ai:alice   | 45.55.136.45    | 10.11.2.2  |
| 2 | agent       | ai:bob     | 174.138.51.251  | 10.11.2.5  |
| 3 | agent       | ai:charlie | 167.172.241.138 | 10.11.2.3  |
| 4 | memory-only |            | 45.55.82.188    | 10.11.2.4  |

Baseline attestation BASELINE VIOLATION

Per the authoritative baseline spec, every agent node must emit a self-attestation before any scenario is permitted to run. This run's attestation:

Spec version: 1.4.0 — see authoritative baseline.
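A minimal sketch of how the baseline gate could be evaluated. The probe names come from the functional_probes fields in a2a-baseline.json below; the aggregation logic itself is an assumption about the harness, not taken from its source:

```python
# Sketch of the baseline gate: every gating probe must be true on a node,
# and the aggregator ANDs across all agent nodes. F2b (agent_mcp_canary_f2b)
# is LLM-dependent and deliberately excluded from gating.
GATING_PROBES = [
    "substrate_http_canary_f2a",  # deterministic HTTP substrate canary
    "mesh_connectivity_f4",       # outbound mesh edges to every peer
    "ai_memory_mcp_stdio_f5",     # stdio MCP initialize + tools/list
    "tls_handshake_f6",           # TLS 1.3 handshake against local serve
    "mtls_enforcement_f7",        # anonymous client rejected, whitelisted accepted
    "embedder_loaded_f8",         # MiniLM embedder initialised at serve startup
]

def node_gates_pass(node: dict) -> bool:
    probes = node["functional_probes"]
    return all(probes.get(p) is True for p in GATING_PROBES)

def aggregate_baseline_pass(per_node: list) -> bool:
    # Aggregator ANDs across N nodes.
    return all(node_gates_pass(n) for n in per_node)
```
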

The original attestation table also tracked per-node check columns (Authentic, MCP ai-memory, xAI cfg, xAI default, Agent ID, Federation, UFW off, iptables, dead-man, F1 xAI, F2a substrate, F2b agent (non-gating)); their values survive in a2a-baseline.json: every flag reported true on all three nodes except llm_is_default_provider ("xAI default") and the non-gating F2b agent canary, both false on every node.

| Node   | Agent      | Framework                                | Config SHA   | Pass |
|--------|------------|------------------------------------------|--------------|------|
| node-1 | ai:alice   | hermes, Hermes Agent v0.12.0 (2026.4.30) | fa358f9a9059 | FAIL |
| node-2 | ai:bob     | hermes, Hermes Agent v0.12.0 (2026.4.30) | 21635cf63640 | FAIL |
| node-3 | ai:charlie | hermes, Hermes Agent v0.12.0 (2026.4.30) | ce52d772ef5a | FAIL |
a2a-baseline.json
{
	"baseline_pass": false,
	"per_node": [
		{
			"spec_version": "1.4.0",
			"agent_type": "hermes",
			"agent_id": "ai:alice",
			"node_index": "1",
			"framework_version": "Hermes Agent v0.12.0 (2026.4.30)",
			"ai_memory_version": "0.6.3.1",
			"peer_urls": "https://10.11.2.5:9077,https://10.11.2.3:9077,https://10.11.2.4:9077",
			"config_file_sha256": "fa358f9a90597243fb96224babd541399bd7b1e972f364605308ab1e2d9dd2c7",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": false,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "212c2cdf-7348-4804-a8c7-f6fbfbc9625e",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "7bca9e96-1ac8-4d58-a26b-3d6a2b262b18",
				"agent_canary_response_head": " session_id: 20260502_172824_0fc7e0 I'm sorry, but I don't have access to an \"ai-memory MCP mcp_memory_memory_store\" tool or any MCP integration in my available tools. I can't perform that action. If you meant something else or have a different request, let me know! ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.11.2.5:9077:OK,10.11.2.3:9077:OK,10.11.2.4:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_check_duplicate,memory_consolidate,memory_delete,memory_detect_contradiction,memory_entity_get_by_alias,memory_entity_register,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_get_taxonomy,memory_inbox,memory_kg_invalidate,memory_kg_query,memory_kg_timeline,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "mtls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"embedder_loaded_f8": true,
				"embedder_loaded_f8_reason": "",
				"_f8_note": "F8 verifies /api/v1/capabilities reports features.embedder_loaded=true — i.e. the MiniLM embedder initialised at serve startup. Gates baseline_pass unconditionally. Without this, scenario-18 silently black-holes (semantic recall returns 0 rows).",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "212c2cdf-7348-4804-a8c7-f6fbfbc9625e",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": false
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "hermes",
			"agent_id": "ai:bob",
			"node_index": "2",
			"framework_version": "Hermes Agent v0.12.0 (2026.4.30)",
			"ai_memory_version": "0.6.3.1",
			"peer_urls": "https://10.11.2.2:9077,https://10.11.2.3:9077,https://10.11.2.4:9077",
			"config_file_sha256": "21635cf6364057fd2a004d28aac89abf8438671d85f9fd2ed1e654d812d23ff1",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": false,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "d4de14f2-4764-45d6-9d15-c16fdb899362",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "9d3926bb-937c-4a29-a56b-b504b393fc60",
				"agent_canary_response_head": " session_id: 20260502_172826_0a3644 I'm sorry, but I don't have access to an \"ai-memory MCP mcp_memory_memory_store\" tool or any MCP integration for saving memories in that format. My available tools include a standard \"memory\" tool for saving user preferences or environment facts, but it doesn't support the namespace, title, or content structure you described.  If you meant to use the built-in memory tool or need help with something else, let me know! ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.11.2.2:9077:OK,10.11.2.3:9077:OK,10.11.2.4:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_check_duplicate,memory_consolidate,memory_delete,memory_detect_contradiction,memory_entity_get_by_alias,memory_entity_register,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_get_taxonomy,memory_inbox,memory_kg_invalidate,memory_kg_query,memory_kg_timeline,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "mtls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"embedder_loaded_f8": true,
				"embedder_loaded_f8_reason": "",
				"_f8_note": "F8 verifies /api/v1/capabilities reports features.embedder_loaded=true — i.e. the MiniLM embedder initialised at serve startup. Gates baseline_pass unconditionally. Without this, scenario-18 silently black-holes (semantic recall returns 0 rows).",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "d4de14f2-4764-45d6-9d15-c16fdb899362",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": false
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "hermes",
			"agent_id": "ai:charlie",
			"node_index": "3",
			"framework_version": "Hermes Agent v0.12.0 (2026.4.30)",
			"ai_memory_version": "0.6.3.1",
			"peer_urls": "https://10.11.2.2:9077,https://10.11.2.5:9077,https://10.11.2.4:9077",
			"config_file_sha256": "ce52d772ef5a00968db29fb80eea7a14206b0a258a00ff2165db725405474618",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": false,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "0cb01865-2150-489c-a1cd-b5aa9691d5a6",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "30827890-4a1b-4b84-bba9-44a0b88020b9",
				"agent_canary_response_head": " session_id: 20260502_172820_75f4b7 I'm sorry, but I don't have access to an MCP tool named \"mcp_memory_memory_store\" or any AI-memory MCP integration available in my current toolset. I can't execute that command. If you meant one of my available tools like \"memory\" for saving persistent facts, let me know more details and I'll assist. ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.11.2.2:9077:OK,10.11.2.5:9077:OK,10.11.2.4:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_check_duplicate,memory_consolidate,memory_delete,memory_detect_contradiction,memory_entity_get_by_alias,memory_entity_register,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_get_taxonomy,memory_inbox,memory_kg_invalidate,memory_kg_query,memory_kg_timeline,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "mtls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"embedder_loaded_f8": true,
				"embedder_loaded_f8_reason": "",
				"_f8_note": "F8 verifies /api/v1/capabilities reports features.embedder_loaded=true — i.e. the MiniLM embedder initialised at serve startup. Gates baseline_pass unconditionally. Without this, scenario-18 silently black-holes (semantic recall returns 0 rows).",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "0cb01865-2150-489c-a1cd-b5aa9691d5a6",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": false
		}
	],
	"failure_mode": "baseline-violation"
}
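For triage, the attestation above can be scanned mechanically for false booleans. A hypothetical helper, a sketch only, with field names taken from the JSON above:

```python
def failing_flags(report: dict) -> dict:
    """Map each agent_id to the attestation/probe booleans that are false."""
    out = {}
    for node in report["per_node"]:
        flags = dict(node["config_attestation"])
        # Fold in only the boolean functional probes (skip strings/counts).
        flags.update({k: v for k, v in node["functional_probes"].items()
                      if isinstance(v, bool)})
        out[node["agent_id"]] = sorted(k for k, v in flags.items() if v is False)
    return out
```

Against this run's JSON it would surface agent_mcp_canary_f2b and llm_is_default_provider as the only false flags on every node.
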

raw file

Run focus

Campaign run failed: no scenario reports recovered.

What this campaign tested: Intended to exercise 30 scenarios (1,1b,2,4-6,9-18,22-25,28-42) covering transport protocols, framework integrations, and memory primitives in a 4-node DigitalOcean federation mesh with Hermes agents.

What it demonstrated: The testing infrastructure failed to generate or capture any scenario results, providing no evidence on AI memory sharing reliability.

AI NHI analysis · Claude Opus 4.7

FAIL — no scenario reports recovered.

For three audiences

Non-technical end users

This test run was supposed to check whether AI agents can reliably share memories with each other, but it did not complete: no results were collected from any of the planned checks. That means we cannot currently say whether memory sharing works or not.

C-level decision makers

This run indicates a critical failure in the CI/CD pipeline and blocks assessment of production readiness for v0.6.3.1; risk posture is elevated because memory federation in multi-agent setups remains unverified. Customer-facing claims about reliable AI collaboration cannot be substantiated without successful tests. Compared with prior runs, this is a regression in test-execution reliability, not in the core technology.

Engineers & architects

All 30 requested scenarios (S1,S1b,S2,S4-S6,S9-S18,S22-S25,S28-S42) yielded zero recovered reports, with overall_pass=false and reasons=['no scenario reports recovered']. The probable root cause is a harness bug in report aggregation or a CI workflow failure (harness_sha=5ee79f673ff4558c84c4656fa27eec80570d02a3, workflow=25257431036); no primitives or transports were actually exercised. The infra topology (4-node DO nyc3 mesh) appears correctly provisioned per node metadata.

What changes going into the next campaign

Investigate and fix the CI harness report-recovery issue before re-running, so that scenarios actually execute and their results are captured.
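One way to make this failure mode loud rather than silent is a pre-aggregation guard. The sketch below is hypothetical: the report directory layout and `scenario-*.json` file pattern are assumptions, not taken from the actual harness:

```python
import pathlib

def require_reports(report_dir: str, expected: int) -> list:
    """Abort aggregation early when no scenario reports were recovered."""
    found = sorted(pathlib.Path(report_dir).glob("scenario-*.json"))
    if not found:
        # Fail fast with a clear error instead of aggregating an empty run.
        raise SystemExit(
            f"FATAL: no scenario reports in {report_dir!r}; "
            "refusing to aggregate an empty run."
        )
    if len(found) < expected:
        print(f"WARNING: only {len(found)}/{expected} reports recovered")
    return found
```

Wired in just before the aggregator, this turns "overall_pass=false, zero reports" into an immediate, attributable harness error.
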

All artifacts