Summary for non-technical end users
This test run was meant to check whether AI agents can reliably share memories across a network, but no tests ran and no data was collected. Until a run completes, we cannot say whether the memory sharing is reliable.
| # | Role | Agent ID | Public IP | Private IP |
|---|---|---|---|---|
| 1 | agent | ai:alice | 159.65.187.13 | 10.11.2.3 |
| 2 | agent | ai:bob | 68.183.149.88 | 10.11.2.4 |
| 3 | agent | ai:charlie | 104.131.160.79 | 10.11.2.2 |
| 4 | memory-only | — | 209.97.153.57 | 10.11.2.5 |
Per the authoritative baseline spec, every agent node must emit a self-attestation before any scenario is permitted to run. This run's attestation:
Spec version: 1.4.0 — see authoritative baseline.
| Node | Agent | Framework | Authentic | MCP ai-memory | xAI cfg | xAI default | Agent ID | Federation | UFW off | iptables | dead-man | F1 xAI | F2a substrate | F2b agent (non-gating) | Config SHA | Pass |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| node-1 | ai:alice | Hermes Agent v0.12.0 (2026.4.30) | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | — | fa358f9a9059 | FAIL |
| node-2 | ai:bob | Hermes Agent v0.12.0 (2026.4.30) | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | — | 21635cf63640 | FAIL |
| node-3 | ai:charlie | Hermes Agent v0.12.0 (2026.4.30) | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | — | ce52d772ef5a | FAIL |
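A node can pass every functional probe and still fail the baseline: baseline_pass is the AND of all gating checks, and in these rows the only false flag besides the non-gating F2b canary is the "xAI default" column (llm_is_default_provider), which is consistent with the reported failure_mode of baseline-violation. A minimal sketch of that gating logic (field names come from the attestation JSON below; the exact set of gating flags is an assumption, not the harness's actual code):

```python
# Assumed gating semantics: baseline_pass = AND over gating config flags and
# gating probes. F2b (agent_mcp_canary_f2b) is explicitly non-gating per the
# attestation notes, so it is deliberately absent from both lists.
GATING_CONFIG = [
    "framework_is_authentic", "mcp_server_ai_memory_registered",
    "llm_backend_is_xai_grok", "llm_is_default_provider",
    "mcp_command_is_ai_memory", "agent_id_stamped", "federation_live",
    "ufw_disabled", "iptables_flushed", "dead_man_switch_scheduled",
]
GATING_PROBES = [
    "xai_grok_chat_reachable", "substrate_http_canary_f2a",
    "mesh_connectivity_f4", "ai_memory_mcp_stdio_f5",
    "tls_handshake_f6", "mtls_enforcement_f7", "embedder_loaded_f8",
]

def baseline_pass(node: dict) -> bool:
    """AND every gating flag; a single False fails the node."""
    cfg = node["config_attestation"]
    probes = node["functional_probes"]
    return all(cfg[k] for k in GATING_CONFIG) and all(probes[k] for k in GATING_PROBES)

# Example mirroring node-1: every check true except llm_is_default_provider.
node = {
    "config_attestation": {k: True for k in GATING_CONFIG} | {"llm_is_default_provider": False},
    "functional_probes": {k: True for k in GATING_PROBES},
}
print(baseline_pass(node))  # False: one false gating flag fails the node
```

If this reading is right, flipping llm_is_default_provider back to true would make the example node pass, matching the implication of the ❌ in the "xAI default" column.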
{
"baseline_pass": false,
"per_node": [
{
"spec_version": "1.4.0",
"agent_type": "hermes",
"agent_id": "ai:alice",
"node_index": "1",
"framework_version": "Hermes Agent v0.12.0 (2026.4.30)",
"ai_memory_version": "0.6.3.1",
"peer_urls": "https://10.11.2.4:9077,https://10.11.2.2:9077,https://10.11.2.5:9077",
"config_file_sha256": "fa358f9a90597243fb96224babd541399bd7b1e972f364605308ab1e2d9dd2c7",
"config_attestation": {
"framework_is_authentic": true,
"mcp_server_ai_memory_registered": true,
"llm_backend_is_xai_grok": true,
"llm_is_default_provider": false,
"mcp_command_is_ai_memory": true,
"agent_id_stamped": true,
"federation_live": true,
"ufw_disabled": true,
"iptables_flushed": true,
"dead_man_switch_scheduled": true
},
"negative_invariants": {
"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
"a2a_protocol_off": true,
"sub_agent_or_sessions_spawn_off": true,
"alternative_channels_off": true,
"tool_allowlist_is_memory_only": true,
"a2a_gate_profile_locked": true
},
"functional_probes": {
"xai_grok_chat_reachable": true,
"xai_grok_sample_reply": "READY",
"substrate_http_canary_f2a": true,
"substrate_http_canary_uuid": "35530b36-93c7-4c8a-9e07-bafa5df52004",
"agent_mcp_canary_f2b": false,
"agent_mcp_canary_uuid": "78b08442-87fc-4f51-adb9-2eb5b905f8c5",
"agent_canary_response_head": " session_id: 20260502_171640_96f6da I don't have access to an \"ai-memory MCP memory_store\" tool. The available tools do not include MCP integrations or a memory_store function. If this is referring to a custom setup or plugin, please provide more details or configure it via the hermes-agent skill. ",
"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
"mesh_connectivity_f4": true,
"mesh_edges_ok": 3,
"mesh_edges_total": 3,
"mesh_edges_detail": "10.11.2.4:9077:OK,10.11.2.2:9077:OK,10.11.2.5:9077:OK",
"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
"ai_memory_mcp_stdio_f5": true,
"ai_memory_mcp_stdio_init_ok": true,
"ai_memory_mcp_stdio_tools_ok": true,
"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_check_duplicate,memory_consolidate,memory_delete,memory_detect_contradiction,memory_entity_get_by_alias,memory_entity_register,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_get_taxonomy,memory_inbox,memory_kg_invalidate,memory_kg_query,memory_kg_timeline,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
"tls_mode": "mtls",
"tls_handshake_f6": true,
"tls_handshake_f6_reason": "",
"mtls_enforcement_f7": true,
"mtls_enforcement_f7_reason": "",
"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
"embedder_loaded_f8": true,
"embedder_loaded_f8_reason": "",
"_f8_note": "F8 verifies /api/v1/capabilities reports features.embedder_loaded=true — i.e. the MiniLM embedder initialised at serve startup. Gates baseline_pass unconditionally. Without this, scenario-18 silently black-holes (semantic recall returns 0 rows).",
"agent_mcp_ai_memory_canary": true,
"canary_uuid": "35530b36-93c7-4c8a-9e07-bafa5df52004",
"canary_namespace": "_baseline_canary_f2a"
},
"baseline_pass": false
},
{
"spec_version": "1.4.0",
"agent_type": "hermes",
"agent_id": "ai:bob",
"node_index": "2",
"framework_version": "Hermes Agent v0.12.0 (2026.4.30)",
"ai_memory_version": "0.6.3.1",
"peer_urls": "https://10.11.2.3:9077,https://10.11.2.2:9077,https://10.11.2.5:9077",
"config_file_sha256": "21635cf6364057fd2a004d28aac89abf8438671d85f9fd2ed1e654d812d23ff1",
"config_attestation": {
"framework_is_authentic": true,
"mcp_server_ai_memory_registered": true,
"llm_backend_is_xai_grok": true,
"llm_is_default_provider": false,
"mcp_command_is_ai_memory": true,
"agent_id_stamped": true,
"federation_live": true,
"ufw_disabled": true,
"iptables_flushed": true,
"dead_man_switch_scheduled": true
},
"negative_invariants": {
"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
"a2a_protocol_off": true,
"sub_agent_or_sessions_spawn_off": true,
"alternative_channels_off": true,
"tool_allowlist_is_memory_only": true,
"a2a_gate_profile_locked": true
},
"functional_probes": {
"xai_grok_chat_reachable": true,
"xai_grok_sample_reply": "READY",
"substrate_http_canary_f2a": true,
"substrate_http_canary_uuid": "173aa10e-1426-4057-a0e6-fb98c8c76de9",
"agent_mcp_canary_f2b": false,
"agent_mcp_canary_uuid": "ce4f384b-1b0b-4220-93c3-9985fe83ba8e",
"agent_canary_response_head": " session_id: 20260502_171447_26e956 I'm sorry, but I don't have access to an \"ai-memory MCP memory_store\" tool. The available tools do not include any MCP-related functions for saving memories in that format. If you meant one of my existing tools like \"memory\" for persistent facts, let me know how I can assist with that. ",
"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
"mesh_connectivity_f4": true,
"mesh_edges_ok": 3,
"mesh_edges_total": 3,
"mesh_edges_detail": "10.11.2.3:9077:OK,10.11.2.2:9077:OK,10.11.2.5:9077:OK",
"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
"ai_memory_mcp_stdio_f5": true,
"ai_memory_mcp_stdio_init_ok": true,
"ai_memory_mcp_stdio_tools_ok": true,
"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_check_duplicate,memory_consolidate,memory_delete,memory_detect_contradiction,memory_entity_get_by_alias,memory_entity_register,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_get_taxonomy,memory_inbox,memory_kg_invalidate,memory_kg_query,memory_kg_timeline,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
"tls_mode": "mtls",
"tls_handshake_f6": true,
"tls_handshake_f6_reason": "",
"mtls_enforcement_f7": true,
"mtls_enforcement_f7_reason": "",
"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
"embedder_loaded_f8": true,
"embedder_loaded_f8_reason": "",
"_f8_note": "F8 verifies /api/v1/capabilities reports features.embedder_loaded=true — i.e. the MiniLM embedder initialised at serve startup. Gates baseline_pass unconditionally. Without this, scenario-18 silently black-holes (semantic recall returns 0 rows).",
"agent_mcp_ai_memory_canary": true,
"canary_uuid": "173aa10e-1426-4057-a0e6-fb98c8c76de9",
"canary_namespace": "_baseline_canary_f2a"
},
"baseline_pass": false
},
{
"spec_version": "1.4.0",
"agent_type": "hermes",
"agent_id": "ai:charlie",
"node_index": "3",
"framework_version": "Hermes Agent v0.12.0 (2026.4.30)",
"ai_memory_version": "0.6.3.1",
"peer_urls": "https://10.11.2.3:9077,https://10.11.2.4:9077,https://10.11.2.5:9077",
"config_file_sha256": "ce52d772ef5a00968db29fb80eea7a14206b0a258a00ff2165db725405474618",
"config_attestation": {
"framework_is_authentic": true,
"mcp_server_ai_memory_registered": true,
"llm_backend_is_xai_grok": true,
"llm_is_default_provider": false,
"mcp_command_is_ai_memory": true,
"agent_id_stamped": true,
"federation_live": true,
"ufw_disabled": true,
"iptables_flushed": true,
"dead_man_switch_scheduled": true
},
"negative_invariants": {
"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
"a2a_protocol_off": true,
"sub_agent_or_sessions_spawn_off": true,
"alternative_channels_off": true,
"tool_allowlist_is_memory_only": true,
"a2a_gate_profile_locked": true
},
"functional_probes": {
"xai_grok_chat_reachable": true,
"xai_grok_sample_reply": "READY",
"substrate_http_canary_f2a": true,
"substrate_http_canary_uuid": "9b0990b2-a1bf-4f01-9d13-f0d7098c5238",
"agent_mcp_canary_f2b": false,
"agent_mcp_canary_uuid": "a91c1ac3-4623-4d25-b50b-ba01362235cb",
"agent_canary_response_head": " session_id: 20260502_171648_248b56 I'm sorry, but I don't have access to an \"ai-memory MCP memory_store\" tool or any MCP integration in my available tools. I can't perform that action. If you meant one of my existing tools like \"memory\" for saving persistent facts, let me know how I can help with that instead. ",
"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
"mesh_connectivity_f4": true,
"mesh_edges_ok": 3,
"mesh_edges_total": 3,
"mesh_edges_detail": "10.11.2.3:9077:OK,10.11.2.4:9077:OK,10.11.2.5:9077:OK",
"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
"ai_memory_mcp_stdio_f5": true,
"ai_memory_mcp_stdio_init_ok": true,
"ai_memory_mcp_stdio_tools_ok": true,
"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_check_duplicate,memory_consolidate,memory_delete,memory_detect_contradiction,memory_entity_get_by_alias,memory_entity_register,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_get_taxonomy,memory_inbox,memory_kg_invalidate,memory_kg_query,memory_kg_timeline,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
"tls_mode": "mtls",
"tls_handshake_f6": true,
"tls_handshake_f6_reason": "",
"mtls_enforcement_f7": true,
"mtls_enforcement_f7_reason": "",
"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
"embedder_loaded_f8": true,
"embedder_loaded_f8_reason": "",
"_f8_note": "F8 verifies /api/v1/capabilities reports features.embedder_loaded=true — i.e. the MiniLM embedder initialised at serve startup. Gates baseline_pass unconditionally. Without this, scenario-18 silently black-holes (semantic recall returns 0 rows).",
"agent_mcp_ai_memory_canary": true,
"canary_uuid": "9b0990b2-a1bf-4f01-9d13-f0d7098c5238",
"canary_namespace": "_baseline_canary_f2a"
},
"baseline_pass": false
}
],
"failure_mode": "baseline-violation"
}
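The _f4_note inside the attestation describes per-node outbound edge checks that an aggregator ANDs into full N*(N-1) mesh reachability. A minimal sketch of that aggregation step, parsing the mesh_edges_detail format shown above (the helper names and parsing logic are assumptions, not the harness's actual code):

```python
def parse_edges(detail: str) -> dict[str, bool]:
    """Parse a mesh_edges_detail string like '10.11.2.4:9077:OK,...'
    into {'host:port': True/False}; status 'OK' means the edge is up."""
    edges = {}
    for entry in detail.split(","):
        host, port, status = entry.rsplit(":", 2)
        edges[f"{host}:{port}"] = (status == "OK")
    return edges

def full_mesh_ok(per_node_details: list[str], n_nodes: int) -> bool:
    """AND across nodes: each node must report n_nodes-1 healthy outbound
    edges, giving n_nodes*(n_nodes-1) healthy directed edges in total."""
    total_ok = 0
    for detail in per_node_details:
        edges = parse_edges(detail)
        if not all(edges.values()) or len(edges) != n_nodes - 1:
            return False
        total_ok += len(edges)
    return total_ok == n_nodes * (n_nodes - 1)

# Each of the four nodes in this run reported 3/3 outbound edges OK, so a
# 4-node mesh needs 4*3 = 12 healthy directed edges in total.
details = ["10.11.2.4:9077:OK,10.11.2.2:9077:OK,10.11.2.5:9077:OK"] * 4
print(full_mesh_ok(details, 4))  # True
```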
Run focus
What this campaign tested: it was intended to exercise 32 scenarios covering transport protocols, framework integrations, and memory primitives across a 4-node Hermes agent federation mesh on DigitalOcean, but none executed successfully.
What it demonstrated: the run produced no scenario results at all, pointing to a critical breakdown in the CI harness or reporting pipeline for ai-memory v0.6.3.1.
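A harness in this position typically distinguishes "scenarios failed" from "no reports recovered": the aggregator expects one report per requested scenario and fails the whole run when any are missing. A hedged sketch of such a guard (function and field names are assumptions, not this harness's API):

```python
def recover_reports(requested: list[str], found: dict[str, dict]) -> tuple[bool, list[str]]:
    """Return (overall_pass, missing). The run passes only if every
    requested scenario produced a report and every report itself passed."""
    missing = [s for s in requested if s not in found]
    overall = not missing and all(r.get("pass") for r in found.values())
    return overall, missing

# This campaign requested 32 scenarios but recovered zero reports:
requested = ["S1", "S1b", "S2", "S4", "S5", "S6"]  # truncated for illustration
ok, missing = recover_reports(requested, {})
print(ok, missing)  # ok is False; every requested scenario is missing
```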
AI NHI analysis · Claude Opus 4.7
FAIL — no scenario reports recovered
This run indicates a high-risk failure in the testing pipeline and blocks any assessment of production readiness for Hermes agents under v0.6.3.1. Claims of reliable inter-agent memory sharing remain unsubstantiated until a run completes successfully. Compared with prior runs this is a regression in CI reliability, and it needs an immediate fix before deployments scale.
All 32 requested scenarios (S1, S1b, S2, S4-S6, S9-S18, S22-S25, S28-S42) yielded no reports, with overall_pass=false due to 'no scenario reports recovered'. The probable root cause is a failure in the GitHub Actions workflow (run 25257181969, harness_sha 184c069adb710837e2e4c62b19632ec05600f41d) or in DigitalOcean infrastructure provisioning/teardown in the nyc3 region. No primitives or transports were tested; inspect logs in the 4-node mesh (W=2/N=4) for agent startup issues on nodes ai:alice, ai:bob, ai:charlie.
Debug and resolve CI harness reporting failure in GitHub Actions workflow before retrying the campaign.