
Campaign a2a-ironclaw-v0.6.2-patch2-r25v-s18probe FAIL

Agent group
ironclaw (homogeneous)
ai-memory ref
release/v0.6.2
Completed at
2026-04-23T21:20:49Z
Overall pass
false
Skipped reports
0

Infrastructure

Provider
digitalocean
Region
nyc3
Droplet size
s-2vcpu-4gb
Topology
4-node federation mesh (W=2/N=4)
Scenarios started
2026-04-23T21:20:40Z
Scenarios ended
2026-04-23T21:20:49Z
Dispatched by
alphaonedev
Harness SHA
3c702ea41487
Workflow run
https://github.com/alphaonedev/ai-memory-ai2ai-gate/actions/runs/24859065637

Node roster

#   Role          Agent ID      Public IP         Private IP
1   agent         ai:alice      143.198.20.231    10.10.0.4
2   agent         ai:bob        159.89.186.17     10.10.0.5
3   agent         ai:charlie    157.245.217.61    10.10.0.3
4   memory-only   —             138.197.45.31     10.10.0.2

Baseline attestation BASELINE OK

Per the authoritative baseline spec, every agent node must emit a self-attestation before any scenario is permitted to run. This run's attestation:

Spec version: 1.4.0 — see authoritative baseline.

Node     Agent         Framework          Config SHA      Pass
node-1   ai:alice      ironclaw 0.26.0    707ec771464e    PASS
node-2   ai:bob        ironclaw 0.26.0    386fb1153093    PASS
node-3   ai:charlie    ironclaw 0.26.0    0e209a2e4002    PASS

Per-check flags (Authentic, MCP ai-memory, xAI cfg, xAI default, Agent ID, Federation, UFW off, iptables, dead-man, F1 xAI, F2a substrate, F2b agent (non-gating)) are recorded per node in a2a-baseline.json below.
a2a-baseline.json
{
	"baseline_pass": true,
	"per_node": [
		{
			"spec_version": "1.4.0",
			"agent_type": "ironclaw",
			"agent_id": "ai:alice",
			"node_index": "1",
			"framework_version": "ironclaw 0.26.0",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "http://10.10.0.5:9077,http://10.10.0.3:9077,http://10.10.0.2:9077",
			"config_file_sha256": "707ec771464e1dd17d7b001b3b121d1322afdc69377182decf9f936780683935",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "604189be-7a39-4c19-9d22-d7329d70cc31",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "49596d71-5f8d-45dc-90b4-c9834c811359",
				"agent_canary_response_head": "error: unrecognized subcommand 'chat'    tip: a similar subcommand exists: 'channels'  Usage: ironclaw [OPTIONS] [COMMAND]  For more information, try '--help'. ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.10.0.5:9077:OK,10.10.0.3:9077:OK,10.10.0.2:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "off",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"embedder_loaded_f8": true,
				"embedder_loaded_f8_reason": "",
				"_f8_note": "F8 verifies /api/v1/capabilities reports features.embedder_loaded=true — i.e. the MiniLM embedder initialised at serve startup. Gates baseline_pass unconditionally. Without this, scenario-18 silently black-holes (semantic recall returns 0 rows).",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "604189be-7a39-4c19-9d22-d7329d70cc31",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "ironclaw",
			"agent_id": "ai:bob",
			"node_index": "2",
			"framework_version": "ironclaw 0.26.0",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "http://10.10.0.4:9077,http://10.10.0.3:9077,http://10.10.0.2:9077",
			"config_file_sha256": "386fb11530937fce76408bcd6becf6f67800893cde88ecb3887c957d525990ce",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "957a6112-8597-48da-8ea8-ae3aff8c010c",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "5f6ce2f9-c71c-40ca-830a-03d7a62cd0ae",
				"agent_canary_response_head": "error: unrecognized subcommand 'chat'    tip: a similar subcommand exists: 'channels'  Usage: ironclaw [OPTIONS] [COMMAND]  For more information, try '--help'. ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.10.0.4:9077:OK,10.10.0.3:9077:OK,10.10.0.2:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "off",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"embedder_loaded_f8": true,
				"embedder_loaded_f8_reason": "",
				"_f8_note": "F8 verifies /api/v1/capabilities reports features.embedder_loaded=true — i.e. the MiniLM embedder initialised at serve startup. Gates baseline_pass unconditionally. Without this, scenario-18 silently black-holes (semantic recall returns 0 rows).",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "957a6112-8597-48da-8ea8-ae3aff8c010c",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "ironclaw",
			"agent_id": "ai:charlie",
			"node_index": "3",
			"framework_version": "ironclaw 0.26.0",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "http://10.10.0.4:9077,http://10.10.0.5:9077,http://10.10.0.2:9077",
			"config_file_sha256": "0e209a2e400201c011cf1326ccd3ef09e9179326719ed436994317ee4999bb5e",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "0ebcefe4-3fa7-4ab7-a318-29475e094a94",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "ea17963a-bbba-40e8-bd85-4d1e461a78e7",
				"agent_canary_response_head": "error: unrecognized subcommand 'chat'    tip: a similar subcommand exists: 'channels'  Usage: ironclaw [OPTIONS] [COMMAND]  For more information, try '--help'. ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.10.0.4:9077:OK,10.10.0.5:9077:OK,10.10.0.2:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "off",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"embedder_loaded_f8": true,
				"embedder_loaded_f8_reason": "",
				"_f8_note": "F8 verifies /api/v1/capabilities reports features.embedder_loaded=true — i.e. the MiniLM embedder initialised at serve startup. Gates baseline_pass unconditionally. Without this, scenario-18 silently black-holes (semantic recall returns 0 rows).",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "0ebcefe4-3fa7-4ab7-a318-29475e094a94",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		}
	]
}

raw file
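The gating rule described in the probe notes reduces to a single conjunction: the campaign baseline passes only when every per-node attestation passes. A minimal sketch of that aggregation over the a2a-baseline.json shape shown above (the helper name and the idea of re-deriving the flag locally are mine, not part of the harness):

```python
def aggregate_baseline(report: dict) -> bool:
    """Re-derive the top-level baseline_pass by AND-ing per-node results.

    Mirrors the gating in the probe notes: the gating probes (F2a, F4, F5,
    F8, plus F6/F7 when TLS is on) fold into each node's baseline_pass,
    and the aggregator ANDs those across all nodes.
    """
    return all(node["baseline_pass"] for node in report["per_node"])

# Worked example with the shape of a2a-baseline.json:
report = {
    "baseline_pass": True,
    "per_node": [
        {"agent_id": "ai:alice", "baseline_pass": True},
        {"agent_id": "ai:bob", "baseline_pass": True},
        {"agent_id": "ai:charlie", "baseline_pass": True},
    ],
}
assert aggregate_baseline(report) == report["baseline_pass"]
```

One failing node is enough to flip the campaign-level flag, which is why the non-gating F2b=false values above do not appear in this computation.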

F3 — peer A2A via shared memory F3 OK

Workflow-level probe answering the question "can agents communicate through ai-memory?". Writer ai:alice posted canary UUID 091581f7-952a-4b84-a78f-a0abdee60aa3 to namespace _baseline_peer_canary via node-1's local ai-memory serve HTTP endpoint. After the W=2 fanout settled, the probe confirmed the canary on each of the 3 peer nodes via their local GET /api/v1/memories.

f3-peer-a2a.json
{
	"probe": "F3",
	"name": "peer-a2a-via-shared-memory",
	"description": "Writer agent posts a canary via local ai-memory HTTP on node-1; verifies the row propagates to the 3 peer nodes (W=2/N=4 quorum) before scenarios run.",
	"canary_uuid": "091581f7-952a-4b84-a78f-a0abdee60aa3",
	"canary_namespace": "_baseline_peer_canary",
	"writer_agent": "ai:alice",
	"pass": true
}

raw file
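The F3 check itself is a pure predicate once each peer's local listing has been fetched: the canary must appear on every peer. A sketch of that verification step, decoupled from the HTTP fetch (the function name and the row shape with a `content` field are my assumptions, not the harness's API):

```python
def canary_propagated(canary_uuid: str, peer_listings: dict) -> dict:
    """Check that a canary UUID appears in every peer's local memory listing.

    peer_listings maps a peer address to the list of memory rows returned by
    that node's local GET /api/v1/memories (row shape assumed; only a
    'content' field is inspected here).
    """
    per_peer = {
        peer: any(canary_uuid in row.get("content", "") for row in rows)
        for peer, rows in peer_listings.items()
    }
    return {"pass": all(per_peer.values()), "per_peer": per_peer}

# Example with the three peers from this run's mesh:
listings = {
    "10.10.0.5:9077": [{"content": "canary 091581f7-952a-4b84-a78f-a0abdee60aa3"}],
    "10.10.0.3:9077": [{"content": "canary 091581f7-952a-4b84-a78f-a0abdee60aa3"}],
    "10.10.0.2:9077": [{"content": "canary 091581f7-952a-4b84-a78f-a0abdee60aa3"}],
}
result = canary_propagated("091581f7-952a-4b84-a78f-a0abdee60aa3", listings)
assert result["pass"] is True
```

Dropping any one peer's copy flips `pass` to False, which is the failure mode F3 is designed to catch before scenarios run.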

Run focus

Hybrid semantic recall failed to surface Bob's shared memory.

What this campaign tested: Scenario 18, exercising hybrid recall mode with semantic querying in a 4-node federation mesh over non-TLS transport, covering embedding primitives and multi-writer sharing.

What it demonstrated: Semantic queries in hybrid mode fail to retrieve all relevant memories written by multiple agents in the federated setup, surfacing only one of two expected items.

AI NHI analysis · Claude Opus 4.7

Hybrid semantic recall failed to surface Bob's shared memory.

FAIL — 0/1 scenarios passed due to missing memory in query results.

For three audiences

Non-technical end users

In this test, two AI agents shared memories about morning exercise routines. When a third agent asked about it, the system only showed one agent's memory and missed the other. This shows that agents do not reliably share and access all relevant memories yet.

C-level decision makers

This run exposes a high-risk bug in semantic memory sharing, rendering the v0.6.2 release not production-ready for federated agent groups. Customer-facing claims of reliable AI-to-AI memory propagation are untenable until the bug is fixed; this is a regression from prior stable runs.

Engineers & architects

Scenario S18 failed with hybrid recall returning only 1 row instead of 2, impacting the semantic embedding query primitive in the W=2/N=4 federated mesh. Charlie saw Alice's marker but not Bob's, even though node-3 diagnostics show both rows and both 1536-dim embeddings present locally, which points away from replication and toward the hybrid recall path itself (ranking, result limit, or index lookup). Re-run with trace-level logging on the recall endpoints.
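The scenario's pass criterion can be recomputed directly from the report below: the reading agent must have seen every writer's marker. A sketch of that check (the helper is mine; it only assumes the `writers` / `seen_by_charlie` fields visible in scenario-18.json):

```python
def scenario18_verdict(report: dict) -> tuple[bool, list[str]]:
    """Recompute pass/reasons from a scenario-18 style report.

    The scenario passes only if the reader saw every writer's marker in
    the recall; each missed writer contributes one reason string.
    """
    reasons = [
        f"semantic query did not surface {w['agent'].split(':')[-1]}'s memory"
        for w in report["writers"]
        if not w["seen_by_charlie"]
    ]
    return (not reasons, reasons)

# The writers block from this run's scenario-18.json:
s18 = {
    "writers": [
        {"agent": "ai:alice", "marker": "alice-sunrise-7a498688", "seen_by_charlie": 1},
        {"agent": "ai:bob", "marker": "bob-daybreak-da176dbe", "seen_by_charlie": 0},
    ]
}
ok, reasons = scenario18_verdict(s18)
# → ok is False, reasons == ["semantic query did not surface bob's memory"]
```

Applied to this run's data it reproduces the recorded verdict: pass=false with the single reason string shown in the report.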

What changes going into the next campaign

Add trace logging for query propagation and replication status before re-running S18.

Tests performed in this run

Every scenario that produced a JSON report in this campaign, in testbook order. Click a row's scenario id to jump to its full report below. See the Every test performed page for the authoritative catalog.

ID    Title                      Result   Reason
S18   Semantic query expansion   FAIL     semantic query did not surface bob's memory

Scenario 18 — Semantic query expansion FAIL

Reasons: semantic query did not surface bob's memory

scenario-18.json (report)
{
	"agent_group": "ironclaw",
	"diag_list_alice_present": 1,
	"diag_list_bob_present": 1,
	"diag_node3_embedding_probe": "dawn-walk|1536|BYTES | ridge-strides|1536|BYTES",
	"pass": false,
	"query": "morning outdoor exercise routine",
	"reason": "semantic query did not surface bob's memory",
	"reasons": [
		"semantic query did not surface bob's memory"
	],
	"recall_mode": "hybrid",
	"rows_in_recall": 1,
	"scenario": "18",
	"skipped": false,
	"tls_mode": "off",
	"writers": [
		{
			"agent": "ai:alice",
			"marker": "alice-sunrise-7a498688",
			"seen_by_charlie": 1
		},
		{
			"agent": "ai:bob",
			"marker": "bob-daybreak-da176dbe",
			"seen_by_charlie": 0
		}
	]
}

raw file

scenario-18.log (console trace)
alice writes A on node-1
bob writes B on node-2
polling node-3 for both writes to propagate (max 30 s)
  both writes visible after 1 s
settle 3s for embedder + HNSW catch-up
  node-3 DB embedding probe: 'dawn-walk|1536|BYTES | ridge-strides|1536|BYTES'
charlie queries on node-3 with semantically-related prompt
  recall mode=hybrid returned 1 rows
  charlie sees alice's memory: 1 (expected >=1)
  charlie sees bob's memory: 0 (expected >=1)

raw file

All artifacts