
Campaign a2a-ironclaw-v0.6.2-rc.0-v3r5 FAIL

Agent group
ironclaw (homogeneous)
ai-memory ref
v0.6.2-rc.0
Completed at
2026-04-22T03:38:58Z
Overall pass
false
Skipped reports
1

Infrastructure

Provider
digitalocean
Region
nyc3
Droplet size
s-2vcpu-4gb
Topology
4-node federation mesh (W=2/N=4)
Scenarios started
2026-04-22T03:26:49Z
Scenarios ended
2026-04-22T03:38:58Z
Dispatched by
alphaonedev
Harness SHA
adf914ac1af2
Workflow run
https://github.com/alphaonedev/ai-memory-ai2ai-gate/actions/runs/24758433768

Node roster

#  Role         Agent ID    Public IP        Private IP
1  agent        ai:alice    138.197.33.142   10.251.0.5
2  agent        ai:bob      104.131.116.229  10.251.0.3
3  agent        ai:charlie  143.198.12.140   10.251.0.2
4  memory-only  —           167.71.241.10    10.251.0.4
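The F4 probe described later enumerates each node's outbound edges to every peer. A minimal sketch of that edge set, assuming each node dials its peers on port 9077 as the mesh_edges_detail strings in the attestation show:

```python
def mesh_edges(private_ips, port=9077):
    # Full mesh: every node probes its N-1 peers outbound, so the
    # aggregator expects N*(N-1) directed edges in total.
    return [(src, f"{dst}:{port}")
            for src in private_ips for dst in private_ips if src != dst]

# For this 4-node roster: 4*3 = 12 directed edges.
edges = mesh_edges(["10.251.0.5", "10.251.0.3", "10.251.0.2", "10.251.0.4"])
```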

Baseline attestation BASELINE OK

Per the authoritative baseline spec, every agent node must emit a self-attestation before any scenario is permitted to run. This run's attestation:

Spec version: 1.4.0 — see authoritative baseline.

Node    Agent       Framework        Authentic  MCP ai-memory  xAI cfg  xAI default  Agent ID  Federation  UFW off  iptables  dead-man  F1 xAI  F2a substrate  F2b agent (non-gating)  Config SHA    Pass
node-1  ai:alice    ironclaw 0.26.0  ✓          ✓              ✓        ✓            ✓         ✓           ✓        ✓         ✓         ✓       ✓              ✗                       0c7297ee4dc0  PASS
node-2  ai:bob      ironclaw 0.26.0  ✓          ✓              ✓        ✓            ✓         ✓           ✓        ✓         ✓         ✓       ✓              ✗                       c04db4787d24  PASS
node-3  ai:charlie  ironclaw 0.26.0  ✓          ✓              ✓        ✓            ✓         ✓           ✓        ✓         ✓         ✓       ✓              ✗                       5ecd9e89ca3b  PASS
a2a-baseline.json
{
	"baseline_pass": true,
	"per_node": [
		{
			"spec_version": "1.4.0",
			"agent_type": "ironclaw",
			"agent_id": "ai:alice",
			"node_index": "1",
			"framework_version": "ironclaw 0.26.0",
			"ai_memory_version": "0.6.2-rc.0",
			"peer_urls": "http://10.251.0.3:9077,http://10.251.0.2:9077,http://10.251.0.4:9077",
			"config_file_sha256": "0c7297ee4dc0053971058d34e81bb47d4aa621c817b8fe65cff0c40647773821",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "c49f781a-468b-4fb5-b4c5-384883108a5a",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "b916d77e-9ce9-44a6-b578-8b54dee9b32f",
				"agent_canary_response_head": "error: unrecognized subcommand 'chat'    tip: a similar subcommand exists: 'channels'  Usage: ironclaw [OPTIONS] [COMMAND]  For more information, try '--help'. ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.251.0.3:9077:OK,10.251.0.2:9077:OK,10.251.0.4:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "off",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "c49f781a-468b-4fb5-b4c5-384883108a5a",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "ironclaw",
			"agent_id": "ai:bob",
			"node_index": "2",
			"framework_version": "ironclaw 0.26.0",
			"ai_memory_version": "0.6.2-rc.0",
			"peer_urls": "http://10.251.0.5:9077,http://10.251.0.2:9077,http://10.251.0.4:9077",
			"config_file_sha256": "c04db4787d247368eee6b8832650cb06e502c464a7581750808faf1da953b7d5",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "9bd7e7d8-804c-4fc7-a0c7-ef7279b83e97",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "e42a4c87-fa85-47c1-8a98-796c33210f39",
				"agent_canary_response_head": "error: unrecognized subcommand 'chat'    tip: a similar subcommand exists: 'channels'  Usage: ironclaw [OPTIONS] [COMMAND]  For more information, try '--help'. ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.251.0.5:9077:OK,10.251.0.2:9077:OK,10.251.0.4:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "off",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "9bd7e7d8-804c-4fc7-a0c7-ef7279b83e97",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "ironclaw",
			"agent_id": "ai:charlie",
			"node_index": "3",
			"framework_version": "ironclaw 0.26.0",
			"ai_memory_version": "0.6.2-rc.0",
			"peer_urls": "http://10.251.0.5:9077,http://10.251.0.3:9077,http://10.251.0.4:9077",
			"config_file_sha256": "5ecd9e89ca3bf0f5b06596ff62d8ece0091ec25843707039ee035c6acd2adf8d",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "521ce3de-92a4-43e7-a8e0-b68551c4fbe7",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "ebc1c390-285c-42e6-b68c-db1f1687378e",
				"agent_canary_response_head": "error: unrecognized subcommand 'chat'    tip: a similar subcommand exists: 'channels'  Usage: ironclaw [OPTIONS] [COMMAND]  For more information, try '--help'. ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.251.0.5:9077:OK,10.251.0.3:9077:OK,10.251.0.4:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "off",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "521ce3de-92a4-43e7-a8e0-b68551c4fbe7",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		}
	]
}

raw file
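The overall verdict in a2a-baseline.json can be re-derived offline from the per-node records. A minimal sketch, assuming the schema shown above and the gating rules stated in the probe notes (F2a/F4/F5 gate; F2b is non-gating; F6/F7 are omitted here because this run had tls_mode off):

```python
import json

# Probes that gate baseline_pass per the _f2b/_f4/_f5 notes; F2b is
# LLM-dependent and intentionally excluded (non-gating).
GATING_PROBES = (
    "substrate_http_canary_f2a",
    "mesh_connectivity_f4",
    "ai_memory_mcp_stdio_f5",
)

def rederive_baseline(doc):
    """Recompute the overall verdict from per-node attestation records."""
    ok = True
    for node in doc["per_node"]:
        attested = all(node["config_attestation"].values())
        # Keys starting with "_" are notes, not invariants.
        invariants = all(v for k, v in node["negative_invariants"].items()
                         if not k.startswith("_"))
        probes = all(node["functional_probes"][p] for p in GATING_PROBES)
        ok = ok and attested and invariants and probes
    return ok

# e.g. rederive_baseline(json.load(open("a2a-baseline.json")))
```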

F3 — peer A2A via shared memory F3 OK

Workflow-level probe answering "can agents communicate through ai-memory?". Writer ai:alice posted canary UUID 9b48a7ef-7742-4331-bd1b-68ce16a62124 to namespace _baseline_peer_canary via node-1's local ai-memory serve HTTP. After W=2 fanout settle, probe confirmed the canary on each of the 3 peer nodes via their local GET /api/v1/memories.

f3-peer-a2a.json
{
	"probe": "F3",
	"name": "peer-a2a-via-shared-memory",
	"description": "Writer agent posts a canary via local ai-memory HTTP on node-1; verifies the row propagates to the 3 peer nodes (W=2/N=4 quorum) before scenarios run.",
	"canary_uuid": "9b48a7ef-7742-4331-bd1b-68ce16a62124",
	"canary_namespace": "_baseline_peer_canary",
	"writer_agent": "ai:alice",
	"pass": true
}

raw file
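The F3 write-then-poll flow above can be sketched generically. This is not the harness's implementation; the HTTP details are abstracted behind two injected callables (post_memory for the writer-node POST, list_memories for each peer's local GET /api/v1/memories mentioned above):

```python
import time
import uuid

def f3_peer_probe(post_memory, list_memories, peers,
                  settle_s=8, timeout_s=60):
    """Write a canary through one node, then confirm every peer surfaces
    it via its own local read path (A2A purely over shared memory)."""
    canary = str(uuid.uuid4())
    post_memory("_baseline_peer_canary", canary)  # writer node's local HTTP
    time.sleep(settle_s)                          # let W=2 quorum fanout settle
    deadline = time.monotonic() + timeout_s
    pending = set(peers)
    while pending and time.monotonic() < deadline:
        for peer in list(pending):
            rows = list_memories(peer, "_baseline_peer_canary")
            if any(canary in r.get("content", "") for r in rows):
                pending.discard(peer)
        if pending:
            time.sleep(2)
    return {"canary_uuid": canary, "pass": not pending,
            "missing_peers": sorted(pending)}
```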

Run focus

Federated memory sharing fails in 13 scenarios; core propagation is only partial.

What this campaign tested: 35 scenarios exercising memory propagation, linking, deletion, semantic/keyword search, bulk ops, pub/sub, archiving, and admin endpoints over HTTP transport in a 4-node federation mesh.

What it demonstrated: basic memory writes and reads succeed in simple cases, but advanced primitives (search, notifications, subscriptions, approvals, delta-sync) fail consistently due to endpoint errors and incomplete federation.

AI NHI analysis · Claude Opus 4.7

Federated memory sharing fails in 13 scenarios; core propagation is only partial.

FAIL — 13/35 scenarios failed, 1 skipped, core features degraded.

For three audiences

Non-technical end users

The AI agents could share basic memories in some tests, but often failed to find or update shared information reliably. Complex tasks like searching for memories or notifying others didn't work, meaning agents can't depend on each other for complete recall. Overall, the system needs fixes to make memory sharing trustworthy.

C-level decision makers

Risk posture is high: failures in federation sync and admin APIs block production readiness, and customer claims of reliable AI2AI memory are currently unviable. Regressions from prior runs in endpoint availability (e.g., 404/405 errors) suggest deployment issues. Fast-track fixes to unblock the release candidate.

Engineers & architects

Failures cluster in admin-heavy scenarios (S29 archive 405, S30 capabilities missing, S32 notify 404, S33 pubsub 404, S34 approvals 403/404/405, S35 ns rules 405, S36 sessions 404, S39 delta-sync incomplete, S40 bulk 422) plus propagation issues (S1 MCP recall 0, S12 registration miss on node-4, S18 semantic miss, S28 keyword miss on nodes 2/3). Probable root causes include unimplemented endpoints and node-4 federation lag; check probes F# for HTTP codes and sync delays. Basic primitives like delete (S10), link (S11), version-branch (S13) passed.
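The clustering described above can be checked mechanically by tallying HTTP status codes across the failure reasons; a sketch (the input shape mirrors the per-scenario "reasons" arrays in the JSON reports below):

```python
import re
from collections import Counter

def cluster_http_failures(reasons_by_scenario):
    """Tally HTTP status codes across failure reasons, separating the
    'endpoint missing' class (404/405) from auth/validation (403/422)."""
    by_code = Counter()
    for reasons in reasons_by_scenario.values():
        for reason in reasons:
            for code in re.findall(r"HTTP (\d{3})", reason):
                by_code[code] += 1
    return dict(by_code)
```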

What changes going into the next campaign

Implement the missing admin endpoints (e.g., /archive, /notify, /subscribe) and debug the node-4 visibility issues before the next campaign.

Tests performed in this run

Every scenario that produced a JSON report in this campaign, in testbook order. Click a row's scenario id to jump to its full report below. See the Every test performed page for the authoritative catalog.

ID   Title                               Result  Reason
S1   Per-agent write + read (MCP stdio)  FAIL    ai:alice recalled 0 < 20 via MCP; ai:bob recalled 0 < 20 via MCP; ai:charlie recalled 0 < 20 via MCP; cross-cluster identity check failed — see per_ns
S1b  Per-agent write + read (HTTP)       PASS
S2   Shared-context handoff              PASS
S4   Federation-aware concurrent writes  PASS
S5   Consolidation + curation            PASS
S6   Contradiction detection             PASS
S9   Mutation round-trip                 PASS
S10  Deletion propagation                PASS
S11  Link integrity                      PASS
S12  Agent registration                  FAIL    node-4 did not see registered agent ai:dave-probe-de012782
S13  Concurrent write contention         PASS
S14  Partition tolerance                 PASS
S15  Read-your-writes                    PASS
S16  Tier promotion                      PASS
S17  Stats consistency                   PASS
S18  Semantic query expansion            FAIL    semantic query did not surface alice's memory; semantic query did not surface bob's memory
S22  Identity spoofing resistance        PASS
S23  Malicious content fuzz              SKIP
S24  Byzantine peer                      PASS
S25  Clock skew tolerance                PASS
S28  memory_search keyword               FAIL    node-2 did not find the unique token via /search; node-3 did not find the unique token via /search
S29  memory_archive lifecycle            FAIL    archive POST returned HTTP 405; bob did not see M1 in /api/v1/archive; restore returned HTTP 404
S30  memory_capabilities handshake       FAIL    no peer returned a capabilities response — endpoint may not be exposed
S31  memory_gc quiescence                PASS
S32  memory_inbox + notify               FAIL    notify returned HTTP 404; bob's inbox did not deliver alice's notify
S33  memory_subscribe pub/sub            FAIL    subscribe returned HTTP 404; bob's subscription list did not include the subscribed namespace; unsubscribe returned HTTP 404
S34  memory_pending governance           FAIL    set-standard returned HTTP 405; approve returned HTTP 403; reject returned HTTP 404; charlie saw rejected row — reject didn't prevent publication
S35  memory_namespace standards          FAIL    set-parent returned HTTP 405; set-child returned HTTP 405; clear-standard returned HTTP 405; parent rule not layered into child's standard view; child rule missing from standard view
S36  memory_session_start                FAIL    session_start returned HTTP 404
S37  memory_get_links bidirectional      PASS
S38  /export + /import                   PASS
S39  /sync/since delta                   FAIL    delta returned 0/6 expected markers — delta-sync incomplete
S40  /memories/bulk                      FAIL    bulk returned HTTP 422; node-2 saw 0/500 bulk rows after fanout; node-3 saw 0/500 bulk rows after fanout; node-4 saw 0/500 bulk rows after fanout
S41  /metrics Prometheus                 PASS
S42  /namespaces enumeration             PASS

Scenario 1 — Per-agent write + read (MCP stdio) FAIL

Reasons: ai:alice recalled 0 < 20 via MCP | ai:bob recalled 0 < 20 via MCP | ai:charlie recalled 0 < 20 via MCP | cross-cluster identity check failed — see per_ns

scenario-1.json (report)
{
	"agent_group": "ironclaw",
	"expected_per_reader": 20,
	"pass": false,
	"per_agent": {
		"ai:alice": {
			"recall": 0
		},
		"ai:bob": {
			"recall": 0
		},
		"ai:charlie": {
			"recall": 0
		}
	},
	"per_namespace_node4": {
		"scenario1-ai:alice": {
			"count": 0,
			"wrong_agent_id": 0
		},
		"scenario1-ai:bob": {
			"count": 0,
			"wrong_agent_id": 0
		},
		"scenario1-ai:charlie": {
			"count": 0,
			"wrong_agent_id": 0
		}
	},
	"reason": "ai:alice recalled 0 < 20 via MCP; ai:bob recalled 0 < 20 via MCP; ai:charlie recalled 0 < 20 via MCP; cross-cluster identity check failed — see per_ns",
	"reasons": [
		"ai:alice recalled 0 < 20 via MCP",
		"ai:bob recalled 0 < 20 via MCP",
		"ai:charlie recalled 0 < 20 via MCP",
		"cross-cluster identity check failed — see per_ns"
	],
	"scenario": "1",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-1.log (console trace)
phase A: each agent writes 10 memories via MCP
  ai:alice on 138.197.33.142
  !! drive_agent store failed for ai:alice i=1: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=2: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=3: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=4: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=5: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=6: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=7: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=8: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=9: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=10: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  ai:bob on 104.131.116.229
  !! drive_agent store failed for ai:bob i=1: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=2: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=3: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=4: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=5: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=6: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=7: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=8: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=9: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=10: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  ai:charlie on 143.198.12.140
  !! drive_agent store failed for ai:charlie i=1: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=2: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=3: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=4: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=5: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=6: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=7: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=8: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=9: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=10: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

settle 15s for W=2/N=4 convergence
phase B: each agent counts rows in the OTHER two namespaces
  ai:alice recalled 0 rows from the other two namespaces
  ai:bob recalled 0 rows from the other two namespaces
  ai:charlie recalled 0 rows from the other two namespaces
phase C: cross-cluster identity check on node-4
  ns=scenario1-ai:alice count=0 wrong_agent_id=0
  !! expected 10 rows, got 0
  ns=scenario1-ai:bob count=0 wrong_agent_id=0
  !! expected 10 rows, got 0
  ns=scenario1-ai:charlie count=0 wrong_agent_id=0
  !! expected 10 rows, got 0

raw file
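The trace above repeats the same "unrecognized subcommand 'chat'" error once per row, for every agent. A driver that detects the CLI/harness mismatch and aborts the phase would make this failure mode obvious after one attempt; a sketch, assuming a hypothetical drive_agent helper that shells out to the agent binary with the same `chat` subcommand the harness uses:

```python
import subprocess

def check_agent_cli(stderr):
    """Abort on a harness/framework version mismatch instead of
    repeating the identical failure for every stored row."""
    if "unrecognized subcommand" in stderr:
        raise RuntimeError("agent CLI lacks the expected subcommand: "
                           "harness/framework version mismatch; aborting phase")

def drive_agent_store(binary, prompt):
    # Hypothetical invocation mirroring what the harness trace shows.
    proc = subprocess.run([binary, "chat", prompt],
                          capture_output=True, text=True)
    check_agent_cli(proc.stderr)
    return proc.stdout
```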

Scenario 1b — Per-agent write + read (HTTP) PASS

scenario-1b.json (report)
{
	"agent_group": "ironclaw",
	"expected_per_reader": 20,
	"pass": true,
	"path": "serve-http",
	"per_agent": {
		"ai:alice": {
			"recall": 20
		},
		"ai:bob": {
			"recall": 20
		},
		"ai:charlie": {
			"recall": 20
		}
	},
	"per_namespace_node4": {
		"scenario1b-ai:alice": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1b-ai:bob": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1b-ai:charlie": {
			"count": 10,
			"wrong_agent_id": 0
		}
	},
	"reasons": [],
	"scenario": "1b",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-1b.log (console trace)
phase A: each agent POSTs 10 memories to local serve
  ai:alice on 138.197.33.142
  ai:bob on 104.131.116.229
  ai:charlie on 143.198.12.140
settle 15s for W=2/N=4 convergence
phase B: count rows in other two namespaces via local serve HTTP
  ai:alice sees 20 rows from the other two namespaces
  ai:bob sees 20 rows from the other two namespaces
  ai:charlie sees 20 rows from the other two namespaces
phase C: cross-cluster identity check on node-4
  ns=scenario1b-ai:alice count=10 wrong_agent_id=0
  ns=scenario1b-ai:bob count=10 wrong_agent_id=0
  ns=scenario1b-ai:charlie count=10 wrong_agent_id=0

raw file

Scenario 2 — Shared-context handoff PASS

scenario-2.json (report)
{
	"ack_uuid": "a-4280a0f164914a19a4df296dcb6b3dfb",
	"agent_group": "ironclaw",
	"handoff_uuid": "h-add2238fb850402fbe5eb7bf903ae5b8",
	"pass": true,
	"path": "serve-http",
	"per_agent": {
		"ai:alice": {
			"sees_ack": 1
		},
		"ai:bob": {
			"sees_handoff": 1
		}
	},
	"reasons": [],
	"scenario": "2",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-2.log (console trace)
phase A: ai:alice writes handoff to ai:bob (uuid=h-add2238fb850402fbe5eb7bf903ae5b8)
settle 8s for quorum fanout
phase B: ai:bob reads handoff on node-2
  ai:bob sees 1 handoff memories from ai:alice
phase C: ai:bob writes acknowledgement (uuid=a-4280a0f164914a19a4df296dcb6b3dfb)
settle 8s for reverse-direction fanout
phase D: ai:alice reads ack on node-1
  ai:alice sees 1 ack memories from ai:bob

raw file

Scenario 4 — Federation-aware concurrent writes PASS

scenario-4.json (report)
{
	"agent_group": "ironclaw",
	"expected_per_agent": 30,
	"pass": true,
	"per_agent": {
		"ai:alice": {
			"count": 30,
			"wrong_agent_id": 0
		},
		"ai:bob": {
			"count": 30,
			"wrong_agent_id": 0
		},
		"ai:charlie": {
			"count": 30,
			"wrong_agent_id": 0
		}
	},
	"reasons": [],
	"scenario": "4",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-4.log (console trace)
phase A: launching concurrent 30-row bursts from 3 agents
  ai:alice burst ok=30/30
  ai:bob burst ok=30/30
  ai:charlie burst ok=30/30
settle 20s for W=2 fanout convergence
phase B: querying node-4 aggregator for per-agent counts
  ai:alice: count=30 (expected 30) wrong_agent_id=0
  ai:bob: count=30 (expected 30) wrong_agent_id=0
  ai:charlie: count=30 (expected 30) wrong_agent_id=0

raw file

Scenario 5 — Consolidation + curation PASS

scenario-5.json (report)
{
	"agent_group": "ironclaw",
	"consolidate_http_code": 201,
	"consolidated_from_agents": [
		"ai:charlie",
		"ai:bob",
		"ai:alice"
	],
	"consolidated_id": "fa6d272c-84bd-42cf-82d7-e392358dfb05",
	"pass": true,
	"reasons": [],
	"scenario": "5",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-5.log (console trace)
phase A: each agent writes 3 related memories
  ai:alice on 138.197.33.142
  ai:bob on 104.131.116.229
  ai:charlie on 143.198.12.140
settle 8s for quorum fanout
phase B: collect source ids on node-1, then trigger consolidate
  source ids (count=9): ['2a2a3fe2-51c3-41fe-93b7-e5149e1e5917', '4a65eb5c-da6e-4b4d-a0a5-c2c7f16bbc4b', '0b4ab4a8-b172-48ee-8f36-ab427b81a987', '35f44b98-1fcd-43e1-914e-a94af13e336b', '68bccda8-5f12-487a-a637-70774122906a']...
  consolidate HTTP 201, consolidated_id=fa6d272c-84bd-42cf-82d7-e392358dfb05
settle 10s for consolidation fanout
phase C: verifying consolidated_from_agents on node-4
  consolidated_from_agents=['ai:charlie', 'ai:bob', 'ai:alice']

raw file

Scenario 6 — Contradiction detection PASS

scenario-6.json (report)
{
	"agent_group": "ironclaw",
	"alice_id": "85c1b1ef-1bba-4f12-93a3-4c53f5a96c85",
	"bob_id": "b74280c6-d702-4a97-8cda-65f0da91c70d",
	"charlie_sees_both_memories": true,
	"charlie_sees_contradicts_link": true,
	"detect_http_code": 200,
	"pass": true,
	"reasons": [],
	"scenario": "6",
	"skipped": false,
	"tls_mode": "off",
	"topic": "sky-color-08068efc"
}

raw file

scenario-6.log (console trace)
alice writes claim: "sky-color-08068efc is blue" on node-1
bob writes contradicting claim: "sky-color-08068efc is red" on node-2
  alice.id=85c1b1ef-1bba-4f12-93a3-4c53f5a96c85 bob.id=b74280c6-d702-4a97-8cda-65f0da91c70d
settle 10s for quorum fanout + contradiction indexing
charlie queries /api/v1/contradictions on node-3
  HTTP 200
  sees both memories: True; sees contradicts link: True

raw file

Scenario 9 — Mutation round-trip PASS

scenario-9.json (report)
{
	"agent_group": "ironclaw",
	"charlie_view": {
		"agent_id": "ai:alice",
		"content": "v2-307767e96fb544a78dd6c570a01108fe"
	},
	"m1_id": "054907c2-a079-4f52-9560-012fd21c068c",
	"pass": true,
	"put_http_code": 200,
	"reasons": [],
	"scenario": "9",
	"skipped": false,
	"tls_mode": "off",
	"v1_uuid": "v1-15e35d2489f447ebb78774b4ea835e7d",
	"v2_uuid": "v2-307767e96fb544a78dd6c570a01108fe"
}

raw file

scenario-9.log (console trace)
alice writes M1 content=v1-15e35d2489f447ebb78774b4ea835e7d on node-1
  M1 id=054907c2-a079-4f52-9560-012fd21c068c
settle 5s for initial replication
bob updates M1 content=v2-307767e96fb544a78dd6c570a01108fe on node-2 via PUT
  PUT returned HTTP 200
settle 8s for update fanout
charlie reads M1 on node-3 and checks content + provenance
  charlie sees content="v2-307767e96fb544a78dd6c570a01108fe" agent_id="ai:alice"

raw file
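
The property scenario 9 verifies is that a PUT from any agent replaces `content` while leaving the original writer's provenance (`metadata.agent_id`) untouched: bob's update above still reads back with `agent_id="ai:alice"`. A sketch of that merge semantic, with field names taken from the memory objects in this report (the server's actual update path is an assumption):

```python
def apply_put(stored, new_content, caller_agent_id):
    """Mutation semantics scenario 9 checks: replace content, keep provenance.

    The caller's id is accepted but deliberately NOT written into
    metadata.agent_id -- provenance stays with the original writer.
    """
    updated = dict(stored)          # shallow copy; metadata left as-is
    updated["content"] = new_content
    return updated
```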

Scenario 10 — Deletion propagation PASS

scenario-10.json (report)
{
	"agent_group": "ironclaw",
	"delete_http_code": 200,
	"m1_id": "7eb59338-0455-4013-8658-90014836a505",
	"pass": true,
	"post_delete_hits": {
		"node-2": 0,
		"node-3": 0,
		"node-4": 0
	},
	"post_delete_still_visible_peers": 0,
	"pre_delete_visible_peers": 3,
	"reasons": [],
	"scenario": "10",
	"skipped": false,
	"tls_mode": "off",
	"uuid": "d-18ca820d4a2c4cc5b3b53179fb256205"
}

raw file

scenario-10.log (console trace)
alice writes M1 content=d-18ca820d4a2c4cc5b3b53179fb256205 on node-1
  created memory id=7eb59338-0455-4013-8658-90014836a505
settle 8s for pre-delete fanout
pre-delete: verifying M1 is visible on all peers
  pre-delete node-2 sees 1
  pre-delete node-3 sees 1
  pre-delete node-4 sees 1
alice deletes M1 on node-1
  DELETE returned HTTP 200
settle 15s for tombstone propagation
post-delete: verifying M1 is GONE from all peers
  post-delete node-2 sees 0 (expected 0)
  post-delete node-3 sees 0 (expected 0)
  post-delete node-4 sees 0 (expected 0)

raw file
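
The fixed "settle Ns" sleeps in these traces (8s pre-delete, 15s for tombstones) could be replaced by a bounded poll that re-checks until the condition holds, which both shortens happy-path runs and tolerates slow fanout. A generic helper sketch (not part of the harness):

```python
import time

def poll_until(check, timeout_s=15.0, interval_s=1.0):
    """Re-run `check` until it returns True or `timeout_s` elapses.

    Returns the final check result, so a caller can still assert on it,
    e.g. poll_until(lambda: peer_hits("node-2") == 0).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return check()  # one last attempt at the deadline
```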

Scenario 11 — Link integrity PASS

scenario-11.json (report)
{
	"agent_group": "ironclaw",
	"charlie_sees_link": 1,
	"link_http_code": 201,
	"m1_id": "a5cdc92d-3061-4b71-b74d-cceb57cb885d",
	"m2_id": "8943c7bf-786d-46d6-925f-e80b6079a531",
	"pass": true,
	"reasons": [],
	"relation": "related_to",
	"scenario": "11",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-11.log (console trace)
alice writes M1 on node-1
bob writes M2 on node-2
  M1=a5cdc92d-3061-4b71-b74d-cceb57cb885d M2=8943c7bf-786d-46d6-925f-e80b6079a531
settle 5s for pre-link replication
alice links M1 -> M2 with relation=related_to
  link POST returned HTTP 201
settle 8s for link fanout
charlie queries links of M1 on node-3
  charlie sees M1->M2 link: 1 (expected >=1)

raw file

Scenario 12 — Agent registration FAIL

Reasons: node-4 did not see registered agent ai:dave-probe-de012782

scenario-12.json (report)
{
	"agent_group": "ironclaw",
	"pass": false,
	"peers_see": {
		"node_2": 1,
		"node_3": 1,
		"node_4": 0
	},
	"reason": "node-4 did not see registered agent ai:dave-probe-de012782",
	"reasons": [
		"node-4 did not see registered agent ai:dave-probe-de012782"
	],
	"register_http_code": 201,
	"registered_agent": "ai:dave-probe-de012782",
	"scenario": "12",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-12.log (console trace)
alice registers new agent ai:dave-probe-de012782 on node-1
  POST /api/v1/agents returned HTTP 201
settle 10s for agent-list fanout
  node-2 sees ai:dave-probe-de012782: 1 (expected >=1)
  node-3 sees ai:dave-probe-de012782: 1 (expected >=1)
  node-4 sees ai:dave-probe-de012782: 0 (expected >=1)

raw file

Scenario 13 — Concurrent write contention PASS

scenario-13.json (report)
{
	"agent_group": "ironclaw",
	"m1_id": "15775189-6d1c-49d9-8569-929cf0b0b216",
	"pass": true,
	"peer_view": {
		"node_1": "vb-d65f78c6c86d411b9474b77fb1773e67",
		"node_2": "vb-d65f78c6c86d411b9474b77fb1773e67",
		"node_3": "vb-d65f78c6c86d411b9474b77fb1773e67",
		"node_4": "vb-d65f78c6c86d411b9474b77fb1773e67"
	},
	"reasons": [],
	"scenario": "13",
	"skipped": false,
	"submitted": {
		"v0": "v0-53bf6e2772354965825cd614d401d0a6",
		"vA_alice": "va-f9315dd6a7a443d3b29c897bb68d7683",
		"vB_bob": "vb-d65f78c6c86d411b9474b77fb1773e67"
	},
	"tls_mode": "off"
}

raw file

scenario-13.log (console trace)
alice writes M1 content=v0-53bf6e2772354965825cd614d401d0a6 on node-1
  M1 id=15775189-6d1c-49d9-8569-929cf0b0b216
settle 5s for initial replication
alice + bob issue concurrent PUTs (vA=va-f9315dd6a7a443d3b29c897bb68d7683 from alice, vB=vb-d65f78c6c86d411b9474b77fb1773e67 from bob)
  concurrent PUT results: [(0, {'body': {'access_count': 0, 'confidence': 1.0, 'content': 'va-f9315dd6a7a443d3b29c897bb68d7683', 'created_at': '2026-04-22T03:31:26.267282233+00:00', 'expires_at': '2026-04-29T03:31:26.267282233+00:00', 'id': '15775189-6d1c-49d9-8569-929cf0b0b216', 'metadata': {'agent_id': 'ai:alice', 'scenario': '13'}, 'namespace': 'scenario13-contention', 'priority': 5, 'source': 'api', 'tags': [], 'tier': 'mid', 'title': 'm1', 'updated_at': '2026-04-22T03:31:31.997865522+00:00'}, 'http_code': 200}), (0, {'body': {'access_count': 0, 'confidence': 1.0, 'content': 'vb-d65f78c6c86d411b9474b77fb1773e67', 'created_at': '2026-04-22T03:31:26.267282233+00:00', 'expires_at': '2026-04-29T03:31:26.267282233+00:00', 'id': '15775189-6d1c-49d9-8569-929cf0b0b216', 'metadata': {'agent_id': 'ai:alice', 'scenario': '13'}, 'namespace': 'scenario13-contention', 'priority': 5, 'source': 'api', 'tags': [], 'tier': 'mid', 'title': 'm1', 'updated_at': '2026-04-22T03:31:32.016733675+00:00'}, 'http_code': 200})]
settle 10s for quorum convergence
  node-1 sees content=vb-d65f78c6c86d411b9474b77fb1773e67
  node-2 sees content=vb-d65f78c6c86d411b9474b77fb1773e67
  node-3 sees content=vb-d65f78c6c86d411b9474b77fb1773e67
  node-4 sees content=vb-d65f78c6c86d411b9474b77fb1773e67

raw file
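
The convergence above is consistent with last-write-wins on `updated_at`: bob's PUT landed at `03:31:32.016…`, alice's at `03:31:31.997…`, and every node settled on bob's `vb-` value. The server's actual merge rule is not documented in this report, so treat this as an inferred sketch; RFC 3339 timestamps with a fixed offset compare correctly as strings:

```python
def last_write_wins(a, b):
    """Pick the replica with the later `updated_at` (ties favor `a`).

    An assumption inferred from the scenario-13 outcome, not a documented
    server behavior.
    """
    return a if a["updated_at"] >= b["updated_at"] else b
```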

Scenario 14 — Partition tolerance PASS

scenario-14.json (report)
{
	"agent_group": "ironclaw",
	"expected_post_recovery": 20,
	"node3_saw": 20,
	"partition_target": "node-3",
	"pass": true,
	"reasons": [],
	"scenario": "14",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-14.log (console trace)
suspending ai-memory on node-3 (SIGSTOP)
  !! ssh timeout (15s): root@143.198.12.140 pgrep -f 'ai-memory serve' | xargs -r kill -STOP
settle 2s for process-suspend observe
writing 10 memories each from alice + bob during node-3 outage
resuming ai-memory on node-3 (SIGCONT)
settle 20s for post-partition catchup
checking node-3 caught up
  node-3 sees 20 memories in scenario14-partition (expected 20)

raw file

Scenario 15 — Read-your-writes PASS

scenario-15.json (report)
{
	"agent_group": "ironclaw",
	"pass": true,
	"reasons": [],
	"scenario": "15",
	"skipped": false,
	"tls_mode": "off",
	"uuid": "ryw-b2d968a7fc01457ba263c84ac6cf2577",
	"writer_sees_own_write": 1
}

raw file

scenario-15.log (console trace)
alice writes + immediately reads M1 on node-1 (uuid=ryw-b2d968a7fc01457ba263c84ac6cf2577)
  alice sees 1 (expected 1) immediately after write

raw file

Scenario 16 — Tier promotion PASS

scenario-16.json (report)
{
	"agent_group": "ironclaw",
	"bob_sees_tier": "long",
	"m1_id": "0622449c-23c4-4a47-a7b9-0d375a7bf7b5",
	"pass": true,
	"promote_http_code": 200,
	"reasons": [],
	"scenario": "16",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-16.log (console trace)
alice writes M1 tier=short on node-1
  M1 id=0622449c-23c4-4a47-a7b9-0d375a7bf7b5
settle 5s for pre-promote replication
alice promotes M1 to tier=long
  promote returned HTTP 200
settle 8s for promotion fanout
  bob sees tier=long (expected long)

raw file

Scenario 17 — Stats consistency PASS

scenario-17.json (report)
{
	"agent_group": "ironclaw",
	"expected_count": 15,
	"pass": true,
	"per_peer": {
		"node_1": 15,
		"node_2": 15,
		"node_3": 15,
		"node_4": 15
	},
	"reasons": [],
	"scenario": "17",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-17.log (console trace)
phase A: each of 3 agents writes 5 memories to scenario17-stats
  ai:alice on 138.197.33.142
  ai:bob on 104.131.116.229
  ai:charlie on 143.198.12.140
settle 15s for W=2 fanout
phase B: querying count on every peer
  node-1 count=15 (expected 15)
  node-2 count=15 (expected 15)
  node-3 count=15 (expected 15)
  node-4 count=15 (expected 15)

raw file

Scenario 18 — Semantic query expansion FAIL

Reasons: semantic query did not surface alice's memory | semantic query did not surface bob's memory

scenario-18.json (report)
{
	"agent_group": "ironclaw",
	"pass": false,
	"query": "morning outdoor exercise routine",
	"reason": "semantic query did not surface alice's memory; semantic query did not surface bob's memory",
	"reasons": [
		"semantic query did not surface alice's memory",
		"semantic query did not surface bob's memory"
	],
	"scenario": "18",
	"skipped": false,
	"tls_mode": "off",
	"writers": [
		{
			"agent": "ai:alice",
			"marker": "alice-sunrise-3b98b7ce",
			"seen_by_charlie": 0
		},
		{
			"agent": "ai:bob",
			"marker": "bob-daybreak-d2b4e8c1",
			"seen_by_charlie": 0
		}
	]
}

raw file

scenario-18.log (console trace)
alice writes A on node-1
bob writes B on node-2
settle 15s for fanout + index rebuild
charlie queries on node-3 with semantically-related prompt
  charlie sees alice's memory: 0 (expected >=1)
  charlie sees bob's memory: 0 (expected >=1)

raw file

Scenario 22 — Identity spoofing resistance PASS

scenario-22.json (report)
{
	"agent_group": "ironclaw",
	"pass": true,
	"reasons": [],
	"scenario": "22",
	"skipped": false,
	"tests": {
		"body_vs_header_conflict": {
			"acceptable": [
				"ai:body-wins",
				"ai:attacker"
			],
			"stored_agent_id": "ai:attacker"
		},
		"header_only": {
			"expected": "ai:alice",
			"stored_agent_id": "ai:alice"
		}
	},
	"tls_mode": "off"
}

raw file

scenario-22.log (console trace)
test 1: header-only X-Agent-Id=ai:alice
settle 2s for read-settle
  stored metadata.agent_id for header-only write: ai:alice (expected ai:alice)
test 2: body.metadata.agent_id=ai:body-wins vs X-Agent-Id=ai:attacker
settle 2s for read-settle
  stored metadata.agent_id for body+header conflict: ai:attacker

raw file

Scenario 23 — Malicious content fuzz UNKNOWN

scenario-23.json (report)
(empty: the scenario crashed before writing a report; see the console trace below)

raw file

scenario-23.log (console trace)
payload sql: 61 bytes
payload html: 66 bytes
payload oversize: 1048576 bytes
Traceback (most recent call last):
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/scenarios/23_malicious_content_fuzz.py", line 106, in <module>
    main()
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/scenarios/23_malicious_content_fuzz.py", line 49, in main
    rc, write_doc = h.write_memory(
                    ^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 202, in write_memory
    return self.http_on(node_ip, "POST", "/api/v1/memories",
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 157, in http_on
    result = self.ssh_exec(node_ip, remote_cmd, timeout=timeout)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 103, in ssh_exec
    return self._run(cmd, timeout=timeout, stdin=stdin)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 87, in _run
    return subprocess.run(
           ^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 548, in run
    with Popen(*popenargs, **kwargs) as process:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/lib/python3.12/subprocess.py", line 1955, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
OSError: [Errno 7] Argument list too long: 'ssh'

raw file
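
The `OSError: [Errno 7] Argument list too long` above indicates the harness inlined the 1 MiB oversize payload into the `ssh` argv, exceeding the kernel's argument-size limit. Passing the body on stdin instead (`curl -d @-`) keeps every argv element tiny regardless of payload size. A sketch of that fix; the command shape is illustrative, not the harness's real `http_on` API:

```python
def build_remote_post(node_ip, path, payload_bytes):
    """Build an ssh command whose argv stays small for any payload size.

    The payload travels on stdin (curl reads the POST body from `@-`),
    so only the fixed command text occupies argv.
    """
    remote = ("curl -sS -X POST -H 'Content-Type: application/json' "
              f"-d @- 'http://127.0.0.1:9077{path}'")
    argv = ["ssh", f"root@{node_ip}", remote]
    return argv, payload_bytes  # pass payload_bytes as subprocess stdin
```

Usage would be `subprocess.run(argv, input=stdin, capture_output=True)`, which streams the body without ever placing it in the argument list.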

Scenario 24 — Byzantine peer PASS

scenario-24.json (report)
{
	"agent_group": "ironclaw",
	"byzantine_marker": "bz-bf0fb28e4f0f4a04b3512f971e65f3ef",
	"pass": true,
	"reasons": [],
	"scenario": "24",
	"skipped": false,
	"stored_metadata_agent_id": "REJECTED_BY_SERVER",
	"sync_push_http_code": "422",
	"tls_mode": "off"
}

raw file

scenario-24.log (console trace)
node-2 sends sync_push to node-3 claiming sender_agent_id=ai:alice
  sync_push returned HTTP 422
settle 5s for server-side sync apply
  node-3 stored metadata.agent_id=ABSENT (declared: ai:alice)
  sync_push rejected HTTP 422 — stricter-than-spec, acceptable

raw file

Scenario 25 — Clock skew tolerance PASS

scenario-25.json (report)
{
	"agent_group": "ironclaw",
	"clock_offset_seconds": 300,
	"marker": "ck-a2fad0ddf8ae46129184280a46013193",
	"pass": true,
	"reasons": [],
	"scenario": "25",
	"seen_on": {
		"node_1": 1,
		"node_3": 1
	},
	"skipped": false,
	"target_node": "node-3",
	"tls_mode": "off"
}

raw file

scenario-25.log (console trace)
shifting node-3 clock +300s (NTP disabled for the duration)
  node-3 now reports: Wed Apr 22 03:39:00 UTC 2026
alice writes on node-1 (normal clock); waiting for quorum fanout to skewed node-3
settle 15s for skewed-peer convergence
  node-3 (+300s clock) sees marker: 1 (expected >=1)
  node-1 sees marker: 1 (expected >=1)
reverting node-3 clock

raw file

Scenario 28 — memory_search keyword FAIL

Reasons: node-2 did not find the unique token via /search | node-3 did not find the unique token via /search

scenario-28.json (report)
{
	"agent_group": "ironclaw",
	"pass": false,
	"peer_hits": {
		"node_2": 0,
		"node_3": 0
	},
	"reason": "node-2 did not find the unique token via /search; node-3 did not find the unique token via /search",
	"reasons": [
		"node-2 did not find the unique token via /search",
		"node-3 did not find the unique token via /search"
	],
	"scenario": "28",
	"skipped": false,
	"tls_mode": "off",
	"token": "kwsearch-a225d3dc9b"
}

raw file

scenario-28.log (console trace)
alice writes a row containing unique token=kwsearch-a225d3dc9b
settle 8s for search index populate + fanout
bob + charlie call /api/v1/search with the exact token
  node-2 keyword search returned 0 hits
  node-3 keyword search returned 0 hits

raw file

Scenario 29 — memory_archive lifecycle FAIL

Reasons: archive POST returned HTTP 405 | bob did not see M1 in /api/v1/archive | restore returned HTTP 404

scenario-29.json (report)
{
	"agent_group": "ironclaw",
	"archive_http_code": 405,
	"bob_sees_archived": false,
	"m1_id": "b0c797cb-3c12-4f71-9725-e1d7a7f92075",
	"node4_active_rows": 1,
	"pass": false,
	"reason": "archive POST returned HTTP 405; bob did not see M1 in /api/v1/archive; restore returned HTTP 404",
	"reasons": [
		"archive POST returned HTTP 405",
		"bob did not see M1 in /api/v1/archive",
		"restore returned HTTP 404"
	],
	"restore_http_code": 404,
	"scenario": "29",
	"skipped": false,
	"stats_shape_ok": true,
	"tls_mode": "off"
}

raw file

scenario-29.log (console trace)
alice writes M1 on node-1
  M1 id=b0c797cb-3c12-4f71-9725-e1d7a7f92075
settle 5s for pre-archive replication
alice archives M1 via DELETE /api/v1/memories/{id} (soft-delete → archive)
  archive returned HTTP 405
settle 5s for archive propagation
bob queries /api/v1/archive on node-2
  bob sees M1 in archive: False
charlie restores M1 via /api/v1/archive/{id}/restore on node-3
  restore returned HTTP 404
settle 5s for restore propagation
node-4 aggregator: M1 must be active again
  node-4 active rows matching marker: 1
fetch /api/v1/archive/stats on node-4

raw file

Scenario 30 — memory_capabilities handshake FAIL

Reasons: no peer returned a capabilities response — endpoint may not be exposed

scenario-30.json (report)
{
	"agent_group": "ironclaw",
	"pass": false,
	"peer_views": {
		"node_1": null,
		"node_2": null,
		"node_3": null,
		"node_4": null
	},
	"reason": "no peer returned a capabilities response — endpoint may not be exposed",
	"reasons": [
		"no peer returned a capabilities response — endpoint may not be exposed"
	],
	"scenario": "30",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-30.log (console trace)
  node-1 capabilities: []
  node-2 capabilities: []
  node-3 capabilities: []
  node-4 capabilities: []

raw file

Scenario 31 — memory_gc quiescence PASS

scenario-31.json (report)
{
	"agent_group": "ironclaw",
	"expected_live": 2,
	"forget_http_code": 400,
	"gc_http_code": 200,
	"live_markers_per_peer": {
		"node_1": 2,
		"node_2": 2,
		"node_3": 2,
		"node_4": 2
	},
	"pass": true,
	"reasons": [],
	"scenario": "31",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-31.log (console trace)
alice writes 4 memories
settle 6s for pre-gc replication
alice forgets 2 via /api/v1/forget
  forget returned HTTP 400
settle 5s for forget propagation
bob triggers /api/v1/gc on node-2
  gc returned HTTP 200
settle 8s for post-gc settle
verify remaining 2 markers are still readable on every peer
  node-1 sees 2/2 live markers
  node-2 sees 2/2 live markers
  node-3 sees 2/2 live markers
  node-4 sees 2/2 live markers

raw file

Scenario 32 — memory_inbox + notify FAIL

Reasons: notify returned HTTP 404 | bob's inbox did not deliver alice's notify

scenario-32.json (report)
{
	"agent_group": "ironclaw",
	"bob_inbox_count": 0,
	"bob_sees_marker": false,
	"charlie_inbox_count": 0,
	"charlie_sees_marker": false,
	"marker": "inb-bfd7b118a97a4faaad0556d6406aeda6",
	"notify_http_code": 404,
	"pass": false,
	"reason": "notify returned HTTP 404; bob's inbox did not deliver alice's notify",
	"reasons": [
		"notify returned HTTP 404",
		"bob's inbox did not deliver alice's notify"
	],
	"scenario": "32",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-32.log (console trace)
alice calls /api/v1/notify → target=ai:bob
  notify returned HTTP 404
settle 6s for notification fanout
bob queries his inbox on node-2
  bob inbox has 0 messages; sees marker: False
charlie queries his inbox on node-3 (must NOT see it)
  charlie inbox has 0 messages; sees marker: False

raw file

Scenario 33 — memory_subscribe pub/sub FAIL

Reasons: subscribe returned HTTP 404 | bob's subscription list did not include the subscribed namespace | unsubscribe returned HTTP 404

scenario-33.json (report)
{
	"agent_group": "ironclaw",
	"m1_delivered": 1,
	"namespace": "scenario33-pubsub-22c00c",
	"ns_in_subs_after": false,
	"ns_in_subs_before": false,
	"pass": false,
	"reason": "subscribe returned HTTP 404; bob's subscription list did not include the subscribed namespace; unsubscribe returned HTTP 404",
	"reasons": [
		"subscribe returned HTTP 404",
		"bob's subscription list did not include the subscribed namespace",
		"unsubscribe returned HTTP 404"
	],
	"scenario": "33",
	"skipped": false,
	"subscribe_http_code": 404,
	"subscriptions_after_count": 0,
	"subscriptions_before_count": 0,
	"tls_mode": "off",
	"unsubscribe_http_code": 404
}

raw file

scenario-33.log (console trace)
bob subscribes to namespace scenario33-pubsub-22c00c on node-2
  subscribe returned HTTP 404
settle 2s for subscription settle
  bob subscriptions: 0 entries; contains ns: False
alice writes M1 into the subscribed namespace
settle 6s for write fanout to subscribers
  bob sees M1 in subscribed namespace: 1
bob unsubscribes from scenario33-pubsub-22c00c
  unsubscribe returned HTTP 404
settle 2s for unsubscribe settle
  bob subscriptions after unsubscribe: ns still present = False
alice writes M2 post-unsubscribe (may still replicate via federation but subscription list excludes ns)
settle 5s for post-unsubscribe settle

raw file

Scenario 34 — memory_pending governance FAIL

Reasons: set-standard returned HTTP 405 | approve returned HTTP 403 | reject returned HTTP 404 | charlie saw rejected row — reject didn't prevent publication

scenario-34.json (report)
{
	"agent_group": "ironclaw",
	"approve_http_code": 403,
	"charlie_sees": {
		"approved": 1,
		"rejected": 1
	},
	"namespace": "scenario34-pending-8863db",
	"pass": false,
	"pending_queue_count": 0,
	"reason": "set-standard returned HTTP 405; approve returned HTTP 403; reject returned HTTP 404; charlie saw rejected row — reject didn't prevent publication",
	"reasons": [
		"set-standard returned HTTP 405",
		"approve returned HTTP 403",
		"reject returned HTTP 404",
		"charlie saw rejected row — reject didn't prevent publication"
	],
	"reject_http_code": 404,
	"scenario": "34",
	"set_standard_http_code": 405,
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-34.log (console trace)
alice sets namespace standard on scenario34-pending-8863db: write=approve, approver=ai:bob
  set-standard returned HTTP 405
settle 2s for standard settle
alice writes two memories into the governed namespace (should land in pending)
  p1=c3b9d116-b37b-4bba-af1c-da880dce976c p2=597c79db-2348-4446-868d-507e36dd68ee
settle 4s for pending queue settle
bob lists pending on node-2
  pending queue has 0 entries
bob approves p1, rejects p2
  approve HTTP 403; reject HTTP 404
settle 5s for decision fanout
charlie reads the namespace — expects ONLY approved marker
  charlie sees approved=1 rejected=1

raw file

Scenario 35 — memory_namespace standards FAIL

Reasons: set-parent returned HTTP 405 | set-child returned HTTP 405 | clear-standard returned HTTP 405 | parent rule not layered into child's standard view | child rule missing from standard view

scenario-35.json (report)
{
	"agent_group": "ironclaw",
	"child_ns": "scenario35-parent-2fb829/child",
	"clear_http_code": 405,
	"get_standard_http_code": 200,
	"parent_ns": "scenario35-parent-2fb829",
	"pass": false,
	"post_clear_has_child_rule": false,
	"reason": "set-parent returned HTTP 405; set-child returned HTTP 405; clear-standard returned HTTP 405; parent rule not layered into child's standard view; child rule missing from standard view",
	"reasons": [
		"set-parent returned HTTP 405",
		"set-child returned HTTP 405",
		"clear-standard returned HTTP 405",
		"parent rule not layered into child's standard view",
		"child rule missing from standard view"
	],
	"scenario": "35",
	"sees_child_rule": false,
	"sees_parent_rule": false,
	"set_child_http_code": 405,
	"set_parent_http_code": 405,
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-35.log (console trace)
alice writes parent-standard-memory on node-1
alice sets namespace standard on scenario35-parent-2fb829
  set-parent returned HTTP 405
alice writes child-standard-memory on node-1
alice sets namespace standard on scenario35-parent-2fb829/child with parent=scenario35-parent-2fb829
  set-child returned HTTP 405
settle 4s for standard fanout
bob gets standard for scenario35-parent-2fb829/child on node-2 (expects layered parent+child)
  get-standard returned HTTP 200
  parent-rule visible=False; child-rule visible=False
alice clears standard on scenario35-parent-2fb829/child
  clear returned HTTP 405
settle 3s for clear settle

raw file

Scenario 36 — memory_session_start FAIL

Reasons: session_start returned HTTP 404

scenario-36.json (report)
{
	"agent_group": "ironclaw",
	"pass": false,
	"reason": "session_start returned HTTP 404",
	"reasons": [
		"session_start returned HTTP 404"
	],
	"scenario": "36",
	"skipped": false,
	"start_http_code": 404,
	"tls_mode": "off"
}

raw file

scenario-36.log (console trace)
alice starts a session on node-1
  session_start returned HTTP 404, session_id=

raw file

Scenario 37 — memory_get_links bidirectional PASS

scenario-37.json (report)
{
	"agent_group": "ironclaw",
	"forward_has_target": true,
	"m1": "dcbe7e82-2a1d-466c-96f4-23f0d93ea38f",
	"m2": "ece52486-547f-41f5-886e-94a5c81d7ebc",
	"pass": true,
	"reasons": [],
	"reverse_has_source": true,
	"scenario": "37",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-37.log (console trace)
alice writes M1 + M2 + links M1→M2
  M1=dcbe7e82-2a1d-466c-96f4-23f0d93ea38f M2=ece52486-547f-41f5-886e-94a5c81d7ebc
settle 6s for link fanout
charlie queries /api/v1/links/M1 (forward)
charlie queries /api/v1/links/M2 (reverse)

raw file

Scenario 38 — /export + /import PASS

scenario-38.json (report)
{
	"agent_group": "ironclaw",
	"dst_ns": "scenario38-dst-14a563",
	"expected_rows": 5,
	"export_http_code": 200,
	"import_http_code": 200,
	"markers_preserved": 5,
	"pass": true,
	"reasons": [],
	"rows_exported": 5,
	"rows_in_destination": 5,
	"scenario": "38",
	"skipped": false,
	"src_ns": "scenario38-src-14a563",
	"tls_mode": "off"
}

raw file

scenario-38.log (console trace)
alice writes 5 rows into scenario38-src-14a563
settle 4s for pre-export replication
alice exports on node-1 (endpoint has no namespace filter; filter client-side)
  export returned HTTP 200, total_rows=197
  rewrote 5 memories from scenario38-src-14a563 -> scenario38-dst-14a563
bob imports the payload into scenario38-dst-14a563 on node-2
  import returned HTTP 200
settle 6s for import + fanout
verify row counts match on destination
  scenario38-dst-14a563 has 5 rows (expected 5)
  markers preserved in destination: 5/5

raw file
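
Because the export endpoint has no namespace filter (per the trace, the full 197-row dump came back), the scenario filters client-side and rewrites the namespace before re-importing. A sketch of that rewrite step, with the row shape assumed from the memory objects elsewhere in this report:

```python
def rewrite_namespace(exported_rows, src_ns, dst_ns):
    """Keep only rows in `src_ns` and retarget them to `dst_ns`.

    Mirrors the trace line 'rewrote 5 memories from src -> dst';
    original rows are not mutated.
    """
    out = []
    for row in exported_rows:
        if row.get("namespace") == src_ns:
            out.append(dict(row, namespace=dst_ns))
    return out
```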

Scenario 39 — /sync/since delta FAIL

Reasons: delta returned 0/6 expected markers — delta-sync incomplete

scenario-39.json (report)
{
	"agent_group": "ironclaw",
	"checkpoint_ms": 1776829020939,
	"expected_markers": 6,
	"markers_present": 0,
	"namespace": "scenario39-delta-402179",
	"pass": false,
	"reason": "delta returned 0/6 expected markers — delta-sync incomplete",
	"reasons": [
		"delta returned 0/6 expected markers — delta-sync incomplete"
	],
	"rows_returned": 0,
	"scenario": "39",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-39.log (console trace)
checkpoint = 1776829020939
suspending ai-memory on node-3
  !! ssh timeout (15s): root@143.198.12.140 pgrep -f 'ai-memory serve' | xargs -r kill -STOP
alice + bob write 6 rows while node-3 is out
resuming ai-memory on node-3
settle 4s for process resume
node-3 asks node-1 /api/v1/sync/since?after=1776829020939
  !! ssh timeout (30s): root@143.198.12.140 curl -sS 'http://138.197.33.142:9077/api/v1/sync/since?after=1776829020939&namespace=scenario39-delta-402179'
  /sync/since returned 0 rows; 0/6 match our markers

raw file
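
A correct `/sync/since` response would contain every row whose update timestamp is strictly after the checkpoint; here the endpoint returned nothing (and the curl itself timed out, so the empty result may be transport failure rather than an empty delta). A sketch of the expected server-side filter, with `updated_at_ms` as an assumed field name:

```python
def delta_since(rows, checkpoint_ms):
    """Rows updated strictly after the checkpoint (ms since epoch).

    The field name `updated_at_ms` is illustrative; the report's
    checkpoint (1776829020939) is a millisecond epoch value.
    """
    return [r for r in rows if r["updated_at_ms"] > checkpoint_ms]
```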

Scenario 40 — /memories/bulk FAIL

Reasons: bulk returned HTTP 422 | node-2 saw 0/500 bulk rows after fanout | node-3 saw 0/500 bulk rows after fanout | node-4 saw 0/500 bulk rows after fanout

scenario-40.json (report)
{
	"agent_group": "ironclaw",
	"bulk_http_code": "422",
	"bulk_size": 500,
	"namespace": "scenario40-bulk-85ac25",
	"pass": false,
	"per_peer_count": {
		"node_2": 0,
		"node_3": 0,
		"node_4": 0
	},
	"reason": "bulk returned HTTP 422; node-2 saw 0/500 bulk rows after fanout; node-3 saw 0/500 bulk rows after fanout; node-4 saw 0/500 bulk rows after fanout",
	"reasons": [
		"bulk returned HTTP 422",
		"node-2 saw 0/500 bulk rows after fanout",
		"node-3 saw 0/500 bulk rows after fanout",
		"node-4 saw 0/500 bulk rows after fanout"
	],
	"scenario": "40",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-40.log (console trace)
constructing 500-row bulk payload
staging bulk payload on node-1 /tmp, then POST /api/v1/memories/bulk
  bulk POST returned HTTP 422
settle 20s for bulk fanout across 3 peers + aggregator
  node-2 count=0 (expected 500)
  node-3 count=0 (expected 500)
  node-4 count=0 (expected 500)

raw file

Scenario 41 — /metrics Prometheus PASS

scenario-41.json (report)
{
	"activity_namespace": "scenario41-activity-cc805d",
	"agent_group": "ironclaw",
	"pass": true,
	"per_peer": {
		"node_1": {
			"counters_t0": 7,
			"counters_t1": 7,
			"regressed_keys": 0
		},
		"node_2": {
			"counters_t0": 7,
			"counters_t1": 7,
			"regressed_keys": 0
		},
		"node_3": {
			"counters_t0": 7,
			"counters_t1": 7,
			"regressed_keys": 0
		}
	},
	"reasons": [],
	"scenario": "41",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-41.log (console trace)
scrape T0
  node-1 T0 parsed 7 memory counters
  node-2 T0 parsed 7 memory counters
  node-3 T0 parsed 7 memory counters
settle 5s for counter update
scrape T1
  node-1 T1 parsed 7 memory counters
  node-2 T1 parsed 7 memory counters
  node-3 T1 parsed 7 memory counters

raw file
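
The invariant scenario 41 checks is Prometheus counter monotonicity: no counter scraped at T1 may be lower than its T0 value. The `regressed_keys: 0` entries above report exactly that comparison. A minimal sketch of the check over two parsed scrapes:

```python
def regressed_keys(t0, t1):
    """Counter names whose T1 value dropped below T0.

    `t0` and `t1` are dicts of counter name -> value parsed from two
    /metrics scrapes; a non-empty result violates counter semantics.
    """
    return [k for k in t0 if k in t1 and t1[k] < t0[k]]
```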

Scenario 42 — /namespaces enumeration PASS

scenario-42.json (report)
{
	"agent_group": "ironclaw",
	"namespaces": [
		"scenario42-84aa43-0",
		"scenario42-84aa43-1",
		"scenario42-84aa43-2"
	],
	"pass": true,
	"per_peer": {
		"node_1": {
			"scenario42-84aa43-0": 2,
			"scenario42-84aa43-1": 2,
			"scenario42-84aa43-2": 2
		},
		"node_2": {
			"scenario42-84aa43-0": 2,
			"scenario42-84aa43-1": 2,
			"scenario42-84aa43-2": 2
		},
		"node_3": {
			"scenario42-84aa43-0": 2,
			"scenario42-84aa43-1": 2,
			"scenario42-84aa43-2": 2
		},
		"node_4": {
			"scenario42-84aa43-0": 2,
			"scenario42-84aa43-1": 2,
			"scenario42-84aa43-2": 2
		}
	},
	"reasons": [],
	"scenario": "42",
	"skipped": false,
	"tls_mode": "off"
}

raw file

scenario-42.log (console trace)
alice writes into 3 distinct namespaces: ['scenario42-84aa43-0', 'scenario42-84aa43-1', 'scenario42-84aa43-2']
settle 10s for namespace index fanout
  node-1 sees 3/3 target namespaces, counts: {'scenario42-84aa43-0': 2, 'scenario42-84aa43-1': 2, 'scenario42-84aa43-2': 2}
  node-2 sees 3/3 target namespaces, counts: {'scenario42-84aa43-0': 2, 'scenario42-84aa43-1': 2, 'scenario42-84aa43-2': 2}
  node-3 sees 3/3 target namespaces, counts: {'scenario42-84aa43-0': 2, 'scenario42-84aa43-1': 2, 'scenario42-84aa43-2': 2}
  node-4 sees 3/3 target namespaces, counts: {'scenario42-84aa43-0': 2, 'scenario42-84aa43-1': 2, 'scenario42-84aa43-2': 2}

raw file

All artifacts