
Campaign a2a-ironclaw-v3r17-tls-release-v0.6.2 FAIL

Agent group: ironclaw (homogeneous)
ai-memory ref: release/v0.6.2
Completed at: 2026-04-22T22:44:24Z
Overall pass: false
Skipped reports: 1

Infrastructure

Provider: digitalocean
Region: nyc3
Droplet size: s-2vcpu-4gb
Topology: 4-node federation mesh (W=2/N=4)
Scenarios started: 2026-04-22T22:32:37Z
Scenarios ended: 2026-04-22T22:44:24Z
Dispatched by: alphaonedev
Harness SHA: dba76cc25ac1
Workflow run: https://github.com/alphaonedev/ai-memory-ai2ai-gate/actions/runs/24805364112

Node roster

# | Role        | Agent ID   | Public IP       | Private IP
1 | agent       | ai:alice   | 157.245.209.140 | 10.10.1.3
2 | agent       | ai:bob     | 159.203.92.15   | 10.10.1.4
3 | agent       | ai:charlie | 167.71.89.28    | 10.10.1.5
4 | memory-only | -          | 159.89.181.112  | 10.10.1.2

Baseline attestation BASELINE OK

Per the authoritative baseline spec, every agent node must emit a self-attestation before any scenario is permitted to run. This run's attestation:

Spec version: 1.4.0 — see authoritative baseline.

Node   | Agent      | Framework       | Config SHA   | Pass
node-1 | ai:alice   | ironclaw 0.26.0 | 0807df9476e0 | PASS
node-2 | ai:bob     | ironclaw 0.26.0 | 0c01ab57c4d3 | PASS
node-3 | ai:charlie | ironclaw 0.26.0 | a602d0b2e914 | PASS

Per-check columns (Authentic, MCP ai-memory, xAI cfg, xAI default, Agent ID, Federation, UFW off, iptables, dead-man, F1 xAI, F2a substrate, F2b agent non-gating) are itemized per node in a2a-baseline.json below.
a2a-baseline.json
{
	"baseline_pass": true,
	"per_node": [
		{
			"spec_version": "1.4.0",
			"agent_type": "ironclaw",
			"agent_id": "ai:alice",
			"node_index": "1",
			"framework_version": "ironclaw 0.26.0",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "https://10.10.1.4:9077,https://10.10.1.5:9077,https://10.10.1.2:9077",
			"config_file_sha256": "0807df9476e0bf92c24602d5677cadc654bd74a79bf671b8f048772e9d57142e",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "2a33c474-5f9e-4ce3-95f0-195339976e50",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "3f4f19f3-4e89-476a-83a4-357d7a4061e5",
				"agent_canary_response_head": "error: unrecognized subcommand 'chat'    tip: a similar subcommand exists: 'channels'  Usage: ironclaw [OPTIONS] [COMMAND]  For more information, try '--help'. ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.10.1.4:9077:OK,10.10.1.5:9077:OK,10.10.1.2:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "tls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "2a33c474-5f9e-4ce3-95f0-195339976e50",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "ironclaw",
			"agent_id": "ai:bob",
			"node_index": "2",
			"framework_version": "ironclaw 0.26.0",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "https://10.10.1.3:9077,https://10.10.1.5:9077,https://10.10.1.2:9077",
			"config_file_sha256": "0c01ab57c4d3c7900a35e9929a7bc3844c30d18187c32ae6cd0235c96d20cc8b",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "9007b6ff-08bf-4450-96dc-1166e94fca70",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "544264e6-4515-4e1b-b42a-8aa76268704b",
				"agent_canary_response_head": "error: unrecognized subcommand 'chat'    tip: a similar subcommand exists: 'channels'  Usage: ironclaw [OPTIONS] [COMMAND]  For more information, try '--help'. ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.10.1.3:9077:OK,10.10.1.5:9077:OK,10.10.1.2:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "tls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "9007b6ff-08bf-4450-96dc-1166e94fca70",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		},
		{
			"spec_version": "1.4.0",
			"agent_type": "ironclaw",
			"agent_id": "ai:charlie",
			"node_index": "3",
			"framework_version": "ironclaw 0.26.0",
			"ai_memory_version": "v0.6.2",
			"peer_urls": "https://10.10.1.3:9077,https://10.10.1.4:9077,https://10.10.1.2:9077",
			"config_file_sha256": "a602d0b2e91494a6b6d0dce3cd34873aa7e68f919f30c9747223400a889ab1ed",
			"config_attestation": {
				"framework_is_authentic": true,
				"mcp_server_ai_memory_registered": true,
				"llm_backend_is_xai_grok": true,
				"llm_is_default_provider": true,
				"mcp_command_is_ai_memory": true,
				"agent_id_stamped": true,
				"federation_live": true,
				"ufw_disabled": true,
				"iptables_flushed": true,
				"dead_man_switch_scheduled": true
			},
			"negative_invariants": {
				"_description": "Alternative A2A channels must be OFF so a passing scenario is only passing via ai-memory shared memory. Any true here = thesis-preserving.",
				"a2a_protocol_off": true,
				"sub_agent_or_sessions_spawn_off": true,
				"alternative_channels_off": true,
				"tool_allowlist_is_memory_only": true,
				"a2a_gate_profile_locked": true
			},
			"functional_probes": {
				"xai_grok_chat_reachable": true,
				"xai_grok_sample_reply": "READY",
				"substrate_http_canary_f2a": true,
				"substrate_http_canary_uuid": "4bc458ee-7b9c-4212-9feb-70ac88989065",
				"agent_mcp_canary_f2b": false,
				"agent_mcp_canary_uuid": "31508de1-772c-4b9c-b0de-c2633dff3f25",
				"agent_canary_response_head": "error: unrecognized subcommand 'chat'    tip: a similar subcommand exists: 'channels'  Usage: ironclaw [OPTIONS] [COMMAND]  For more information, try '--help'. ",
				"_f2b_note": "F2b is LLM-dependent and non-blocking. F2a (deterministic HTTP substrate) gates baseline_pass.",
				"mesh_connectivity_f4": true,
				"mesh_edges_ok": 3,
				"mesh_edges_total": 3,
				"mesh_edges_detail": "10.10.1.3:9077:OK,10.10.1.4:9077:OK,10.10.1.2:9077:OK",
				"_f4_note": "F4 verifies this local nodes N-1 OUTBOUND mesh edges to every peer via both GET health and POST sync_push dry_run. Aggregator ANDs across N nodes to confirm full N*(N-1) bidirectional reachability. Gates baseline_pass.",
				"ai_memory_mcp_stdio_f5": true,
				"ai_memory_mcp_stdio_init_ok": true,
				"ai_memory_mcp_stdio_tools_ok": true,
				"ai_memory_mcp_stdio_tools_found": "memory_agent_list,memory_agent_register,memory_archive_list,memory_archive_purge,memory_archive_restore,memory_archive_stats,memory_auto_tag,memory_capabilities,memory_consolidate,memory_delete,memory_detect_contradiction,memory_expand_query,memory_forget,memory_gc,memory_get,memory_get_links,memory_inbox,memory_link,memory_list,memory_list_subscriptions,memory_namespace_clear_standard,memory_namespace_get_standard,memory_namespace_set_standard,memory_notify,memory_pending_approve,memory_pending_list,memory_pending_reject,memory_promote,memory_recall,memory_search,memory_session_start,memory_stats,memory_store,memory_subscribe,memory_unsubscribe,memory_update",
				"_f5_note": "F5 spawns the ai-memory stdio MCP subprocess using the framework-configured invocation and verifies initialize + tools/list return memory_store, memory_recall, memory_list. Deterministic (no LLM). Gates baseline_pass.",
				"tls_mode": "tls",
				"tls_handshake_f6": true,
				"tls_handshake_f6_reason": "",
				"mtls_enforcement_f7": true,
				"mtls_enforcement_f7_reason": "",
				"_f6_f7_note": "F6 verifies the TLS 1.3 handshake against the local serve + CA chain. F7 verifies mTLS enforcement — anonymous client rejected, whitelisted client accepted. Both gate baseline_pass when tls_mode != off / mtls respectively.",
				"agent_mcp_ai_memory_canary": true,
				"canary_uuid": "4bc458ee-7b9c-4212-9feb-70ac88989065",
				"canary_namespace": "_baseline_canary_f2a"
			},
			"baseline_pass": true
		}
	]
}
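The _f5_note in the attestation describes a deterministic stdio probe: spawn the ai-memory MCP subprocess, send initialize and tools/list, and require the three gating tools. A minimal sketch of those two checks, assuming MCP's newline-delimited JSON-RPC stdio framing (the helper names are hypothetical, not the harness's):

```python
# Sketch of the F5 gate's request framing and pass condition.
# Helper names are hypothetical; only the three gating tool names and the
# initialize/tools-list sequence come from the attestation above.
import json

REQUIRED_TOOLS = {"memory_store", "memory_recall", "memory_list"}

def rpc_line(method, rid, params=None):
    """One newline-delimited JSON-RPC 2.0 request, as bytes for stdin."""
    msg = {"jsonrpc": "2.0", "id": rid, "method": method, "params": params or {}}
    return (json.dumps(msg) + "\n").encode()

def tools_ok(tools_result):
    """True when tools/list reports every gating tool (F5's pass condition)."""
    names = {t["name"] for t in tools_result.get("tools", [])}
    return REQUIRED_TOOLS.issubset(names)
```

In the real probe these lines would be written to the subprocess launched with the framework-configured ai-memory invocation; F5 passes only when initialize succeeds and tools_ok holds.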


F3 — peer A2A via shared memory F3 OK

A workflow-level probe answering the question "can agents communicate through ai-memory?". Writer ai:alice posted canary UUID 3e0146ef-6537-4d02-9ff4-328725219bac to namespace _baseline_peer_canary via node-1's local ai-memory serve HTTP endpoint. After the W=2 fanout settled, the probe confirmed the canary on each of the 3 peer nodes via their local GET /api/v1/memories.

f3-peer-a2a.json
{
	"probe": "F3",
	"name": "peer-a2a-via-shared-memory",
	"description": "Writer agent posts a canary via local ai-memory HTTP on node-1; verifies the row propagates to the 3 peer nodes (W=2/N=4 quorum) before scenarios run.",
	"canary_uuid": "3e0146ef-6537-4d02-9ff4-328725219bac",
	"canary_namespace": "_baseline_peer_canary",
	"writer_agent": "ai:alice",
	"pass": true
}
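The settle-then-verify loop described above can be sketched as follows. The GET path is the one this report names; the row shape and helper names are assumptions:

```python
# Sketch of the F3 verification: poll each peer's local HTTP API for the
# canary row. The "content" field and helper names are assumptions.
import json
import urllib.request

def canary_visible(base_url, namespace, canary_uuid, timeout=10):
    """GET /api/v1/memories on one peer and look for the canary UUID."""
    url = f"{base_url}/api/v1/memories?namespace={namespace}"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        rows = json.load(resp)
    return any(canary_uuid in str(row.get("content", "")) for row in rows)

def f3_pass(per_peer):
    """F3 passes only when every peer saw the canary after the W=2 settle."""
    return bool(per_peer) and all(per_peer.values())
```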


Run focus

TLS federation exposes replication and API failures in v0.6.2

What this campaign tested: 35 scenarios exercising memory replication, linking, deletion, semantic/keyword search, namespace operations, and recovery over TLS transport in a 4-node mesh, using both CRUD and advanced primitives.

What it demonstrated: core failures in replication, semantic search, and several API endpoints (archive, notify, subscribe), while basic CRUD and linking succeeded.

AI NHI analysis · Claude Opus 4.7

TLS federation exposes replication and API failures in v0.6.2

FAIL — 22/35 scenarios passed, 12 failed, 1 skipped

For three audiences

Non-technical end users

In this test, AI agents often failed to reliably share and recall memories across the network, especially for basic sharing and searches. Some features like linking memories or deleting them worked well. Overall, the system isn't fully dependable for group memory sharing yet.

C-level decision makers

Replication failures and broken API endpoints pose high production risk; the release is not ready to back reliability claims to customers; endpoint availability has regressed versus prior releases, blocking v0.6.2 promotion.

Engineers & architects

Core replication failed in S1 (0/20 recall via MCP; cross-cluster identity check failed); semantic search missed in S18; keyword search absent in S28; archive/restore broken in S29 (405/404); capabilities missing in S30; notify/inbox failed in S32 (404); subscribe/unsubscribe errored in S33 (404); approval workflows broken in S34 (405/403/404); namespace inheritance failed in S35 (405); session start 404 in S36; delta-sync incomplete in S39 (0/6); bulk ingest rejected in S40 (422, 0/500). Note that the S1 MCP path failed at the driver, not the server: every store attempt hit "error: unrecognized subcommand 'chat'" from the ironclaw CLI (matching the non-gating F2b canary), so the 0/20 recalls reflect writes that never happened, while the HTTP path (S1b) replicated cleanly. Likely root causes: an incomplete TLS-mode implementation plus missing route handlers (several endpoints return 404/405, i.e. not exposed).
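One way to track these endpoint regressions release-over-release is a small triage table keyed by scenario. The statuses below are the ones this run observed; the structure and names are illustrative, not harness code:

```python
# Hypothetical triage map: scenario -> (operation, HTTP status this run saw).
# Statuses come from the failure reasons above; the structure is illustrative.
OBSERVED = {
    "S29": ("archive POST", 405),
    "S32": ("notify", 404),
    "S33": ("subscribe", 404),
    "S34": ("pending set-standard", 405),
    "S35": ("namespace set-parent", 405),
    "S36": ("session_start", 404),
    "S40": ("memories bulk POST", 422),
}

def regressions(observed):
    """Scenarios whose operation still returns a non-2xx status."""
    return sorted(s for s, (_op, code) in observed.items()
                  if not 200 <= code < 300)
```

Re-running the same probes against a candidate build and diffing regressions() output gives a quick go/no-go on the missing-handler fixes.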

What changes going into the next campaign

Enable mTLS mode to validate mutual auth and re-run skipped S20.
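Ahead of that re-run, the server posture F7 gates on (anonymous client rejected, whitelisted client accepted) can be sanity-checked with Python's ssl module. The helper name and CA path are assumptions, not the harness's code:

```python
# Minimal sketch of the mTLS posture S20/F7 will exercise: require TLS 1.3
# and a client certificate signed by the campaign CA. Names are assumptions.
import ssl

def enforce_mtls(ctx, ca_file=None):
    """Mutate a server-side SSLContext so anonymous clients fail the handshake."""
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients with no certificate
    if ca_file:
        ctx.load_verify_locations(ca_file)  # accept only certs under this CA
    return ctx
```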

Tests performed in this run

Every scenario that produced a JSON report in this campaign, in testbook order; each scenario's full report follows below. See the Every test performed page for the authoritative catalog.

ID | Title | Result | Reason
S1 | Per-agent write + read (MCP stdio) | FAIL | ai:alice recalled 0 < 20 via MCP; ai:bob recalled 0 < 20 via MCP; ai:charlie recalled 0 < 20 via MCP; cross-cluster identity check failed — see per_ns
S1b | Per-agent write + read (HTTP) | PASS
S2 | Shared-context handoff | PASS
S4 | Federation-aware concurrent writes | PASS
S5 | Consolidation + curation | PASS
S6 | Contradiction detection | PASS
S9 | Mutation round-trip | PASS
S10 | Deletion propagation | PASS
S11 | Link integrity | PASS
S12 | Agent registration | PASS
S13 | Concurrent write contention | PASS
S14 | Partition tolerance | PASS
S15 | Read-your-writes | PASS
S16 | Tier promotion | PASS
S17 | Stats consistency | PASS
S18 | Semantic query expansion | FAIL | semantic query did not surface alice's memory; semantic query did not surface bob's memory
S20 | mTLS happy-path | SKIP | scenario 20 only runs under tls_mode=mtls (actual: tls)
S22 | Identity spoofing resistance | PASS
S23 | Malicious content fuzz | FAIL
S24 | Byzantine peer | PASS
S25 | Clock skew tolerance | PASS
S28 | memory_search keyword | FAIL | node-2 did not find the unique token via /search; node-3 did not find the unique token via /search
S29 | memory_archive lifecycle | FAIL | archive POST returned HTTP 405; bob did not see M1 in /api/v1/archive; restore returned HTTP 404
S30 | memory_capabilities handshake | FAIL | no peer returned a capabilities response — endpoint may not be exposed
S31 | memory_gc quiescence | PASS
S32 | memory_inbox + notify | FAIL | notify returned HTTP 404; bob's inbox did not deliver alice's notify
S33 | memory_subscribe pub/sub | FAIL | subscribe returned HTTP 404; bob's subscription list did not include the subscribed namespace; unsubscribe returned HTTP 404
S34 | memory_pending governance | FAIL | set-standard returned HTTP 405; approve returned HTTP 403; reject returned HTTP 404; charlie saw rejected row — reject didn't prevent publication
S35 | memory_namespace standards | FAIL | set-parent returned HTTP 405; set-child returned HTTP 405; clear-standard returned HTTP 405; parent rule not layered into child's standard view; child rule missing from standard view
S36 | memory_session_start | FAIL | session_start returned HTTP 404
S37 | memory_get_links bidirectional | PASS
S38 | /export + /import | PASS
S39 | /sync/since delta | FAIL | delta returned 0/6 expected markers — delta-sync incomplete
S40 | /memories/bulk | FAIL | bulk returned HTTP 422; node-2 saw 0/500 bulk rows after fanout; node-3 saw 0/500 bulk rows after fanout; node-4 saw 0/500 bulk rows after fanout
S41 | /metrics Prometheus | PASS
S42 | /namespaces enumeration | PASS

Scenario 1 — Per-agent write + read (MCP stdio) FAIL

Reasons: ai:alice recalled 0 < 20 via MCP | ai:bob recalled 0 < 20 via MCP | ai:charlie recalled 0 < 20 via MCP | cross-cluster identity check failed — see per_ns

scenario-1.json (report)
{
	"agent_group": "ironclaw",
	"expected_per_reader": 20,
	"pass": false,
	"per_agent": {
		"ai:alice": {
			"recall": 0
		},
		"ai:bob": {
			"recall": 0
		},
		"ai:charlie": {
			"recall": 0
		}
	},
	"per_namespace_node4": {
		"scenario1-ai:alice": {
			"count": 0,
			"wrong_agent_id": 0
		},
		"scenario1-ai:bob": {
			"count": 0,
			"wrong_agent_id": 0
		},
		"scenario1-ai:charlie": {
			"count": 0,
			"wrong_agent_id": 0
		}
	},
	"reason": "ai:alice recalled 0 < 20 via MCP; ai:bob recalled 0 < 20 via MCP; ai:charlie recalled 0 < 20 via MCP; cross-cluster identity check failed — see per_ns",
	"reasons": [
		"ai:alice recalled 0 < 20 via MCP",
		"ai:bob recalled 0 < 20 via MCP",
		"ai:charlie recalled 0 < 20 via MCP",
		"cross-cluster identity check failed — see per_ns"
	],
	"scenario": "1",
	"skipped": false,
	"tls_mode": "tls"
}


scenario-1.log (console trace)
phase A: each agent writes 10 memories via MCP
  ai:alice on 157.245.209.140
  !! drive_agent store failed for ai:alice i=1: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=2: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=3: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=4: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=5: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=6: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=7: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=8: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=9: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:alice i=10: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  ai:bob on 159.203.92.15
  !! drive_agent store failed for ai:bob i=1: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=2: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=3: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=4: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=5: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=6: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=7: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=8: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=9: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:bob i=10: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  ai:charlie on 167.71.89.28
  !! drive_agent store failed for ai:charlie i=1: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=2: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=3: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=4: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=5: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=6: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=7: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=8: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=9: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

  !! drive_agent store failed for ai:charlie i=10: error: unrecognized subcommand 'chat'

  tip: a similar subcommand exists: 'channels'

Usage: ironclaw [OPTIONS] [COMMAND]

For more information, try '--help'.

settle 15s for W=2/N=4 convergence
phase B: each agent counts rows in the OTHER two namespaces
  ai:alice recalled 0 rows from the other two namespaces
  ai:bob recalled 0 rows from the other two namespaces
  ai:charlie recalled 0 rows from the other two namespaces
phase C: cross-cluster identity check on node-4
  ns=scenario1-ai:alice count=0 wrong_agent_id=0
  !! expected 10 rows, got 0
  ns=scenario1-ai:bob count=0 wrong_agent_id=0
  !! expected 10 rows, got 0
  ns=scenario1-ai:charlie count=0 wrong_agent_id=0
  !! expected 10 rows, got 0


Scenario 1b — Per-agent write + read (HTTP) PASS

scenario-1b.json (report)
{
	"agent_group": "ironclaw",
	"expected_per_reader": 20,
	"pass": true,
	"path": "serve-http",
	"per_agent": {
		"ai:alice": {
			"recall": 20
		},
		"ai:bob": {
			"recall": 20
		},
		"ai:charlie": {
			"recall": 20
		}
	},
	"per_namespace_node4": {
		"scenario1b-ai:alice": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1b-ai:bob": {
			"count": 10,
			"wrong_agent_id": 0
		},
		"scenario1b-ai:charlie": {
			"count": 10,
			"wrong_agent_id": 0
		}
	},
	"reasons": [],
	"scenario": "1b",
	"skipped": false,
	"tls_mode": "tls"
}


scenario-1b.log (console trace)
phase A: each agent POSTs 10 memories to local serve
  ai:alice on 157.245.209.140
  ai:bob on 159.203.92.15
  ai:charlie on 167.71.89.28
settle 15s for W=2/N=4 convergence
phase B: count rows in other two namespaces via local serve HTTP
  ai:alice sees 20 rows from the other two namespaces
  ai:bob sees 20 rows from the other two namespaces
  ai:charlie sees 20 rows from the other two namespaces
phase C: cross-cluster identity check on node-4
  ns=scenario1b-ai:alice count=10 wrong_agent_id=0
  ns=scenario1b-ai:bob count=10 wrong_agent_id=0
  ns=scenario1b-ai:charlie count=10 wrong_agent_id=0


Scenario 2 — Shared-context handoff PASS

scenario-2.json (report)
{
	"ack_uuid": "a-1c49b324feb3482188237fa481bc2318",
	"agent_group": "ironclaw",
	"handoff_uuid": "h-9e83fcf06d834bf6a66b488558aac451",
	"pass": true,
	"path": "serve-http",
	"per_agent": {
		"ai:alice": {
			"sees_ack": 1
		},
		"ai:bob": {
			"sees_handoff": 1
		}
	},
	"reasons": [],
	"scenario": "2",
	"skipped": false,
	"tls_mode": "tls"
}


scenario-2.log (console trace)
phase A: ai:alice writes handoff to ai:bob (uuid=h-9e83fcf06d834bf6a66b488558aac451)
settle 8s for quorum fanout
phase B: ai:bob reads handoff on node-2
  ai:bob sees 1 handoff memories from ai:alice
phase C: ai:bob writes acknowledgement (uuid=a-1c49b324feb3482188237fa481bc2318)
settle 8s for reverse-direction fanout
phase D: ai:alice reads ack on node-1
  ai:alice sees 1 ack memories from ai:bob


Scenario 4 — Federation-aware concurrent writes PASS

scenario-4.json (report)
{
	"agent_group": "ironclaw",
	"expected_per_agent": 30,
	"pass": true,
	"per_agent": {
		"ai:alice": {
			"count": 30,
			"wrong_agent_id": 0
		},
		"ai:bob": {
			"count": 30,
			"wrong_agent_id": 0
		},
		"ai:charlie": {
			"count": 30,
			"wrong_agent_id": 0
		}
	},
	"reasons": [],
	"scenario": "4",
	"skipped": false,
	"tls_mode": "tls"
}


scenario-4.log (console trace)
phase A: launching concurrent 30-row bursts from 3 agents
  ai:alice burst ok=30/30
  ai:bob burst ok=30/30
  ai:charlie burst ok=30/30
settle 20s for W=2 fanout convergence
phase B: querying node-4 aggregator for per-agent counts
  ai:alice: count=30 (expected 30) wrong_agent_id=0
  ai:bob: count=30 (expected 30) wrong_agent_id=0
  ai:charlie: count=30 (expected 30) wrong_agent_id=0


Scenario 5 — Consolidation + curation PASS

scenario-5.json (report)
{
	"agent_group": "ironclaw",
	"consolidate_http_code": 201,
	"consolidated_from_agents": [
		"ai:charlie",
		"ai:bob",
		"ai:alice"
	],
	"consolidated_id": "e6dda5cd-cb0f-49b4-8ec4-63312d15d686",
	"pass": true,
	"reasons": [],
	"scenario": "5",
	"skipped": false,
	"tls_mode": "tls"
}


scenario-5.log (console trace)
phase A: each agent writes 3 related memories
  ai:alice on 157.245.209.140
  ai:bob on 159.203.92.15
  ai:charlie on 167.71.89.28
settle 8s for quorum fanout
phase B: collect source ids on node-1, then trigger consolidate
  source ids (count=9): ['e59221fa-8fa1-46aa-bce0-7f1611b7dbb4', '3e00a4ca-ed58-4e01-a5c0-f6c5f7b5a3e0', 'acce3370-4de2-4ee3-8ab2-b90387288ebc', 'e60c602f-ca46-4e8d-a2a8-3ddd3a79f736', '3a144634-a280-4af8-a699-0517e2ed16fc']...
  consolidate HTTP 201, consolidated_id=e6dda5cd-cb0f-49b4-8ec4-63312d15d686
settle 10s for consolidation fanout
phase C: verifying consolidated_from_agents on node-4
  consolidated_from_agents=['ai:charlie', 'ai:bob', 'ai:alice']


Scenario 6 — Contradiction detection PASS

scenario-6.json (report)
{
	"agent_group": "ironclaw",
	"alice_id": "b636c98f-0aa8-4210-8a6d-d1e9f31ba1c3",
	"bob_id": "c299e604-55a9-4e9a-a8dd-a14186a46d50",
	"charlie_sees_both_memories": true,
	"charlie_sees_contradicts_link": true,
	"detect_http_code": 200,
	"pass": true,
	"reasons": [],
	"scenario": "6",
	"skipped": false,
	"tls_mode": "tls",
	"topic": "sky-color-a083d6b8"
}


scenario-6.log (console trace)
alice writes claim: "sky-color-a083d6b8 is blue" on node-1
bob writes contradicting claim: "sky-color-a083d6b8 is red" on node-2
  alice.id=b636c98f-0aa8-4210-8a6d-d1e9f31ba1c3 bob.id=c299e604-55a9-4e9a-a8dd-a14186a46d50
settle 10s for quorum fanout + contradiction indexing
charlie queries /api/v1/contradictions on node-3
  HTTP 200
  sees both memories: True; sees contradicts link: True


Scenario 9 — Mutation round-trip PASS

scenario-9.json (report)
{
	"agent_group": "ironclaw",
	"charlie_view": {
		"agent_id": "ai:alice",
		"content": "v2-18b733f5aef3443482d82a6ce4c2e07c"
	},
	"m1_id": "ceb2fca5-21f8-4125-a87a-feef4482a5cc",
	"pass": true,
	"put_http_code": 200,
	"reasons": [],
	"scenario": "9",
	"skipped": false,
	"tls_mode": "tls",
	"v1_uuid": "v1-188090c618c34a84853e2d5bff8ebd96",
	"v2_uuid": "v2-18b733f5aef3443482d82a6ce4c2e07c"
}


scenario-9.log (console trace)
alice writes M1 content=v1-188090c618c34a84853e2d5bff8ebd96 on node-1
  M1 id=ceb2fca5-21f8-4125-a87a-feef4482a5cc
settle 5s for initial replication
bob updates M1 content=v2-18b733f5aef3443482d82a6ce4c2e07c on node-2 via PUT
  PUT returned HTTP 200
settle 8s for update fanout
charlie reads M1 on node-3 and checks content + provenance
  charlie sees content="v2-18b733f5aef3443482d82a6ce4c2e07c" agent_id="ai:alice"

raw file
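The pass condition the trace exercises can be condensed into one predicate: after bob's PUT, peers must see the new content while `metadata.agent_id` still names the original writer. A minimal sketch, using the document shape visible in the reports; the preservation rule itself is inferred from this trace, not quoted from a spec:

```python
# Sketch of the scenario-9 pass condition: content reflects the PUT while
# provenance still points at the original writer. The document shape mirrors
# the JSON reports in this run; the rule is inferred, not a published spec.
def check_round_trip(doc, expected_content, original_writer):
    """Return failure reasons; an empty list means the check passes."""
    reasons = []
    if doc.get("content") != expected_content:
        reasons.append("content was not updated to the PUT value")
    if doc.get("metadata", {}).get("agent_id") != original_writer:
        reasons.append("provenance lost: agent_id changed")
    return reasons

# charlie's observed view on node-3:
view = {"content": "v2-18b733f5aef3443482d82a6ce4c2e07c",
        "metadata": {"agent_id": "ai:alice"}}
print(check_round_trip(view, "v2-18b733f5aef3443482d82a6ce4c2e07c", "ai:alice"))  # []
```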

Scenario 10 — Deletion propagation PASS

scenario-10.json (report)
{
	"agent_group": "ironclaw",
	"delete_http_code": 200,
	"m1_id": "d7383c07-e266-44bf-85c4-7d1f68eb1738",
	"pass": true,
	"post_delete_hits": {
		"node-2": 0,
		"node-3": 0,
		"node-4": 0
	},
	"post_delete_still_visible_peers": 0,
	"pre_delete_visible_peers": 3,
	"reasons": [],
	"scenario": "10",
	"skipped": false,
	"tls_mode": "tls",
	"uuid": "d-6a87711271d549c4a9143dbf9e8aba85"
}

raw file

scenario-10.log (console trace)
alice writes M1 content=d-6a87711271d549c4a9143dbf9e8aba85 on node-1
  created memory id=d7383c07-e266-44bf-85c4-7d1f68eb1738
settle 8s for pre-delete fanout
pre-delete: verifying M1 is visible on all peers
  pre-delete node-2 sees 1
  pre-delete node-3 sees 1
  pre-delete node-4 sees 1
alice deletes M1 on node-1
  DELETE returned HTTP 200
settle 15s for tombstone propagation
post-delete: verifying M1 is GONE from all peers
  post-delete node-2 sees 0 (expected 0)
  post-delete node-3 sees 0 (expected 0)
  post-delete node-4 sees 0 (expected 0)

raw file
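The trace uses a fixed 15 s settle followed by a single probe per peer. A polling variant is more robust against slow tombstone propagation and also reports which peer lagged; a sketch, where `count_hits(peer) -> int` is an injected stand-in rather than a harness API:

```python
import time

# Polling variant of the post-delete check: retry until every peer reports
# zero hits for the deleted id, or a deadline passes. `count_hits(peer)` is
# an injected callable, not part of the actual harness.
def wait_for_tombstones(peers, count_hits, timeout_s=15.0, poll_s=1.0):
    deadline = time.monotonic() + timeout_s
    while True:
        remaining = {p: count_hits(p) for p in peers}
        if all(n == 0 for n in remaining.values()):
            return remaining  # converged: every peer sees 0
        if time.monotonic() >= deadline:
            return remaining  # timed out: non-zero entries name the laggards
        time.sleep(poll_s)

# Stubbed with this run's observed post-delete counts:
print(wait_for_tombstones(["node-2", "node-3", "node-4"], lambda p: 0, timeout_s=0.1))
```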

Scenario 11 — Link integrity PASS

scenario-11.json (report)
{
	"agent_group": "ironclaw",
	"charlie_sees_link": 1,
	"link_http_code": 201,
	"m1_id": "0b986f63-1329-43ef-bf98-f120841fbfe1",
	"m2_id": "8c1735cd-0da9-40a9-803e-f3d4fec8782e",
	"pass": true,
	"reasons": [],
	"relation": "related_to",
	"scenario": "11",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-11.log (console trace)
alice writes M1 on node-1
bob writes M2 on node-2
  M1=0b986f63-1329-43ef-bf98-f120841fbfe1 M2=8c1735cd-0da9-40a9-803e-f3d4fec8782e
settle 5s for pre-link replication
alice links M1 -> M2 with relation=related_to
  link POST returned HTTP 201
settle 8s for link fanout
charlie queries links of M1 on node-3
  charlie sees M1->M2 link: 1 (expected >=1)

raw file

Scenario 12 — Agent registration PASS

scenario-12.json (report)
{
	"agent_group": "ironclaw",
	"pass": true,
	"peers_see": {
		"node_2": 1,
		"node_3": 1,
		"node_4": 1
	},
	"reasons": [],
	"register_http_code": 201,
	"registered_agent": "ai:dave-probe-3d39e349",
	"scenario": "12",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-12.log (console trace)
alice registers new agent ai:dave-probe-3d39e349 on node-1
  POST /api/v1/agents returned HTTP 201
settle 10s for agent-list fanout
  node-2 sees ai:dave-probe-3d39e349: 1 (expected >=1)
  node-3 sees ai:dave-probe-3d39e349: 1 (expected >=1)
  node-4 sees ai:dave-probe-3d39e349: 1 (expected >=1)

raw file

Scenario 13 — Concurrent write contention PASS

scenario-13.json (report)
{
	"agent_group": "ironclaw",
	"m1_id": "0978bdd9-b021-4aff-9409-a1444846fdb9",
	"pass": true,
	"peer_view": {
		"node_1": "va-10ae32f0d2234eddbed2eccbfc98d550",
		"node_2": "va-10ae32f0d2234eddbed2eccbfc98d550",
		"node_3": "va-10ae32f0d2234eddbed2eccbfc98d550",
		"node_4": "va-10ae32f0d2234eddbed2eccbfc98d550"
	},
	"reasons": [],
	"scenario": "13",
	"skipped": false,
	"submitted": {
		"v0": "v0-66c28c00272a4735903124988f8eeb8a",
		"vA_alice": "va-10ae32f0d2234eddbed2eccbfc98d550",
		"vB_bob": "vb-da9a05d3d0bf4c698e18dcc0d41de324"
	},
	"tls_mode": "tls"
}

raw file

scenario-13.log (console trace)
alice writes M1 content=v0-66c28c00272a4735903124988f8eeb8a on node-1
  M1 id=0978bdd9-b021-4aff-9409-a1444846fdb9
settle 5s for initial replication
alice + bob issue concurrent PUTs (vA=va-10ae32f0d2234eddbed2eccbfc98d550 from alice, vB=vb-da9a05d3d0bf4c698e18dcc0d41de324 from bob)
  concurrent PUT results:
    (0, {'body': {'access_count': 0, 'confidence': 1.0, 'content': 'va-10ae32f0d2234eddbed2eccbfc98d550',
                  'created_at': '2026-04-22T22:37:05.168979300+00:00', 'expires_at': '2026-04-29T22:37:05.168979300+00:00',
                  'id': '0978bdd9-b021-4aff-9409-a1444846fdb9', 'metadata': {'agent_id': 'ai:alice', 'scenario': '13'},
                  'namespace': 'scenario13-contention', 'priority': 5, 'source': 'api', 'tags': [], 'tier': 'mid',
                  'title': 'm1', 'updated_at': '2026-04-22T22:37:10.853803047+00:00'}, 'http_code': 200}),
    (0, {'body': {'access_count': 0, 'confidence': 1.0, 'content': 'vb-da9a05d3d0bf4c698e18dcc0d41de324',
                  'created_at': '2026-04-22T22:37:05.168979300+00:00', 'expires_at': '2026-04-29T22:37:05.168979300+00:00',
                  'id': '0978bdd9-b021-4aff-9409-a1444846fdb9', 'metadata': {'agent_id': 'ai:alice', 'scenario': '13'},
                  'namespace': 'scenario13-contention', 'priority': 5, 'source': 'api', 'tags': [], 'tier': 'mid',
                  'title': 'm1', 'updated_at': '2026-04-22T22:37:10.840725791+00:00'}, 'http_code': 200})
settle 10s for quorum convergence
  node-1 sees content=va-10ae32f0d2234eddbed2eccbfc98d550
  node-2 sees content=va-10ae32f0d2234eddbed2eccbfc98d550
  node-3 sees content=va-10ae32f0d2234eddbed2eccbfc98d550
  node-4 sees content=va-10ae32f0d2234eddbed2eccbfc98d550

raw file
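All four peers converged on vA, whose `updated_at` (...:10.853...) is later than vB's (...:10.840...). That is consistent with last-writer-wins resolution keyed on the update timestamp; whether ai-memory actually resolves contention this way is an assumption, not something the report states. A sketch of that rule over the two PUT responses:

```python
# Last-writer-wins sketch consistent with the converged result above. Whether
# ai-memory actually uses updated_at-based LWW is an assumption.
def lww_winner(candidates):
    # Uniform RFC 3339 strings (same offset, same fractional width) order
    # correctly under plain string comparison, so no datetime parsing needed.
    return max(candidates, key=lambda d: d["updated_at"])

put_a = {"content": "va-10ae32f0d2234eddbed2eccbfc98d550",
         "updated_at": "2026-04-22T22:37:10.853803047+00:00"}
put_b = {"content": "vb-da9a05d3d0bf4c698e18dcc0d41de324",
         "updated_at": "2026-04-22T22:37:10.840725791+00:00"}
print(lww_winner([put_a, put_b])["content"])  # va-10ae32f0d2234eddbed2eccbfc98d550
```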

Scenario 14 — Partition tolerance PASS

scenario-14.json (report)
{
	"agent_group": "ironclaw",
	"expected_post_recovery": 20,
	"node3_saw": 20,
	"partition_target": "node-3",
	"pass": true,
	"reasons": [],
	"scenario": "14",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-14.log (console trace)
suspending ai-memory on node-3 (SIGSTOP)
  !! ssh timeout (15s): root@167.71.89.28 pgrep -f 'ai-memory serve' | xargs -r kill -STOP
settle 2s for process-suspend observe
writing 10 memories each from alice + bob during node-3 outage
resuming ai-memory on node-3 (SIGCONT)
settle 20s for post-partition catchup
checking node-3 caught up
  node-3 sees 20 memories in scenario14-partition (expected 20)

raw file
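The SIGSTOP leg logged an ssh timeout even though the scenario ultimately passed. One plausible but unconfirmed explanation fits the log: `pgrep -f 'ai-memory serve'` matches any process whose full command line contains that string, including the remote shell running the pipeline itself, so `kill -STOP` freezes that shell and the ssh session never returns. A common defensive tweak is the bracket trick, which keeps the pattern from matching its own command line:

```python
import shlex

# Hypothesis for the ssh timeout on the SIGSTOP leg (not a confirmed
# diagnosis): pgrep -f matched the remote shell's own argv, so the shell
# was stopped along with the daemon. '[a]i-memory serve' still matches the
# daemon's cmdline but not an argv containing the literal bracketed string.
def self_safe_pattern(pattern):
    return "[" + pattern[0] + "]" + pattern[1:]

cmd = "pgrep -f {} | xargs -r kill -STOP".format(
    shlex.quote(self_safe_pattern("ai-memory serve")))
print(cmd)  # pgrep -f '[a]i-memory serve' | xargs -r kill -STOP
```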

Scenario 15 — Read-your-writes PASS

scenario-15.json (report)
{
	"agent_group": "ironclaw",
	"pass": true,
	"reasons": [],
	"scenario": "15",
	"skipped": false,
	"tls_mode": "tls",
	"uuid": "ryw-181c9803c51c4ddd9312682949cd00ea",
	"writer_sees_own_write": 1
}

raw file

scenario-15.log (console trace)
alice writes + immediately reads M1 on node-1 (uuid=ryw-181c9803c51c4ddd9312682949cd00ea)
  alice sees 1 (expected 1) immediately after write

raw file

Scenario 16 — Tier promotion PASS

scenario-16.json (report)
{
	"agent_group": "ironclaw",
	"bob_sees_tier": "long",
	"m1_id": "8225b75a-c062-4228-a219-65ee4b80ced0",
	"pass": true,
	"promote_http_code": 200,
	"reasons": [],
	"scenario": "16",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-16.log (console trace)
alice writes M1 tier=short on node-1
  M1 id=8225b75a-c062-4228-a219-65ee4b80ced0
settle 5s for pre-promote replication
alice promotes M1 to tier=long
  promote returned HTTP 200
settle 8s for promotion fanout
  bob sees tier=long (expected long)

raw file

Scenario 17 — Stats consistency PASS

scenario-17.json (report)
{
	"agent_group": "ironclaw",
	"expected_count": 15,
	"pass": true,
	"per_peer": {
		"node_1": 15,
		"node_2": 15,
		"node_3": 15,
		"node_4": 15
	},
	"reasons": [],
	"scenario": "17",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-17.log (console trace)
phase A: each of 3 agents writes 5 memories to scenario17-stats
  ai:alice on 157.245.209.140
  ai:bob on 159.203.92.15
  ai:charlie on 167.71.89.28
settle 15s for W=2 fanout
phase B: querying count on every peer
  node-1 count=15 (expected 15)
  node-2 count=15 (expected 15)
  node-3 count=15 (expected 15)
  node-4 count=15 (expected 15)

raw file

Scenario 18 — Semantic query expansion FAIL

Reasons: semantic query did not surface alice's memory | semantic query did not surface bob's memory

scenario-18.json (report)
{
	"agent_group": "ironclaw",
	"pass": false,
	"query": "morning outdoor exercise routine",
	"reason": "semantic query did not surface alice's memory; semantic query did not surface bob's memory",
	"reasons": [
		"semantic query did not surface alice's memory",
		"semantic query did not surface bob's memory"
	],
	"scenario": "18",
	"skipped": false,
	"tls_mode": "tls",
	"writers": [
		{
			"agent": "ai:alice",
			"marker": "alice-sunrise-f68bf8fd",
			"seen_by_charlie": 0
		},
		{
			"agent": "ai:bob",
			"marker": "bob-daybreak-4d60b9ee",
			"seen_by_charlie": 0
		}
	]
}

raw file

scenario-18.log (console trace)
alice writes A on node-1
bob writes B on node-2
settle 15s for fanout + index rebuild
charlie queries on node-3 with semantically-related prompt
  charlie sees alice's memory: 0 (expected >=1)
  charlie sees bob's memory: 0 (expected >=1)

raw file

Scenario 20 — mTLS happy-path UNKNOWN

scenario-20.json (report)
{
	"agent_group": "ironclaw",
	"pass": null,
	"reason": "scenario 20 only runs under tls_mode=mtls (actual: tls)",
	"scenario": "20",
	"skipped": true,
	"tls_mode": "tls"
}

raw file

scenario-20.log (console trace)
skipped — scenario 20 only runs under tls_mode=mtls (actual: tls)

raw file

Scenario 22 — Identity spoofing resistance PASS

scenario-22.json (report)
{
	"agent_group": "ironclaw",
	"pass": true,
	"reasons": [],
	"scenario": "22",
	"skipped": false,
	"tests": {
		"body_vs_header_conflict": {
			"acceptable": [
				"ai:body-wins",
				"ai:attacker"
			],
			"stored_agent_id": "ai:attacker"
		},
		"header_only": {
			"expected": "ai:alice",
			"stored_agent_id": "ai:alice"
		}
	},
	"tls_mode": "tls"
}

raw file

scenario-22.log (console trace)
test 1: header-only X-Agent-Id=ai:alice
settle 2s for read-settle
  stored metadata.agent_id for header-only write: ai:alice (expected ai:alice)
test 2: body.metadata.agent_id=ai:body-wins vs X-Agent-Id=ai:attacker
settle 2s for read-settle
  stored metadata.agent_id for body+header conflict: ai:attacker

raw file

Scenario 23 — Malicious content fuzz UNKNOWN

scenario-23.json (report)

(empty: the scenario crashed before writing a report; see the console trace below)
raw file

scenario-23.log (console trace)
payload sql: 61 bytes
payload html: 66 bytes
payload oversize: 1048576 bytes
Traceback (most recent call last):
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/scenarios/23_malicious_content_fuzz.py", line 106, in <module>
    main()
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/scenarios/23_malicious_content_fuzz.py", line 49, in main
    rc, write_doc = h.write_memory(
                    ^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 202, in write_memory
    return self.http_on(node_ip, "POST", "/api/v1/memories",
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 157, in http_on
    result = self.ssh_exec(node_ip, remote_cmd, timeout=timeout)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 103, in ssh_exec
    return self._run(cmd, timeout=timeout, stdin=stdin)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ai-memory-ai2ai-gate/ai-memory-ai2ai-gate/scripts/a2a_harness.py", line 87, in _run
    return subprocess.run(
           ^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 548, in run
    with Popen(*popenargs, **kwargs) as process:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/lib/python3.12/subprocess.py", line 1955, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
OSError: [Errno 7] Argument list too long: 'ssh'

raw file
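The `OSError: [Errno 7] Argument list too long` is the kernel's ARG_MAX limit: the 1 MiB "oversize" payload ended up embedded in the ssh argument list. Streaming the body over ssh's stdin keeps argv tiny regardless of payload size. The harness internals are not visible here, so this is a standalone sketch using stock curl flags (`--data-binary @-` reads the body from stdin):

```python
import shlex
import subprocess  # noqa: F401  (used when actually executing the command)

# Sketch of an E2BIG-safe remote POST: the payload travels over stdin, never
# through argv. This is not the harness's actual write_memory API.
def ssh_post_stdin(host, url, payload: bytes):
    remote = ("curl -sS -X POST -H 'Content-Type: application/json' "
              "--data-binary @- " + shlex.quote(url))
    args = ["ssh", host, remote]
    # To execute: subprocess.run(args, input=payload, capture_output=True)
    return args, payload

args, body = ssh_post_stdin("root@157.245.209.140",
                            "https://10.10.1.3:9077/api/v1/memories",
                            b"A" * 1048576)
print(sum(map(len, args)), len(body))  # argv stays small no matter the body size
```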

Scenario 24 — Byzantine peer PASS

scenario-24.json (report)
{
	"agent_group": "ironclaw",
	"byzantine_marker": "bz-825d5e49e6554d8abdf0c0c3f2b8cfb9",
	"pass": true,
	"reasons": [],
	"scenario": "24",
	"skipped": false,
	"stored_metadata_agent_id": "REJECTED_BY_SERVER",
	"sync_push_http_code": "422",
	"tls_mode": "tls"
}

raw file

scenario-24.log (console trace)
node-2 sends sync_push to node-3 claiming sender_agent_id=ai:alice
  sync_push returned HTTP 422
settle 5s for server-side sync apply
  node-3 stored metadata.agent_id=ABSENT (declared: ai:alice)
  sync_push rejected HTTP 422 — stricter-than-spec, acceptable

raw file

Scenario 25 — Clock skew tolerance PASS

scenario-25.json (report)
{
	"agent_group": "ironclaw",
	"clock_offset_seconds": 300,
	"marker": "ck-25fcd055111445e98412809db8321a6a",
	"pass": true,
	"reasons": [],
	"scenario": "25",
	"seen_on": {
		"node_1": 1,
		"node_3": 1
	},
	"skipped": false,
	"target_node": "node-3",
	"tls_mode": "tls"
}

raw file

scenario-25.log (console trace)
shifting node-3 clock +300s (NTP disabled for the duration)
  node-3 now reports: Wed Apr 22 22:44:33 UTC 2026
alice writes on node-1 (normal clock); waiting for quorum fanout to skewed node-3
settle 15s for skewed-peer convergence
  node-3 (+300s clock) sees marker: 1 (expected >=1)
  node-1 sees marker: 1 (expected >=1)
reverting node-3 clock

raw file

Scenario 28 — memory_search keyword FAIL

Reasons: node-2 did not find the unique token via /search | node-3 did not find the unique token via /search

scenario-28.json (report)
{
	"agent_group": "ironclaw",
	"pass": false,
	"peer_hits": {
		"node_2": 0,
		"node_3": 0
	},
	"reason": "node-2 did not find the unique token via /search; node-3 did not find the unique token via /search",
	"reasons": [
		"node-2 did not find the unique token via /search",
		"node-3 did not find the unique token via /search"
	],
	"scenario": "28",
	"skipped": false,
	"tls_mode": "tls",
	"token": "kwsearch-b392dd73a3"
}

raw file

scenario-28.log (console trace)
alice writes a row containing unique token=kwsearch-b392dd73a3
settle 8s for search index populate + fanout
bob + charlie call /api/v1/search with the exact token
  node-2 keyword search returned 0 hits
  node-3 keyword search returned 0 hits

raw file
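An 8 s settle followed by a single probe cannot distinguish a slow search index from a broken one. A bounded poll separates the two cases: a late first hit means the settle window is merely too short, while no hit by the deadline points at indexing or fanout. A sketch, where `search(node, token) -> int` is an injected stand-in rather than a harness API:

```python
import time

# Diagnosis helper for the /search miss: poll past the settle budget and
# report the first-hit latency, or None if the token never surfaces.
# `search(node, token) -> int` is an injected callable, not a harness API.
def first_hit_latency(search, node, token, deadline_s=60.0, poll_s=2.0):
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        if search(node, token) > 0:
            return time.monotonic() - start  # seconds until the first hit
        time.sleep(poll_s)
    return None  # never surfaced: likely an indexing/fanout bug, not timing

# Stub reproducing this run (zero hits throughout):
print(first_hit_latency(lambda n, t: 0, "node-2", "kwsearch-b392dd73a3",
                        deadline_s=0.05, poll_s=0.01))  # None
```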

Scenario 29 — memory_archive lifecycle FAIL

Reasons: archive POST returned HTTP 405 | bob did not see M1 in /api/v1/archive | restore returned HTTP 404

scenario-29.json (report)
{
	"agent_group": "ironclaw",
	"archive_http_code": 405,
	"bob_sees_archived": false,
	"m1_id": "7936ae7b-7458-4181-8e75-819b84df80bc",
	"node4_active_rows": 1,
	"pass": false,
	"reason": "archive POST returned HTTP 405; bob did not see M1 in /api/v1/archive; restore returned HTTP 404",
	"reasons": [
		"archive POST returned HTTP 405",
		"bob did not see M1 in /api/v1/archive",
		"restore returned HTTP 404"
	],
	"restore_http_code": 404,
	"scenario": "29",
	"skipped": false,
	"stats_shape_ok": true,
	"tls_mode": "tls"
}

raw file

scenario-29.log (console trace)
alice writes M1 on node-1
  M1 id=7936ae7b-7458-4181-8e75-819b84df80bc
settle 5s for pre-archive replication
alice archives M1 via DELETE /api/v1/memories/{id} (soft-delete → archive)
  archive returned HTTP 405
settle 5s for archive propagation
bob queries /api/v1/archive on node-2
  bob sees M1 in archive: False
charlie restores M1 via /api/v1/archive/{id}/restore on node-3
  restore returned HTTP 404
settle 5s for restore propagation
node-4 aggregator: M1 must be active again
  node-4 active rows matching marker: 1
fetch /api/v1/archive/stats on node-4

raw file

Scenario 30 — memory_capabilities handshake FAIL

Reasons: no peer returned a capabilities response — endpoint may not be exposed

scenario-30.json (report)
{
	"agent_group": "ironclaw",
	"pass": false,
	"peer_views": {
		"node_1": null,
		"node_2": null,
		"node_3": null,
		"node_4": null
	},
	"reason": "no peer returned a capabilities response — endpoint may not be exposed",
	"reasons": [
		"no peer returned a capabilities response — endpoint may not be exposed"
	],
	"scenario": "30",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-30.log (console trace)
  node-1 capabilities: []
  node-2 capabilities: []
  node-3 capabilities: []
  node-4 capabilities: []

raw file

Scenario 31 — memory_gc quiescence PASS

scenario-31.json (report)
{
	"agent_group": "ironclaw",
	"expected_live": 2,
	"forget_http_code": 400,
	"gc_http_code": 200,
	"live_markers_per_peer": {
		"node_1": 2,
		"node_2": 2,
		"node_3": 2,
		"node_4": 2
	},
	"pass": true,
	"reasons": [],
	"scenario": "31",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-31.log (console trace)
alice writes 4 memories
settle 6s for pre-gc replication
alice forgets 2 via /api/v1/forget
  forget returned HTTP 400
settle 5s for forget propagation
bob triggers /api/v1/gc on node-2
  gc returned HTTP 200
settle 8s for post-gc settle
verify remaining 2 markers are still readable on every peer
  node-1 sees 2/2 live markers
  node-2 sees 2/2 live markers
  node-3 sees 2/2 live markers
  node-4 sees 2/2 live markers

raw file

Scenario 32 — memory_inbox + notify FAIL

Reasons: notify returned HTTP 404 | bob's inbox did not deliver alice's notify

scenario-32.json (report)
{
	"agent_group": "ironclaw",
	"bob_inbox_count": 0,
	"bob_sees_marker": false,
	"charlie_inbox_count": 0,
	"charlie_sees_marker": false,
	"marker": "inb-45a8cddc42f64c4f9a3df1f0a110cf7d",
	"notify_http_code": 404,
	"pass": false,
	"reason": "notify returned HTTP 404; bob's inbox did not deliver alice's notify",
	"reasons": [
		"notify returned HTTP 404",
		"bob's inbox did not deliver alice's notify"
	],
	"scenario": "32",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-32.log (console trace)
alice calls /api/v1/notify → target=ai:bob
  notify returned HTTP 404
settle 6s for notification fanout
bob queries his inbox on node-2
  bob inbox has 0 messages; sees marker: False
charlie queries his inbox on node-3 (must NOT see it)
  charlie inbox has 0 messages; sees marker: False

raw file

Scenario 33 — memory_subscribe pub/sub FAIL

Reasons: subscribe returned HTTP 404 | bob's subscription list did not include the subscribed namespace | unsubscribe returned HTTP 404

scenario-33.json (report)
{
	"agent_group": "ironclaw",
	"m1_delivered": 1,
	"namespace": "scenario33-pubsub-3dcf0b",
	"ns_in_subs_after": false,
	"ns_in_subs_before": false,
	"pass": false,
	"reason": "subscribe returned HTTP 404; bob's subscription list did not include the subscribed namespace; unsubscribe returned HTTP 404",
	"reasons": [
		"subscribe returned HTTP 404",
		"bob's subscription list did not include the subscribed namespace",
		"unsubscribe returned HTTP 404"
	],
	"scenario": "33",
	"skipped": false,
	"subscribe_http_code": 404,
	"subscriptions_after_count": 0,
	"subscriptions_before_count": 0,
	"tls_mode": "tls",
	"unsubscribe_http_code": 404
}

raw file

scenario-33.log (console trace)
bob subscribes to namespace scenario33-pubsub-3dcf0b on node-2
  subscribe returned HTTP 404
settle 2s for subscription settle
  bob subscriptions: 0 entries; contains ns: False
alice writes M1 into the subscribed namespace
settle 6s for write fanout to subscribers
  bob sees M1 in subscribed namespace: 1
bob unsubscribes from scenario33-pubsub-3dcf0b
  unsubscribe returned HTTP 404
settle 2s for unsubscribe settle
  bob subscriptions after unsubscribe: ns still present = False
alice writes M2 post-unsubscribe (may still replicate via federation but subscription list excludes ns)
settle 5s for post-unsubscribe settle

raw file

Scenario 34 — memory_pending governance FAIL

Reasons: set-standard returned HTTP 405 | approve returned HTTP 403 | reject returned HTTP 404 | charlie saw rejected row — reject didn't prevent publication

scenario-34.json (report)
{
	"agent_group": "ironclaw",
	"approve_http_code": 403,
	"charlie_sees": {
		"approved": 1,
		"rejected": 1
	},
	"namespace": "scenario34-pending-39b9dc",
	"pass": false,
	"pending_queue_count": 0,
	"reason": "set-standard returned HTTP 405; approve returned HTTP 403; reject returned HTTP 404; charlie saw rejected row — reject didn't prevent publication",
	"reasons": [
		"set-standard returned HTTP 405",
		"approve returned HTTP 403",
		"reject returned HTTP 404",
		"charlie saw rejected row — reject didn't prevent publication"
	],
	"reject_http_code": 404,
	"scenario": "34",
	"set_standard_http_code": 405,
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-34.log (console trace)
alice sets namespace standard on scenario34-pending-39b9dc: write=approve, approver=ai:bob
  set-standard returned HTTP 405
settle 2s for standard settle
alice writes two memories into the governed namespace (should land in pending)
  p1=164ca1a8-54dc-4d94-a53b-a0b497f684b0 p2=edb2cad3-54d0-4ce0-83f3-d15cb24b8e9d
settle 4s for pending queue settle
bob lists pending on node-2
  pending queue has 0 entries
bob approves p1, rejects p2
  approve HTTP 403; reject HTTP 404
settle 5s for decision fanout
charlie reads the namespace — expects ONLY approved marker
  charlie sees approved=1 rejected=1

raw file

Scenario 35 — memory_namespace standards FAIL

Reasons: set-parent returned HTTP 405 | set-child returned HTTP 405 | clear-standard returned HTTP 405 | parent rule not layered into child's standard view | child rule missing from standard view

scenario-35.json (report)
{
	"agent_group": "ironclaw",
	"child_ns": "scenario35-parent-42b356/child",
	"clear_http_code": 405,
	"get_standard_http_code": 200,
	"parent_ns": "scenario35-parent-42b356",
	"pass": false,
	"post_clear_has_child_rule": false,
	"reason": "set-parent returned HTTP 405; set-child returned HTTP 405; clear-standard returned HTTP 405; parent rule not layered into child's standard view; child rule missing from standard view",
	"reasons": [
		"set-parent returned HTTP 405",
		"set-child returned HTTP 405",
		"clear-standard returned HTTP 405",
		"parent rule not layered into child's standard view",
		"child rule missing from standard view"
	],
	"scenario": "35",
	"sees_child_rule": false,
	"sees_parent_rule": false,
	"set_child_http_code": 405,
	"set_parent_http_code": 405,
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-35.log (console trace)
alice writes parent-standard-memory on node-1
alice sets namespace standard on scenario35-parent-42b356
  set-parent returned HTTP 405
alice writes child-standard-memory on node-1
alice sets namespace standard on scenario35-parent-42b356/child with parent=scenario35-parent-42b356
  set-child returned HTTP 405
settle 4s for standard fanout
bob gets standard for scenario35-parent-42b356/child on node-2 (expects layered parent+child)
  get-standard returned HTTP 200
  parent-rule visible=False; child-rule visible=False
alice clears standard on scenario35-parent-42b356/child
  clear returned HTTP 405
settle 3s for clear settle

raw file

Scenario 36 — memory_session_start FAIL

Reasons: session_start returned HTTP 404

scenario-36.json (report)
{
	"agent_group": "ironclaw",
	"pass": false,
	"reason": "session_start returned HTTP 404",
	"reasons": [
		"session_start returned HTTP 404"
	],
	"scenario": "36",
	"skipped": false,
	"start_http_code": 404,
	"tls_mode": "tls"
}

raw file

scenario-36.log (console trace)
alice starts a session on node-1
  session_start returned HTTP 404, session_id=

raw file

Scenario 37 — memory_get_links bidirectional PASS

scenario-37.json (report)
{
	"agent_group": "ironclaw",
	"forward_has_target": true,
	"m1": "43ee5a22-492e-4305-9b52-ca0f21ad9a7b",
	"m2": "060a0cf3-7f40-4b19-89f2-19c4b0fbad8a",
	"pass": true,
	"reasons": [],
	"reverse_has_source": true,
	"scenario": "37",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-37.log (console trace)
alice writes M1 + M2 + links M1→M2
  M1=43ee5a22-492e-4305-9b52-ca0f21ad9a7b M2=060a0cf3-7f40-4b19-89f2-19c4b0fbad8a
settle 6s for link fanout
charlie queries /api/v1/links/M1 (forward)
charlie queries /api/v1/links/M2 (reverse)

raw file

Scenario 38 — /export + /import PASS

scenario-38.json (report)
{
	"agent_group": "ironclaw",
	"dst_ns": "scenario38-dst-2dbdce",
	"expected_rows": 5,
	"export_http_code": 200,
	"import_http_code": 200,
	"markers_preserved": 5,
	"pass": true,
	"reasons": [],
	"rows_exported": 5,
	"rows_in_destination": 5,
	"scenario": "38",
	"skipped": false,
	"src_ns": "scenario38-src-2dbdce",
	"tls_mode": "tls"
}

raw file

scenario-38.log (console trace)
alice writes 5 rows into scenario38-src-2dbdce
settle 4s for pre-export replication
alice exports on node-1 (endpoint has no namespace filter; filter client-side)
  export returned HTTP 200, total_rows=197
  rewrote 5 memories from scenario38-src-2dbdce -> scenario38-dst-2dbdce
bob imports the payload into scenario38-dst-2dbdce on node-2
  import returned HTTP 200
settle 6s for import + fanout
verify row counts match on destination
  scenario38-dst-2dbdce has 5 rows (expected 5)
  markers preserved in destination: 5/5

raw file
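The trace notes that /export has no namespace filter, so the harness filters the 197-row dump client-side and rewrites the 5 source rows before importing. That rewrite step can be sketched as below; the `namespace` field name matches the memory documents seen elsewhere in this report:

```python
# Client-side namespace rewrite, as described in the scenario-38 trace:
# keep only rows from the source namespace and restamp them for the
# destination before POSTing to /import.
def rewrite_namespace(export_rows, src_ns, dst_ns):
    out = []
    for row in export_rows:
        if row.get("namespace") == src_ns:
            out.append({**row, "namespace": dst_ns})  # copy, don't mutate
    return out

rows = [{"namespace": "scenario38-src-2dbdce", "content": f"m{i}"} for i in range(5)]
rows.append({"namespace": "other", "content": "x"})
print(len(rewrite_namespace(rows, "scenario38-src-2dbdce", "scenario38-dst-2dbdce")))  # 5
```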

Scenario 39 — /sync/since delta FAIL

Reasons: delta returned 0/6 expected markers — delta-sync incomplete

scenario-39.json (report)
{
	"agent_group": "ironclaw",
	"checkpoint_ms": 1776897749431,
	"expected_markers": 6,
	"markers_present": 0,
	"namespace": "scenario39-delta-35b3a0",
	"pass": false,
	"reason": "delta returned 0/6 expected markers — delta-sync incomplete",
	"reasons": [
		"delta returned 0/6 expected markers — delta-sync incomplete"
	],
	"rows_returned": 0,
	"scenario": "39",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-39.log (console trace)
checkpoint = 1776897749431
suspending ai-memory on node-3
  !! ssh timeout (15s): root@167.71.89.28 pgrep -f 'ai-memory serve' | xargs -r kill -STOP
alice + bob write 6 rows while node-3 is out
resuming ai-memory on node-3
settle 4s for process resume
node-3 asks node-1 /api/v1/sync/since?after=1776897749431
  !! ssh timeout (30s): root@167.71.89.28 curl -sS --cacert /etc/ai-memory-a2a/tls/ca.pem 'https://157.245.209.140:9077/api/v1/sync/since?after=1776897749431&namespace=scenario39-delta-35b3a0'
  /sync/since returned 0 rows; 0/6 match our markers

raw file
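The contract this scenario tests is simple: `/sync/since?after=<ms>` should return every row in the namespace modified after the checkpoint. A model of that semantics over epoch-millisecond fields; the server's actual cursor field is not visible in this report, so `updated_ms` is an assumed name:

```python
# Model of the delta-sync contract scenario 39 exercises: return rows whose
# last modification is strictly newer than the checkpoint. `updated_ms` is
# an assumed field name; the server's real cursor field isn't shown here.
def sync_since(rows, after_ms, namespace):
    return [r for r in rows
            if r["namespace"] == namespace and r["updated_ms"] > after_ms]

checkpoint = 1776897749431
rows = [{"namespace": "scenario39-delta-35b3a0", "updated_ms": checkpoint + 5000 + i}
        for i in range(6)]
print(len(sync_since(rows, checkpoint, "scenario39-delta-35b3a0")))  # 6
```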

Scenario 40 — /memories/bulk FAIL

Reasons: bulk returned HTTP 422 | node-2 saw 0/500 bulk rows after fanout | node-3 saw 0/500 bulk rows after fanout | node-4 saw 0/500 bulk rows after fanout

scenario-40.json (report)
{
	"agent_group": "ironclaw",
	"bulk_http_code": "422",
	"bulk_size": 500,
	"namespace": "scenario40-bulk-47831a",
	"pass": false,
	"per_peer_count": {
		"node_2": 0,
		"node_3": 0,
		"node_4": 0
	},
	"reason": "bulk returned HTTP 422; node-2 saw 0/500 bulk rows after fanout; node-3 saw 0/500 bulk rows after fanout; node-4 saw 0/500 bulk rows after fanout",
	"reasons": [
		"bulk returned HTTP 422",
		"node-2 saw 0/500 bulk rows after fanout",
		"node-3 saw 0/500 bulk rows after fanout",
		"node-4 saw 0/500 bulk rows after fanout"
	],
	"scenario": "40",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-40.log (console trace)
constructing 500-row bulk payload
staging bulk payload on node-1 /tmp, then POST /api/v1/memories/bulk
  bulk POST returned HTTP 422
settle 20s for bulk fanout across 3 peers + aggregator
  node-2 count=0 (expected 500)
  node-3 count=0 (expected 500)
  node-4 count=0 (expected 500)

raw file
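HTTP 422 usually means the body parsed as JSON but failed schema validation, so the likely culprit is the envelope shape rather than transport. A useful next step is probing candidate shapes; the accepted schema is not visible in this report, so both shapes below are assumptions:

```python
import json

# Probe bodies for diagnosing the /memories/bulk 422. Neither shape is
# confirmed by this report; both are common conventions to try.
def candidate_bodies(rows):
    return {
        "bare_array": json.dumps(rows),
        "memories_envelope": json.dumps({"memories": rows}),
    }

rows = [{"namespace": "scenario40-bulk-47831a", "content": f"row-{i}"} for i in range(3)]
bodies = candidate_bodies(rows)
print(sorted(bodies))  # ['bare_array', 'memories_envelope']
```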

Scenario 41 — /metrics Prometheus PASS

scenario-41.json (report)
{
	"activity_namespace": "scenario41-activity-b61de9",
	"agent_group": "ironclaw",
	"pass": true,
	"per_peer": {
		"node_1": {
			"counters_t0": 8,
			"counters_t1": 8,
			"regressed_keys": 0
		},
		"node_2": {
			"counters_t0": 8,
			"counters_t1": 8,
			"regressed_keys": 0
		},
		"node_3": {
			"counters_t0": 7,
			"counters_t1": 7,
			"regressed_keys": 0
		}
	},
	"reasons": [],
	"scenario": "41",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-41.log (console trace)
scrape T0
  node-1 T0 parsed 8 memory counters
  node-2 T0 parsed 8 memory counters
  node-3 T0 parsed 7 memory counters
settle 5s for counter update
scrape T1
  node-1 T1 parsed 8 memory counters
  node-2 T1 parsed 8 memory counters
  node-3 T1 parsed 7 memory counters

raw file

Scenario 42 — /namespaces enumeration PASS

scenario-42.json (report)
{
	"agent_group": "ironclaw",
	"namespaces": [
		"scenario42-0ae4c4-0",
		"scenario42-0ae4c4-1",
		"scenario42-0ae4c4-2"
	],
	"pass": true,
	"per_peer": {
		"node_1": {
			"scenario42-0ae4c4-0": 2,
			"scenario42-0ae4c4-1": 2,
			"scenario42-0ae4c4-2": 2
		},
		"node_2": {
			"scenario42-0ae4c4-0": 2,
			"scenario42-0ae4c4-1": 2,
			"scenario42-0ae4c4-2": 2
		},
		"node_3": {
			"scenario42-0ae4c4-0": 2,
			"scenario42-0ae4c4-1": 2,
			"scenario42-0ae4c4-2": 2
		},
		"node_4": {
			"scenario42-0ae4c4-0": 2,
			"scenario42-0ae4c4-1": 2,
			"scenario42-0ae4c4-2": 2
		}
	},
	"reasons": [],
	"scenario": "42",
	"skipped": false,
	"tls_mode": "tls"
}

raw file

scenario-42.log (console trace)
alice writes into 3 distinct namespaces: ['scenario42-0ae4c4-0', 'scenario42-0ae4c4-1', 'scenario42-0ae4c4-2']
settle 10s for namespace index fanout
  node-1 sees 3/3 target namespaces, counts: {'scenario42-0ae4c4-0': 2, 'scenario42-0ae4c4-1': 2, 'scenario42-0ae4c4-2': 2}
  node-2 sees 3/3 target namespaces, counts: {'scenario42-0ae4c4-0': 2, 'scenario42-0ae4c4-1': 2, 'scenario42-0ae4c4-2': 2}
  node-3 sees 3/3 target namespaces, counts: {'scenario42-0ae4c4-0': 2, 'scenario42-0ae4c4-1': 2, 'scenario42-0ae4c4-2': 2}
  node-4 sees 3/3 target namespaces, counts: {'scenario42-0ae4c4-0': 2, 'scenario42-0ae4c4-1': 2, 'scenario42-0ae4c4-2': 2}

raw file

All artifacts