{
  "_campaign_id": "a2a-ironclaw-v0.6.3.1-r2",
  "_generated_by": "scripts/analyze_run.py",
  "_model": "grok-4-fast-non-reasoning",
  "for_c_level": "Run verdict partial. Detailed narrative synthesis unavailable (LLM call failed: xAI HTTP 400: {\"code\":\"Client specified an invalid argument\",\"error\":\"Incorrect API key provided: ***. You can obtain an). Counts are reliable; consult the per-scenario PASS/FAIL and the testbook for primitive-level mapping.",
  "for_non_technical": "This run exercised 0 tests of AI-agent-to-AI-agent communication through ai-memory. 0 worked correctly, 0 did not, and 0 were intentionally skipped because prerequisites weren't met.",
  "for_sme": "Scenario outcomes: pass=0 fail=0 skip=0 of 0 total. First failure reasons are persisted in each scenario-N.json. LLM narrative unavailable: xAI HTTP 400: {\"code\":\"Client specified an invalid argument\",\"error\":\"Incorrect API key provided: ***. You can obtain an.",
  "headline": "0/0 scenarios passed; 0 failed, 0 skipped",
  "next_run_change": "Investigate the LLM-call failure or re-run with XAI_API_KEY verified; counts are unaffected.",
  "verdict": "PARTIAL \u2014 auto-generated (LLM unavailable: xAI HTTP 400: {\"code\":\"Client specified an invalid argument\",\"error\":\"Incorrect API key provided: ***. You can obtain an)",
  "what_it_proved": "Direct counts: pass=0, fail=0, skip=0.",
  "what_it_tested": "Campaign a2a-ironclaw-v0.6.3.1-r2 ran 0 testbook scenarios across transport, primitives, and cross-cutting axes."
}