⚡ v1.5 Released — Rust Sync Daemon + Self-Contained Installer

Graph-Native Workspace
for OpenClaw

Replace flat workspace markdown with embedded Cypher queries. Powered by Neo4j with proper graph modeling — SkillCluster nodes, typed relationships, namespaced labels. 316 skills · 27 clusters · 217 relationships · ~10 MB footprint · sub-millisecond queries.

Quick Install ↓ View on GitHub Browse Skills →
316
Skills
27
Clusters
2
Graph Backends
217
Skill Relationships
<1ms
Neo4j Query Time
~10MB
Neo4j Footprint
0.9s
Migration Time
NEW IN v1.5
Neo4j integration. Migrate the entire skill graph into an existing Neo4j instance in under 1 second (~10 MB). Proper graph modeling with SkillCluster nodes, IN_CLUSTER + RELATED_TO edges, and namespaced labels for multi-graph coexistence.
Install v1.5 ↓ Customize →

Architecture

How it works

Instead of loading multi-kilobyte markdown files on every session, OpenClaw reads a single-line Cypher directive — then queries Neo4j to build structured, prioritized context on demand.

OpenClaw Session
│
├── loadWorkspaceBootstrapFiles()
│   ├── SOUL.md    (1 line → GRAPH: MATCH (s:Soul) WHERE s.workspace = 'myapp' ...)
│   ├── MEMORY.md  (1 line → GRAPH: MATCH (m:Memory) WHERE m.workspace = 'myapp' ...)
│   ├── TOOLS.md   (1 line → GRAPH: MATCH (t:Tool) WHERE t.available = true ...)
│   └── AGENTS.md  (1 line → GRAPH: MATCH (a:AgentConfig) ...)
│       │
│       └── readFileWithCache()  ← patched workspace.ts
│           │
│           ├── [1] workspaceFileCache  mtime-keyed — raw stub, avoids fs.readFile
│           ├── [2] graphQueryCache     cypher-keyed — resolved content, adaptive TTL (60s × log₁₀(hits+10))
│           ├── [3] graphQueryInFlight  per-query Promise dedup (no thundering herd)
│           └── executeGraphQuery()     execFileAsync — non-blocking
│               └── cypher-shell -a bolt://localhost:7687 --format plain "..."
│                   └── return formatted markdown → system prompt
│
└── buildAgentSystemPrompt()
    └── injects graph-resolved markdown into agent context

Neo4j (bolt://localhost:7687)
├── Skill         316 nodes · 27 SkillCluster nodes · 217 RELATED_TO edges
├── SkillCluster  proper graph nodes with IN_CLUSTER relationships
├── Soul          identity, persona, capabilities per workspace
├── OCMemory      project context, infrastructure facts (namespaced)
├── OCTool        available tools + usage notes (namespaced)
├── OCAgent       agent definitions (namespaced)
├── AgentConfig   per-workspace agent behavior overrides
└── Bootstrap     boot identity
    
1

Workspace files become stubs

SOUL.md, MEMORY.md, TOOLS.md, AGENTS.md each shrink to one line — a Cypher query directive. No content bloat, no secrets in files, no manual sync.

2

Three-layer cache — correctly ordered

workspaceFileCache stores raw stubs (mtime). graphQueryCache owns resolved content (adaptive TTL: base 60s × log₁₀(hit_count+10)). graphQueryInFlight deduplicates concurrent misses. Each layer does exactly one job.
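The adaptive TTL is simple enough to sketch directly. The 180 s cap below is an assumption taken from the 60s–180s range quoted elsewhere on this page; the patched workspace.ts may clamp differently.

```python
import math

def adaptive_ttl(hit_count: int, base: float = 60.0, cap: float = 180.0) -> float:
    """Cache TTL that grows with query popularity: base * log10(hits + 10).

    A cold query (0 hits) gets the 60 s base; frequently-hit queries
    approach the cap. The cap value is an assumption from this page's
    quoted 60s-180s range.
    """
    return min(cap, base * math.log10(hit_count + 10))

assert abs(adaptive_ttl(0) - 60.0) < 1e-9       # cold query: base TTL
assert abs(adaptive_ttl(90) - 120.0) < 1e-9     # log10(100) = 2
assert adaptive_ttl(10_000) == 180.0            # hot query: capped
```

The logarithm keeps growth gentle: a query needs roughly 10× more hits for each additional 60 s of TTL.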

3

Proper graph modeling

SkillCluster nodes, IN_CLUSTER and RELATED_TO typed edges, namespaced labels (OCAgent, OCMemory, OCTool) for multi-graph coexistence. Graph-native, not SQL with graph syntax bolted on.

4

~10 MB, sub-millisecond

316 skills + 27 clusters + workspace nodes add only ~10 MB to a Neo4j instance. Sub-millisecond lookups, ~2ms full scans. Installs on macOS, Ubuntu, and Fedora.

workspace.ts patch · Three-layer cache, async query execution, in-flight deduplication. Apply to any OpenClaw install: git apply patches/workspace-cache-fix.patch

Before & After

From flat files to graph stubs

The same information — now queryable, filterable, and roughly 90% smaller on disk (≈1,800 bytes down to 175).

❌ Before — SOUL.md (~1,800 bytes, loaded every session)
# SOUL.md — Who You Are

_You're not a chatbot. You're becoming someone._

## Core Truths

Be genuinely helpful, not performatively
helpful. Skip the filler words — just help.
Have opinions. You're allowed to disagree...

## Boundaries

Private things stay private. Period.
When in doubt, ask before acting externally.
Never send half-baked replies...

## Vibe

Be the assistant you'd actually want to talk
to. Concise when needed, thorough when it
matters. Not a corporate drone...

[... continues for many more sections ...]
✅ After — graph stub (175 bytes, resolved at runtime)
<!-- GRAPH: MATCH (s:Soul)
  WHERE s.workspace = 'myapp'
    AND s.priority <= 4
  RETURN
    s.section AS section,
    s.content  AS content
  ORDER BY s.priority ASC
  LIMIT 8
-->

→ Resolved from Neo4j at session start → Returns prioritized, structured markdown → Adaptive TTL cache (60s–180s based on query frequency · 0ms after first load) → Update identity without touching files

Skill Database

316 skills across 27 clusters

Expert-level reference skills covering the full development and operations surface. Query by Cypher — cluster traversal, multi-hop reasoning, workspace-scoped filtering.

Browse full skills directory →
community (62) · coding (18) · mobile (14) · web-dev (13) · se-architecture (12) · computer-science (11) · cloud-gcp (10) · cloud-azure (10) · devops-sre (10) · aimlops (10) · game-dev (10) · financial (10) · data-engineering (10) · blue-team (10) · ar-vr (10) · twilio (10) · macos (10) · blockchain (10) · iot (10) · cloud-aws (10) · core-openclaw (8) · testing (8) · ai-apis (8) · existing (6) · abtesting (6) · linux (6) · distributed-comms (4)
// Browse a cluster via IN_CLUSTER traversal
MATCH (s:Skill)-[:IN_CLUSTER]->(c:SkillCluster {name: 'devops-sre'}) RETURN s.name, s.description

// Multi-hop skill discovery (2 hops via RELATED_TO)
MATCH (s:Skill {name: 'terraform'})-[:RELATED_TO*1..2]->(t:Skill) RETURN DISTINCT t.name, t.cluster

// Skills by tag
MATCH (s:Skill) WHERE 'kubernetes' IN s.tags RETURN s.name, s.cluster, s.description

// Full skill scan
MATCH (s:Skill) RETURN s.name, s.cluster ORDER BY s.cluster, s.name
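To run these queries from Python rather than cypher-shell, the official neo4j driver supports parameterized Cypher. A minimal sketch, assuming the connection defaults on this page (the helper name is hypothetical):

```python
def cluster_query(cluster: str) -> tuple[str, dict]:
    """Build a parameterized version of the cluster-traversal query.

    Passing $name as a parameter, instead of interpolating it into the
    string, avoids Cypher injection and lets Neo4j reuse the query plan.
    """
    query = (
        "MATCH (s:Skill)-[:IN_CLUSTER]->(c:SkillCluster {name: $name}) "
        "RETURN s.name AS name, s.description AS description"
    )
    return query, {"name": cluster}

# Against a live instance (assumed bolt://localhost:7687; add auth=(user,
# password) if your instance requires it):
#   from neo4j import GraphDatabase
#   query, params = cluster_query("devops-sre")
#   with GraphDatabase.driver("bolt://localhost:7687") as driver:
#       records, _, _ = driver.execute_query(query, params)
```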

Performance

Measured on production data — Mac mini M-series

All numbers are real benchmark averages on Neo4j 2026.01 — 316 skills, 27 clusters, 393 total nodes.

Neo4j Performance
<1ms
Skill lookup by ID
index-free adjacency, constant time
~2ms
Full skill scan
all 316 nodes, no filter
<1ms
Cluster traversal
IN_CLUSTER edge traversal
~3ms
Multi-hop reasoning
2-hop RELATED_TO traversal
0.9s
Full migration
one-shot import script
~10MB
Storage overhead
total Neo4j footprint

Full benchmark results

Mac mini · Apple M-series · 32 GB RAM · Neo4j 2026.01 · 316 skills
| Query | avg | p95 | min |
| --- | --- | --- | --- |
| Skill PK lookup (warm) | 0.18ms | 0.44ms | 0.12ms |
| Memory query — 2 nodes | 0.16ms | 0.17ms | 0.14ms |
| Cluster scan — 10 skills (IN_CLUSTER traversal) | 0.23ms | 0.25ms | 0.20ms |
| Soul workspace query — 4 nodes | 0.30ms | 0.50ms | 0.20ms |
| AgentConfig — AGENTS.md hot path (9 nodes) | 0.33ms | 0.41ms | 0.26ms |
| Tool query — TOOLS.md hot path (26 nodes) | 0.41ms | 0.55ms | 0.30ms |
| Full skill scan — all 316 nodes | 2.02ms | 2.26ms | 1.83ms |
All queries via Python neo4j driver against bolt://localhost:7687. Neo4j's index-free adjacency means traversal times stay constant regardless of graph size.
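As a sketch of how the avg/p95/min columns could be derived from raw driver timings (the project's actual benchmark harness may compute percentiles differently):

```python
import statistics

def summarize(timings_ms: list[float]) -> dict[str, float]:
    """Summarize benchmark samples the way the table above reports them.

    p95 here uses linear interpolation between closest ranks
    (statistics.quantiles with method="inclusive"); other harnesses may
    pick the nearest sample instead.
    """
    ordered = sorted(timings_ms)
    cuts = statistics.quantiles(ordered, n=100, method="inclusive")
    return {
        "avg": statistics.fmean(ordered),
        "p95": cuts[94],     # 95th percentile cut point
        "min": ordered[0],
    }

# e.g. summarize([0.12, 0.15, 0.18, 0.21, 0.44])
```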

Cost & Performance NEW

Prompt optimization — 70% token reduction

v1.5 ships with four AgentConfig-backed optimizations that reduce API costs and improve agent reliability. These optimizations are pre-seeded in every release and active out of the box.

$

Path Aliases

Replace verbose filesystem paths in prompts with compact aliases like $WORKSPACE, $DB, $SCHEMAS. Saves 200-400 tokens per prompt across multi-path instructions.
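The mechanism is plain substitution. A toy sketch follows: the alias names ($WORKSPACE, $DB, $SCHEMAS) come from this page, but the concrete paths and the helper itself are illustrative assumptions.

```python
# Illustrative alias table; the real paths are deployment-specific.
ALIASES = {
    "/Users/me/.openclaw/workspace": "$WORKSPACE",
    "/Users/me/.openclaw/workspace/db": "$DB",
    "/Users/me/.openclaw/workspace/schemas": "$SCHEMAS",
}

def apply_aliases(prompt: str) -> str:
    # Substitute longest paths first so $WORKSPACE never clobbers the
    # more specific $DB and $SCHEMAS prefixes.
    for path in sorted(ALIASES, key=len, reverse=True):
        prompt = prompt.replace(path, ALIASES[path])
    return prompt

before = "Write results to /Users/me/.openclaw/workspace/db/out.json"
after = apply_aliases(before)
# after == "Write results to $DB/out.json"
```

A 40-character path repeated ten times collapses to ten 3–9 character aliases, which is where the quoted 200–400 token savings come from.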

T

TOON Compression

Token-Oriented Object Notation compresses JSON payloads 30-60% for LLM context windows. Agents use toon encode for large data and API response caching. Install: npm i -g @toon-format/cli.
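The official CLI handles real encoding. To convey only the core idea of why a tabular notation saves tokens, here is a toy encoder for uniform arrays of objects; this is NOT the official TOON syntax, just an illustration of stating keys once and emitting values as rows.

```python
def toy_tabular_encode(rows: list[dict]) -> str:
    """Toy illustration of TOON's core saving: declare keys once, then
    emit values row by row. Not the official format; use
    @toon-format/cli for real encoding."""
    keys = list(rows[0])
    header = ",".join(keys)
    lines = [",".join(str(r[k]) for k in keys) for r in rows]
    return f"[{len(rows)}]{{{header}}}:\n" + "\n".join(lines)

data = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
print(toy_tabular_encode(data))
# [2]{id,name}:
# 1,alice
# 2,bob
```

With JSON, every row repeats every key; here each key appears once, which is why savings grow with row count.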

R

Search Resilience

Graceful degradation when web searches fail. Retry with simplified queries, try synonym alternatives, fall back to cached data. Never block an entire task because one search returned empty.
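The degradation chain described above can be sketched as follows. The function and its fallback policy are hypothetical illustrations, not the project's actual implementation.

```python
# Hypothetical sketch: retry with a simplified query, then fall back to
# cached data; never raise, never block the task.
def resilient_search(query: str, search, cache: dict) -> list:
    # Attempt the full query, then a simplified one (first three words).
    for attempt in (query, " ".join(query.split()[:3])):
        try:
            results = search(attempt)
            if results:
                return results
        except Exception:
            continue  # backend error: degrade instead of failing the task
    return cache.get(query, [])  # last resort: cached data (may be empty)

def flaky_search(q):
    raise TimeoutError("search backend down")

hits = resilient_search(
    "kubernetes ingress tls setup",
    flaky_search,
    {"kubernetes ingress tls setup": ["cached-doc"]},
)
# hits == ["cached-doc"]
```

Synonym rewriting, mentioned above, would slot in as additional entries in the attempt tuple.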

V

Schema Validation

All structured outputs validate against defined schemas before writing. Required fields, severity scales, confidence scores, and minimum source counts enforced at the graph level.
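A minimal sketch of that kind of pre-write validation, with field names and bounds that are assumptions for illustration rather than the project's actual schema:

```python
# Hedged sketch: required fields, a bounded severity scale, confidence
# in [0, 1], and a minimum source count, checked before any write.
REQUIRED = {"title", "severity", "confidence", "sources"}

def validate(output: dict) -> list[str]:
    """Return a list of violations; an empty list means the output passes."""
    errors = [f"missing field: {f}" for f in REQUIRED - output.keys()]
    if not errors:
        if not 1 <= output["severity"] <= 5:
            errors.append("severity must be 1-5")
        if not 0.0 <= output["confidence"] <= 1.0:
            errors.append("confidence must be in [0, 1]")
        if len(output["sources"]) < 2:
            errors.append("at least 2 sources required")
    return errors

ok = {"title": "t", "severity": 3, "confidence": 0.8, "sources": ["a", "b"]}
assert validate(ok) == []
```

Returning violations rather than raising lets an agent repair its output and retry instead of aborting.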

Measured cost savings

70%
Prompt token reduction
path aliases + deduplication
30-60%
JSON payload compression
TOON format for data context
9
AgentConfig nodes
pre-seeded in every release
0ms
Config overhead
graph-resolved, cached in-process

Before: each agent session loaded 20-30K token prompts with duplicated schemas, full paths (41-48 characters each repeated 8-12x), and embedded boilerplate. After: prompts trimmed to 6-9K tokens via path aliases, schema references, and search resilience moved to graph. Data payloads further compressed with TOON. Net result: 60-70% fewer input tokens per cron cycle, directly reducing API costs.


Quick Start

Install & Setup

OpenClaw Graph runs on Neo4j. Install Neo4j, run the migration script, deploy workspace stubs — done in under 2 minutes.

Neo4j Backend

Graph-Native Storage

Proper graph modeling with SkillCluster nodes, namespaced labels, and workspace isolation. Coexists safely with other graph domains in a shared Neo4j instance.

  • 316 Skills + 27 SkillCluster nodes
  • 217 RELATED_TO + 316 IN_CLUSTER edges
  • Namespaced labels (OCAgent, OCMemory, OCTool)
  • Coexists with other graph domains
Footprint: ~10 MB · Migration: 0.9s

Neo4j Setup

Requires Neo4j running on bolt://localhost:7687 and Python 3.10+ with the neo4j driver.

Install Neo4j
# macOS
brew install neo4j
brew services start neo4j

# Ubuntu / Debian
wget -O - https://debian.neo4j.com/neotechnology.gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/neo4j.gpg
echo "deb [signed-by=/usr/share/keyrings/neo4j.gpg] https://debian.neo4j.com stable latest" | sudo tee /etc/apt/sources.list.d/neo4j.list
sudo apt-get update && sudo apt-get install -y neo4j
sudo systemctl start neo4j

# Fedora
sudo rpm --import https://debian.neo4j.com/neotechnology.gpg.key
cat << EOF | sudo tee /etc/yum.repos.d/neo4j.repo
[neo4j]
name=Neo4j RPM Repository
baseurl=https://yum.neo4j.com/stable/5
gpgcheck=1
gpgkey=https://debian.neo4j.com/neotechnology.gpg.key
EOF
sudo dnf install -y neo4j
sudo systemctl start neo4j

# Install Python driver (all platforms)
pip install neo4j
1

Clone the repository

git clone https://github.com/alphaonedev/openclaw-graph.git
cd openclaw-graph
pip install neo4j
2

Run the seed script

# Preview (no writes)
python3 seed.py --dry-run

# Seed — creates all nodes, relationships, constraints
python3 seed.py
# → 316 Skills, 27 SkillClusters, 217 RELATED_TO, 316 IN_CLUSTER — 0.9s
3

Deploy workspace stubs

cp workspace-stubs/*.md ~/.openclaw/workspace/
# Edit each stub: change workspace name to match your deployment
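If you manage several machines, the edit in step 3 can be scripted. A hypothetical helper, assuming the stubs ship with 'myapp' as the placeholder workspace name shown on this page:

```python
from pathlib import Path

def set_workspace(stub_dir: str, name: str, placeholder: str = "myapp") -> int:
    """Rewrite the quoted workspace name in every deployed stub.

    Hypothetical helper; assumes stubs reference the workspace as a
    single-quoted Cypher string literal, e.g. s.workspace = 'myapp'.
    Returns the number of files changed.
    """
    changed = 0
    for stub in Path(stub_dir).glob("*.md"):
        text = stub.read_text()
        if f"'{placeholder}'" in text:
            stub.write_text(text.replace(f"'{placeholder}'", f"'{name}'"))
            changed += 1
    return changed

# Usage (note: "~" is not expanded by Path, so resolve it explicitly):
#   set_workspace(str(Path.home() / ".openclaw/workspace"), "prod-agent")
```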
Documentation

Guides

Everything you need to install, configure, and customize openclaw-graph.

A

Admin Guide

Installation, database layout, CLI reference, cron job integration, fleet management, upgrading, backup/restore, troubleshooting, and performance reference.

U

User Guide

How workspace stubs work, GRAPH directive specification, node type schemas (Soul, Memory, AgentConfig, Tool, Skill, Reference), workspace loading flow, and customization.

C

Customizing Your Workspace

Step-by-step personalization — identity, memory domains, tool management, multi-workspace fleet setup, and seed script forking.

R

README

Project overview, quick start, Neo4j setup, workspace architecture, and what's new in v1.5.


Open source. Community-first.

Built as a contribution back to the OpenClaw project. PRs welcome — add skills, improve the schema, or extend the graph model.

View Source on GitHub Browse 316 Skills → Read the Docs → OpenClaw Project →