Make every AI you've ever talked to remember from day one. Paste your Claude / ChatGPT / Slack export — get an instant, ranked, recallable knowledge base.
Five minutes to a useful agent. Export your old AI conversations, run one command, and your next agent starts with everything the last one already knew.
ai-memory mine parses Claude, ChatGPT, and Slack exports into ranked, tiered, recall-ready memories. No re-typing. No copy-paste. No loss of context across model changes.
Every AI conversation you've had is a body of decisions, corrections, preferences, and small facts your future agents will need. Today, when you switch tools or start a new context window, that knowledge evaporates. mine ingests it into your local SQLite store — and from that moment, every MCP-compatible AI you talk to recalls it.
This is the fastest path from "new tool, blank slate" to "new tool, every project context I've ever explained."
```shell
# 1. Drop your export into the working directory
ls conversations.json                    # ChatGPT export
# or
ls claude-export/conversations.jsonl     # Claude export
# or
ls slack-export.zip                      # extract first → ls slack-export/

# 2. Dry-run to see what will land
ai-memory mine ./conversations.json --format chatgpt --dry-run

# 3. Import for real
ai-memory mine ./conversations.json --format chatgpt

# 4. Confirm + recall
ai-memory stats
ai-memory recall "what database did we decide on for the analytics service"
```
That's it. Your next agent starts with an opinion.
Each conversation that survives the `--min-messages` threshold (default 3) becomes one memory:
| Field | Source |
|---|---|
| `title` | First user message, truncated |
| `content` | Full conversation text — alternating user/assistant turns |
| `tier` | `mid` by default (7-day TTL, auto-promotes to `long` after 5 recalls) |
| `namespace` | Auto-set per format: `claude-export`, `chatgpt-export`, `slack-export` |
| `source` | `mine-claude` / `mine-chatgpt` / `mine-slack` |
| `metadata.mined_from` | Source format tag |
| `metadata.agent_id` | The caller's identity (you) |
| `created_at` | Original conversation timestamp |
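The tier behavior can be pictured as a tiny policy: `mid` carries a 7-day TTL and auto-promotes to `long` after 5 recalls. A minimal sketch of that rule — the function names and the `recall_count` field are hypothetical, not ai-memory's actual API:

```python
from datetime import datetime, timedelta

MID_TTL = timedelta(days=7)   # mid-tier memories expire after 7 days
PROMOTE_AFTER = 5             # auto-promote mid -> long after 5 recalls

def next_tier(tier: str, recall_count: int) -> str:
    """Promotion rule: a mid-tier memory recalled often enough becomes long-tier."""
    if tier == "mid" and recall_count >= PROMOTE_AFTER:
        return "long"
    return tier

def is_expired(tier: str, created_at: datetime, now: datetime) -> bool:
    """Only mid-tier memories carry the 7-day TTL; long-tier has none."""
    return tier == "mid" and now - created_at > MID_TTL
```

A frequently-recalled memory therefore escapes the TTL on its own; everything else ages out unless you promote it explicitly.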
Every imported memory is fully integrated with the rest of the system: scope visibility, governance gates, recall scoring, deduplication on (title, namespace), link creation, consolidation. mine is not a sidecar — it's a first-class write path.
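Deduplication on (title, namespace) can be modeled as a uniqueness constraint plus an upsert. A minimal SQLite sketch — the schema is illustrative, not ai-memory's actual one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE memories (
        id        INTEGER PRIMARY KEY,
        title     TEXT NOT NULL,
        namespace TEXT NOT NULL,
        content   TEXT NOT NULL,
        UNIQUE (title, namespace)        -- the dedup key
    )
""")

def upsert_memory(title: str, namespace: str, content: str) -> None:
    # Re-importing the same conversation updates the row instead of duplicating it.
    conn.execute(
        """INSERT INTO memories (title, namespace, content)
           VALUES (?, ?, ?)
           ON CONFLICT (title, namespace) DO UPDATE SET content = excluded.content""",
        (title, namespace, content),
    )

upsert_memory("Auth strategy", "chatgpt-export", "v1")
upsert_memory("Auth strategy", "chatgpt-export", "v2")  # same key: update, not a dup
```

The practical upshot: re-running `mine` on the same export is safe.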
Claude exports give you a `conversations.jsonl` file: one JSON object per line, one conversation per object.

Where to find it: `conversations.jsonl` in your Claude export download.

Import:
```shell
# Dry run first — see what will be imported, count + sizes
ai-memory mine ./claude-export/conversations.jsonl --format claude --dry-run

# Real import, default mid-tier
ai-memory mine ./claude-export/conversations.jsonl --format claude

# Or pin everything to long-tier (no TTL) into a custom namespace
ai-memory mine ./claude-export/conversations.jsonl \
  --format claude \
  --namespace personal/claude-history \
  --tier long \
  --min-messages 5
```
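The JSONL walk is simple to picture: read a line, parse it, apply the `--min-messages` filter, map the conversation to a memory. A sketch — the per-conversation shape (`messages`, `text`) is an assumption for illustration, not the documented export schema:

```python
import json

def mine_claude(path: str, min_messages: int = 3) -> list[dict]:
    """One JSON object per line = one conversation; skip the short ones."""
    memories = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            convo = json.loads(line)
            messages = convo.get("messages", [])
            if len(messages) < min_messages:
                continue                                     # --min-messages filter
            memories.append({
                "title": messages[0].get("text", "")[:80],   # first message, truncated
                "content": "\n".join(m.get("text", "") for m in messages),
                "namespace": "claude-export",
                "source": "mine-claude",
            })
    return memories
```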
ChatGPT exports give you a `conversations.json` file (a single JSON array of conversations).

Where to find it: `conversations.json` in your ChatGPT export download.

Import:
```shell
# Dry run
ai-memory mine ./chatgpt-export/conversations.json --format chatgpt --dry-run

# Real import
ai-memory mine ./chatgpt-export/conversations.json --format chatgpt

# Filter to substantive conversations only (10+ messages)
ai-memory mine ./chatgpt-export/conversations.json \
  --format chatgpt \
  --min-messages 10 \
  --tier long
```
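One wrinkle in the ChatGPT format: each conversation stores its turns as a graph of nodes under a `mapping` key, not a flat list. A flattening sketch, assuming nodes carry `parent`, `children`, and a `message` with `author.role` and `content.parts` — a simplified view of the export shape:

```python
def flatten_chatgpt(conversation: dict) -> list[tuple[str, str]]:
    """Walk the mapping graph from the root node, yielding (role, text) turns."""
    mapping = conversation["mapping"]
    # The root is the one node without a parent.
    node_id = next(nid for nid, n in mapping.items() if n.get("parent") is None)
    turns = []
    while node_id is not None:
        node = mapping[node_id]
        msg = node.get("message")
        if msg and msg["content"].get("parts"):
            text = "".join(p for p in msg["content"]["parts"] if isinstance(p, str))
            if text:
                turns.append((msg["author"]["role"], text))
        children = node.get("children") or []
        node_id = children[0] if children else None   # follow the first branch
    return turns
```

Regenerated answers create sibling branches in the graph; this sketch simply follows the first child at each step.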
Slack workspace exports give you a directory of per-channel `.json` files. `mine` walks the tree.

Where to find it: the unzipped workspace export — one folder per channel, with per-day `.json` files inside.

Import:
```shell
# Dry run on the whole export
ai-memory mine ./slack-export/ --format slack --dry-run

# Real import — narrow to a high-signal channel
ai-memory mine ./slack-export/eng-decisions/ \
  --format slack \
  --namespace work/slack/eng-decisions \
  --tier long \
  --min-messages 4
```
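The tree walk itself can be sketched in a few lines, assuming the standard Slack export layout of per-channel folders holding per-day JSON arrays of message objects with a `text` field:

```python
import json
from pathlib import Path

def mine_slack(root: str, min_messages: int = 3) -> dict[str, list[str]]:
    """Collect message texts per channel folder; drop channels below the threshold."""
    channels: dict[str, list[str]] = {}
    for day_file in sorted(Path(root).rglob("*.json")):
        texts = [m["text"] for m in json.loads(day_file.read_text())
                 if isinstance(m, dict) and m.get("text")]
        # The parent folder name is the channel name in a Slack export.
        channels.setdefault(day_file.parent.name, []).extend(texts)
    # Apply the --min-messages threshold per channel.
    return {ch: msgs for ch, msgs in channels.items() if len(msgs) >= min_messages}
```

This mirrors why narrowing to a single channel folder works: the walk is just recursive globbing from whatever root you hand it.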
| Flag | Default | Meaning |
|---|---|---|
| `--format`, `-f` | (required) | `claude`, `chatgpt`, `slack` |
| `--namespace`, `-n` | format-specific | Override target namespace |
| `--tier`, `-t` | `mid` | `short` / `mid` / `long` |
| `--min-messages` | `3` | Skip conversations shorter than this |
| `--dry-run` | `false` | Print what would be imported; don't write |
The miner stamps `metadata.agent_id` with the caller's identity (see Agent identity) and records the original conversation timestamp in `created_at`.
Your imported memories are immediately part of the live recall mix:
```shell
# Recall ranks across imported + native memories together
ai-memory recall "kubernetes deployment strategy" --tier long

# Filter to just the imported set
ai-memory list --namespace chatgpt-export
ai-memory list --source mine-chatgpt

# Promote the standout decisions to long-tier permanently
ai-memory promote <id> --tier long --priority 9

# Consolidate two redundant decisions into one canonical memory
ai-memory consolidate <id1> <id2> -T "Auth strategy — final decision"
```
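What consolidation produces can be pictured with a small merge sketch. The record shape here is illustrative — only the `metadata.consolidated_from_agents` lineage field comes from this document:

```python
def consolidate(m1: dict, m2: dict, title: str) -> dict:
    """Merge two redundant memories into one canonical record, keeping lineage."""
    return {
        "title": title,
        "content": m1["content"] + "\n---\n" + m2["content"],
        "tier": "long",   # canonical decisions shouldn't carry a TTL
        "metadata": {
            "consolidated_from_agents": sorted(
                {m1["metadata"]["agent_id"], m2["metadata"]["agent_id"]}
            ),
        },
    }
```

The point of the lineage field: even after two imports collapse into one memory, you can still see which agents originally contributed it.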
- Use `--min-messages 5` to skip the trivial chatter.
- Run `ai-memory recall` for your most common project topics; promote the top hits to long-tier.
- `ai-memory consolidate` collapses redundant imports and keeps the lineage in `metadata.consolidated_from_agents`.

This is the workflow Mem0 charges you SaaS pricing for. ai-memory does it locally, in one binary, in five minutes.