User Guide
Installation
MentisDB requires Rust. If you don't have it:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Then install MentisDB:
cargo install mentisdb

Running the Daemon
Start the daemon — it listens on port 9471 by default:
mentisdb

To keep it running after closing your terminal:
nohup mentisdb &

The daemon serves both MCP (for AI tools) and REST endpoints plus an HTTPS dashboard for human operators.
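If you also want a persistent log while running detached, the MENTISDB_LOG_FILE variable from the Configuration section can be combined with nohup. A minimal sketch (the log path is illustrative):

```shell
# Run detached; MentisDB writes its own log to a file while stdout is discarded
MENTISDB_LOG_FILE="$HOME/mentisdb.log" nohup mentisdb > /dev/null 2>&1 &
```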
CLI subcommands for quick operations without an MCP client:
mentisdb add "The sky is blue"
mentisdb search "cache invalidation" --limit 5
mentisdb agents

Configuration
MentisDB is configured via environment variables:
| Variable | Default | Description |
|---|---|---|
| Storage | | |
| MENTISDB_DIR | ~/.cloudllm/mentisdb | Where chain data and TLS files are stored on disk |
| MENTISDB_DEFAULT_CHAIN_KEY (deprecated alias: MENTISDB_DEFAULT_KEY) | borganism-brain | The chain key used when no chain_key is specified |
| MENTISDB_STORAGE_ADAPTER | binary | Storage format for new chains. Only binary is accepted; the JSONL adapter can no longer create new chains, but existing JSONL chains remain fully readable and migratable — the adapter is retained for reading and migrating legacy data only |
| MENTISDB_GROUP_COMMIT_MS | 2 | Group-commit window in milliseconds for the background binary writer, which batches appends within this window before flushing to disk. Lower values = lower latency; higher values = better throughput |
| MENTISDB_AUTO_FLUSH | true | Flush to disk on every write. Set to false for higher throughput (less durability) |
| Limits | | |
| MAX_THOUGHT_PAYLOAD_BYTES | 10 MB | Maximum size of a single thought payload (lowered from 64 MB as DoS protection). Oversized writes are rejected with an error |
| MAX_SKILL_SIZE_BYTES | 1 MB | Upload size limit for a single skill file. Exceeding this limit rejects the upload |
| Networking — HTTP | | |
| MENTISDB_BIND_HOST | 127.0.0.1 | IP address the server binds to. Use 0.0.0.0 for network-wide access (combine with a dashboard PIN) |
| MENTISDB_MCP_PORT | 9471 | HTTP MCP server port |
| MENTISDB_REST_PORT | 9472 | HTTP REST API port |
| Networking — HTTPS | | |
| MENTISDB_HTTPS_MCP_PORT | 9473 | HTTPS MCP server port. Set to 0 to disable |
| MENTISDB_HTTPS_REST_PORT | 9474 | HTTPS REST API port. Set to 0 to disable |
| TLS | | |
| MENTISDB_TLS_CERT | ~/.cloudllm/mentisdb/tls/cert.pem | Path to the TLS certificate PEM (auto-generated on first start) |
| MENTISDB_TLS_KEY | ~/.cloudllm/mentisdb/tls/key.pem | Path to the TLS private key PEM (auto-generated on first start) |
| Logging | | |
| MENTISDB_VERBOSE | true | Enable verbose startup and request logging |
| MENTISDB_LOG_FILE | unset | Optional path to write logs to a file |
| RUST_LOG | info | Log level filter (trace, debug, info, warn, error) |
| Audio | | |
| MENTISDB_STARTUP_SOUND | true | Play a 4-note jingle on startup |
| MENTISDB_THOUGHT_SOUNDS | false | Play a unique sound for each ThoughtType on append |
| Web Dashboard | | |
| MENTISDB_DASHBOARD_PORT | 9475 | Dashboard HTTPS port. Set to 0 to disable |
| MENTISDB_DASHBOARD_PIN | unset | Optional PIN to gate dashboard access. Unset = open (localhost only) |
| Daemon Self-Update | | |
| MENTISDB_UPDATE_CHECK | true | Background GitHub release check for mentisdb. Set to false to disable the startup-time prompt |
| MENTISDB_UPDATE_REPO | CloudLLM-ai/mentisdb | Optional owner/repo override for the GitHub release source used by the updater |
| Deduplication | | |
| MENTISDB_DEDUP_THRESHOLD | unset | Jaccard threshold for auto-dedup on append (0.0–1.0). When set, MentisDB compares new thoughts against recent memories and auto-Supersedes duplicates above this threshold. Disabled when unset |
| MENTISDB_DEDUP_SCAN_WINDOW | 64 | Number of recent thoughts to scan for dedup comparison. Only used when MENTISDB_DEDUP_THRESHOLD is set |
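Putting a few of these together, a hardened network-exposed start might look like this (the PIN value is illustrative, not a recommendation):

```shell
# Bind on all interfaces, gate the dashboard with a PIN, and quiet the logs
MENTISDB_BIND_HOST=0.0.0.0 \
MENTISDB_DASHBOARD_PIN=change-me \
MENTISDB_VERBOSE=false \
mentisdb
```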
Environment Variables Reference
Every MentisDB environment variable, grouped by concern. Each row lists the default, a one-sentence purpose, and a concrete example with a rationale for when you'd reach for it. All variables are read once at mentisdb startup unless noted otherwise.
Storage
| Variable | Default | Purpose | Why / example |
|---|---|---|---|
| MENTISDB_DIR | ~/.cloudllm/mentisdb | Root directory where chain files, the chain registry, the skill registry, and TLS certs live. | MENTISDB_DIR=/mnt/ssd/mentisdb — point storage at a large, fast disk in production. |
| MENTISDB_DEFAULT_CHAIN_KEY (deprecated alias: MENTISDB_DEFAULT_KEY) | borganism-brain | Chain used when a request omits chain_key. | MENTISDB_DEFAULT_CHAIN_KEY=team-core — make a shared team chain the implicit default. |
| MENTISDB_STORAGE_ADAPTER | binary | Storage format for new chains; only binary is supported for new chains. | MENTISDB_STORAGE_ADAPTER=binary — set explicitly in ops manifests for future-proofing. |
Behaviour
| Variable | Default | Purpose | Why / example |
|---|---|---|---|
| MENTISDB_AUTO_FLUSH | true | true fsyncs on every append (strict durability); false batches up to 16 records in a background writer (higher throughput; up to 15 records may be lost on a hard crash). | MENTISDB_AUTO_FLUSH=false — for write-heavy multi-agent hubs where throughput matters more than last-second durability. |
| MENTISDB_GROUP_COMMIT_MS | 2 | Window in milliseconds for the strict-mode writer to coalesce concurrent appends into one flush. | MENTISDB_GROUP_COMMIT_MS=5 — a bigger window amortises fsync cost at the price of ~3 ms extra p99 latency. |
| MENTISDB_DEDUP_THRESHOLD | unset (disabled) | Jaccard similarity threshold in [0.0, 1.0] for auto-emitting Supersedes relations on near-duplicate appends. | MENTISDB_DEDUP_THRESHOLD=0.85 — collapse near-identical retrospectives. |
| MENTISDB_DEDUP_SCAN_WINDOW | 64 | How many recent thoughts to scan when dedup is enabled. | MENTISDB_DEDUP_SCAN_WINDOW=128 — widen the comparison window on chatty chains where near-duplicates arrive in bursts. |
| MENTISDB_VERBOSE | true | Log each MentisDB operation to the mentisdb::interaction target. | MENTISDB_VERBOSE=false — in production where you've attached your own observability and don't need the built-in stream. |
| MENTISDB_LOG_FILE | unset (stdout only) | Optional file path for interaction logs. | MENTISDB_LOG_FILE=/var/log/mentisdb/interactions.log — persist the interaction stream for later audit or log-shipping. |
Networking / TLS
| Variable | Default | Purpose | Why / example |
|---|---|---|---|
| MENTISDB_BIND_HOST | 127.0.0.1 | IP address for all server sockets. | MENTISDB_BIND_HOST=0.0.0.0 — bind to all interfaces (use with care; firewall or set a dashboard PIN first). |
| MENTISDB_MCP_PORT | 9471 | HTTP MCP server port. | MENTISDB_MCP_PORT=19471 — move off the default when running a second daemon on the same host. |
| MENTISDB_REST_PORT | 9472 | HTTP REST server port. | MENTISDB_REST_PORT=19472 — align with the MCP port offset when running a side-by-side instance. |
| MENTISDB_HTTPS_MCP_PORT | 9473 | HTTPS MCP port; set to 0 to disable. | MENTISDB_HTTPS_MCP_PORT=0 — turn off HTTPS MCP entirely on a fully-internal host. |
| MENTISDB_HTTPS_REST_PORT | 9474 | HTTPS REST port; set to 0 to disable. | MENTISDB_HTTPS_REST_PORT=0 — disable HTTPS REST when terminating TLS at a reverse proxy. |
| MENTISDB_TLS_CERT | <MENTISDB_DIR>/tls/cert.pem | PEM cert path. Auto-generated on first boot if absent. | MENTISDB_TLS_CERT=/etc/ssl/mentisdb/fullchain.pem — swap in a CA-signed cert instead of the default self-signed one. |
| MENTISDB_TLS_KEY | <MENTISDB_DIR>/tls/key.pem | PEM key path. | MENTISDB_TLS_KEY=/etc/ssl/mentisdb/privkey.pem — pair with a CA-signed cert from a secrets mount. |
Dashboard
| Variable | Default | Purpose | Why / example |
|---|---|---|---|
| MENTISDB_DASHBOARD_PORT | 9475 | HTTPS-only dashboard port; set to 0 to disable entirely. | MENTISDB_DASHBOARD_PORT=0 — turn the dashboard off on headless servers where only the MCP/REST APIs are needed. |
| MENTISDB_DASHBOARD_PIN | unset | Shared PIN required to access the dashboard; an empty string is treated as absent. | MENTISDB_DASHBOARD_PIN=8472-9471 — required whenever you bind the dashboard off 127.0.0.1. |
Updates
| Variable | Default | Purpose | Why / example |
|---|---|---|---|
| MENTISDB_UPDATE_CHECK | true | Enable/disable the automatic update check on daemon startup. | MENTISDB_UPDATE_CHECK=false — in CI, keeps test runs deterministic and offline. |
| MENTISDB_UPDATE_REPO | CloudLLM-ai/mentisdb | Override the GitHub repo the updater polls (useful for forks). | MENTISDB_UPDATE_REPO=my-org/mentisdb-fork — track an internal fork's release channel instead of upstream. |
Audio
| Variable | Default | Purpose | Why / example |
|---|---|---|---|
| MENTISDB_STARTUP_SOUND | unset (off) | Set to 1/true to play the startup chime. | MENTISDB_STARTUP_SOUND=true — on a dev workstation, an audible cue that the daemon is up after a restart. |
| MENTISDB_THOUGHT_SOUNDS | unset (off) | Set to 1/true to play a per-thought-type sound on each append. | MENTISDB_THOUGHT_SOUNDS=true — audible feedback while pair-programming with an agent; leave off on shared machines. |
Priming Your Agent
Priming an agent for MentisDB is one sentence. Just type:
use mentisdb as your memory system

That is all. The agent handles the rest automatically:
- Detect and select chain — calls mentisdb_list_chains to see what chains exist, then picks the one whose name best matches the current project, repository, or working folder. If no chains exist yet, it asks you to name a new one before proceeding.
- Bootstrap — calls mentisdb_bootstrap with the chosen chain key to create or reopen that chain, then calls mentisdb_list_agents and reuses the existing specialist that best matches the current task, followed by mentisdb_recent_context to reload prior state.
- Load skill rules — reads mentisdb://skill/core so it knows exactly how to write thoughts, search, and manage context.
- Self-seed — writes a Summary checkpoint so every future session recovers state automatically without needing to re-prime.
First time? If no memory chains exist yet, the agent will briefly explain what a chain is and ask what to name it — usually your project or repository name works well.
Tip: Add this phrase to your tool's system-prompt or project instructions file and every new agent session will prime itself automatically — you will never need to type it again.
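For example, the priming sentence can be appended to a project instructions file from the shell. The CLAUDE.md filename below is just one common convention; use whatever file your tool reads:

```shell
# Append the one-line primer to a project instructions file (filename illustrative)
echo "use mentisdb as your memory system" >> CLAUDE.md
```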
Additional tips
- Give the agent a stable agent_id — this is how its memories are attributed and retrieved later
- Tell it which chain_key to use if you run multiple chains (e.g. one per project)
- Instruct it to load memories before starting — not after it has already made decisions
- If your client supports MCP resources, tell the agent to read mentisdb://skill/core first; only use GET /mentisdb_skill_md as a fallback for non-MCP or limited clients
- If you do not specify a chain up front, tell the agent to prefer a chain whose name matches the current repo or working folder
- Ask it to write a Summary checkpoint before compacting its context or ending a long session
Self-Update
MentisDB's daemon checks GitHub for a newer release after startup by default. On an interactive terminal, the daemon finishes booting, then shows an ASCII prompt asking whether it should update itself with cargo install.
MENTISDB_UPDATE_CHECK=0 mentisdb

Version comparison uses the first three numeric components only. That means a release tag like 0.6.1.14 is treated as core version 0.6.1, and the fourth number is only a monotonically increasing release counter.
If the terminal is non-interactive, MentisDB never blocks on stdin. It prints the exact cargo install --git ... --tag ... command you can run manually instead.
Use MENTISDB_UPDATE_CHECK=0 when you want a quiet daemon with no release prompt.
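The three-component rule above can be sketched in plain shell — this illustrates the comparison semantics only, not the daemon's actual implementation:

```shell
# Extract the core version (first three numeric components) from a release tag
tag="0.6.1.14"
core=$(printf '%s' "$tag" | cut -d. -f1-3)
echo "$core"   # prints 0.6.1 — the fourth component (release counter) is ignored
```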
HTTPS & TLS
MentisDB automatically generates a self-signed TLS certificate on first startup using rcgen. This enables encrypted connections on two dedicated HTTPS ports alongside the plain HTTP ports. No manual certificate management is required.
Port Map
| Port | Protocol | Purpose |
|---|---|---|
| 9471 | HTTP | MCP server |
| 9472 | HTTP | REST API |
| 9473 | HTTPS | MCP server (TLS) |
| 9474 | HTTPS | REST API (TLS) |
| 9475 | HTTPS | Web Dashboard |
The my.mentisdb.com Hostname
my.mentisdb.com is a public DNS A-record that resolves to 127.0.0.1. The auto-generated certificate includes it as a Subject Alternative Name alongside localhost and 127.0.0.1. Once you trust the certificate on your machine, you can use https://my.mentisdb.com:9473 as a friendly, stable hostname in MCP configs — no port-forwarding or extra DNS setup required.
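You can confirm the record from any machine with a DNS lookup (requires dig and network access):

```shell
# The public A-record should resolve to loopback
dig +short my.mentisdb.com
# expected: 127.0.0.1 (per the record described above)
```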
Trusting the Self-Signed Certificate
The certificate is saved to ~/.cloudllm/mentisdb/tls/cert.pem on first startup. Run the appropriate command once per machine:
macOS
sudo security add-trusted-cert -d -r trustRoot \
-k /Library/Keychains/System.keychain \
~/.cloudllm/mentisdb/tls/cert.pem

Linux
sudo cp ~/.cloudllm/mentisdb/tls/cert.pem \
/usr/local/share/ca-certificates/mentisdb.crt
sudo update-ca-certificates

Windows
Double-click cert.pem → Install Certificate → Local Machine → Trusted Root Certification Authorities → Finish.
Once trusted, clients can use https://my.mentisdb.com:9473 as the server URL.

Disabling HTTPS
Set both HTTPS ports to 0 to run HTTP-only:
MENTISDB_HTTPS_MCP_PORT=0
MENTISDB_HTTPS_REST_PORT=0

Web Dashboard
The web dashboard is a self-contained single-page application embedded directly in the MentisDB binary — no npm, no separate process, no installation required. Open it in any browser at:
https://127.0.0.1:9475/dashboard

The version number is displayed in the nav header. The dashboard connects to the same daemon you already have running, and it keeps showing newly appended thoughts while the daemon stays up — no restart is required to see fresh chain counts or an agent's latest thoughts.
If you bind the dashboard beyond localhost, protect it with a PIN (MENTISDB_DASHBOARD_PIN) or keep MENTISDB_BIND_HOST=127.0.0.1 (the default) so it is never exposed to the network.

PIN Protection
Set MENTISDB_DASHBOARD_PIN to any string. A login page appears automatically. Leave it unset for open (localhost-only) access.
MENTISDB_DASHBOARD_PIN=my-secret-pin mentisdb

PIN verification uses constant-time comparison (subtle::ConstantTimeEq) to prevent timing attacks — the server does not leak information about how many characters matched.
Login Rate Limiting
The /dashboard/login endpoint enforces rate limiting: 5 failed PIN attempts per IP address within a 5-minute window results in HTTP 429 Too Many Requests. Successful attempts reset the counter.
Disabling the Dashboard
MENTISDB_DASHBOARD_PORT=0 mentisdb

Sections
Chain Manager
Lists all chains with live thought counts and agent counts. Click a chain to open its Thought Explorer. You can bootstrap a new chain or delete an existing one (a type-to-confirm safety gate prevents accidental deletion). Click ↺ Refresh to reload live counts.
Thought Explorer
Paginated table of all thoughts in a chain. Filter by any of the 30 ThoughtTypes using the grouped filter panel — each type is shown with a coloured badge. The explorer also supports chain-scoped text search plus a live agent dropdown; when text search is active it returns hybrid ranked results and grouped context bundles instead of a plain substring list. If a managed vector sidecar is enabled for the chain, the ranking transparently blends lexical, graph, and semantic vector signals. Click any row to open a detail modal showing the full thought content, metadata, positional back-references (displayed as #N), and typed relations (displayed as kind → target_id (chain: other-chain) for cross-chain edges), plus ranked-search provenance such as score breakdowns, matched terms, graph distance, and bundle support preview when the row came from search.
Vector Sidecars
Each chain page also has a Vector Sidecars panel for the daemon's managed embedding indexes. By default `mentisdb` keeps a local `local-text-v1` sidecar in sync for each chain it opens. Operators can expand the panel to inspect freshness, indexed-thought counts, and the sidecar path; disable or re-enable append-time sync; run Sync now; or Rebuild from scratch after an explicit delete-and-recreate confirmation.
Agent Manager
All registered agents grouped by chain. Click an agent to open its detail page where you can edit its display name, description, and owner; revoke or re-activate it; add or revoke Ed25519 signing keys; browse its most recent thoughts; or copy that agent's memories into another chain while preserving its metadata.
Skills Registry
Browse all skills with their version count and lifecycle status (active, deprecated, revoked). Click a skill to view it with three tabs: Rendered (formatted Markdown), Source (raw text), and Diff (side-by-side version comparison). Revoke or deprecate a skill directly from the UI with a confirmation step.
Dashboard API
The dashboard UI communicates with the daemon through a set of internal browser APIs under /dashboard/api/. These endpoints are not intended for external scripting — they are an implementation detail of the single-page dashboard. All endpoints require PIN authentication when MENTISDB_DASHBOARD_PIN is set.
Chains
| Method | Path | Description |
|---|---|---|
| GET | /chains | List all chains with live thought and agent counts |
| POST | /chains | Bootstrap a new chain; appends a Summary checkpoint if empty |
| GET | /chains/{chain_key} | Chain detail including vector sidecar statuses |
| DELETE | /chains/{chain_key} | Permanently delete a chain and deregister it |
| POST | /chains/merge | Merge all thoughts from source into target chain, then delete source |
| POST | /chains/{chain_key}/import-markdown | Import a MEMORY.md-formatted document as new thoughts |
Vector Sidecars
| Method | Path | Description |
|---|---|---|
| POST | /chains/{chain_key}/vectors/{provider_key}/enable | Enable append-time sync for a managed vector sidecar |
| POST | /chains/{chain_key}/vectors/{provider_key}/disable | Disable append-time sync for a managed vector sidecar |
| POST | /chains/{chain_key}/vectors/{provider_key}/sync | Run an immediate sync pass for a managed vector sidecar |
| POST | /chains/{chain_key}/vectors/{provider_key}/rebuild | Rebuild a vector sidecar from scratch (requires confirm_delete) |
Thoughts & Search
| Method | Path | Description |
|---|---|---|
| GET | /chains/{chain_key}/thoughts | Paginated thought listing, filterable by ThoughtType and scope |
| GET | /chains/{chain_key}/search | Ranked hybrid search with context bundles when text is provided. Supports as_of for point-in-time queries and scope for scope-filtered results |
| GET | /chains/{chain_key}/search/bundles | Seed-anchored context bundles for a search query. Supports as_of and scope parameters |
| GET | /chains/{chain_key}/search/agents | Live thought authors merged with registry display names |
| GET | /thoughts/{chain_key}/{thought_id} | Retrieve a single thought by UUID |
| GET | /chains/{chain_key}/agents/{agent_id}/thoughts | Paginated thoughts authored by a specific agent |
Agents
| Method | Path | Description |
|---|---|---|
| GET | /agents | All registered agents across all chains, grouped by chain |
| POST | /agents | Create or update an agent registry entry |
| GET | /agents/{chain_key} | All agents registered on a specific chain |
| GET | /agents/{chain_key}/{agent_id} | Single agent detail with live thought count |
| PATCH | /agents/{chain_key}/{agent_id} | Update display name, description, or owner of an agent |
| POST | /agents/{chain_key}/{agent_id}/revoke | Mark an agent as revoked |
| POST | /agents/{chain_key}/{agent_id}/activate | Mark an agent as active |
| POST | /agents/{chain_key}/{agent_id}/keys | Register a new Ed25519 public verification key on an agent |
| DELETE | /agents/{chain_key}/{agent_id}/keys/{key_id} | Revoke a public key from an agent |
| GET | /agents/{chain_key}/{agent_id}/memory-markdown | Export an agent's thoughts as a MEMORY.md Markdown document |
| POST | /agents/{chain_key}/{agent_id}/copy-to/{target_chain_key} | Copy all of an agent's thoughts to another chain |
Skills
| Method | Path | Description |
|---|---|---|
| GET | /skills | List all registered skills with version count and status |
| POST | /skills | Upload a new skill version (Markdown or JSON) |
| GET | /skills/{skill_id} | Get skill summary and latest content |
| GET | /skills/{skill_id}/versions | Full version history for a skill |
| GET | /skills/{skill_id}/diff | Unified diff between two skill versions |
| POST | /skills/{skill_id}/revoke | Mark a skill as revoked (content preserved for audit) |
| POST | /skills/{skill_id}/deprecate | Mark a skill as deprecated |
Version
| Method | Path | Description |
|---|---|---|
| GET | /version | Returns the crate version baked in at compile time |
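As a quick sanity check, these endpoints can be exercised with curl against a running daemon. The Bearer header is only needed when MENTISDB_DASHBOARD_PIN is set, and -k skips verification of the self-signed certificate (the PIN value is illustrative):

```shell
# List chains through the dashboard API (requires a running daemon)
curl -k -H "Authorization: Bearer my-secret-pin" \
  https://127.0.0.1:9475/dashboard/api/chains
```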
All paths above are relative to /dashboard/api. When a PIN is set, every endpoint requires either an Authorization: Bearer <pin> header or a valid mentisdb_pin cookie. Without a PIN the dashboard is open on localhost.

Import MEMORY.md
The chain detail page in the dashboard has a 📥 Import MEMORY.md button. Use it to bulk-import an existing Markdown memory file into the chain — no CLI or API calls required.
How to use it
- Open the dashboard at https://127.0.0.1:9475/dashboard and click into a chain.
- Click 📥 Import MEMORY.md — a modal appears.
- Paste your Markdown content into the text area.
- Enter a Default Agent ID — all imported thoughts are attributed to this agent. The agent must already be registered in the chain.
- Click Import — the dashboard reports how many thoughts were created.
Expected format
Any Markdown file where each top-level or second-level heading introduces a distinct memory. The mentisdb_memory_markdown MCP tool produces this format automatically, as does any MentisDB export. Example:
## LessonLearned — 2025-01-10
Use `signal()` not `create_signal()` in Leptos 0.7.
All `create_*` APIs were removed in the 0.7 redesign.
## Decision — 2025-01-11
Adopted binary storage adapter for production.
Rationale: binary is the only supported format for new chains.

After import the memories are fully indexed — searchable, filterable by ThoughtType, and attributable by agent — exactly like any thought appended via MCP or API.
If you already maintain a MEMORY.md file, use this button once to seed your chain. From that point forward let MentisDB be the source of truth — the agent writes directly to the chain, and you export with mentisdb_memory_markdown whenever you need a portable snapshot.

Backup & Restore
MentisDB stores all chain data on disk under ~/.cloudllm/mentisdb/. Back up that directory at any time using the built-in mentisdb backup command, and restore from a backup with mentisdb restore.
Archive format
Backups are standard ZIP archives with a .mentis extension. Each archive contains a SHA-256 manifest (MENTISDB_MANIFEST.txt) listing every file path and its digest, so integrity can be verified after download or copy.
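The kind of check the manifest enables can be illustrated with standard tools. This sketch assumes the manifest uses sha256sum's `<digest>  <path>` line format, which is an assumption about the archive layout rather than a documented guarantee:

```shell
# Build a toy directory, write a sha256 manifest, then verify it —
# the same shape of integrity check a .mentis manifest enables after unzip
mkdir -p demo && printf 'thought data\n' > demo/chain.bin
( cd demo \
  && sha256sum chain.bin > MENTISDB_MANIFEST.txt \
  && sha256sum -c MENTISDB_MANIFEST.txt )   # prints: chain.bin: OK
```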
mentisdb backup
mentisdb backup [-o <path>] [--dir <path>] [--flush] [--include-tls]

| Argument | Description |
|---|---|
| source_dir | Path to the MentisDB data directory to back up (e.g. ~/.cloudllm/mentisdb) |
| output_path | Optional output path for the .mentis archive. Defaults to ~/.cloudllm/mentisdb/backup-<timestamp>.mentis |
Options
| Flag | Description |
|---|---|
| --flush | Detects if mentisdb is running on the local machine. If so, calls POST /v1/admin/flush to force a durability flush before archiving. The backup then proceeds with the daemon either stopped or freshly flushed. Use this to ensure the archive captures all committed thoughts. |
| --include-tls | Include TLS certificate and private key files in the archive. By default these are excluded (they are machine-specific and should be re-generated on the target machine). This flag is opt-in so you consciously choose to include them. |
mentisdb restore
mentisdb restore <archive.mentis> [--dir <path>] [--overwrite] [--yes]

| Argument | Description |
|---|---|
| archive_path | Path to the .mentis archive to restore |
| target_dir | Directory to extract the archive into |
Options
| Flag | Description |
|---|---|
| --overwrite | Replace existing files without prompting when there is a conflict |
| --yes | Answer yes to all interactive prompts (equivalent to --overwrite) |
Interactive restore behavior
During restore, if any file in the archive already exists in the target directory, mentisdb restore prompts you to decide what to do with that file. Pass --overwrite or --yes to skip the prompt and overwrite unconditionally.
Example commands
# Create a backup (written to the default path, e.g. ~/.cloudllm/mentisdb/backup-<timestamp>.mentis)
mentisdb backup
# Create a backup to a specific path
mentisdb backup -o /tmp/my-mentisdb-backup.mentis
# Create a backup from a specific source directory
mentisdb backup --dir ~/.cloudllm/mentisdb -o /tmp/backup.mentis
# Create a backup with a running daemon flush first
mentisdb backup --flush
# Include TLS material in the backup (machine-specific — restore on same machine)
mentisdb backup --include-tls
# Restore a backup (prompts for existing files, daemon must be stopped)
mentisdb restore /tmp/my-mentisdb-backup.mentis
# Restore to a specific directory
mentisdb restore /tmp/my-mentisdb-backup.mentis --dir ~/.cloudllm/mentisdb
# Restore, overwriting any conflicting files without prompting
mentisdb restore /tmp/my-mentisdb-backup.mentis --overwrite

Security note on --include-tls
TLS certificates and private keys are machine-specific. Including them in a backup is only appropriate when restoring to the same physical machine. If you restore to a new machine, omit --include-tls and let MentisDB auto-generate a fresh certificate on first start — then re-trust it on that machine.
Run mentisdb backup before any chain merge, chain deletion, skill revocation, or daemon self-update that involves storage format changes. Backups take only seconds and let you recover exactly where you were if something goes wrong.

Connecting AI Tools
Once the daemon is running, connect your AI tools via MCP. The fastest path is to let MentisDB detect what is installed and configure it automatically. Alternatively, configure any tool manually.
Automated Setup (Recommended)
MentisDB ships with two built-in commands that detect which clients are installed on your machine and write the correct MCP configuration for you.
Setup Wizard
Interactive. Scans your machine, shows every detected tool, lets you choose which to configure, and applies changes with your confirmation.
mentisdb wizard

Accept all defaults and skip already-configured integrations (non-interactive):

mentisdb wizard --yes

Point all selected integrations at a custom MCP URL:

mentisdb wizard --url https://my.mentisdb.com:9473

Setup One Agent
Target a specific integration by name. Prints a plan first, then writes the config file. Use --dry-run to preview without touching anything.
mentisdb setup claude-code

Setup all detected agents at once:

mentisdb setup all

Preview what would be written without writing anything:

mentisdb setup all --dry-run

Use a custom MCP URL:

mentisdb setup all --url https://my.mentisdb.com:9473

Supported integrations
Use any of these names with mentisdb setup:
| Name | Tool | Config location |
|---|---|---|
| claude-code | Claude Code CLI | ~/.claude.json |
| claude-desktop | Claude for Desktop | ~/Library/Application Support/Claude/claude_desktop_config.json |
| codex | OpenAI Codex CLI | ~/.codex/config.toml |
| copilot | GitHub Copilot CLI | ~/.copilot/mcp-config.json |
| gemini | Google Gemini CLI | ~/.gemini/settings.json |
| opencode | OpenCode | ~/.config/opencode/opencode.json |
| qwen | Qwen Code Assistant | ~/.qwen/settings.json |
| vscode-copilot | VS Code + Copilot | ~/Library/Application Support/Code/User/mcp.json (macOS) · ~/.config/Code/User/mcp.json (Linux) · %APPDATA%\Code\User\mcp.json (Windows) |
How it works
Both commands use PathEnvironment to resolve paths consistently — honouring HOME, XDG_CONFIG_HOME, USERPROFILE, and APPDATA on their respective platforms. They detect whether an integration is already configured, is installed but unconfigured, or is not present, and adapt their output accordingly. They never overwrite an existing MentisDB entry without explicit confirmation.
Manual MCP Configuration
If you prefer to configure manually, or need a custom URL, copy the relevant snippet below. All tools connect to http://127.0.0.1:9471 by default (HTTP, localhost). After trusting the self-signed TLS certificate you can use https://my.mentisdb.com:9473 instead.
Claude for Desktop
Claude for Desktop supports two connection modes. Stdio mode (recommended) requires no daemon, no Node.js, and no mcp-remote — just point Claude Desktop at the mentisdb binary. The stdio process automatically detects a running daemon and proxies to it, or launches one in the background if none is found.
Option 1: Stdio mode (recommended)
Edit your claude_desktop_config.json and add:
{
"mcpServers": {
"mentisdb": {
"command": "mentisdb",
"args": ["--mode", "stdio"]
}
}
}

That's it. No daemon pre-start, no TLS config, no Node.js dependency. The stdio process handles everything.
Option 2: HTTP via mcp-remote
If you prefer the HTTP transport (e.g. for a remote daemon), use the mcp-remote bridge. This requires Node.js >= 20.
Install mcp-remote globally:
npm install -g mcp-remote

Config file location by OS:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
macOS (mcp-remote)
{
"mcpServers": {
"mentisdb": {
"command": "/opt/homebrew/bin/mcp-remote",
"args": ["https://my.mentisdb.com:9473"],
"env": { "NODE_TLS_REJECT_UNAUTHORIZED": "0" }
}
}
}

The NODE_TLS_REJECT_UNAUTHORIZED: "0" env var is required because MentisDB uses a self-signed TLS certificate. Node.js rejects self-signed certs by default; this env var disables that check for the mcp-remote process only. Alternatively, add the MentisDB cert to your system keychain (see the HTTPS & TLS section above) and remove the env block.

If you installed Node.js via nvm or a non-Homebrew path, find the mcp-remote binary with:
which mcp-remote

and use that full path as the command value.
Windows (mcp-remote)
{
"mcpServers": {
"mentisdb": {
"command": "mcp-remote",
"args": ["https://my.mentisdb.com:9473"],
"env": { "NODE_TLS_REJECT_UNAUTHORIZED": "0" }
}
}
}

On Windows, npm install -g mcp-remote places the binary on your PATH automatically. If Claude Desktop cannot find it, supply the full path, e.g.:
"command": "C:\\Users\\YourName\\AppData\\Roaming\\npm\\mcp-remote.cmd"

Linux (mcp-remote)
{
"mcpServers": {
"mentisdb": {
"command": "/usr/local/bin/mcp-remote",
"args": ["https://my.mentisdb.com:9473"],
"env": { "NODE_TLS_REJECT_UNAUTHORIZED": "0" }
}
}
}

Verify the path with which mcp-remote and substitute it if different.
Claude Code
`mentisdb setup claude-code` writes the MCP entry under mcpServers.mentisdb in ~/.claude.json (or the platform-equivalent home directory on Windows). The older ~/.claude/mcp/mentisdb.json file is treated as a legacy companion path, not the canonical target.
claude mcp add --transport http mentisdb http://127.0.0.1:9471

OpenAI Codex
codex mcp add mentisdb --url http://127.0.0.1:9471

GitHub Copilot CLI
Add to ~/.copilot/mcp-config.json:
{
"mcpServers": {
"mentisdb": {
"type": "http",
"url": "http://127.0.0.1:9471",
"headers": {},
"tools": ["*"]
}
}
}

Or using HTTPS after trusting the certificate:
{
"mcpServers": {
"mentisdb": {
"type": "http",
"url": "https://my.mentisdb.com:9473",
"headers": {},
"tools": ["*"]
}
}
}

Qwen Code
qwen mcp add --transport http mentisdb http://127.0.0.1:9471

OpenCode
OpenCode stores MCP configuration in ~/.config/opencode/opencode.json on Linux and macOS. Add the mentisdb block under the top-level mcp key:
{
"mcp": {
"mentisdb": {
"type": "remote",
"url": "http://127.0.0.1:9471",
"enabled": true
}
}
}

Or using HTTPS after trusting the certificate:
{
"mcp": {
"mentisdb": {
"type": "remote",
"url": "https://my.mentisdb.com:9473",
"enabled": true
}
}
}

Google Gemini CLI
Gemini CLI reads MCP server configuration from ~/.gemini/settings.json. Add the mentisdb block under the top-level mcpServers key. The httpUrl field is required alongside url for HTTP transport:
{
"mcpServers": {
"mentisdb": {
"type": "http",
"url": "http://127.0.0.1:9471",
"httpUrl": "http://127.0.0.1:9471"
}
}
}

VS Code + Copilot
VS Code stores MCP server configuration in mcp.json inside the VS Code user settings directory. The path varies by OS:
- macOS: ~/Library/Application Support/Code/User/mcp.json
- Linux: ~/.config/Code/User/mcp.json
- Windows: %APPDATA%\Code\User\mcp.json
Add the mentisdb block under the top-level servers key:
{
"servers": {
"mentisdb": {
"type": "http",
"url": "http://127.0.0.1:9471"
}
}
}

Chain Topologies
MentisDB supports multiple named chains. Each chain is an independent, append-only ledger. You choose how to map agents to chains based on your team structure and privacy needs.
One agent, one chain (simplest)
A single agent uses the default chain for everything. All memories accumulate in one place. Perfect for a solo developer with one long-running assistant.
chain_key: "default" — one brain, one history.One agent, multiple chains (context isolation)
The same agent writes to different chains depending on its current assignment. Work memories stay scoped to the right project and don't pollute unrelated contexts.
Orion writes project lessons to "project-alpha" and company-wide conventions to "company-conventions".
- Each chain has its own agent registry, thought ledger, and skill registry
- An agent can read from and write to as many chains as needed
- Set MENTISDB_DEFAULT_CHAIN_KEY so the most-used chain requires no explicit chain_key parameter
Many agents, one shared chain (fleet / organisation)
All agents write to the same chain. Every agent can optionally read thoughts written by its peers by filtering on agent_ids in search queries. This is the foundation of fleet coordination.
Apollo, Orion, and Astro all write to "project-alpha". Each uses its own agent_id. A query without an agent filter returns memories from all three.
Many agents, many chains (per-team or per-tenant)
Use separate chains for separate teams, departments, or clients. Agents that work across teams carry context between chains by reading from one and writing a Summary to another. No data leaks between chains unless you explicitly bridge them.
Fleet Coordination via the Thought Chain
When your harness can spawn background sub-agents in parallel, MentisDB's shared chain becomes a coordination primitive — a lightweight alternative to explicit message passing between agents.
Prefer a walkthrough first? This tutorial shows the fleet coordination flow end to end.
The pattern
- The PM agent decomposes the assignment into a task graph, writes each task as a Plan or Subgoal thought tagged "task-pending", then spawns a specialist sub-agent per task.
- Each specialist agent calls mentisdb_recent_context on start to load the task graph and any shared constraints, then works autonomously.
- When a specialist finishes, it writes a TaskComplete thought tagged "task-done" (and any LessonLearned or Decision thoughts it accumulated).
- The PM agent polls the chain (or is notified) and queries tags_any: ["task-done"] to check completion status before unblocking dependent tasks.
Avoiding conflicts in parallel agents
- Each agent uses its own agent_id — writes are attributed and never collide
- Agents doing independent work read only their own subtree (agent_ids: ["my-id"]) and shared constraints (tags_any: ["constraint", "convention"]) — they don't need to read each other's full history
- Dependent tasks read their blocker's TaskComplete thought to extract outputs before starting
- The PM synthesises results by querying all TaskComplete thoughts in one call at the end
Example task graph
PM writes:
Plan [tag: task-pending, id: design-schema] → dispatches Orion
Plan [tag: task-pending, id: write-tests] → dispatches Apollo (blocks on design-schema)
Plan [tag: task-pending, id: implement-api] → dispatches Astro (blocks on design-schema)
Orion finishes, writes:
TaskComplete [tag: task-done, id: design-schema, content: "...schema decisions..."]
PM queries tags_any=["task-done"] → unblocks Apollo + Astro
Apollo + Astro run in parallel, each reading Orion's TaskComplete for shared context.
PM final query: all TaskComplete thoughts → synthesises result.

The Skills Registry
The skills registry is a versioned, immutable store for agent instruction bundles (skill files). Think of it like git for your agent's operating procedures.
Uploading a skill
Skills are uploaded as Markdown files. Each upload to an existing skill_id creates a new immutable version (stored as a diff):
Call mentisdb_upload_skill with three required fields: agent_id (the uploading agent's registered identity), skill_id (a stable slug like "my-project-conventions"), and content (the raw Markdown of the skill file). If the agent has registered public keys, also provide signing_key_id and skill_signature to create a cryptographically verified upload.
Retrieving a skill
Use mentisdb_read_skill(skill_id) to get the latest version, or pass version_id for a specific historical version. Full version history is always preserved.
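The registry's append-only versioning can be sketched with an in-memory model. On the server each new version is stored as a diff against the previous one; here versions are kept as full text, and the skill id is illustrative.

```python
# In-memory sketch of append-only skill versioning: each upload to a skill_id
# adds a new immutable version, and earlier versions are never rewritten.
registry = {}

def upload_skill(skill_id, content):
    """Append a new version and return its version number."""
    versions = registry.setdefault(skill_id, [])
    versions.append(content)
    return len(versions)

upload_skill("my-project-conventions", "v1 body")
v = upload_skill("my-project-conventions", "v2 body")
# Reading the latest version returns "v2 body"; version 1 is still intact.
```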
Cryptographic Signatures
Agents with registered Ed25519 public keys must cryptographically sign their skill uploads. This creates a verifiable, tamper-evident record of authorship.
Registering an agent key
Use mentisdb_add_agent_key to register an Ed25519 public key for an agent. Once registered, all uploads from that agent must include a valid signature over the skill content.
Why this matters
Signed skills mean you always know which agent authored which version. Combined with the immutable version history, this creates a cryptographically auditable record of your fleet's institutional knowledge.
Memory Scopes
MentisDB 0.8.2 introduces memory scopes — a lightweight way to partition thoughts within a single chain. Scopes are stored as tags on each thought and let you isolate memories by visibility level without creating separate chains.
Scope levels
| Scope | Tag | Purpose |
|---|---|---|
| User | scope:user | Visible to the owning user across all sessions. Default scope for most memories. |
| Session | scope:session | Scoped to a single conversation session. Ephemeral working memory — scratch thoughts, in-progress hypotheses. |
| Agent | scope:agent | Private to a specific agent. Not shared with other agents in the fleet. Useful for internal heuristics or private state. |
Using scopes
When appending a thought, set the scope parameter to one of User, Session, or Agent. MentisDB stores the scope as a tag (e.g. scope:user) on the thought. In search, use the scope parameter to filter results to a specific scope level.
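Because scopes are stored as plain tags, the mapping from the scope parameter to the stored tag is direct. A minimal sketch:

```python
VALID_SCOPES = ("user", "session", "agent")

def scope_tag(scope: str) -> str:
    """Map a scope level to the tag MentisDB stores on the thought."""
    level = scope.lower()
    if level not in VALID_SCOPES:
        raise ValueError(f"unknown scope: {scope}")
    return f"scope:{level}"
```

This also shows why scope filtering in search is simply tag matching: a query for the Session scope matches thoughts tagged scope:session.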
Temporal Queries
MentisDB 0.8.2 adds temporal query support, allowing you to query the chain as it existed at a specific point in time and to set time-bounded validity on relations.
Point-in-time queries with as_of
Pass as_of (an RFC 3339 timestamp) to search and traversal tools to see only thoughts that existed at that time. Thoughts appended after the timestamp are excluded from results. This is useful for auditing what an agent knew at a specific moment, or for reproducing decisions made under a previous state of knowledge.
mentisdb_ranked_search(
text: "caching strategy",
as_of: "2025-12-01T00:00:00Z"
)

Temporal bounds on relations
Thought relations now support valid_at and invalid_at fields — RFC 3339 timestamps that define when a relation becomes active and when it expires. A relation is considered active if the current time falls between valid_at and invalid_at. If neither field is set, the relation is always active (backward compatible with existing chains).
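The activity rule can be sketched as follows. Whether the bounds are inclusive or exclusive is not specified here, so the boundary handling below is an assumption.

```python
from datetime import datetime, timezone

def relation_active(valid_at=None, invalid_at=None, now=None):
    """A relation is active when `now` falls between valid_at and invalid_at.
    An unset bound means the relation is unconstrained on that side."""
    now = now or datetime.now(timezone.utc)
    if valid_at is not None and now < valid_at:
        return False  # not yet in effect
    if invalid_at is not None and now >= invalid_at:
        return False  # already expired
    return True

t = datetime(2026, 1, 1, tzinfo=timezone.utc)
active = relation_active(valid_at=datetime(2025, 12, 1, tzinfo=timezone.utc), now=t)
```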
Use valid_at and invalid_at to model time-limited relationships — for example, a Supersedes edge that only takes effect after a transition date, or a Supports link that expires when a deprecation window closes.

Advanced Retrieval
MentisDB layers four complementary signals over its append-only chain — lexical (BM25), dense-vector (cosine), graph BFS, and session cohesion — and fuses them into a single ranked result set. The full algorithmic pipeline is described in the ranked-search pipeline blog post and the white paper; this section summarises the knobs you can turn.
Reciprocal Rank Fusion (RRF)
Set enable_reranking: true on any mentisdb_ranked_search call to rerank the top rerank_k candidates (default 50) by merging three independent rankings — lexical-only, vector-only, and graph-only — through Reciprocal Rank Fusion with damping constant k=60. RRF is robust when the absolute magnitudes of the component scores are not directly comparable.
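The fusion step can be sketched in a few lines. Only the k=60 damping constant comes from the description above; the thought ids and component rankings are made up.

```python
# Reciprocal Rank Fusion over three independent rankings (lexical, vector,
# graph): each document scores 1 / (k + rank) per list, summed across lists.
def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["t1", "t2", "t3"]
vector = ["t2", "t1", "t4"]
graph = ["t2", "t3", "t1"]
fused = rrf([lexical, vector, graph])  # "t2" wins: top-ranked in two lists
```

Because each contribution depends only on rank position, a document with a huge raw lexical score cannot swamp the fused ranking, which is exactly why RRF is robust when component score magnitudes are not comparable.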
Context Bundles
mentisdb_context_bundles returns seed-anchored grouped results instead of a flat list. Each bundle pairs one lexical seed with its graph-expanded neighbours in provenance order, so the agent can inspect why a supporting thought surfaced. Use bundles when you want to preserve evidence groupings rather than collapse everything into ranked rows.
Dense-Vector Sidecars
Vector state lives in rebuildable per-chain sidecars partitioned by chain, thought id, model, dimension, and embedding version. The daemon ships the fastembed-minilm provider by default — a 384-dimension MiniLM model running locally via ONNX, with no cloud dependency and no API key. Hybrid ranking blends lexical and cosine scores via a smooth exponential fusion that amplifies pure-semantic matches (~36×) and decays to additive composition as lexical evidence grows.
Branching Chains
mentisdb_branch_from forks a new chain from an existing thought. The branch receives a genesis thought with a BranchesFrom relation pointing at the fork point; ranked search on the branch transparently includes results from ancestor chains, so experimental or tenant-scoped branches can read shared context without cross-contaminating it. Ancestor discovery is transitive.
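Transitive ancestor discovery can be sketched as a walk up the fork points. The parent map below is an illustrative stand-in for following each branch's BranchesFrom relation.

```python
# branch -> parent chain, as recorded by each branch's genesis thought
# (illustrative chain keys).
parents = {"tenant-a": "shared", "shared": "root"}

def ancestors(chain_key):
    """Walk BranchesFrom links from a branch back to the root chain."""
    found = []
    while chain_key in parents:
        chain_key = parents[chain_key]
        found.append(chain_key)
    return found
```

A ranked search on "tenant-a" would then include results from both "shared" and "root", without either ancestor seeing the branch's own thoughts.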
Federated Cross-Chain Search
mentisdb_federated_search runs one ranked-search query across a list of chains concurrently and returns one merged, deduplicated, re-scored result set. Each hit carries the chain_key it originated from, so multi-agent hubs and cross-organisational memory aggregations can share a single query surface. Per-chain overrides let you apply different filters, limits, or RRF settings per chain.
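The merge step can be sketched as dedup-by-id across per-chain result sets. How the server re-scores duplicates is not specified here; this sketch assumes the best score wins.

```python
def merge_hits(per_chain):
    """Deduplicate hits by thought id across chains, keeping the best score;
    every surviving hit records the chain_key it came from."""
    best = {}
    for chain_key, hits in per_chain.items():
        for hit in hits:
            tagged = dict(hit, chain_key=chain_key)
            kept = best.get(tagged["id"])
            if kept is None or tagged["score"] > kept["score"]:
                best[tagged["id"]] = tagged
    return sorted(best.values(), key=lambda h: h["score"], reverse=True)

merged = merge_hits({
    "hub": [{"id": "t1", "score": 0.9}],
    "tenant-a": [{"id": "t1", "score": 0.4}, {"id": "t2", "score": 0.7}],
})
```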
Entity Types & Provenance
Per-Chain Entity Types
Attach an entity_type label to any thought — e.g. "incident", "customer", "deploy" — to categorise memory beyond free-form tags. Entity types are registered per chain through mentisdb_upsert_entity_type and discoverable via mentisdb_list_entity_types; each carries an optional description and a usage counter. Ranked search filters by entity type, and the dashboard explorer surfaces them as a first-class facet.
source_episode — Derived Memory Provenance
When a thought is derived from a larger episode (a conversation turn, an ingested document, a batch job), set the optional source_episode field to a stable identifier so every derived thought can later be traced back to its source. Ranked search filters by source_episode exactly the same way it filters by agent or tag.
Webhook Callbacks
Register an HTTP endpoint that MentisDB will POST to whenever a thought is appended to a chain. Useful for syncing an external index, triggering downstream workflows, or mirroring writes to observability pipelines.
Delivery is fire-and-forget with exponential-backoff retries (up to 5 attempts). Registrations persist to webhooks.json next to the chain registry and survive daemon restarts. Fan-out is bounded by a queue and concurrency semaphore, so bursty appends cannot spawn unlimited outgoing tasks.
MCP tools: mentisdb_register_webhook, mentisdb_list_webhooks, mentisdb_delete_webhook. REST routes are documented in the developer guide.
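The retry schedule described above can be sketched as exponential backoff capped at five attempts. The 0.5-second base delay is an assumption, not a documented value.

```python
# Exponential backoff: delay doubles on each failed delivery attempt,
# up to the documented cap of 5 attempts.
def backoff_delays(base=0.5, attempts=5):
    """Seconds to wait before each retry of a failed webhook delivery."""
    return [base * (2 ** i) for i in range(attempts)]

delays = backoff_delays()
```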
LLM-Extracted Memories
Turn raw text — a chat transcript, a pasted document, a ticket comment — into a review-ready slate of typed thoughts. The mentisdb_extract_memories tool calls a configured OpenAI-compatible model, validates the JSON schema of the candidate thoughts, and returns them to the caller. Nothing is written to the chain until the caller explicitly appends the reviewed candidates.
Defaults to gpt-4o, configurable via environment variables. The prompt enforces strict JSON output and the server validates schemas before return, so provider quirks (for example OpenAI-compatible endpoints that reject the response_format hint) cannot poison the chain. See llm-extracted-memories-design.md for the full contract.
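The review gate can be sketched as a validate-before-append step. The required field names below are assumptions for illustration, not the actual schema.

```python
import json

# Hypothetical minimum schema for a candidate thought.
REQUIRED = {"thought_type", "content"}

def validate_candidates(raw_json):
    """Parse the model's JSON output and keep only well-formed candidates.
    Nothing is appended to the chain here; the caller reviews `valid` first."""
    candidates = json.loads(raw_json)
    return [c for c in candidates if REQUIRED <= c.keys()]

raw = '[{"thought_type": "FactLearned", "content": "x"}, {"content": "no type"}]'
valid = validate_candidates(raw)
```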
Python Client (pymentisdb)
pymentisdb is the official Python client, published to PyPI. It wraps the REST surface with typed request and response objects and integrates natively with LangChain via MentisDbMemory.
pip install pymentisdb

from pymentisdb import MentisDbClient, ThoughtType
client = MentisDbClient("http://127.0.0.1:9472")
client.append_thought(
chain_key="my-chain",
agent_id="planner",
thought_type=ThoughtType.DECISION,
content="Adopt LRU eviction for the response cache",
)
hits = client.ranked_search(chain_key="my-chain", text="cache eviction", limit=5)

See the PyPI listing and the pymentisdb/ folder for the full API surface, typed relations, context bundles, and a working LangChain example.
CLI Subcommands
The mentisdb binary includes subcommands for interacting with a running daemon
from the terminal. These are useful for quick manual entries, scripting,
and debugging — no MCP client or dashboard needed.
All three subcommands require a running daemon. They connect to the REST port (default http://127.0.0.1:9472).
add — Add a thought
Adds a new thought to a chain. Use it for quick notes, scripted entries, or piping data into MentisDB.
mentisdb add "The sky is blue"mentisdb add "Session fact" --scope session --tag importantmentisdb add "Insight" --type insight --agent my-agent| Option | Description |
|---|---|
| --type | Thought type (default: fact-learned) |
| --scope | Memory scope: user, session, or agent |
| --tag | Add a tag (repeatable) |
| --agent | Agent ID for the thought |
| --chain | Chain key (uses daemon default if omitted) |
| --url | Daemon REST URL (default: http://127.0.0.1:9472) |
search — Search memories
Searches thoughts using the same ranked retrieval engine as the REST API and MCP tools. Returns JSON with score breakdowns.
mentisdb search "cache invalidation"mentisdb search "performance" --limit 5 --scope sessionPipe results through jq for scripting:
mentisdb search "deploy" --limit 20 | jq '.hits[].thought.content'| Option | Description |
|---|---|
| --limit | Maximum results (default: 10) |
| --scope | Filter by memory scope: user, session, or agent |
| --chain | Chain key (uses daemon default if omitted) |
| --url | Daemon REST URL (default: http://127.0.0.1:9472) |
agents — List registered agents
Shows agent IDs, display names, status, and thought counts. Useful for auditing which agents have written to your MentisDB instance.
mentisdb agents
mentisdb agents --chain my-project

| Option | Description |
|---|---|
| --chain | Chain key (uses daemon default if omitted) |
| --url | Daemon REST URL (default: http://127.0.0.1:9472) |