Every AI you use, one shared memory
Runs on your Mac. Private, fast, and entirely yours.
Two copy-pastes. That's it.
Connect your AI client to Covalence's MCP server, then tell it how to use memory.
Connect
Add the MCP server to your AI client's config. Covalence generates the snippet for you.
{
  "mcpServers": {
    "covalence": {
      "command": "/Applications/Covalence.app/Contents/MacOS/cov-mcp"
    }
  }
}
Instruct
Paste the behavioral prompt so your AI knows when to store and search memories.
You have access to Covalence, a persistent
memory layer. Search memory at the start
of conversations. Store important decisions,
context, and preferences. Use retrieved
context naturally …
Everything your AI needs to remember
Semantic search
Finds what you mean, not just what you typed. Hybrid vector + keyword search with recency weighting.
Core Memories
Pin your most important knowledge. Always surfaced first across every client.
Multi-client
Claude Desktop, Claude Code, Cursor — any MCP client, simultaneously. No conflicts.
Local embeddings
CoreML on Apple Neural Engine. No API keys, no cost, millisecond inference.
Always running
Menu bar app with one-click capture. Global hotkey for instant search.
Your data
SQLite file on your Mac. Export anytime as markdown or JSON. No cloud. No telemetry.
Under the hood
The stack
- Database: SQLite + sqlite-vec — single file, embedded, battle-tested
- Embeddings: CoreML on Apple Neural Engine — on-device, instant, free
- Search: Hybrid vector + FTS5 keyword, merged via Reciprocal Rank Fusion
- Protocol: MCP (stdio) — the standard for AI tool integration
- UI: Native SwiftUI — belongs on your Mac
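The merge step above can be sketched in a few lines. This is an illustrative Reciprocal Rank Fusion implementation, not Covalence's actual code; the function name, the constant `k = 60`, and the sample memory ids are all assumptions for the example:

```python
def rrf_merge(rankings, k=60):
    """Merge ranked lists with Reciprocal Rank Fusion.

    Each ranking is a list of ids, best first. A result's fused score
    is the sum over rankings of 1 / (k + rank), so items that appear
    near the top of several lists rise above items that top only one.
    """
    scores = {}
    for ranking in rankings:
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["m3", "m1", "m7"]   # e.g. nearest neighbors from the vector index
keyword_hits = ["m3", "m9", "m1"]  # e.g. FTS5 keyword matches
print(rrf_merge([vector_hits, keyword_hits]))  # → ['m3', 'm1', 'm9', 'm7']
```

Recency weighting, as in the search described above, could then be layered on by scaling each fused score with a decay factor based on the memory's age.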
Privacy by architecture
Your data never leaves your machine. No accounts. No telemetry. No cloud. Runs entirely on-device.
Every component — the database, the embedding model, the search engine — runs locally. There are no network calls, no API keys, no external dependencies. Your memories are a single SQLite file on your Mac that you own, back up, and export whenever you want.
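Because the store is a plain SQLite file, exporting is just a query. A minimal sketch of a JSON export, assuming a hypothetical `memories` table with `id`, `content`, and `created_at` columns (the real database path and schema may differ):

```python
import json
import sqlite3

def export_memories_json(db_path):
    """Dump every row of a hypothetical `memories` table to a JSON string."""
    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row  # rows become dict-like, keyed by column name
    rows = con.execute(
        "SELECT id, content, created_at FROM memories ORDER BY created_at"
    ).fetchall()
    con.close()
    return json.dumps([dict(row) for row in rows], indent=2)
```

The same approach works for the markdown export: iterate the rows and format each memory as a list item instead of a JSON object.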