unsubbed.co

Basic Memory

Basic Memory is a self-hosted tool for AI assistants and chatbots that provides a persistent knowledge store for LLMs.

Open-source AI memory for knowledge workers, honestly reviewed. No marketing fluff — just what you get when you let your AI write to Markdown files.


TL;DR

  • What it is: An MCP server that gives AI assistants (Claude, ChatGPT, Codex, Cursor) persistent memory stored as plain Markdown files on your machine or in the cloud [README][website].
  • Who it’s for: Developers, writers, researchers, and consultants who use AI daily and are tired of re-explaining their projects from scratch in every conversation. The pitch is aimed at heavy AI users specifically [website].
  • Cost savings: The self-hosted version is free (AGPL-3.0). The cloud version runs $19/mo ($14.25 with beta discount). Compared to paying for AI sessions that waste the first 10 minutes on context recap, the math depends on how much your time costs [pricing].
  • Key strength: Local-first, plain-text Markdown — you own the files, you can read them, edit them in any editor, and your AI sees your changes instantly. Bidirectional read-write, not just retrieval [README].
  • Key weakness: AGPL-3.0 license limits commercial embedding. The cloud tier is $19/mo for a single plan with no tiers — expensive for a memory layer that many technical users can self-host for free. No evidence of independent third-party reviews to validate marketing claims at time of writing.

What is Basic Memory

Basic Memory is an MCP (Model Context Protocol) server that turns your AI conversations into a persistent, searchable knowledge base stored as plain Markdown files. When you ask Claude to “create a note about the auth bug we just diagnosed,” it writes a .md file to ~/basic-memory/ on your machine. When you open a new conversation tomorrow and ask “what do I know about the auth system?”, the AI reads those files and picks up exactly where you left off [README].
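To make that concrete, here is a sketch of what such a note file might look like on disk. The frontmatter fields and the observation/relation syntax are illustrative assumptions, not the documented format:

```markdown
---
title: Auth Bug Diagnosis
tags: [auth, debugging]
---

# Auth Bug Diagnosis

## Observations
- [bug] Session tokens expire early because the TTL is set in seconds, not milliseconds
- [decision] Fix belongs in the token issuer, not in each client

## Relations
- relates_to [[Auth System Overview]]
```

Because it’s plain Markdown, you can edit this file in any editor and the AI sees the change on its next read.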

This sounds simple because it is. That’s the point.

The problem it solves is real: every LLM conversation starts fresh. Chat history exists, but it’s a flat transcript, not a knowledge structure. RAG (retrieval-augmented generation) lets AI query documents, but the AI can’t write back. Vector databases power sophisticated retrieval but are opaque black boxes you can’t audit. Basic Memory’s bet is that the right substrate for AI memory is the one you’ve always used for your own notes: plain text files [README].

The project is built in Python (3.12+), distributed as a PyPI package, licensed AGPL-3.0, and currently sits at 2,681 GitHub stars with 57,000+ downloads per month [merged profile][website]. It’s backed by Basic Machines, a small team judging by its GitHub presence, with a Discord community and an active changelog. The most recent major release (v0.19.0) added semantic vector search, a schema validation system, FastMCP 3.0 integration, and per-project cloud routing [README].

The cloud version — launched recently — adds cross-device sync (desktop, web, mobile) while keeping the local-first workflow intact. Cloud is explicitly optional: the open-source self-hosted version remains fully functional [README].


Why people choose it

The marketing is testimonial-heavy, and the testimonials are specific enough to be credible [website]:

One user (Alex, TrainerDay): “Basic Memory changed my whole relationship with LLMs… I switched from GPT and Gemini to exclusively Claude and Claude Code because of this integration and am completely revamping all our companies processes around a basic memory workflow now.”

A developer (@groksrc): “I don’t code without Basic Memory anymore. It’s such a time saver to be able to refer to projects I don’t currently have active and keep a running log of all of my learnings and ProTips.”

A creator (@Nellins): “Basic Memory turned my scattered notes and half-finished ideas into something coherent. It remembers what matters and builds on it — so I don’t have to start from scratch every time I sit down to work.”

Even Claude Code gets a pull quote: “Basic Memory enables true continuity of thought across sessions while keeping everything in standard files that users control completely. It’s a rare example of technology that augments human capabilities without creating new dependencies.” [website] — a testimonial from the AI this tool is designed to work with, which is either clever marketing or genuinely on-brand, depending on your cynicism level.

The consistent thread across all quoted users: the pain point isn’t AI capability, it’s AI amnesia. Every power user of Claude or Codex has hit the wall where the AI is smart enough to solve your problem but too stateless to remember the ten decisions you made last week that constrain the solution space. Basic Memory addresses that specific, daily friction.

What it doesn’t claim to be: an autonomous AI memory system that learns from everything you do. You choose what gets written. You can edit the files. The AI writes notes when you ask it to. This is deliberate — it keeps you in control of what the AI “knows” and prevents the knowledge base from becoming a noisy dump of everything that ever passed through a context window [README].


Features

Core memory engine:

  • MCP server that connects to any MCP-compatible client — Claude Desktop, Claude Code, Codex, Cursor, VS Code, Obsidian [README][website]
  • Notes stored as plain Markdown in ~/basic-memory/ (default) — viewable, editable, searchable with any text tool [README]
  • Bidirectional: AI reads existing notes AND writes new ones [README]
  • Knowledge graph that links notes across projects and conversations [website]
  • Import tool for ChatGPT conversation history — migrate everything to plain text [website]

Search and retrieval (v0.19.0+):

  • Hybrid search: full-text + semantic vector similarity via FastEmbed embeddings
  • Matched chunk text returned in search results for better context
  • Schema inference, validation, and diff tools — audit the structure of your knowledge base [README]
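Hybrid search generally means merging a keyword ranking with a vector-similarity ranking into one result list. Reciprocal rank fusion is one common way to combine them; the Python sketch below illustrates that general technique, not Basic Memory’s actual scoring:

```python
def reciprocal_rank_fusion(keyword_hits, vector_hits, k=60):
    """Merge two ranked lists of note IDs into a single ranking.

    Each note's score is the sum of 1 / (k + rank) across the lists
    it appears in; the constant k damps the influence of top ranks.
    """
    scores = {}
    for hits in (keyword_hits, vector_hits):
        for rank, note_id in enumerate(hits, start=1):
            scores[note_id] = scores.get(note_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A note ranked well by BOTH signals rises to the top of the merged list.
merged = reciprocal_rank_fusion(
    keyword_hits=["auth-bug", "deploy-notes", "api-design"],
    vector_hits=["api-design", "auth-bug", "meeting-log"],
)
```

The appeal of this family of approaches is that keyword search catches exact terms (error codes, names) while vector search catches paraphrases, and fusion needs no score normalization between the two.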

Multi-platform and cloud:

  • Per-project cloud routing — some projects local, others synced to cloud, via API key auth (basic-memory project set-cloud)
  • Cross-device sync: desktop, web, mobile (cloud tier)
  • Works with Obsidian vault sync [website]
  • Works with VS Code — edit memory files like any project file [website]

Developer tooling:

  • CLI with JSON output mode (--json) for scripting
  • Workspace-aware commands, htop-inspired project dashboard
  • Auto-update via uv tool and Homebrew installs; silent in MCP mode
  • edit_note append/prepend auto-creates if not exists; write_note has overwrite guard [README]
  • REST API included (listed in canonical features) [merged profile]
  • SQLite (local default) or PostgreSQL for storage [merged profile]
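For context on what “MCP server” means in practice: clients invoke tools such as write_note over JSON-RPC using the MCP tools/call method. The sketch below builds such a request; the argument names (title, folder, content) are assumptions for illustration, not the server’s documented schema:

```python
import json

def build_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request (JSON-RPC 2.0 envelope)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical write_note call an MCP client might send.
request = build_tool_call(
    request_id=1,
    tool_name="write_note",
    arguments={
        "title": "Auth Bug Diagnosis",
        "folder": "projects/auth",
        "content": "# Auth Bug Diagnosis\n\nTokens expire early...",
    },
)
payload = json.dumps(request)
```

Your MCP client (Claude Desktop, Cursor, etc.) handles this plumbing for you; it’s shown here only to demystify what the protocol layer is doing.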

Integrations:

  • Claude: Desktop, Code, and API
  • OpenAI Codex
  • Cursor
  • Obsidian (vault sync)
  • VS Code
  • “Anything that supports MCP” — the list extends beyond the named integrations [website]

Pricing: SaaS vs self-hosted math

Basic Memory Cloud:

  • One plan: $19/month ($14.25/month with current 25% beta discount)
  • Includes: full feature access, ChatGPT + Claude + Gemini support, unlimited notes and projects, mobile and web access, private exportable Markdown files
  • 7-day free trial, cancel anytime, data stays yours
  • No tiered pricing — one plan or self-host [pricing]

Self-hosted (AGPL-3.0):

  • Software: $0
  • A machine to run it on: your laptop is fine for local use
  • If you want a VPS for availability: $5–10/mo on Hetzner or Contabo
  • Setup time: 15 minutes for a technical user (uv install + config file edit)

The AGPL caveat: AGPL-3.0 is not MIT. If you embed Basic Memory in a commercial SaaS product, you must open-source your modifications and the combined work. For personal use or internal tooling, this doesn’t matter. For a product company planning to distribute Basic Memory as part of a commercial offering, read the license carefully [merged profile].

Concrete value math:

The savings aren’t against another subscription — they’re against context overhead. If you spend 10 minutes per AI session re-establishing context and run 3 sessions per day at a $20/hr effective cost, that’s 30 minutes a day, or ~$200/month in lost productivity over 20 working days. Basic Memory cloud at $14.25/mo is a cheap trade. Self-hosted at $0 (if you’re technical) is a no-brainer.
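A quick sanity check of that arithmetic, assuming 20 working days per month:

```python
minutes_per_session = 10   # time spent re-establishing context
sessions_per_day = 3
hourly_rate = 20.0         # $/hr effective cost of your time
working_days = 20          # per month (assumption)

hours_lost = minutes_per_session * sessions_per_day * working_days / 60
monthly_cost = hours_lost * hourly_rate  # dollars of lost productivity
```

Scale the inputs to your own rate and session count; the break-even against a $14.25/mo subscription arrives fast for anyone billing real hours.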

For non-technical users who can’t self-host, $19/mo is the ask. That’s more expensive than many single-purpose SaaS tools — positioned as a premium knowledge layer rather than a budget utility. Whether it’s worth it depends entirely on how much AI you use daily. Casual users (1–2 sessions per week) probably won’t feel the pain strongly enough to justify a subscription.


Deployment reality check

Self-hosted path (technical user, ~15 minutes):

# Install with uv (recommended)
uv tool install basic-memory

# Edit ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": ["basic-memory", "mcp"]
    }
  }
}

That’s it. Files land in ~/basic-memory/. The README also shows Homebrew as an install path, with auto-update built in for both [README].

What you actually need:

  • A Mac or Linux machine with Python 3.12+ (or just uv which handles Python)
  • Claude Desktop, Cursor, or another MCP-compatible client
  • About 5 minutes to write the config file

What can go sideways:

  • The AGPL license creates friction for any commercial embedding scenario. Worth checking before building a product on top of it.
  • SQLite is the default backend — fine for one person, but if you’re sharing a knowledge base across a team or want multi-device without the cloud tier, PostgreSQL setup adds complexity [merged profile].
  • The cloud tier is new and in beta (the 25% discount code in the README says BMFOSS and references “3 months” — this is launch-phase pricing, not necessarily permanent) [README]. Long-term cloud pricing is unconfirmed.
  • No third-party audits of the cloud sync infrastructure are publicly documented. For sensitive knowledge bases (client work, proprietary research), self-hosted is the more defensible choice.
  • The auto-update behavior runs silently in MCP mode but fires every 24 hours — worth knowing if you’re running this in a constrained environment [README].

For non-technical founders: Setup requires editing a JSON config file. That’s within reach of anyone who follows a guide, but it’s not a web UI install. Budget 30–60 minutes if you’ve never touched config files before, including the Claude Desktop restart and verification step.


Pros and Cons

Pros

  • Plain text, you own it. Every note is a Markdown file in a directory you control. No proprietary database, no vendor lock-in. Edit your memory in Obsidian, VS Code, or vim. Your AI sees the change instantly [README][website].
  • Bidirectional read-write. Most “AI memory” tools are retrieval-only — the AI queries your docs but can’t update them. Basic Memory lets AI write notes during conversations, building the knowledge base passively as you work [README].
  • MCP-native. Works with anything that supports MCP: Claude, Codex, Cursor, and future tools as MCP adoption grows. Not locked to one AI vendor [README].
  • Hybrid search that actually works. v0.19.0 combines full-text and semantic vector search — better than pure keyword matching for finding connected ideas [README].
  • Knowledge graph. Notes link to each other across projects. Mention someone Monday, ask about their project Friday — the connection is already there [website].
  • Cross-device via optional cloud. Local-first by default; add cloud when you need mobile or multi-device without losing the plain-text architecture [README].
  • Free self-hosted tier is real. The AGPL-3.0 version is not crippled. Full features, local use, no subscription required [README][pricing].
  • ChatGPT history import. If you’re migrating from ChatGPT, you can pull in your entire conversation history as searchable plain-text files [website].

Cons

  • AGPL-3.0 limits commercial embedding. Not MIT. If you’re building a product that bundles Basic Memory, the license requires open-sourcing your modifications. Fine for personal use; a consideration for product teams [merged profile].
  • $19/mo for cloud is premium pricing for a memory layer. There are no tiers — it’s full price or self-host. Casual AI users won’t recoup the cost [pricing].
  • No third-party reviews available. At time of writing, we could not locate independent assessments from tech publications or community sites. This review relies entirely on primary sources: the README, official website, and testimonials curated by the vendor. Treat claims accordingly.
  • Young product, beta cloud. The 25% “beta discount” on cloud and the rapid changelog suggest active development. The core CLI appears stable; the cloud sync infrastructure is newer and less battle-tested.
  • Python dependency. Unlike Electron apps or browser extensions, Basic Memory requires Python 3.12+ and uv for the best install experience. Not every non-technical user will clear this bar without help.
  • No team/multi-user features. This is a single-user knowledge layer. If you want shared team memory with permissions and collaborative editing, that’s not what this tool is today [merged profile].
  • You have to build the habit. Basic Memory only works if you actually ask the AI to write notes. Unlike systems that passively record everything, this requires intentional use. Users who don’t build the habit will find an empty knowledge base after a month.

Who should use this / who shouldn’t

Use Basic Memory if:

  • You run multiple AI sessions per day — coding, writing, research, consulting — and you spend real time re-establishing context in each one.
  • You already use Markdown-based tools (Obsidian, VS Code, any text editor) and want your AI knowledge to live in the same format.
  • You’re on Claude Desktop or Cursor and want persistent memory without managing a separate database.
  • You’re a developer who wants to give Claude Code context about ongoing projects without manually pasting README files every session.
  • You want to self-host and pay nothing — the AGPL-3.0 license is fully functional for personal use.

Skip it (stay simple) if:

  • You use AI a few times per week and don’t feel the amnesia problem. The overhead of building a memory system may outweigh the benefit.
  • You’re non-technical and unwilling to edit a JSON config file. The setup isn’t hard, but it’s not a one-click install either.
  • You need team-wide shared memory with access controls. This is a personal knowledge layer, not a team knowledge base.

Skip it (use a different tool) if:

  • You’re building a commercial product that needs to embed AI memory — AGPL-3.0 may constrain your architecture.
  • You want fully passive memory that captures everything without any prompting. Basic Memory requires intentional note-writing; if you want a system that auto-ingests your Slack messages, emails, and docs, look elsewhere.
  • Your compliance requirements prohibit data leaving your machine and you need cross-device sync — the cloud tier syncs to external servers. Self-hosted local-only is your option.

Alternatives worth considering

  • Mem0 — commercial AI memory service (managed, API-first). More automated than Basic Memory — it can ingest from many sources — but it’s a black box, the data isn’t yours in plain text, and pricing is usage-based. Better fit if you want passive ingestion and don’t care about owning the format.
  • Obsidian + local plugins — if you’re already deep in Obsidian, Smart Connections and similar plugins provide some AI-over-notes capability. More customizable, no MCP standard, higher setup complexity.
  • A simple context file — the zero-cost alternative. Maintain a CONTEXT.md per project, paste it manually at session start. Works, costs nothing, scales poorly as knowledge grows, and requires you to manage it entirely yourself.
  • Zep — open-source long-term memory for AI agents, database-backed, more oriented toward agents and production apps than personal knowledge. Different use case: Zep is for apps you build, Basic Memory is for AI tools you use.
  • Custom RAG pipeline — for technical teams, building your own embedding + retrieval layer on top of a vector DB (Qdrant, Weaviate) gives maximum control. Much higher setup cost. Only worth it if your requirements exceed what Basic Memory’s standard pipeline provides.

The honest competitive comparison: Basic Memory is the only MCP-native, local-first, plain-text AI memory tool with a managed cloud option at this level of polish. The category is new enough that alternatives are either more complex (build-your-own), more opaque (managed black-box services), or less integrated (plugin-based). That positioning will change as MCP adoption grows.


Bottom line

Basic Memory is solving a real problem — AI amnesia across sessions — with the right architectural instinct: store knowledge as plain text you own, let the AI write to it, let hybrid search find it. The MCP-native approach means it works across Claude, Codex, and Cursor without per-tool integrations. For a developer or knowledge worker who runs AI sessions daily and keeps re-explaining themselves, the self-hosted version is a free install that pays back in the first week.

The friction points are real too: AGPL-3.0 limits commercial embedding, the cloud tier at $19/mo is a significant ask for a single-user utility, the product is early (beta cloud, rapid changelog), and there are no independent third-party assessments yet to validate the vendor’s own claims. If you’re cautious, self-host first — it’s fully featured and costs nothing. If the local-only workflow covers your needs, you may never need the cloud.

The non-technical founder who wants this but can’t self-host is the target for a one-time deployment service — exactly the kind of setup that upready.dev handles.


Sources

Note: Third-party review sources provided in the research brief for this article were unrelated to Basic Memory (returned movie theater listings). This review is based entirely on primary sources.

  1. Basic Memory GitHub README — basicmachines-co/basic-memory, AGPL-3.0, Python. https://github.com/basicmachines-co/basic-memory
  2. Basic Memory official website — homepage, features, testimonials. https://basicmemory.com
  3. Basic Memory pricing page — cloud plan, trial terms, OSS alternative. https://basicmemory.com/pricing
  4. Basic Memory documentation — install guides, CLI reference, MCP configuration. https://docs.basicmemory.com
  5. PyPI: basic-memory — download stats, version history. https://pypi.org/project/basic-memory/
