Your AI memory, beside you.
beside sits next to your work, capturing the apps, decisions, and half-finished threads that usually disappear. It indexes them into local memory, surfaces what matters, and keeps every AI agent grounded in your actual context — 100% on your machine, every line of code open source.
Your day, written beside you as it happens.
While you move through apps, beside keeps pace: writing a Markdown wiki on your disk with the topics, tags, and decisions that belong next to the work itself, all under ~/beside/wiki. No filing. No formatting. No forgetting.
- Pages re-organise themselves as your work evolves
- Plain Markdown — grep it, edit it, version it (see the sketch below)
- Tags emerge from your actual signals, not a fixed schema
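Because the wiki is plain files on disk, any text tool already works on it. As a minimal sketch, here is a pure-Python stand-in for grep; it assumes nothing beyond the ~/beside/wiki path above, and the page layout is simply whatever beside has written there:

```python
from pathlib import Path

WIKI = Path.home() / "beside" / "wiki"

def search_wiki(term: str) -> None:
    """Print every wiki line that mentions `term`, with its file and line number."""
    for page in sorted(WIKI.rglob("*.md")):
        for lineno, line in enumerate(page.read_text(encoding="utf-8").splitlines(), start=1):
            if term.lower() in line.lower():
                print(f"{page.relative_to(WIKI)}:{lineno}: {line.strip()}")

search_wiki("pricing")  # same idea as: grep -rin pricing ~/beside/wiki
```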
From beside your screen to inside your AI's memory.
Four quiet loops run beside everything you do. Together they turn the signals on your computer into structured memory your AI agents can actually use.
Watches every app, quietly.
Screenshots, active window, URLs, idle state — appended locally with negligible overhead.
Shapes raw signals into knowledge.
A local model extracts entities and topics, then continuously refactors the wiki.
Pins the moments that matter.
Patterns, follow-ups, half-finished threads — beside surfaces them when you'll need them.
Remembers it — for every AI you use.
Claude, Cursor, ChatGPT — any MCP agent — gets persistent context, on demand.
The context beside you, available inside every AI.
beside speaks MCP. Plug it into Claude, Cursor, ChatGPT — or any MCP-compatible agent — and ask questions that normally send you digging through six apps, three meetings, and yesterday's tabs.
The agent asks; beside remembers. Matching context comes from your local knowledge base, so answers are grounded in what was actually beside you this week. Your raw data never leaves your machine.
- “What are my open items?”
- “Summarise this week with Acme.”
- “What did we decide on pricing?”
- “Draft a follow-up from yesterday's call.”
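Under the hood these are ordinary MCP calls, so any client can ask the same questions. As a rough sketch using the official `mcp` Python SDK (the launch command `beside mcp` and the tool name `search_memory` are illustrative assumptions, not beside's documented interface):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical launch command; check beside's docs for the real one.
server = StdioServerParameters(command="beside", args=["mcp"])

async def ask(question: str) -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what beside actually exposes
            print([tool.name for tool in tools.tools])
            # "search_memory" is a placeholder tool name for illustration.
            result = await session.call_tool("search_memory", {"query": question})
            print(result.content)

asyncio.run(ask("What are my open items?"))
```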
The layer between your work and the AI beside it.
LLMs forget. Agents start from zero. beside stays close to the work itself, continuously turning what happens on your computer into recallable memory that any tool can use.
100% local-first
Captures, embeddings, and indexes live on your disk as JSONL + SQLite. Bring your own model — Ollama, llama.cpp, OpenAI, Anthropic — or run fully offline.
Open source · MIT
Every capture path, every prompt, every byte we touch is auditable on GitHub. Fork it, extend it, self-host it. No black boxes.
Silent capture
Screenshots, active window, URLs, idle state — captured locally with negligible overhead. Nothing leaves your machine unless you say so.
Self-organising knowledge
A local model turns captures into structured notes, topics, and timelines. The wiki re-organises itself as your work evolves.
Proactive surfacing
beside notices the moments that should stay beside you — patterns, follow-ups, half-finished threads — and quietly pins them where you'll see them.
Memory for any agent
Ship rich context to Claude, ChatGPT, Cursor and any MCP-compatible agent — so they remember yesterday, last week, last quarter.
How beside turns activity into living memory.
The same four loops, in technical detail. Each stage is a swappable plugin — capture, storage, model, index, export.
1. Capture
The capture layer records screenshots, focused windows, URLs and idle events — running silently in the background with negligible overhead.
2. Store
Raw events are appended locally to immutable JSONL + SQLite. Nothing is destructive; everything is replayable.
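The pattern is deliberately boring: one event, one JSON line, appended and never rewritten, so the whole history can be replayed from the top. A minimal sketch of that append-only discipline (the path and field names are illustrative, not beside's actual schema):

```python
import json
import time
from pathlib import Path

LOG = Path.home() / "beside" / "events.jsonl"  # illustrative path, not beside's real layout

def append_event(kind: str, **fields) -> None:
    """Record one immutable event as a single JSON line; never edit, only append."""
    LOG.parent.mkdir(parents=True, exist_ok=True)
    event = {"ts": time.time(), "kind": kind, **fields}
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

append_event("window_focus", app="Figma", title="Pricing page v3")
append_event("idle", seconds=420)

# Replay is just reading the lines back in order:
history = [json.loads(line) for line in LOG.read_text(encoding="utf-8").splitlines()]
```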
3. Index & surface
A local LLM extracts entities, topics, and intents, continuously refactors the wiki, and surfaces patterns worth your attention.
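Extraction needs nothing exotic: a small local model and a strict JSON contract. A sketch of the idea using the `ollama` Python client (the model name, prompt, and output keys are assumptions for illustration; beside's real prompts are auditable in the repo):

```python
import json

import ollama  # pip install ollama; assumes a local Ollama server is running

def extract(capture_text: str) -> dict:
    """Ask a local model to pull entities, topics, and intents out of one capture."""
    response = ollama.chat(
        model="llama3.1",  # illustrative; bring whatever local model you prefer
        messages=[{
            "role": "user",
            "content": (
                "Return only JSON with keys entities, topics, intents "
                f"(arrays of strings) for this activity capture:\n{capture_text}"
            ),
        }],
        format="json",  # constrain the reply to valid JSON
    )
    return json.loads(response["message"]["content"])

print(extract("Figma / Pricing page v3 / comparing tiered vs usage-based plans"))
```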
4. Recall
Expose your memory to any AI agent over MCP, Markdown, or a simple API. Context engineering, finally automated.
Put your memory beside every AI you use.
Install beside once. Claude, Cursor, ChatGPT, and every MCP agent get the context that has been working beside you all along.
Windows & Linux — coming soon.
