SYNAPTIC · ORGANIC MEMORY MANAGER · v2.0.0b1

Organic Agent Memory

This project started out as a fun way to visualize my agents' memories, but it evolved into a fully functional memory management system for AI agents. It is inspired by the way humans store and retrieve memories, and is built on a foundation of peer-reviewed neuroscience research.

A 3D rendering of a brain visualization. Neurons (glowing dots) are placed in anatomical regions; synaptic pathways pulse between them as the brain rotates.

Oh, you're here early!

I'm still working on the public beta of this program, but here's a preview of what it can do. Images and features are subject to change in minor ways; they are already implemented and just need refinement.

  • Why Organic Memory? · Utilization of natural processes to optimize cycles.
  • Electric Sheep · What the system does while it sleeps.
  • Medical Dashboard · Inspired by sci-fi medical records, your data comes to life.
  • The science · The 25 peer-reviewed studies the architecture is built on.
  • Install · Get it running in a couple of minutes.
  • Wire your AI · Claude Code, MCP, OTel, custom SDK.

Why an organic memory manager?

I started this project with a relatively simple goal: I wanted to see what my agents looked like when they were thinking. I wanted to see the gears in the machine spinning and churning away. I wanted to see the patterns, the way they connect to one another. I wanted to see the forest for the trees. I found that the memory managers I was using did a great job at managing memories, but not at explaining the relationships between them.

So I built the brain visualizer, and it helped me better understand the relationships between memory modules. But as I looked at this bundle of neurons and synapses I had stitched together, I realized I was one step away from a full memory management system. I was already storing the memories, indexing and sorting them, but I was spending my agent's tokens to tag things as they were stored, and the other multi-agent memory manager I was using needed an external API to process incoming memories. It's just indexing; why not run it locally?

By the time I had added a local 3.1b Ollama agent in a Docker container to index the memories and build the synapses, I realized I could take this a step further and build a full custom memory manager: one that didn't rely on an external API and could be designed from the ground up with the advanced features I wanted, partly as a cost-saving measure. That's when I came up with the idea of running the system while the host PC or server is idle, and since I was already running wild with brain terminology, I called it Dreaming. That's when everything clicked.

A week later, I had built a fully functional memory manager inspired by how we humans store and retrieve memories, powered by the actual science that explains how we ourselves do it. What started as a visualizer became a system that could grow with its user, associate and develop complex neural connections between memories, and slowly move inactive ones into a long-term archive. All the benefits of the Human OS, without the data loss.

What's wrong with traditional memories?

  • LLM memories are just more text to be processed in the current context, bloating it more and more over time.
  • Recall quality degrades over time as the embedding space saturates.
  • The system never thinks about its own contents — it just retrieves them.
  • Surprising connections between distant memories never surface unless you ask the exact right query.
  • There's no concept of understanding a topic — just facts arrayed against each other.
  • If tags aren't strictly curated, they can fragment or mislabel, causing recall to fail.
  • The memory system doesn't convey what's important, the how and the why; it only offers facts strung together.

What Synaptic does differently

  • Triage — every memory gets a salience score; weak ones decay, strong ones reinforce.
  • Consolidation — duplicate memories merge; clusters synthesize into single richer entries.
  • Cross-region bridges — surprising connections between distant memories surface automatically.
  • Schemas — the system abstracts patterns from groups of related memories, developing complex pathways.
  • Replay — REM-style recombination produces new associations that traditional tagging can't develop on its own.
  • Context memory — composite frameworks distilled from many specific experiences capture habits and patterns.
  • Archival — old memories lose salience and Synaptic archives them, preventing hallucinations of outdated info.
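A minimal sketch of the triage behaviour above: salience-weighted decay plus recall reinforcement. The field names and the weighting here are illustrative assumptions, not Synaptic's actual formula.

```python
from dataclasses import dataclass

# Hypothetical sketch: salient memories decay slowly, recalled ones reinforce.
# Field names and constants are illustrative, not Synaptic's real parameters.

@dataclass
class Memory:
    text: str
    salience: float   # 0..1, importance assigned at encoding
    strength: float   # 0..1, current recall strength

def nightly_decay(mem: Memory, base_decay: float = 0.05) -> None:
    """Weak memories fade; high-salience memories decay more slowly."""
    mem.strength = max(0.0, mem.strength - base_decay * (1.0 - mem.salience))

def reinforce(mem: Memory, boost: float = 0.1) -> None:
    """A recall bumps strength, capped at 1.0."""
    mem.strength = min(1.0, mem.strength + boost)

m = Memory("docker compose tips", salience=0.8, strength=0.5)
nightly_decay(m)  # 0.5 - 0.05 * (1 - 0.8) = 0.49
reinforce(m)      # 0.49 + 0.1 = 0.59
```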

Do Agents Dream of Electric Sheep?

Once a night (or on demand), Synaptic runs a 12-phase consolidation pipeline modelled on biological sleep stages. Each phase corresponds to a real brain mechanism. You can see every phase fire in the Dream Journal panel; you can tune any of them; you can read about why it exists in the citation linked from the panel.

PHASE 0a
Light encoding
Embed each new memory via Tier 1 model, classify into one of 14 anatomical regions, mark light_encoded. Sub-second per memory.
PHASE 0b
Deep enrichment (REM-style)
Per-memory Tier 2 re-summarisation, retag, region refinement. Resumable across runs; raw text is preserved (R10 immutability) — refined output lives in enriched_text.
PHASE 1
Dedup
Cosine-NN merges near-duplicates. The older memory is retained; a merged-from list preserves provenance.
PHASE 2
Synthesis
Cluster similar memories into a single richer one. The synthesis lists its sources; sources stay searchable.
PHASE 5
SHY weighted decay
Synaptic homeostasis. Decay multiplier scaled by salience + recall + schema fit. Weak memories fade; strong ones stay.
PHASE 7
Cross-region bridges
Two-pool design: high-cosine NN for the obvious, low-cosine + shared rare tags for the surprising. The latter is where genuine creativity lives.
PHASE 8
Schema formation
Abstracts patterns from groups of related syntheses. Schemas decay if not reinforced; survive only if the user keeps engaging with the topic.
PHASE 8.5
Schema reframing
For each schema, finds prior memories drifting toward its abstraction and links them as sources. Text never rewritten.
PHASE 8.7
Context memory (planned)
REM-style composite frameworks distilled from many specific events in the same context. The "operational understanding" of a topic.
PHASE 9
Reinforcement + rebalancing
Bumps strength on recalled + weak-but-salient memories. Then a global rebalance keeps the recall-strength distribution well-shaped.
PHASE 10
Replay
REM-style narrative replays mix recent and pre-existing memories. Outputs feed back into Phase 7 + 8 as new associations.
PHASE 12
Dream entry
A short generated prose entry summarising what consolidated. Surreal, archetype-tagged, written in dream voice. Your morning record of how the system thought about its bank overnight.
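The two-pool bridging idea in Phase 7 can be sketched roughly as follows. The thresholds, the shared-tag rule, and all names are assumptions about the general technique, not Synaptic's actual parameters.

```python
import math

# Illustrative two-pool bridge selection: high-cosine neighbours form the
# "obvious" pool; distant pairs that still share a tag form the "surprising"
# pool. Thresholds and names are assumptions, not Synaptic's real values.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def bridges(memories, hi=0.85, lo=0.35):
    """memories: list of (id, embedding, tag_set). Returns (obvious, surprising)."""
    obvious, surprising = [], []
    for i in range(len(memories)):
        for j in range(i + 1, len(memories)):
            ida, va, ta = memories[i]
            idb, vb, tb = memories[j]
            sim = cosine(va, vb)
            if sim >= hi:                 # pool 1: near neighbours
                obvious.append((ida, idb))
            elif sim <= lo and ta & tb:   # pool 2: distant but sharing a tag
                surprising.append((ida, idb))
    return obvious, surprising

mems = [
    ("m1", [1.0, 0.0], {"docker"}),
    ("m2", [0.9, 0.1], {"docker"}),
    ("m3", [0.0, 1.0], {"docker", "dreams"}),
]
ob, sur = bridges(mems)  # m1-m2 are obvious; m1-m3 and m2-m3 are surprising
```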

The Atlas

The dashboard's Atlas Connectome Explorer is where you actually look at your AI's mind. Eight tabs, each with focused sub-pages. Here's what you'll see when you walk through it.

CT Scan — live activity

Brain page

A CT Scan of your Agents' memory bank. Each bright spot is an active Memory, and every strand a Synapse.

Dream Journal — last night's work

Dream Journal

Dream Journal: progress card + generated dream entry per nightly run, with archetype caption + expandable phase details.

Maps — concepts, projects, people, technologies

Maps

Maps: connected topics in your agents' memory bank.

Trace detail — a single memory's lifecycle

Trace

Trace detail: lifecycle badges, salience, recall strength, synthesis lineage, surprising cross-region partners.

What it can do

For the AI agent

  • Auto-classified memories — every saved memory lands in the right brain region without manual tagging
  • Recall that improves over time — recently used memories get reinforced, irrelevant ones decay
  • Surprising associations — the cross-region bridges surface connections you didn't know to ask about
  • Schemas — abstract patterns the agent can use to reason about families of memories
  • Tier 3 research augmentation — when a topic is thinly populated, the system can fill it via OpenAI / Anthropic / local AirLLM
  • Sensitive-aware — flagged memories never reach Tier 3 or external egress; redacted in audit; sticky-on policy

For you

  • A morning dream entry — short surreal prose summarizing what consolidated overnight
  • Live brain visualization — see your AI's "mind" pulse as it works in real time
  • Map view — what concepts/projects/people/technologies are in the bank
  • Atlas search — "/" anywhere; prefix-scoped, ranked, recent-history-aware
  • Audit log — every consolidation operation, with before/after diffs
  • Privacy panel — what's flagged sensitive and why
  • Per-tier model controls — local Ollama, OpenAI, Anthropic, AirLLM
  • Mobile + remote — Cloudflare Tunnel + bearer-token auth; works from any network

The science

Synaptic's architecture is grounded in 25 peer-reviewed studies on memory, sleep, and consolidation. Every nightly phase maps to a specific finding; every refinement we've shipped (R1 through R15) cites a source. The citations below are not decoration — they are the load-bearing structure of the system.

If you're citing Synaptic in academic work, please also cite the underlying primary sources. The architecture is novel; the mechanisms it implements are not.

Foundations — system + synaptic consolidation

[1] Stickgold & Walker (2013). Sleep-dependent memory triage: Evolving generalization through selective processing. Nature Neuroscience, 16(2), 139-145.
[2] Stickgold & Walker (2010). Overnight alchemy: Sleep-dependent memory evolution. Nature Reviews Neuroscience, 11(3), 218.
[3] Paller, Creery & Schechtman (2021). Memory and sleep: How sleep cognition can change the waking mind for the better. Annual Review of Psychology, 72, 123-150.

Synaptic homeostasis (SHY)

[4] Tononi & Cirelli (2014). Sleep and the price of plasticity: From synaptic and cellular homeostasis to memory consolidation and integration. Neuron, 81(1), 12-34.
[5] Tononi & Cirelli (2020). Sleep and synaptic down-selection. European Journal of Neuroscience, 51(1), 413-421.

Hippocampal replay and sharp-wave ripples

[6] Buzsaki (2015). Hippocampal sharp wave-ripple: A cognitive biomarker for episodic memory and planning. Hippocampus, 25(10), 1073-1188.
[7] Schapiro, McDevitt, Rogers, Mednick & Norman (2018). Human hippocampal replay during rest prioritizes weakly learned information and predicts memory performance. Nature Communications, 9(1), 3920.
[8] Lewis, Knoblich & Poe (2018). How memory replay in sleep boosts creative problem-solving. Trends in Cognitive Sciences, 22(6), 491-503.

Synaptic tagging and capture

[9] Ibrahim, Wang & Sajikumar (2024). Synapses tagged, memories kept: Synaptic tagging and capture hypothesis in brain health and disease. Philosophical Transactions of the Royal Society B, 379(1906), 20230237.
[10] Moncada, Ballarini & Viola (2015). Behavioral tagging: A translation of the synaptic tagging and capture hypothesis. Neural Plasticity, 2015, 650780.

Sleep-dependent transformation

[11] Lacaux, Andrillon, Bastoul, Idir, Fonteix-Galet, Arnulf & Oudiette (2021). Sleep onset is a creative sweet spot. Science Advances, 7(50), eabj5866.

Schema integration

[12] Aghayan Golkashani, Ghorbani, Leong, Ong & Chee (2023). Advantage conferred by overnight sleep on schema-related memory may last only a day. SLEEP Advances, 4(1), zpad019.
[13] Ashton, Staresina & Cairney (2022). Sleep bolsters schematically incongruent memories. PLOS One, 17(7), e0269439.

Emotional memory and selectivity at encoding

[14] Payne & Kensinger (2018). Stress, sleep, and the selective consolidation of emotional memories. Current Opinion in Behavioral Sciences, 19, 36-43.
[15] Hutchison & Rathore (2015). The role of REM sleep theta activity in emotional memory. Frontiers in Psychology, 6, 1439.
[16] Nishida, Pearsall, Buckner & Walker (2009). REM sleep, prefrontal theta, and the consolidation of human emotional memory. Cerebral Cortex, 19(5), 1158-1166.

REM-specific mechanisms (and the skeptical view)

[17] Liu, Pikovsky, Cohen et al. (2023). Human REM sleep recalibrates neural activity in support of memory formation. Science Advances, 9(34), eadj1895.
[18] Johnson (2005). REM sleep and the development of context memory. Medical Hypotheses, 64(3), 499-504.
[19] Siegel (2001). The REM sleep-memory consolidation hypothesis. Science, 294(5544), 1058-1063.
[20] Sarangi & Paital (2021). Association between REM sleep and strengthening memory: A mini review. Journal of Clinical Images and Medical Case Reports, 2(6), 1451.
[21] Liu, Chen, Xia, Zeng, Xue & Hu (2025). Slow-wave sleep and REM sleep differentially contribute to memory representational transformation. Communications Biology, 8, 1302.

Forgetting and pruning

[22] Poe (2017). Sleep is for forgetting. Journal of Neuroscience, 37(3), 464-473.
[23] Genzel & Wixted (2019). Cellular and systems consolidation of declarative memory. Frontiers in Cellular Neuroscience, 13, 71.

REM-specific recombination + bizarre dream content

[24] Cai, Mednick, Harrison, Kanady & Mednick (2009). REM, not incubation, improves creativity by priming associative networks. PNAS, 106(25), 10130-10134.
[25] Wamsley, Trost & Tucker (2024). Memory updating in dreams. SLEEP Advances, 5(1), zpae096.

Full citations with mapping to architectural decisions →

Install

Option A — Everything in Docker

git clone https://github.com/nomadsgalaxy/Synaptic-Disorder.git
cd synaptic-disorder
docker compose up
# open http://localhost:9911

Brings up the SD Core stack: a single Go binary that serves the dashboard, hosts the API + WebSocket, runs the consolidation pipeline, and persists memories to SQLite. Plus per-tier Ollama containers (one per LLM model, so each can be spun up on demand). No Python on the host, no Node, no manual ollama pull.

Option B — Mock-only dashboard (no backend)

python serve.py
# Dashboard at http://localhost:8765

Brain runs on the built-in mock event engine. Real adapter events can't reach it without SD Core; useful for evaluating the UI before committing to the full install.

Option C — Native Go binary (no Docker)

cd bridge/core
go build -o sd-core .
./sd-core --listen 127.0.0.1:9911 --data-dir ../..

Single Go binary, ~15 MB static, no glibc dependency. Native Ollama for thought-bubble generation: ollama serve & ollama pull llama3.2:3b.

For lower-spec hardware (Raspberry Pi 4/5), use a smaller model:
SD_OLLAMA_MODEL=llama3.2:1b docker compose up
Pi-friendly options include qwen2.5:0.5b, tinyllama:1.1b, gemma2:2b, phi3:mini.

Remote operation

Run the dashboard on one machine and wire AI clients on others. See docs/REMOTE.md for the two transport options.

Wire your AI client

Adapters push live events to SD Core so the dashboard reflects what your agent is doing. Each bridge ships with a config.json — edit two fields (URL + token) and you're set.
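As a sketch, such a config.json might look like the following; the two key names shown here are assumptions, so check the file shipped with your bridge for the actual fields.

```json
{
  "url": "http://localhost:9911",
  "token": "YOUR_BEARER_TOKEN"
}
```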

Claude Code (recommended — hooks + MCP)

Marketplace install. Hooks observe PreToolUse, PostToolUse, UserPromptSubmit, SessionStart/End, etc.; the bundled MCP server adds 17 tools (memory CRUD, research, audit/budget, dream-pipeline control).

/plugin marketplace add https://github.com/nomadsgalaxy/Synaptic-Disorder
/plugin install synaptic-disorder-claude-code@synaptic-disorder
Claude Desktop / Cursor / Cline / Continue / Gemini CLI — MCP

Edit bridge/mcp-adapter/config.json, then register the MCP server in your client. See AGENT_QUICKSTART.md for per-client snippets.

OpenTelemetry-instrumented apps (LangChain, LlamaIndex, AutoGen, CrewAI…)

Edit bridge/otlp-receiver/config.json, start the receiver, point your OTEL_EXPORTER_OTLP_ENDPOINT at it. Works with any OpenInference instrumentation.
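With the receiver running, exporting the standard OpenTelemetry endpoint variable is all an instrumented app needs. The port below is an assumption (the OTLP/HTTP default); use whatever bridge/otlp-receiver/config.json actually sets.

```shell
# Standard OTel environment variable; any OTLP exporter honors it.
# Port 4318 is the OTLP/HTTP default and is an assumption here.
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```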

Anything custom — Adapter SDK
import synaptic_disorder as sd

with sd.connect(adapter_id="my-agent", model="claude-opus-4-7") as client:
    client.emit("tool_call", {"tool_name": "bash"}, region_hint="motor_cortex")

TypeScript SDK has the same shape. Examples in bridge/sdk/python/examples/ and bridge/sdk/typescript/examples/.