A standalone memory layer is introduced that adds importance scoring, temporal decay, and batch conflict resolution, giving finer control over memory in LangChain applications.
If you've been using LangChain's built-in memory modules and wanted more control over how memories are scored, decayed, and conflict-resolved, I built widemem as a standalone alternative.

Key differences from LangChain memory:

- Importance scoring: each fact gets a 1-10 score, and retrieval is weighted by similarity + importance + recency
- Temporal decay: configurable exponential/linear/step decay, so old trivia fades naturally
- Batch conflict resolution: adding contradictory info triggers automatic resolution in a single LLM call
- Hierarchical memory: facts roll up into summaries and themes, with automatic query routing
- YMYL prioritization: health/legal/financial facts are immune to decay

It's not a LangChain replacement; it handles memory specifically, so you can use it alongside LangChain for the rest of your pipeline. It works with OpenAI, Anthropic, Ollama, FAISS, Qdrant, and sentence-transformers, and runs on SQLite + FAISS out of the box with zero config.

pip install widemem-ai

GitHub: [https://github.com/remete618/widemem-ai](https://github.com/remete618/widemem-ai)
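To make the scoring idea concrete, here's a minimal sketch of how similarity, importance, and recency can be blended into one retrieval score with exponential decay. This is an illustration of the concept only, not widemem's actual API; the function names, weights, and default half-life are my own assumptions.

```python
def exponential_decay(age_seconds: float, half_life_seconds: float) -> float:
    """Recency weight that halves every `half_life_seconds` (illustrative)."""
    return 0.5 ** (age_seconds / half_life_seconds)


def retrieval_score(similarity: float, importance: int, age_seconds: float,
                    half_life_seconds: float = 7 * 24 * 3600,
                    w_sim: float = 0.6, w_imp: float = 0.25,
                    w_rec: float = 0.15) -> float:
    """Blend cosine similarity, a 1-10 importance score, and recency.

    Weights are hypothetical; widemem may combine these differently.
    """
    recency = exponential_decay(age_seconds, half_life_seconds)
    return w_sim * similarity + w_imp * (importance / 10) + w_rec * recency


# A fresh, important fact can outrank a slightly more similar but
# stale, trivial one:
fresh = retrieval_score(similarity=0.70, importance=9, age_seconds=3600)
stale = retrieval_score(similarity=0.80, importance=2,
                        age_seconds=90 * 24 * 3600)
print(fresh > stale)  # True
```

The point of the blend is that pure vector similarity alone would return the stale fact first; weighting in importance and recency is what lets old trivia fade without deleting it.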