Memory for AI Agents · Announcing OpenMemory MCP: local and secure memory management.
Learn more · Join Discord · Demo · OpenMemory
📄 Building Production-Ready AI Agents with Scalable Long-Term Memory →
⚡ +26% Accuracy vs. OpenAI Memory • 🚀 91% Faster • 💰 90% Fewer Tokens
Mem0 (“mem-zero”) enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. It remembers user preferences, adapts to individual needs, and continuously learns over time—ideal for customer support chatbots, AI assistants, and autonomous systems.
Core Capabilities:
- Remembers user preferences and adapts to individual needs
- Continuously learns and improves over time

Applications:
- Customer support chatbots
- AI assistants
- Autonomous systems
Choose between our hosted platform or self-hosted package:
Get up and running in minutes with automatic updates, analytics, and enterprise security.
Install the SDK via pip:
pip install mem0ai
Install the SDK via npm:
npm install mem0ai
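If you choose the hosted platform, the same Python package also ships a platform client. The snippet below is a minimal sketch, assuming an API key created in the platform dashboard; the user ID and messages are placeholders, and the exact client surface is documented in the API Reference:

import os

from mem0 import MemoryClient  # hosted-platform client bundled with mem0ai

# Assumes MEM0_API_KEY was generated in the platform dashboard (placeholder setup).
client = MemoryClient(api_key=os.environ["MEM0_API_KEY"])

# Store a conversation turn for a hypothetical user "alice".
client.add(
    [{"role": "user", "content": "I'm vegetarian and allergic to nuts."}],
    user_id="alice",
)

# Later, retrieve memories relevant to a new query.
hits = client.search("What can I cook for dinner?", user_id="alice")
print(hits)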
Mem0 requires an LLM to function, with OpenAI's gpt-4o-mini as the default. However, it supports a variety of LLMs; for details, refer to our Supported LLMs documentation.
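To swap in a different LLM, you can pass a configuration to Memory.from_config. The sketch below is illustrative only: the provider name, model, and parameters are examples, so check the Supported LLMs documentation for the exact values your provider expects:

from mem0 import Memory

# Illustrative config: provider/model names and parameters are examples,
# not an authoritative list of supported values.
config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o",  # overrides the gpt-4o-mini default
            "temperature": 0.1,
            "max_tokens": 2000,
        },
    }
}

memory = Memory.from_config(config)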
The first step is to instantiate the memory:
from openai import OpenAI
from mem0 import Memory

openai_client = OpenAI()
memory = Memory()

def chat_with_memories(message: str, user_id: str = "default_user") -> str:
    # Retrieve relevant memories
    relevant_memories = memory.search(query=message, user_id=user_id, limit=3)
    memories_str = "\n".join(f"- {entry['memory']}" for entry in relevant_memories["results"])

    # Generate Assistant response
    system_prompt = f"You are a helpful AI. Answer the question based on query and memories.\nUser Memories:\n{memories_str}"
    messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": message}]
    response = openai_client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    assistant_response = response.choices[0].message.content

    # Create new memories from the conversation
    messages.append({"role": "assistant", "content": assistant_response})
    memory.add(messages, user_id=user_id)

    return assistant_response

def main():
    print("Chat with AI (type 'exit' to quit)")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == 'exit':
            print("Goodbye!")
            break
        print(f"AI: {chat_with_memories(user_input)}")

if __name__ == "__main__":
    main()
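Beyond the chat loop, it can help to inspect what was actually stored. The sketch below assumes the response shape with a "results" key, matching the search call above; the user ID is a placeholder:

# List everything remembered for a user (user ID is a placeholder).
all_memories = memory.get_all(user_id="default_user")
for entry in all_memories["results"]:
    print(entry["id"], entry["memory"])

# Remove a user's memories entirely, e.g. to honor a deletion request.
memory.delete_all(user_id="default_user")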
For detailed integration steps, see the Quickstart and API Reference.
We now have a paper you can cite:
@article{mem0,
  title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},
  author={Chhikara, Prateek and Khant, Dev and Aryan, Saket and Singh, Taranjeet and Yadav, Deshraj},
  journal={arXiv preprint arXiv:2504.19413},
  year={2025}
}
Apache 2.0 — see the LICENSE file for details.