Building AI Agents with Long-Term Memory
Explore techniques for building AI agents capable of learning from past interactions and retaining user preferences using long-term memory.
Course Timeline
🤔 The Need for Long-Term Memory in AI Agents
Discussion of the limitations of stateless AI agents, which retain nothing between interactions, and the benefits of incorporating memory for improved user experience and task efficiency.
🧠 How Humans Learn: A Guiding Principle
An analogy to the human learning process (attention, encoding, consolidation, long-term memory) illustrates the concept of long-term memory in AI agents, and contrasts how human memories fade over time with how AI agents can maintain a perfect record.
🤖 Architecting a Knowledge Agent
Explanation of a knowledge agent architecture that processes user-agent interactions to identify valuable information and store it in a vector database for later retrieval, with an illustrative example of storing user preferences (e.g., dietary restrictions). A minimal sketch follows below.
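To make the architecture concrete, here is a minimal sketch of the store-and-retrieve loop, assuming chromadb as the vector database. The collection name and the `extract_facts` helper (which in practice would call an LLM to decide what is worth remembering) are illustrative assumptions, not part of the course material.

```python
# Minimal sketch of the store-and-retrieve loop (illustrative). chromadb is one
# possible vector database; extract_facts() is a hypothetical helper that would
# normally call an LLM to pull out durable facts such as dietary restrictions.
import chromadb

client = chromadb.PersistentClient(path="./agent_memory")
memory = client.get_or_create_collection(name="user_knowledge")

def extract_facts(user_message: str) -> list[str]:
    # Hypothetical stand-in: a real agent would ask an LLM whether the message
    # contains information worth remembering long term.
    if "vegetarian" in user_message.lower():
        return ["The user is vegetarian."]
    return []

def remember(user_message: str, turn_id: str) -> None:
    # Store each extracted fact; chromadb embeds the text automatically.
    for i, fact in enumerate(extract_facts(user_message)):
        memory.add(documents=[fact], ids=[f"{turn_id}-{i}"])

def recall(query: str, k: int = 3) -> list[str]:
    # Retrieve the facts most relevant to the current query.
    k = min(k, max(memory.count(), 1))
    results = memory.query(query_texts=[query], n_results=k)
    return results["documents"][0]

remember("I'm vegetarian, so no meat in recipes please.", turn_id="turn-1")
print(recall("Suggest a dinner recipe for me."))
```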
🚀 Optimizing Knowledge Retrieval
Strategies for optimizing knowledge retrieval to balance speed, accuracy, and cost, including using cheaper models for initial checks, prioritizing the most valuable information, and archiving older, less frequently used data. A sketch of the cheaper-model check follows.
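One way to picture the cheaper-model check is a small yes/no gate placed in front of the expensive retrieval and generation steps, as in this sketch. The model names and the `recall` placeholder are assumptions for illustration, not a prescribed setup.

```python
# Sketch of a cost-aware retrieval gate (illustrative). A cheap model screens
# each message; only messages judged "memory-worthy" trigger the vector-database
# lookup that feeds the larger main model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def recall(query: str) -> list[str]:
    # Placeholder for the vector-database lookup from the earlier sketch.
    return ["The user is vegetarian."]

def is_memory_worthy(message: str) -> bool:
    # Use a small, inexpensive model as a yes/no filter before doing any
    # retrieval work. The model names here are assumptions, not requirements.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only yes or no: does answering this message "
                        "well require the user's stored preferences or history?"},
            {"role": "user", "content": message},
        ],
        max_tokens=1,
    )
    return response.choices[0].message.content.strip().lower().startswith("y")

def answer(message: str) -> str:
    # Only pay for retrieval when the gate says it is needed.
    context = recall(message) if is_memory_worthy(message) else []
    prompt = f"Known about the user: {context}\n\nUser: {message}"
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```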
💡 Real-world Examples and Implementations
Examples of projects implementing long-term memory in AI agents: MemGPT (Memory GPT), a continuously learning language agent (CLNA), and enhanced customer-support agents. These examples highlight different approaches and applications.
🛠️ Implementing Long-Term Memory in AutoGen (Practical Example)
Step-by-step tutorial on adding long-term memory to an AutoGen agent in under 10 minutes. This covers setting up configurations, installing the necessary packages, and using the teachable agent capability (`Teachability`); a minimal sketch follows below.
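As a rough orientation for what that setup looks like, here is a minimal sketch assuming the AutoGen 0.2-style API, where the feature lives in `autogen.agentchat.contrib.capabilities.teachability` and is installed with `pip install "pyautogen[teachable]"`; parameter names may differ between versions.

```python
# Rough sketch of the AutoGen setup described in the tutorial (assumes the
# AutoGen 0.2-style API; install with:  pip install "pyautogen[teachable]").
from autogen import ConversableAgent, UserProxyAgent, config_list_from_json
from autogen.agentchat.contrib.capabilities.teachability import Teachability

# Load LLM configuration (e.g., from an OAI_CONFIG_LIST file or env variable).
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
llm_config = {"config_list": config_list, "timeout": 120}

# A regular conversable agent...
teachable_agent = ConversableAgent(name="teachable_agent", llm_config=llm_config)

# ...made "teachable" by attaching the Teachability capability, which persists
# learned facts and preferences to a local vector database between sessions.
teachability = Teachability(
    reset_db=False,                       # keep memories from previous runs
    path_to_db_dir="./tmp/teachability_db",
)
teachability.add_to_agent(teachable_agent)

user = UserProxyAgent(name="user", human_input_mode="NEVER",
                      max_consecutive_auto_reply=0, code_execution_config=False)

# Teach something in one chat...
user.initiate_chat(teachable_agent, message="Remember that I'm vegetarian.")
# ...and it can be recalled in later chats, even after restarting the program.
user.initiate_chat(teachable_agent, message="Plan a dinner menu for me.",
                   clear_history=True)
```

Because `reset_db=False` and the memo database is written to disk, facts taught in one run remain available the next time the program starts.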
🔎 Deep Dive into AutoGen's Teachable Agent
Detailed explanation of the teachable agent's components, including the `MemoStore` class that handles the vector-database interaction, the `TextAnalyzer` agent used to decide what is worth remembering, and the memo storage and retrieval functions.
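To give a feel for what a `MemoStore`-style component does, here is a simplified stand-in (not AutoGen's actual implementation) that stores input/output text pairs in a vector database and returns the stored pairs closest to a new query.

```python
# Simplified stand-in for a MemoStore-style class (not AutoGen's actual code):
# it stores input/output text pairs in a vector database and returns the pairs
# whose input text is most similar to a new query.
import chromadb

class SimpleMemoStore:
    def __init__(self, path: str = "./memo_db"):
        client = chromadb.PersistentClient(path=path)
        self.collection = client.get_or_create_collection(name="memos")

    def add_input_output_pair(self, input_text: str, output_text: str) -> None:
        # The input text is embedded and indexed; the output text rides along
        # as metadata and is what the agent recalls later.
        memo_id = str(self.collection.count())
        self.collection.add(
            documents=[input_text],
            metadatas=[{"output": output_text}],
            ids=[memo_id],
        )

    def get_related_memos(self, query: str, n_results: int = 3) -> list[tuple[str, str]]:
        # Return (input_text, output_text) pairs ranked by similarity to the query.
        n = min(n_results, max(self.collection.count(), 1))
        results = self.collection.query(query_texts=[query], n_results=n)
        return [(doc, meta["output"])
                for doc, meta in zip(results["documents"][0], results["metadatas"][0])]

store = SimpleMemoStore()
store.add_input_output_pair("What does the user eat?", "The user is vegetarian.")
print(store.get_related_memos("Suggest a dinner recipe."))
```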