How MemGPT uses memory blocks for agent memory

Back when we first started working on MemGPT, we thought a lot about the correct abstraction for agent memory. Ideally, the abstraction should be as flat and as close to the true transformer inputs/outputs (tokens) as possible, while still allowing easy organization of sections of memory within the context window. The abstraction we landed on was "memory blocks": chunks of text/strings in the context window, some read-only (like system instructions) and some read-write by the agent itself via tools (like a "persona" block).

In the Letta framework, these memory blocks are persisted to storage (Postgres or SQLite), which enables some really cool things: connecting the memory of thousands of agents together (with "instant broadcast"), and running sleep-time agents that can spend hundreds of inference cycles refining a single memory block (we call this "learned context"). So far, we've found this abstraction works extremely well, and it's particularly well suited to prompt engineering and post-training for models like Claude Sonnet (e.g., via nested XML tags).

Check the video below for an example of a "subconscious" sleep-time agent editing the memory of a "primary" agent chatting with an end-user: this is only possible through the power of shared memory blocks, synchronized via agent state persisted to a database.
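Since a block is essentially a labeled string with an edit permission, the whole idea fits in a few lines. Here is a rough sketch (hypothetical code, not the Letta implementation or API; every class and function name below is invented for illustration) of how labeled blocks, read-only vs. agent-editable permissions, shared references between a primary agent and a sleep-time agent, and nested-XML rendering could fit together:

```python
# Hypothetical sketch of the memory-block idea. Not the Letta/MemGPT API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Block:
    """A labeled chunk of text that occupies a section of the context window."""
    label: str               # e.g. "system", "persona", "human"
    value: str                # the text content placed in context
    read_only: bool = False   # system instructions are read-only; "persona" is agent-editable


@dataclass
class AgentState:
    """An agent's in-context memory: an ordered list of blocks.

    Blocks are held by reference, so two agents that hold the same Block
    (or, in a persisted setting, the same block row in Postgres/SQLite)
    see each other's edits immediately ("instant broadcast").
    """
    blocks: List[Block] = field(default_factory=list)

    def render_context(self) -> str:
        # Render each block as a nested XML tag, a layout that works well
        # with models like Claude Sonnet.
        return "\n".join(
            f"<{b.label}>\n{b.value}\n</{b.label}>" for b in self.blocks
        )

    def edit_block(self, label: str, new_value: str) -> None:
        # The tool an agent (or a sleep-time agent) would call to rewrite
        # one of its editable blocks.
        for b in self.blocks:
            if b.label == label:
                if b.read_only:
                    raise PermissionError(f"block '{label}' is read-only")
                b.value = new_value
                return
        raise KeyError(f"no block labeled '{label}'")


# A shared block: the primary agent and the sleep-time agent hold the *same* object.
persona = Block(label="persona", value="I am a helpful assistant.")
primary = AgentState(blocks=[Block("system", "Answer user questions.", read_only=True), persona])
sleeper = AgentState(blocks=[Block("system", "Refine the persona block.", read_only=True), persona])

# The sleep-time agent refines the shared block ("learned context") ...
sleeper.edit_block("persona", "I am a helpful assistant. The user prefers concise answers.")

# ... and the primary agent's next context window is assembled from the updated value.
print(primary.render_context())
```

In a real deployment the shared state would live in the database rather than in a shared Python object, but the effect is the same: when the sleep-time agent rewrites the persona block, the primary agent's next context window reflects the edit.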

[Diagram]

An example of a sleep-time agent editing the memory of a main agent (while the main agent chats with the end-user):

Really interesting direction—especially the concept of “learned context” during agent sleep cycles. Curious how you’re handling conflicting writes across shared memory blocks when agents operate asynchronously. Do you serialize writes or resolve with versioning logic?

Thanks for sharing, Charles

Fascinating concept, Charles Packer! The memory block idea is smart: simple, flexible, and powerful. Love how it allows agents to work together and learn over time. Thanks for sharing!
