Agents can decide what to remember

Some of the ideas an agentic AI considers while carrying out a task or solving a problem will be useful in future tasks.

If an agent is aware of its goals and the relevant context, it can simply be asked which of the ideas currently “in-context” should be stored in its memory.

This may be a useful initial strategy for deciding what information to store.
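As a rough sketch of what this might look like in practice, the snippet below asks the model which in-context ideas to persist. The llm_complete callable, the prompt wording, and the JSON-list output format are placeholders for whatever LLM client and memory store the agent actually uses, not a specific library’s API.

```python
import json


def select_ideas_to_store(goals, context, llm_complete):
    """Ask the model which in-context ideas are worth persisting to memory."""
    prompt = (
        "You are an agent deciding what to remember for future tasks.\n"
        "Your current goals:\n- " + "\n- ".join(goals) + "\n\n"
        f"Ideas currently in context:\n{context}\n\n"
        "Return a JSON list of the ideas (verbatim or lightly summarised) "
        "that are likely to be useful in future tasks, or [] if none are."
    )
    raw = llm_complete(prompt)
    try:
        ideas = json.loads(raw)
    except json.JSONDecodeError:
        return []  # unparseable output: store nothing rather than guess
    return [idea for idea in ideas if isinstance(idea, str)]
```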

A limitation of this approach is that an idea might be relevant to one of the agent’s interests, goals, or tasks that the LLM is not currently aware of because that goal is not “in-context”. The LLM may then erroneously decide not to store the idea.

As a result, this approach may be improved by first searching the agent’s memory and task list for relevant notes and providing them as additional context when asking the agent whether an idea is worth remembering.
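A sketch of that improved flow, assuming hypothetical embed, cosine_similarity, and llm_complete helpers: retrieve the stored items most similar to the new idea, then ask the model whether to store it with those items as extra context.

```python
def should_store(idea, memory, embed, cosine_similarity, llm_complete, top_k=5):
    """Decide whether to remember an idea, given related notes from memory."""
    # Retrieve the stored notes/tasks most similar to the new idea, so goals
    # that are not currently "in-context" can still inform the decision.
    idea_vec = embed(idea)
    related = sorted(
        memory,
        key=lambda note: cosine_similarity(embed(note), idea_vec),
        reverse=True,
    )[:top_k]

    # Ask the model, with the retrieved notes supplied as extra context.
    prompt = (
        "Existing notes and tasks that may relate to the new idea:\n- "
        + "\n- ".join(related)
        + f"\n\nNew idea:\n{idea}\n\n"
        "Given the notes above, is this idea worth storing for future tasks? "
        "Answer YES or NO."
    )
    return llm_complete(prompt).strip().upper().startswith("YES")
```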

If you found this interesting, have feedback, or are working on something related, let’s chat: Twitter (@0xdist) or schedule a 20-minute call.

Distbit

