What problem does selective memory sharing solve in multi-agent systems?
It gives a specialist agent the context it needs without dumping the entire history (noise, cost, distraction).
What is the core idea of selective memory sharing?
Use the LLM to choose which memory items are relevant to a specific task, then share only those.
How is selective memory sharing different from basic message passing?
Message passing shares only a task and returns a final answer; selective sharing also sends a curated subset of relevant context.
Why assign a unique ID to each memory item (e.g., mem_0, mem_1)?
So the LLM can reference memories cheaply and precisely without rewriting their contents.
Why is “IDs first, then retrieval” an efficient design?
The LLM outputs short IDs (cheap output), and code fetches the full memory text for those IDs (reliable inflation).
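The ID-then-retrieve flow can be sketched as follows, assuming memories are plain strings (function and variable names here are illustrative, not a library API):

```python
def index_memories(memories):
    # Give each item a stable short ID (mem_0, mem_1, ...) so the LLM
    # can reference it without rewriting its contents.
    return {f"mem_{i}": text for i, text in enumerate(memories)}

def inflate(selected_ids, indexed):
    # The LLM outputs only short IDs (cheap output); code expands them
    # back into full memory text (reliable inflation). Unknown IDs are
    # silently skipped rather than crashing the pipeline.
    return [indexed[mid] for mid in selected_ids if mid in indexed]

memories = [
    "Cost estimate for phase one: $120k.",
    "Deadline moved from Q2 to Q3.",
    "Client asked to cut the budget by 15%.",
]
indexed = index_memories(memories)
full_texts = inflate(["mem_0", "mem_2"], indexed)
```

The split keeps the expensive step (LLM output) tiny and the cheap step (dictionary lookup) deterministic.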
What is the purpose of forcing the LLM to output a JSON object for selection?
It makes selection machine-checkable (selected IDs + reasoning) and reduces ambiguity.
What are the two key outputs in the selection schema in this lesson?
selected_memories (IDs to include) and reasoning (why they were chosen).
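Parsing and validating that schema might look like this, assuming the model's raw output arrives as a JSON string (`parse_selection` is an illustrative helper):

```python
import json

def parse_selection(raw: str) -> dict:
    # Make the selection machine-checkable: selected_memories must be
    # a list of IDs, and reasoning must be a string.
    data = json.loads(raw)
    if not isinstance(data.get("selected_memories"), list):
        raise ValueError("selected_memories must be a list of memory IDs")
    if not isinstance(data.get("reasoning"), str):
        raise ValueError("reasoning must be a string")
    return data

# Hypothetical raw LLM output conforming to the schema:
raw = ('{"selected_memories": ["mem_0", "mem_2"], '
       '"reasoning": "Only the cost items bear on the budget question."}')
selection = parse_selection(raw)
```

Rejecting malformed output here, before any memory is shared, is what makes the JSON constraint worth enforcing.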
Why preserve the LLM’s selection reasoning in memory?
It creates an audit trail explaining what context was shared and why.
What is the main safety benefit of selecting by ID instead of rewriting memory?
The model can’t “edit” facts in the memory; it can only choose which existing items to include.
What happens after the LLM selects memory IDs?
The system builds a filtered memory containing only those items and runs the specialist agent with that filtered memory.
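A sketch of the filter-then-delegate step; `llm_call` stands in for whatever chat-completion function the system uses, and all names are assumptions:

```python
def build_filtered_memory(selected_ids, full_memory):
    # Keep only the items the LLM selected; ignore unknown IDs.
    return {mid: full_memory[mid] for mid in selected_ids if mid in full_memory}

def run_specialist(task, filtered_memory, llm_call):
    # The specialist sees a focused "mini-memory", not the whole history.
    context = "\n".join(f"[{mid}] {text}" for mid, text in filtered_memory.items())
    return llm_call(f"Task: {task}\n\nRelevant context:\n{context}")

full_memory = {"mem_0": "Cost estimate: $120k.", "mem_1": "Deadline moved to Q3."}
filtered = build_filtered_memory(["mem_0"], full_memory)
# A stub callable stands in for the real LLM here:
answer = run_specialist("Review the budget.", filtered,
                        lambda prompt: f"(stub) got: {prompt}")
```

Note that the unselected item (mem_1) never reaches the specialist's prompt at all.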
What does the delegated agent receive in this pattern?
A focused “mini-memory” containing only task-relevant items, not the whole history.
What gets added back to the original agent’s memory after the call?
The selection reasoning plus the invoked agent’s results (so the system can trace what happened).
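The write-back step can be sketched like this, assuming the original agent's memory is an append-only list of tagged entries (the entry shape is illustrative):

```python
def record_delegation(memory_log, reasoning, delegate_result):
    # Preserve the audit trail: why this context was shared,
    # and what the invoked agent returned.
    memory_log.append({"type": "selection_reasoning", "content": reasoning})
    memory_log.append({"type": "delegate_result", "content": delegate_result})
    return memory_log

log = []
record_delegation(log, "Shared only cost-related items.", "Budget can be cut 12%.")
```

Tagging each entry with a type lets later steps (or a human) trace exactly what was shared and what came back.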
In the budget-review example, what kinds of memories get selected?
Cost estimates, cost breakdown, and the request to reduce cost.
In the budget-review example, what kinds of memories are excluded as irrelevant?
Timeline/deadline updates and general discussion that doesn’t affect the budget question.
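The budget-review walkthrough, end to end. Here a keyword stub stands in for the LLM selection call purely so the sketch runs; in the real pattern this step is an LLM call returning the JSON selection:

```python
memories = {
    "mem_0": "Cost estimate: $120k for phase one.",
    "mem_1": "Deadline moved from Q2 to Q3.",
    "mem_2": "Cost breakdown: 60% labor, 40% materials.",
    "mem_3": "Client asked to reduce cost by 15%.",
}

def stub_select(task, memories):
    # Stand-in for the LLM: picks cost-related items for a budget task.
    ids = [mid for mid, text in memories.items() if "cost" in text.lower()]
    return {"selected_memories": ids,
            "reasoning": "Only cost items affect the budget question."}

selection = stub_select("Review the budget.", memories)
filtered = {mid: memories[mid] for mid in selection["selected_memories"]}
```

As in the lesson's example, the cost items are kept and the timeline update (mem_1) is excluded.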
Why is LLM-based selection better than rule-based filtering in many cases?
It can understand meaning and implications, not just keywords or rigid patterns.
What is a key adaptability advantage of LLM-based selection?
It can handle many task types without changing the filtering code each time.
When is selective memory sharing especially valuable?
When the second agent needs context, but full memory would overwhelm it or waste tokens.
What is the “recap” list of memory-sharing patterns in this module?
Message passing, memory reflection, memory handoff, and selective memory sharing.
What is the simplest decision question for choosing a memory-sharing pattern?
“How much context does the second agent actually need to do good work?”
What additional decision questions help pick the right pattern?
Do we need the delegate's reasoning process, should the full history be preserved, and does sensitive information need to be filtered out?