Persist Agent History Across Sessions in AutoGen Studio #6466

Open
MrEdwards007 opened this issue May 5, 2025 · 4 comments

Comments

@MrEdwards007

What happened?

Issue Description:

In AutoGen Studio version 0.4, we’ve observed that agents start each run without access to prior session history. This results in agents behaving as if each session is brand new, with no memory of past interactions.

Use Case:
As a user of AutoGen Studio, I want to ensure that agents can retain context between sessions. In earlier versions, agent memory seemed to persist across sessions in a more integrated way, which allowed for more natural and continuous interactions.

Current Behavior:

  • Each new run in AutoGen Studio discards prior history.
  • Agents begin conversations from scratch, with no embedded memory or context.

Expected Behavior:

  • Agents should be able to recall prior interactions, either through built-in memory persistence or by easily embedding history at the start of each session.

Questions:

  • Is there an intended way to embed previous conversation history in each new session?
  • Are there recommended patterns for persisting and restoring history in a seamless way?

I initially thought RAG might work, but then the agent would lack full context: only chunks of related information would be retrieved from session to session, rather than the entire history being present.

Would something like the pseudo-code below work, and if so, how could this approach be made mainstream? I'm struggling to get value out of one-turn sessions rather than a continuation of an ongoing conversation.

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_core.memory import ListMemory, MemoryContent, MemoryMimeType
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Initialize memory
    user_memory = ListMemory()

    # Add previous conversation content to memory
    await user_memory.add(
        MemoryContent(content="Previous conversation content here", mime_type=MemoryMimeType.TEXT)
    )

    # Configure the assistant agent with memory; its contents are added to the
    # model context before model calls
    assistant_agent = AssistantAgent(
        name="assistant_agent",
        model_client=OpenAIChatCompletionClient(model="gpt-4o-2024-08-06"),
        memory=[user_memory],
    )


asyncio.run(main())

Environment:

  • AutoGen Studio version: 0.4.x
  • Platform: Python / AutoGen SDK
  • Deployment: Local and experimental workflow prototyping

Which packages was the bug in?

AutoGen Studio (autogenstudio)

AutoGen library version.

Studio 0.4.1

Other library version.

No response

Model used

No response

Model provider

None

Other model provider

No response

Python version

3.11

.NET version

None

Operating system

Ubuntu

@victordibia
Collaborator

Hi @MrEdwards007 ,

Thanks for the issue

Is there an intended way to embed previous conversation history in each new session?
Are there recommended patterns for persisting and restoring history in a seamless way?

There are a few ways to do this.

  • [Preferred approach] Load / Save State
    We can add explicit state persistence using the save/load state API for teams. E.g., once a run in a session is completed (or errors out), we save the state and then load it prior to any future runs.
  • Memory
    We can also bring the concept of Memory into AGS. Here we would need to add an interface for showing memory instances and establish a protocol for how/what gets added to this memory. For example, after a run, we could use an LLM to summarize the run and add the summary to the memory store (this will work well only if the underlying memory implementation retrieves the right chunks); a rough sketch is shown after this list.
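A purely illustrative sketch of that summarize-and-store idea: the summarize_and_store helper and its prompt are hypothetical, and ListMemory stands in for whatever retrieval-backed memory implementation AGS would actually use.

from autogen_agentchat.base import TaskResult
from autogen_core.memory import ListMemory, MemoryContent, MemoryMimeType
from autogen_core.models import ChatCompletionClient, UserMessage

async def summarize_and_store(
    model_client: ChatCompletionClient, result: TaskResult, memory: ListMemory
) -> None:
    # Build a plain-text transcript of the completed run.
    transcript = "\n".join(str(getattr(m, "content", m)) for m in result.messages)

    # Ask the model for a short summary of the run (hypothetical prompt).
    response = await model_client.create(
        [UserMessage(content="Summarize this conversation in a few sentences:\n" + transcript, source="user")]
    )

    # Store the summary so agents in a later session can be configured with this memory.
    await memory.add(MemoryContent(content=str(response.content), mime_type=MemoryMimeType.TEXT))

After a run completes, something like await summarize_and_store(model_client, result, user_memory) could be called, and the populated memory attached to the next session's agents via the memory=[...] parameter on AssistantAgent.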

For the use case you mention above (simply the ability to continue a conversation), I think load/save state is the way to go.

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o-2024-08-06")

# Define a single-agent team.
assistant_agent = AssistantAgent(
    name="assistant_agent",
    system_message="You are a helpful assistant",
    model_client=model_client,
)
agent_team = RoundRobinGroupChat(
    [assistant_agent],
    termination_condition=MaxMessageTermination(max_messages=2),
)

# Run the team and stream messages to the console.
stream = agent_team.run_stream(task="Write a beautiful 3-line poem about Lake Tanganyika")

# Use asyncio.run(...) when running in a script; top-level await works in a notebook.
await Console(stream)

# Save the state of the agent team.
team_state = await agent_team.save_state()

Then load it as follows:

print(team_state)

# Load team state.
await agent_team.load_state(team_state)
stream = agent_team.run_stream(task="What was the last line of the poem you wrote?")
await Console(stream)
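Since the original question is about carrying context across AutoGen Studio sessions, the saved state can also be written to disk between runs. A minimal sketch, assuming the mapping returned by save_state() is JSON-serializable (the file name team_state.json is arbitrary):

import json

# Persist the team state to disk at the end of a session.
with open("team_state.json", "w") as f:
    json.dump(team_state, f)

# In a later session, read the state back and restore the team before running again.
with open("team_state.json") as f:
    restored_state = json.load(f)

await agent_team.load_state(restored_state)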

Are you looking to implement this in your own application, or are you more interested in working towards an implementation in AutoGen Studio?

@MrEdwards007
Author

MrEdwards007 commented May 5, 2025

Good day Victor,

I really appreciate your feedback. I plan on using both AutoGen and AutoGen Studio, but most of my time is spent experimenting with AutoGen Studio.

So, the most direct answer is: an implementation in AutoGen Studio.

@victordibia
Collaborator

victordibia commented May 5, 2025

Got it.

A good first step would be to try out the load/save state API.
At the same time, we can work together on an early implementation in AutoGen Studio (to enable experimentation). Are you open to working on a PR?

@MrEdwards007
Author

Thank you for the information about the load/save state and yes, I am open to working on a PR.
