From 157d173b133c74c75af5cbcd90b99995750dc15d Mon Sep 17 00:00:00 2001 From: Jonathan James Date: Tue, 30 Sep 2025 17:53:51 -0400 Subject: [PATCH 1/3] chore: Add README for MemorySessionManager --- src/bedrock_agentcore/memory/README.md | 338 +++++++++++++++++-------- 1 file changed, 226 insertions(+), 112 deletions(-) diff --git a/src/bedrock_agentcore/memory/README.md b/src/bedrock_agentcore/memory/README.md index 74a524a..6608a8c 100644 --- a/src/bedrock_agentcore/memory/README.md +++ b/src/bedrock_agentcore/memory/README.md @@ -1,179 +1,293 @@ # Bedrock AgentCore Memory SDK -High-level Python SDK for AWS Bedrock AgentCore Memory service with flexible conversation handling and complete branch management. +High-level Python SDK for AWS Bedrock AgentCore Memory service with streamlined session management and flexible conversation handling. + +## Recommended Classes + +### MemorySessionManager (Recommended) +The primary interface for managing conversational AI sessions with both short-term (conversational events) and long-term (semantic memory) storage. Provides a clean, session-oriented API for memory operations. + +### MemorySession (Recommended) +Session-scoped interface that simplifies operations by automatically handling memory_id, actor_id, and session_id parameters. + +### MemoryClient (Legacy) +The original client interface. While still supported, we recommend migrating to MemorySessionManager for new projects. ## Key Features +### Streamlined Session Management +- Session-scoped operations with automatic parameter handling +- Create MemorySession instances for simplified API calls +- Built-in actor and session tracking + ### Flexible Conversation API -- Save any number of messages in a single call -- Support for USER, ASSISTANT, TOOL, OTHER roles +- Save any number of messages in a single call with `add_turns()` +- Support for USER, ASSISTANT, TOOL, OTHER roles via `ConversationalMessage` +- Support for binary data via `BlobMessage` - Natural conversation flow representation ### Complete Branch Management - List all branches in a session -- Navigate specific branches -- Get conversation tree structure +- Fork conversations from specific events +- Navigate specific branches with simplified API - Build context from any branch -- Continue conversations in existing branches - -### Simplified Memory Operations -- Semantic search with vector store -- Automatic namespace handling -- Polling helpers for async operations -### LLM Integration Support +### Enhanced LLM Integration +- Built-in `process_turn_with_llm()` method for complete conversation turns - Callback pattern for any LLM (Bedrock, OpenAI, etc.) 
-- Separated retrieve/generate/save pattern for flexibility -- Complete conversation turn in one method call +- Automatic memory retrieval, LLM processing, and response storage +- Flexible retrieval configuration with namespace templating +### Simplified Memory Operations +- Semantic search with `search_long_term_memories()` +- Automatic namespace handling with template variables +- List and manage memory records +- Actor and session management ## Quick Start ```python -from bedrock_agentcore.memory import MemoryClient -from bedrock_agentcore.memory.constants import StrategyType - -client = MemoryClient() +from bedrock_agentcore.memory import MemorySessionManager +from bedrock_agentcore.memory.constants import ConversationalMessage, MessageRole -# Create memory with strategies -# MemoryStrategies determine how memory records are extracted from conversations -memory = client.create_memory_and_wait( - name="MyAgentMemory", - strategies=[{ - StrategyType.SEMANTIC.value: { - "name": "FactExtractor", - "namespaces": ["/food/{actorId}"] - } - }] +# Initialize the session manager +manager = MemorySessionManager( + memory_id="your-memory-id", # Use existing memory id + region_name="us-east-1" ) -# Save conversations, which will be used for memory extraction (if memory strategies are configured when calling create_memory) -event = client.create_event( - memory_id=memory['id'], +# Create a session for a specific actor +session = manager.create_memory_session( actor_id="user-123", - session_id="session-456", - messages=[ - ("I love eating apples and cherries", "USER"), - ("Apples are very good.", "ASSISTANT"), - ("What is your favorite thing about apples", "USER"), - ("I enjoy their flavor -- and their nutritional benefits", "ASSISTANT") - ] + session_id="session-456" # Optional - will generate UUID if not provided ) -# Then after some time has passed and memory records are extracted, you can do -memory_records = client.retrieve_memories( - memory_id=memory['id'], - namespace="/food/user-123", - query="what food does the user like" +# Add conversation turns +session.add_turns([ + ConversationalMessage("I love eating apples and cherries", MessageRole.USER), + ConversationalMessage("Apples are very good for you!", MessageRole.ASSISTANT), + ConversationalMessage("What's your favorite thing about apples?", MessageRole.USER), + ConversationalMessage("I enjoy their flavor and nutritional benefits", MessageRole.ASSISTANT) +]) + +# Search long-term memories (after memory extraction has occurred) +memories = session.search_long_term_memories( + query="what food does the user like", + namespace_prefix="/food/user-123", + top_k=5 ) -# Or if you have multiple namespaces (say you have multiple users (denoted by actor_id)) and want to search across all of them: -memory_records = client.retrieve_memories( - memory_id=memory['id'], - namespace="/", # we can use any prefix of the namespace that we defined in create_memory_and_wait - query="Food" +# Or search across multiple users +memories = manager.search_long_term_memories( + query="Food preferences", + namespace_prefix="/food/", # Search all food-related memories + top_k=10 ) - ``` ## Core Usage Examples -### Natural Conversation Flow +### Enhanced LLM Integration with Memory Context ```python -# Multiple user messages, tool usage, flexible patterns -event = client.create_event( - memory_id=memory_id, - actor_id=actor_id, - session_id=session_id, - messages=[ - ("I need help with my order", "USER"), - ("Order #12345", "USER"), - ("Let me look that up", 
"ASSISTANT"), - ("lookup_order('12345')", "TOOL"), - ("Found it! Your order ships tomorrow.", "ASSISTANT") - ] +from bedrock_agentcore.memory.constants import RetrievalConfig + +def my_llm(user_input: str, memories: List[Dict]) -> str: + # Format context from retrieved memories + context = "\n".join([ + m.get('content', {}).get('text', '') + for m in memories + ]) + + # Call your LLM (Bedrock, OpenAI, etc.) + # This is just an example - use your actual LLM integration + response = f"Based on our previous discussions about {context}, here's my response to: {user_input}" + return response + +# Configure memory retrieval with multiple namespaces +retrieval_config = { + "support/facts/{sessionId}": RetrievalConfig(top_k=5, relevance_score=0.3), + "user/preferences/{actorId}": RetrievalConfig(top_k=3, relevance_score=0.5) +} + +# Process complete conversation turn with automatic memory integration +memories, response, event = session.process_turn_with_llm( + user_input="What did we discuss about my preferences?", + llm_callback=my_llm, + retrieval_config=retrieval_config ) + +print(f"Retrieved {len(memories)} relevant memories") +print(f"LLM Response: {response}") +print(f"Stored event ID: {event.event_id}") +``` + +### Natural Conversation Flow + +```python +from bedrock_agentcore.memory.constants import ConversationalMessage, BlobMessage, MessageRole + +# Multiple message types in a single turn +session.add_turns([ + ConversationalMessage("I need help with my order", MessageRole.USER), + ConversationalMessage("Order #12345", MessageRole.USER), + BlobMessage({"image_data": "base64_encoded_receipt"}), # Binary data + ConversationalMessage("Let me look that up", MessageRole.ASSISTANT), + ConversationalMessage("lookup_order('12345')", MessageRole.TOOL), + ConversationalMessage("Found it! Your order ships tomorrow.", MessageRole.ASSISTANT) +]) ``` ### Branch Management ```python -# Create branches for different scenarios -branch = client.fork_conversation( - memory_id=memory_id, - actor_id=actor_id, - session_id=session_id, - root_event_id=event_id, +# Get conversation history +turns = session.get_last_k_turns(k=3) +print(f"Last 3 conversation turns: {len(turns)}") + +# Fork conversation for alternative scenario +branch_event = session.fork_conversation( + root_event_id="event-123", branch_name="premium-option", - new_messages=[ - ("What about expedited shipping?", "USER"), - ("I can upgrade you to overnight delivery for $20", "ASSISTANT") + messages=[ + ConversationalMessage("What about expedited shipping?", MessageRole.USER), + ConversationalMessage("I can upgrade you to overnight delivery for $20", MessageRole.ASSISTANT) ] ) -# Navigate branches -branches = client.list_branches(memory_id, actor_id, session_id) -events = client.list_branch_events( - memory_id=memory_id, - actor_id=actor_id, - session_id=session_id, - branch_name="premium-option" -) -``` +# List all branches in the session +branches = session.list_branches() +for branch in branches: + print(f"Branch: {branch.name}, Events: {branch.event_count}") -### LLM Integration Patterns +# Get events from specific branch +branch_events = session.list_events(branch_name="premium-option") +``` -#### Pattern 1: Callback-based (Simple cases) +### Session and Actor Management ```python -def my_llm(user_input: str, memories: List[Dict]) -> str: - # Your LLM logic here - context = "\n".join([m['content']['text'] for m in memories]) - # Call Bedrock, OpenAI, etc. 
- return "AI response based on context" +# Manager-level operations +actors = manager.list_actors() +print(f"Found {len(actors)} actors in memory") -memories, response, event = client.process_turn_with_llm( - memory_id=memory_id, +# Actor-specific operations +actor = session.get_actor() +actor_sessions = actor.list_sessions() +print(f"Actor has {len(actor_sessions)} sessions") + +# Create multiple sessions for the same actor +session2 = manager.create_memory_session( actor_id="user-123", - session_id="session-456", - user_input="What did we discuss?", - llm_callback=my_llm, - retrieval_namespace="support/facts/{sessionId}" + session_id="session-789" +) +``` + +### Memory Record Management + +```python +# List all memory records in a namespace +records = session.list_long_term_memory_records( + namespace_prefix="/user/preferences/user-123", + max_results=20 ) + +# Get specific memory record +record = session.get_memory_record("record-id-123") +print(f"Record content: {record.content}") + +# Delete memory record +session.delete_memory_record("record-id-123") ``` -#### Pattern 2: Separated calls (More control) +### Alternative Pattern: Separated Operations ```python -# Step 1: Retrieve -memories = client.retrieve_memories( - memory_id=memory_id, - namespace="support/facts/{sessionId}", - query="previous discussion" +# For more control, you can separate the steps: + +# Step 1: Retrieve relevant memories +memories = session.search_long_term_memories( + query="previous discussion", + namespace_prefix="support/facts/session-456", + top_k=5 ) -# Step 2: Your LLM logic +# Step 2: Process with your LLM +user_input = "What did we discuss?" response = your_llm_logic(user_input, memories) -# Step 3: Save +# Step 3: Save the conversation +event = session.add_turns([ + ConversationalMessage(user_input, MessageRole.USER), + ConversationalMessage(response, MessageRole.ASSISTANT) +]) +``` + +## Migration from MemoryClient + +If you're currently using MemoryClient, here's how to migrate: + +### Before (MemoryClient) +```python +from bedrock_agentcore.memory import MemoryClient + +client = MemoryClient() event = client.create_event( - memory_id=memory_id, - actor_id="user-123", - session_id="session-456", - messages=[(user_input, "USER"), (response, "ASSISTANT")] + memory_id="memory-123", + actor_id="user-456", + session_id="session-789", + messages=[("Hello", "USER"), ("Hi there", "ASSISTANT")] +) +``` + +### After (MemorySessionManager) +```python +from bedrock_agentcore.memory import MemorySessionManager +from bedrock_agentcore.memory.constants import ConversationalMessage, MessageRole + +manager = MemorySessionManager(memory_id="memory-123") +session = manager.create_memory_session( + actor_id="user-456", + session_id="session-789" ) + +event = session.add_turns([ + ConversationalMessage("Hello", MessageRole.USER), + ConversationalMessage("Hi there", MessageRole.ASSISTANT) +]) ``` -### Environment Variables +### Key Migration Benefits +- **Cleaner API**: No need to pass memory_id, actor_id, session_id to every method +- **Type Safety**: Use `ConversationalMessage` and `BlobMessage` instead of tuples +- **Better Organization**: Session-scoped vs manager-scoped operations +- **Enhanced Features**: Built-in LLM integration with `process_turn_with_llm()` + +## Environment Variables(Legacy) - AGENTCORE_MEMORY_ROLE_ARN - IAM role for memory execution -- AGENTCORE_CONTROL_ENDPOINT - Override control plane endpoint +- AGENTCORE_CONTROL_ENDPOINT - Override control plane endpoint - AGENTCORE_DATA_ENDPOINT - 
Override data plane endpoint -### Best Practices +## Best Practices -- Separate retrieval and storage: Use retrieve_memories() and create_event() as separate steps -- Wait for extraction: Use wait_for_memories() after creating events -- Handle service errors: Retry on ServiceException errors -- Use branches: Create branches for different scenarios or A/B testing +### Session Management +- Use `MemorySessionManager` for multi-session, multi-actor scenarios +- Use `MemorySession` for session-specific operations to avoid parameter repetition +- Create separate sessions for different conversation contexts + +### Memory Operations +- Use `process_turn_with_llm()` for integrated LLM workflows +- Separate retrieval and storage with `search_long_term_memories()` and `add_turns()` for custom workflows +- Use namespace prefixes effectively for organized memory retrieval +- Handle service errors with appropriate retry logic + +### Message Handling +- Use `ConversationalMessage` for text-based interactions +- Use `BlobMessage` for binary data (images, files, etc.) +- Group related messages in single `add_turns()` calls for logical conversation units + +### Branch Management +- Create branches for A/B testing different responses +- Use descriptive branch names for easier navigation +- Fork from specific events to maintain conversation context From 41478cc8b4a3935f28392d5547c926b07c67226c Mon Sep 17 00:00:00 2001 From: Jonathan James Date: Tue, 30 Sep 2025 18:09:11 -0400 Subject: [PATCH 2/3] fix: fixed failing unit test from previous add_turns refactor --- tests/bedrock_agentcore/memory/test_session.py | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/tests/bedrock_agentcore/memory/test_session.py b/tests/bedrock_agentcore/memory/test_session.py index 8012677..2a81868 100644 --- a/tests/bedrock_agentcore/memory/test_session.py +++ b/tests/bedrock_agentcore/memory/test_session.py @@ -2491,8 +2491,8 @@ def test_memory_session_add_turns_parameter_order(self): # Call with all parameters to test the exact order session.add_turns(messages=messages, branch=branch, event_timestamp=custom_timestamp) - # Verify the exact parameter order: actor_id, session_id, messages, event_timestamp, branch - mock_add_turns.assert_called_once_with("user-123", "session-456", messages, custom_timestamp, branch) + # Verify the exact parameter order: actor_id, session_id, messages, branch, event_timestamp + mock_add_turns.assert_called_once_with("user-123", "session-456", messages, branch, custom_timestamp) def test_process_turn_with_llm_no_relevance_score_config(self): """Test process_turn_with_llm when RetrievalConfig has no relevance_score.""" @@ -2559,8 +2559,8 @@ def test_memory_session_add_turns_branch_parameter_order(self): # Call with branch parameter only (no timestamp) session.add_turns(messages=messages, branch=branch) - # Verify the exact parameter order: actor_id, session_id, messages, event_timestamp, branch - mock_add_turns.assert_called_once_with("user-123", "session-456", messages, None, branch) + # Verify the exact parameter order: actor_id, session_id, messages, branch, event_timestamp + mock_add_turns.assert_called_once_with("user-123", "session-456", messages, branch, None) def test_list_long_term_memory_records_memoryRecordSummaries_fallback(self): """Test list_long_term_memory_records fallback to memoryRecordSummaries.""" From 3190b4a37f80a079dbfffa4180aeb99265315311 Mon Sep 17 00:00:00 2001 From: Jonathan James Date: Tue, 30 Sep 2025 20:59:38 -0400 Subject: [PATCH 3/3] fix: Add more 
to README.md --- pyproject.toml | 1 + src/bedrock_agentcore/memory/README.md | 285 ++++++++++++++++++++++++- 2 files changed, 276 insertions(+), 10 deletions(-) diff --git a/pyproject.toml b/pyproject.toml index 7bd63bf..f748f05 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -68,6 +68,7 @@ ignore_missing_imports = false [tool.ruff] line-length = 120 include = ["examples/**/*.py", "src/**/*.py", "tests/**/*.py", "tests-integ/**/*.py"] +exclude = ["**/*.md"] [tool.ruff.lint] select = [ diff --git a/src/bedrock_agentcore/memory/README.md b/src/bedrock_agentcore/memory/README.md index 6608a8c..b1cb53f 100644 --- a/src/bedrock_agentcore/memory/README.md +++ b/src/bedrock_agentcore/memory/README.md @@ -1,44 +1,147 @@ # Bedrock AgentCore Memory SDK -High-level Python SDK for AWS Bedrock AgentCore Memory service with streamlined session management and flexible conversation handling. +High-level Python SDK for AWS Bedrock AgentCore Memory service with streamlined session management and flexible +conversation handling. + +## Table of Contents + +- [Overview](#overview) +- [Setup](#setup) + - [Installation](#installation) + - [Authentication](#authentication) + - [Environment Variables](#environment-variables) +- [Recommended Classes](#recommended-classes) +- [Key Features](#key-features) +- [Quick Start](#quick-start) +- [Usage](#usage) + - [Enhanced LLM Integration with Memory Context](#enhanced-llm-integration-with-memory-context) + - [Natural Conversation Flow](#natural-conversation-flow) + - [Branch Management](#branch-management) + - [Session and Actor Management](#session-and-actor-management) + - [Memory Record Management](#memory-record-management) + - [Alternative Pattern: Separated Operations](#alternative-pattern-separated-operations) +- [Error Handling](#error-handling) + - [Common Exceptions](#common-exceptions) + - [Best Practices for Error Handling](#best-practices-for-error-handling) +- [Migration from MemoryClient](#migration-from-memoryclient) +- [Best Practices](#best-practices) +- [API Reference](#api-reference) + +## Overview + +The Bedrock AgentCore Memory SDK provides a comprehensive solution for managing conversational AI memory with both short-term (conversational events) and long-term (semantic memory) storage capabilities. The SDK is designed around three main components: + +### Core Components + +1. **MemorySessionManager** - The primary interface for managing multiple sessions and actors +2. **MemorySession** - Session-scoped interface that simplifies operations by automatically handling memory_id, actor_id, and session_id parameters +3. **MemoryClient** - Legacy client interface (still supported but not recommended for new projects) + +### Architecture + +The memory system operates on a hierarchical structure: + +- **Memory** - Top-level container for all data +- **Actor** - Represents individual users or entities +- **Session** - Conversation contexts within an actor +- **Events** - Individual conversation turns or actions +- **Branches** - Alternative conversation paths for A/B testing or exploration + +## Setup + +### Installation + +Install the Bedrock AgentCore SDK using pip: + +```bash +pip install bedrock-agentcore +``` + +### Authentication + +The SDK uses AWS credentials for authentication. Ensure you have one of the following configured: + +1. **AWS CLI credentials** (recommended for development): + + ```bash + aws configure + ``` +2. 
**Environment variables**: + + ```bash + export AWS_ACCESS_KEY_ID=your_access_key + export AWS_SECRET_ACCESS_KEY=your_secret_key + export AWS_DEFAULT_REGION=us-east-1 + ``` +3. **IAM roles** (recommended for production): + + - EC2 instance roles + - ECS task roles + - Lambda execution roles +4. **AWS credentials file**: + + ```ini + [default] + aws_access_key_id = your_access_key + aws_secret_access_key = your_secret_key + region = us-east-1 + ``` + +### Environment Variables + +The following environment variables can be used to configure the SDK: + +- `AGENTCORE_MEMORY_ROLE_ARN` - IAM role for memory execution (legacy) +- `AGENTCORE_CONTROL_ENDPOINT` - Override control plane endpoint +- `AGENTCORE_DATA_ENDPOINT` - Override data plane endpoint +- `AWS_DEFAULT_REGION` - Default AWS region (e.g., us-east-1) ## Recommended Classes ### MemorySessionManager (Recommended) -The primary interface for managing conversational AI sessions with both short-term (conversational events) and long-term (semantic memory) storage. Provides a clean, session-oriented API for memory operations. + +The primary interface for managing conversational AI sessions with both short-term (conversational events) and +long-term (semantic memory) storage. Provides a clean, session-oriented API for memory operations. ### MemorySession (Recommended) + Session-scoped interface that simplifies operations by automatically handling memory_id, actor_id, and session_id parameters. ### MemoryClient (Legacy) + The original client interface. While still supported, we recommend migrating to MemorySessionManager for new projects. ## Key Features ### Streamlined Session Management + - Session-scoped operations with automatic parameter handling - Create MemorySession instances for simplified API calls - Built-in actor and session tracking ### Flexible Conversation API + - Save any number of messages in a single call with `add_turns()` - Support for USER, ASSISTANT, TOOL, OTHER roles via `ConversationalMessage` - Support for binary data via `BlobMessage` - Natural conversation flow representation ### Complete Branch Management + - List all branches in a session - Fork conversations from specific events - Navigate specific branches with simplified API - Build context from any branch ### Enhanced LLM Integration + - Built-in `process_turn_with_llm()` method for complete conversation turns - Callback pattern for any LLM (Bedrock, OpenAI, etc.) - Automatic memory retrieval, LLM processing, and response storage - Flexible retrieval configuration with namespace templating ### Simplified Memory Operations + - Semantic search with `search_long_term_memories()` - Automatic namespace handling with template variables - List and manage memory records @@ -85,7 +188,7 @@ memories = manager.search_long_term_memories( ) ``` -## Core Usage Examples +## Usage ### Enhanced LLM Integration with Memory Context @@ -98,7 +201,7 @@ def my_llm(user_input: str, memories: List[Dict]) -> str: m.get('content', {}).get('text', '') for m in memories ]) - + # Call your LLM (Bedrock, OpenAI, etc.) 
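+    # A minimal sketch of one way to do this with the Amazon Bedrock Converse API.
+    # The model ID below is an assumption; substitute your own model and make sure
+    # boto3 is imported in your module before uncommenting:
+    #
+    #   bedrock = boto3.client("bedrock-runtime")
+    #   result = bedrock.converse(
+    #       modelId="anthropic.claude-3-haiku-20240307-v1:0",
+    #       messages=[{"role": "user", "content": [{"text": f"{context}\n\n{user_input}"}]}],
+    #   )
+    #   return result["output"]["message"]["content"][0]["text"]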
     # This is just an example - use your actual LLM integration
     response = f"Based on our previous discussions about {context}, here's my response to: {user_input}"
@@ -223,11 +326,132 @@ event = session.add_turns([
 ])
 ```

+## Error Handling
+
+### Common Exceptions
+
+The SDK raises specific exceptions for different error conditions:
+
+```python
+from bedrock_agentcore.memory import MemorySessionManager
+from bedrock_agentcore.memory.constants import ConversationalMessage, MessageRole
+import boto3
+from botocore.exceptions import ClientError, NoCredentialsError
+
+try:
+    manager = MemorySessionManager(
+        memory_id="your-memory-id",
+        region_name="us-east-1"
+    )
+
+    session = manager.create_memory_session(
+        actor_id="user-123",
+        session_id="session-456"
+    )
+
+    # Add conversation turns
+    event = session.add_turns([
+        ConversationalMessage("Hello", MessageRole.USER),
+        ConversationalMessage("Hi there!", MessageRole.ASSISTANT)
+    ])
+
+except NoCredentialsError:
+    print("AWS credentials not found. Please configure your credentials.")
+
+except ClientError as e:
+    error_code = e.response['Error']['Code']
+    error_message = e.response['Error']['Message']
+
+    if error_code == 'ResourceNotFoundException':
+        print(f"Memory not found: {error_message}")
+    elif error_code == 'ValidationException':
+        print(f"Invalid input: {error_message}")
+    elif error_code == 'AccessDeniedException':
+        print(f"Access denied: {error_message}")
+    elif error_code == 'ThrottlingException':
+        print(f"Request throttled: {error_message}")
+    else:
+        print(f"AWS error ({error_code}): {error_message}")
+
+except Exception as e:
+    print(f"Unexpected error: {str(e)}")
+```
+
+### Best Practices for Error Handling
+
+1. **Always handle authentication errors**:
+
+   ```python
+   try:
+       manager = MemorySessionManager(memory_id="test")
+   except NoCredentialsError:
+       # Guide user to configure credentials
+       print("Please run 'aws configure' or set AWS environment variables")
+   ```
+2. **Validate inputs before API calls**:
+
+   ```python
+   def validate_user_input(user_input: str) -> bool:
+       # Reject empty or non-string input before it reaches the service
+       if not isinstance(user_input, str) or not user_input.strip():
+           raise ValueError("user_input must be a non-empty string")
+       return True
+
+   validate_user_input(user_input)
+   ```
3. **Handle rate limiting gracefully**:
+
+   ```python
+   import time
+
+   try:
+       memories = session.search_long_term_memories(query="test")
+   except ClientError as e:
+       if e.response['Error']['Code'] == 'ThrottlingException':
+           print("Request rate exceeded. Please reduce request frequency.")
+           time.sleep(5)  # Wait before retrying
+   ```
+4. **Log errors for debugging**:
+
+   ```python
+   import logging
+
+   logging.basicConfig(level=logging.INFO)
+   logger = logging.getLogger(__name__)
+
+   try:
+       event = session.add_turns(messages)
+   except Exception as e:
+       logger.error(f"Failed to add turns: {str(e)}", exc_info=True)
+       raise
+   ```
+5. **Use context managers for cleanup**:
+
+   ```python
+   from contextlib import contextmanager
+
+   @contextmanager
+   def memory_session_context(manager, actor_id, session_id):
+       session = None
+       try:
+           session = manager.create_memory_session(actor_id, session_id)
+           yield session
+       except Exception as e:
+           logger.error(f"Error in memory session: {str(e)}")
+           raise
+       finally:
+           # Cleanup if needed
+           if session:
+               logger.info(f"Session {session_id} operations completed")
+
+   # Usage
+   with memory_session_context(manager, "user-123", "session-456") as session:
+       session.add_turns(messages)
+   ```
+
 ## Migration from MemoryClient

 If you're currently using MemoryClient, here's how to migrate:

 ### Before (MemoryClient)
+
 ```python
 from bedrock_agentcore.memory import MemoryClient

 client = MemoryClient()
 event = client.create_event(
     memory_id="memory-123",
     actor_id="user-456",
     session_id="session-789",
     messages=[("Hello", "USER"), ("Hi there", "ASSISTANT")]
 )
 ```

 ### After (MemorySessionManager)
+
 ```python
 from bedrock_agentcore.memory import MemorySessionManager
 from bedrock_agentcore.memory.constants import ConversationalMessage, MessageRole

 manager = MemorySessionManager(memory_id="memory-123")
 session = manager.create_memory_session(
     actor_id="user-456",
     session_id="session-789"
 )

 event = session.add_turns([
     ConversationalMessage("Hello", MessageRole.USER),
     ConversationalMessage("Hi there", MessageRole.ASSISTANT)
 ])
 ```

 ### Key Migration Benefits
+
 - **Cleaner API**: No need to pass memory_id, actor_id, session_id to every method
 - **Type Safety**: Use `ConversationalMessage` and `BlobMessage` instead of tuples
 - **Better Organization**: Session-scoped vs manager-scoped operations
 - **Enhanced Features**: Built-in LLM integration with `process_turn_with_llm()`

-## Environment Variables(Legacy)
-
-- AGENTCORE_MEMORY_ROLE_ARN - IAM role for memory execution
-- AGENTCORE_CONTROL_ENDPOINT - Override control plane endpoint
-- AGENTCORE_DATA_ENDPOINT - Override data plane endpoint
-
 ## Best Practices

 ### Session Management
+
 - Use `MemorySessionManager` for multi-session, multi-actor scenarios
 - Use `MemorySession` for session-specific operations to avoid parameter repetition
 - Create separate sessions for different conversation contexts

 ### Memory Operations
+
 - Use `process_turn_with_llm()` for integrated LLM workflows
 - Separate retrieval and storage with `search_long_term_memories()` and `add_turns()` for custom workflows
 - Use namespace prefixes effectively for organized memory retrieval
 - Handle service errors with appropriate retry logic

 ### Message Handling
+
 - Use `ConversationalMessage` for text-based interactions
 - Use `BlobMessage` for binary data (images, files, etc.)
 - Group related messages in single `add_turns()` calls for logical conversation units

 ### Branch Management
+
 - Create branches for A/B testing different responses
 - Use descriptive branch names for easier navigation
 - Fork from specific events to maintain conversation context
+
+### Performance Optimization
+
+- Batch operations when possible using `add_turns()` with multiple messages
+- Use appropriate `top_k` values for memory searches to balance relevance and performance
+- Implement caching for frequently accessed memory records
+- Monitor and optimize namespace structures for efficient retrieval
+
+### Security
+
+- Use IAM roles instead of hardcoded credentials in production
+- Implement proper access controls for memory resources
+- Validate and sanitize user inputs before storing in memory
+- Use encryption for sensitive data in memory records
+
+## API Reference
+
+### Core Classes
+
+- **MemorySessionManager**: Primary interface for managing sessions and actors
+- **MemorySession**: Session-scoped operations interface
+- **MemoryClient**: Legacy client interface (deprecated)
+
+### Data Models
+
+- **ConversationalMessage**: Text-based conversation messages
+- **BlobMessage**: Binary data messages
+- **Event**: Individual conversation events
+- **Branch**: Alternative conversation paths
+- **ActorSummary**: Actor information summary
+- **SessionSummary**: Session information summary
+- **MemoryRecord**: Long-term memory records
+
+### Configuration Classes
+
+- **RetrievalConfig**: Configuration for memory retrieval operations
+- **MessageRole**: Enumeration of message roles (USER, ASSISTANT, TOOL, OTHER)
+- **MemoryStatus**: Memory resource status enumeration
+- **StrategyType**: Memory strategy type enumeration
+
+For detailed API documentation, refer to the inline docstrings and type hints in the source code.
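+
+## Appendix: Retrying Throttled Requests
+
+The best practices above recommend handling service errors with appropriate retry logic. The sketch below shows one
+way to wrap a long-term memory search in a simple exponential backoff loop. It is a minimal illustration, not part of
+the SDK: the helper name, attempt count, and delay values are assumptions you should adapt to your workload.
+
+```python
+import time
+
+from botocore.exceptions import ClientError
+
+
+def search_with_retry(session, query, namespace_prefix, top_k=5, max_attempts=3):
+    """Retry a long-term memory search when the service throttles the request."""
+    delay = 1.0
+    for attempt in range(1, max_attempts + 1):
+        try:
+            return session.search_long_term_memories(
+                query=query,
+                namespace_prefix=namespace_prefix,
+                top_k=top_k,
+            )
+        except ClientError as e:
+            if e.response["Error"]["Code"] != "ThrottlingException" or attempt == max_attempts:
+                raise
+            time.sleep(delay)  # Back off before retrying the search
+            delay *= 2
+
+
+# Usage (assumes `session` was created as shown in the Quick Start)
+memories = search_with_retry(session, "food preferences", "/food/user-123")
+```
+
+This pattern pairs well with the ThrottlingException handling shown in the Error Handling section.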