This demo shows how the Redis Agent Memory Server can extend Amazon Alexa with conversational memory. Built with Java, LangChain4J, AWS Lambda, and Redis Cloud, it enables Alexa to recall past conversations and deliver contextual, intelligent responses. It showcases Redis as a memory layer for AI assistants, enriching the natural-language experience through state persistence and fast retrieval.
- Demo Objectives
- Setup
- Running the Demo
- Slide Deck
- Architecture
- Known Issues
- Resources
- Maintainers
- License
## Demo Objectives

- Demonstrate Redis as a memory persistence layer for conversational AI.
- Show how to integrate Redis Agent Memory Server via REST API calls.
- Automate Alexa skill deployment using Terraform, AWS Lambda, and the ASK CLI.
- Illustrate how Redis Cloud can support scalable AI use cases.
- Demonstrate how to implement context engineering with LangChain4J.
## Setup

| Account | Description |
|---|---|
| AWS account | Required to create Lambda, IAM, EC2, and CloudWatch resources. |
| Amazon developer account | Needed to register and deploy Alexa skills. |
| Redis Cloud | Hosts the Redis database used by the Redis Agent Memory Server. |
- Install the AWS CLI: Installation Guide
- Configure your credentials:

  ```shell
  aws configure
  ```

- Install the ASK CLI: Installation Guide
- Configure your credentials:

  ```shell
  ask configure
  ```
- Enable your APIs from Redis Cloud.
- Export them as environment variables:

  ```shell
  export REDISCLOUD_ACCESS_KEY=<YOUR_API_ACCOUNT_KEY>
  export REDISCLOUD_SECRET_KEY=<YOUR_API_USER_KEY>
  ```
- Create your variables file:

  ```shell
  cp infrastructure/terraform/terraform.tfvars.example infrastructure/terraform/terraform.tfvars
  ```

- Edit `infrastructure/terraform/terraform.tfvars` with your information:
| Variable | Description |
|---|---|
| `payment_card_type` | Credit card type linked to Redis Cloud (e.g., “Visa”). |
| `payment_card_last_four` | Last four digits of your card (e.g., “1234”). |
| `essentials_plan_cloud_provider` | Cloud provider for Redis Cloud (e.g., “AWS”). |
| `essentials_plan_cloud_region` | Region for hosting Redis (e.g., “us-east-1”). |
| `openai_api_key` | OpenAI API key used by the Alexa skill and the Agent Memory Server. |
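For reference, a filled-in `terraform.tfvars` might look like the following sketch. All values are the illustrative examples from the variable descriptions above; substitute your own before deploying:

```hcl
payment_card_type              = "Visa"
payment_card_last_four         = "1234"
essentials_plan_cloud_provider = "AWS"
essentials_plan_cloud_region   = "us-east-1"
openai_api_key                 = "sk-..."
```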
Once configured, deploy everything using:
```shell
./deploy.sh
```

When the deployment completes, note the output values, including the Lambda ARN, the Redis Agent Memory Server endpoint, and the SSH command for validation.
You can verify that the Agent Memory Server is operational by saying:
“Alexa, ask my jarvis to check the memory server.”
## Running the Demo

Invoke your Alexa device with the invocation name "my jarvis" and try commands like:
- "Alexa, tell my jarvis to remember that my favorite programming language is Java."
- "Alexa, ask my jarvis to recall if Java is my favorite programming language."
- "Alexa, tell my jarvis to remember I have a doctor appointment next Monday at 10 AM."
- "Alexa, ask my jarvis to suggest what I should do for my birthday party."
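The commands above split into two families: "tell … to remember" stores a memory, while "ask … to recall/suggest" searches stored memories. As a minimal sketch of that routing decision (class, enum, and keyword heuristics are illustrative, not the demo's actual code — in the real skill this distinction comes from the Alexa intent model):

```java
import java.util.Locale;

// Hypothetical sketch: route a user utterance to a memory operation.
// In the deployed skill, Alexa's intent model makes this decision;
// simple keyword matching stands in for it here.
public class MemoryRouter {

    public enum Operation { STORE, SEARCH, UNKNOWN }

    // Classify an utterance as a store ("remember") or search
    // ("recall"/"suggest") against the Agent Memory Server.
    public static Operation classify(String utterance) {
        String u = utterance.toLowerCase(Locale.ROOT);
        if (u.contains("remember")) {
            return Operation.STORE;
        }
        if (u.contains("recall") || u.contains("suggest")) {
            return Operation.SEARCH;
        }
        return Operation.UNKNOWN;
    }

    public static void main(String[] args) {
        System.out.println(classify("tell my jarvis to remember that my favorite programming language is Java"));
        System.out.println(classify("ask my jarvis to recall if Java is my favorite programming language"));
    }
}
```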
To remove all deployed resources:
```shell
./undeploy.sh
```

## Slide Deck

📑 Agent Memory Server with Alexa Presentation
Covers demo goals, motivations for a memory layer, and architecture overview.
## Architecture

This architecture uses an Alexa skill written in Java and hosted as an AWS Lambda function. The Lambda function implements a stream handler that processes user requests and responses, using the Agent Memory Server as memory storage.
The stream handler uses a Chat Assistant Service that leverages LangChain4J to manage interactions with the Agent Memory Server. This service implements context engineering, ensuring that conversations are enriched with relevant historical data stored in Redis. OpenAI provides the LLM used to process and generate responses.
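The essence of the context-engineering step can be sketched in plain Java: memories retrieved from the Agent Memory Server are folded into the prompt before it reaches the LLM. In the demo this wiring goes through LangChain4J; the class and method names below are hypothetical, showing only the prompt-assembly logic:

```java
import java.util.List;

// Illustrative sketch (not the demo's actual classes): enrich the prompt
// sent to the LLM with memories previously retrieved from the
// Agent Memory Server for this user.
public class ContextEngineering {

    // Build an enriched prompt from retrieved memories plus the
    // current user utterance.
    public static String buildPrompt(List<String> memories, String utterance) {
        StringBuilder sb = new StringBuilder("You are a helpful Alexa assistant.\n");
        if (!memories.isEmpty()) {
            sb.append("Relevant facts about this user:\n");
            for (String memory : memories) {
                sb.append("- ").append(memory).append('\n');
            }
        }
        sb.append("User: ").append(utterance);
        return sb.toString();
    }

    public static void main(String[] args) {
        // The enriched prompt lets the LLM answer with personal context.
        String prompt = buildPrompt(
                List.of("Favorite programming language is Java"),
                "What should I do for my birthday party?");
        System.out.println(prompt);
    }
}
```

With this shape, the LLM call itself (via LangChain4J's OpenAI integration in the demo) receives the user's history without the skill having to keep any state between Lambda invocations.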
## Known Issues

- Initial Agent Memory Server boot-up may take several minutes before becoming reachable.
- Alexa Developer Console may require manual linking if credentials are not fully synchronized.
## Maintainers
- Ricardo Ferreira — @riferrei
## License

This project is licensed under the MIT License.