Workshop: Building a React PWA Chat Application
This is a starter React Progressive Web Application (PWA) that uses AWS AppSync to implement offline and real-time capabilities in a chat application. It includes AI/ML features such as image recognition, text-to-speech, language translation, sentiment analysis, and conversational chatbots, and was developed as part of the re:Invent session Bridging the Gap Between Real Time/Offline and AI/ML Capabilities in Modern Serverless Apps. In the chat app, users can search for users and messages, have conversations with other users, upload images, and exchange messages.
The application demonstrates GraphQL Mutations, Queries and Subscriptions with AWS AppSync, integrating with other AWS services:
- Amazon Cognito for user management as well as Authentication and Authorisation (AuthN/Z)
- Amazon DynamoDB with multiple data sources (Users, Messages, Conversations, ConvoLink)
- Amazon Elasticsearch data source for full text search on messages and users
- Amazon S3 for Media Storage
- AWS Lambda as a Serverless integration layer for connecting to AI Services
- Amazon Comprehend for sentiment and entity analysis as well as language detection
- Amazon Rekognition for object, scene and celebrity detection on images
- Amazon Lex for conversational chatbots
- Amazon Polly for text-to-speech on messages
- Amazon Translate for language translation
Prerequisites
- AWS Account with appropriate permissions to create the related resources
- Node.js with npm
- AWS CLI with output configured as JSON (`pip install awscli --upgrade --user`)
- AWS Amplify CLI configured for a region where AWS AppSync and all other services in use are available (`npm install -g @aws-amplify/cli`)
- AWS SAM CLI (`pip install --user aws-sam-cli`); see the AWS SAM install instructions
- Create React App (`npm install -g create-react-app`)
- jq: macOS (`brew install jq`) or Windows (`choco install jq`)
- If using Windows, you'll need the Windows Subsystem for Linux (WSL)
Please note: if you wish to integrate the AI features as part of this workshop, it's recommended that you launch the entire solution in a region where all of the services are available. At the time of writing this workshop, Amazon Lex was only available in a limited set of regions.
Back-end Setup Instructions
Launch a Serverless Chat Application with AWS Amplify
First, clone this repository and navigate to the created folder:
```shell
git clone https://github.com/StefanBuchman/aws-appsync-chat-workshop.git
cd aws-appsync-chat-workshop
```
Set the region we are deploying resources to:

```shell
export AWS_REGION=$(jq -r '.providers.awscloudformation.Region' amplify/#current-cloud-backend/amplify-meta.json)
echo $AWS_REGION
```
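As a sanity check, here's what that jq filter does against a hypothetical `amplify-meta.json` (the values below are made up for illustration; your real file lives under `amplify/#current-cloud-backend/`):

```shell
# Hypothetical amplify-meta.json sample -- real values will differ
cat > /tmp/amplify-meta-sample.json <<'EOF'
{
  "providers": {
    "awscloudformation": {
      "Region": "us-east-1",
      "DeploymentBucketName": "mychatapp-deployment",
      "StackName": "mychatapp-20190101120000"
    }
  }
}
EOF
# Same filter as above: pull the Region string out of the provider metadata
REGION=$(jq -r '.providers.awscloudformation.Region' /tmp/amplify-meta-sample.json)
echo "$REGION"   # → us-east-1
```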
Make sure ALL services are supported in this region or else you'll get errors in the next steps.
Log into the console and head over to CloudFormation.
You should see a new stack has been created with:
- Deployment bucket
- Two IAM roles
Add an Amazon Cognito User Pool auth resource.
```shell
amplify add auth
```
"Do you want to use the default authentication and security configuration?" Answer: Use the default configuration
This will create a local resource for a Cognito user pool. We'll push this configuration up to the cloud shortly.
Add an AppSync GraphQL API with Amazon Cognito User Pool for the API Authentication. Follow the default options. When prompted with "Do you have an annotated GraphQL schema?", select "YES" and provide the schema file path
```shell
amplify add api
```
Amplify will create your API for you according to the schema defined. It will additionally build out the DynamoDB tables and populate the data sources when Step 7 is executed.
Add S3 Private Storage for Content (Images, audio, video, etc.) to the project with the default options. Select private read/write access for Auth users only:
```shell
amplify add storage
```
Now it's time to provision your cloud resources based on the local setup and configured features:

```shell
amplify push
```

When asked "Do you want to generate code for your newly created GraphQL API?", answer "No", as it would overwrite the current custom files in the src folder.

Wait for the provisioning to complete. Once done, a `src/aws-exports.js` file with the resource information is created.

At this point, go check out the resources that were created as part of the `amplify push` command:
- In the `src` directory you should have a file called `aws-exports.js`. This file contains references to the resources we created in the cloud.
- In AppSync you should see your new API, made up of the schema and data sources.
- In DynamoDB you should see 4 new tables to support the chat application. These were created by Amplify based off of the GraphQL schema.
- In Cognito a new user pool should have been created to store credentials for your chat users.
- In S3 there should be a deployment bucket as well as a new bucket to hold media from the chat application.
- In CloudFormation you should see 4 new stacks: a primary and 3 nested stacks. These provisioned the resources during the push. When cleaning up, these stacks will also tear down the provisioned resources.
Testing the chat app before adding AI features
Install your project package dependencies and run the application locally (for a Create React App project this is typically `npm install` followed by `npm start`):
Access your chat app at http://localhost:3000
Use two different browsers, or one in Incognito/InPrivate mode. Sign up at least 2 different users and authenticate with each user to get them registered in the backend Users table.
Search for your new users to start a conversation and test real-time/offline messaging.
Try to send an image; you should then be able to go to your S3 bucket and see the uploaded file.
Head back to Cognito and validate that you can see your two new users. You'll be able to view their details, validate that they confirmed their identity, and even initiate a reset of their password.
Add AI Features to your Serverless Chat Application
Getting AI supporting resources
Look up the S3 bucket name created for user storage:

```shell
export USER_FILES_BUCKET=$(sed -n 's/.*"aws_user_files_s3_bucket": "\(.*\)".*/\1/p' src/aws-exports.js)
echo $USER_FILES_BUCKET
```
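To see what the sed expression is extracting, here's a sketch against a hypothetical `aws-exports.js` fragment (the bucket name below is made up):

```shell
# Hypothetical fragment of src/aws-exports.js -- the bucket name is made up
cat > /tmp/aws-exports-sample.js <<'EOF'
const awsmobile = {
    "aws_project_region": "us-east-1",
    "aws_user_files_s3_bucket": "mychatapp-userfiles-bucket",
    "aws_user_files_s3_bucket_region": "us-east-1"
};
EOF
# The sed pattern captures whatever sits between the quotes after the key
BUCKET=$(sed -n 's/.*"aws_user_files_s3_bucket": "\(.*\)".*/\1/p' /tmp/aws-exports-sample.js)
echo "$BUCKET"   # → mychatapp-userfiles-bucket
```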
Retrieve the API ID of your AppSync GraphQL endpoint:

```shell
export GRAPHQL_API_ID=$(jq -r '.api[(.api | keys)[0]].output.GraphQLAPIIdOutput' ./amplify/#current-cloud-backend/amplify-meta.json)
echo $GRAPHQL_API_ID
```
Retrieve the project's deployment bucket and stack name. They will be used for packaging and deployment with SAM:

```shell
export DEPLOYMENT_BUCKET_NAME=$(jq -r '.providers.awscloudformation.DeploymentBucketName' ./amplify/#current-cloud-backend/amplify-meta.json)
export STACK_NAME=$(jq -r '.providers.awscloudformation.StackName' ./amplify/#current-cloud-backend/amplify-meta.json)
echo $DEPLOYMENT_BUCKET_NAME
echo $STACK_NAME
```
Now we need to deploy 3 Lambda functions (one for AppSync and two for Lex) and configure the AppSync Resolvers to use Lambda accordingly.
First, we install the npm dependencies for each lambda function. We then package and deploy the changes with SAM.
Please note: if you have defined an AWS profile for the AWS CLI, remember to add `--profile profile-name` to the SAM or CLI commands below.
First, head to the Lambda functions under the backend directory. Take a look through the functions and see what they're doing.
Let's get the dependencies installed and the functions packaged:

```shell
cd ./backend/chuckbot-lambda; npm install; cd ../..
cd ./backend/moviebot-lambda; npm install; cd ../..
sam package --template-file ./backend/deploy.yaml --s3-bucket $DEPLOYMENT_BUCKET_NAME --output-template-file packaged.yaml
export STACK_NAME_AIML="$STACK_NAME-extra-aiml"
sam deploy --template-file ./packaged.yaml --stack-name $STACK_NAME_AIML --capabilities CAPABILITY_IAM --parameter-overrides appSyncAPI=$GRAPHQL_API_ID s3Bucket=$USER_FILES_BUCKET --region $AWS_REGION
```
Head over to CloudFormation and validate that the stack has been created. Wait for the stack to complete deploying.
Head over to your new Lambda functions and validate they have been created.
We'll now retrieve the ARNs for the Lambda functions. We'll need the ARNs to allow Lex to point to the correct function to fulfill an intent:

```shell
export CHUCKBOT_FUNCTION_ARN=$(aws cloudformation describe-stacks --stack-name $STACK_NAME_AIML --query "Stacks[0].Outputs" --region $AWS_REGION | jq -r '.[] | select(.OutputKey == "ChuckBotFunction") | .OutputValue')
export MOVIEBOT_FUNCTION_ARN=$(aws cloudformation describe-stacks --stack-name $STACK_NAME_AIML --query "Stacks[0].Outputs" --region $AWS_REGION | jq -r '.[] | select(.OutputKey == "MovieBotFunction") | .OutputValue')
echo $CHUCKBOT_FUNCTION_ARN
echo $MOVIEBOT_FUNCTION_ARN
```
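The jq select pattern can be tried locally against a hypothetical Outputs array (the ARNs and account ID below are made up):

```shell
# Hypothetical Outputs array, shaped like the output of
# `aws cloudformation describe-stacks --query "Stacks[0].Outputs"` -- ARNs are made up
cat > /tmp/outputs-sample.json <<'EOF'
[
  { "OutputKey": "ChuckBotFunction",
    "OutputValue": "arn:aws:lambda:us-east-1:123456789012:function:ChuckBot" },
  { "OutputKey": "MovieBotFunction",
    "OutputValue": "arn:aws:lambda:us-east-1:123456789012:function:MovieBot" }
]
EOF
# Iterate the array, keep only the entry whose key matches, emit its value
CHUCK_ARN=$(jq -r '.[] | select(.OutputKey == "ChuckBotFunction") | .OutputValue' /tmp/outputs-sample.json)
echo "$CHUCK_ARN"   # → arn:aws:lambda:us-east-1:123456789012:function:ChuckBot
```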
Your ARNs should look something like `arn:aws:lambda:<region>:<account-id>:function:<function-name>`.
Let's set up Lex. We will create 2 chatbots: ChuckBot and MovieBot.
Execute the following commands to add permissions so Lex can invoke the chatbot related Lambda functions you created in the previous section:
```shell
aws lambda add-permission --statement-id Lex --function-name $CHUCKBOT_FUNCTION_ARN --action lambda:\* --principal lex.amazonaws.com --region $AWS_REGION
aws lambda add-permission --statement-id Lex --function-name $MOVIEBOT_FUNCTION_ARN --action lambda:\* --principal lex.amazonaws.com --region $AWS_REGION
```
Update the bots' intents with the Lambda ARNs:

```shell
jq -M --arg arn "$CHUCKBOT_FUNCTION_ARN" '.fulfillmentActivity.codeHook.uri = $arn' backend/ChuckBot/intent.json > tmp.txt && mv tmp.txt backend/ChuckBot/intent.json
jq -M --arg arn "$MOVIEBOT_FUNCTION_ARN" '.fulfillmentActivity.codeHook.uri = $arn' backend/MovieBot/intent.json > tmp.txt && mv tmp.txt backend/MovieBot/intent.json
```
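Here's the same update pattern against a hypothetical, stripped-down intent.json (field values made up), verifying the code hook URI gets set:

```shell
# Hypothetical stripped-down intent.json -- only the fields relevant to the code hook
cat > /tmp/intent-sample.json <<'EOF'
{
  "name": "ChuckBot",
  "fulfillmentActivity": {
    "type": "CodeHook",
    "codeHook": { "uri": "PLACEHOLDER", "messageVersion": "1.0" }
  }
}
EOF
ARN="arn:aws:lambda:us-east-1:123456789012:function:ChuckBot"
# Rewrite the code hook URI via a temp file, since jq can't edit in place
jq --arg arn "$ARN" '.fulfillmentActivity.codeHook.uri = $arn' /tmp/intent-sample.json > /tmp/intent-tmp.json
mv /tmp/intent-tmp.json /tmp/intent-sample.json
URI=$(jq -r '.fulfillmentActivity.codeHook.uri' /tmp/intent-sample.json)
echo "$URI"   # → arn:aws:lambda:us-east-1:123456789012:function:ChuckBot
```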
And deploy the slot types, intents and bots:

```shell
aws lex-models put-slot-type --cli-input-json file://backend/ChuckBot/slot-type.json --region $AWS_REGION
aws lex-models put-intent --cli-input-json file://backend/ChuckBot/intent.json --region $AWS_REGION
aws lex-models put-bot --cli-input-json file://backend/ChuckBot/bot.json --region $AWS_REGION
aws lex-models put-slot-type --cli-input-json file://backend/MovieBot/slot-type.json --region $AWS_REGION
aws lex-models put-intent --cli-input-json file://backend/MovieBot/intent.json --region $AWS_REGION
aws lex-models put-bot --cli-input-json file://backend/MovieBot/bot.json --region $AWS_REGION
```
Head over to Lex and walk through what has been created.
You should see two Lex chatbots (ChuckBot and MovieBot); the bots may still be building. While they build, take a look at:
- The intents: the actions your chatbot users will ask the chatbot to undertake. Intents can take slots (variables) to make the chatbot more dynamic.
- The slots: pieces of information we may need to fulfill an intent.
- Finally, make sure each chatbot has a Lambda function defined in the Fulfillment section. This is what will be executed once we know what our chatbot user is looking for.
Interacting with Chatbots
If the app isn't still running, install your project package dependencies and run the application locally again (typically `npm install` followed by `npm start`):
Access your chat app at http://localhost:3000
Resume your previous conversation or start a new one.
In order to initiate or respond to a chatbot conversation, you need to start the message with either `@chuckbot` or `@moviebot` to trigger or respond to the specific bot, for example:
- @chuckbot Give me a Chuck Norris fact
- @moviebot Tell me about a movie
Each subsequent response needs to start with the bot handle (@chuckbot or @moviebot) so the app can detect the message is directed to Lex and not to the other user in the same conversation. Both users will be able to view Lex chatbot responses in real-time powered by GraphQL subscriptions.
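As a rough sketch (not the app's actual implementation, which lives in the React client), the routing decision described above could be modeled like this:

```shell
# Sketch only -- messages starting with a bot handle are routed to Lex;
# everything else goes to the other user in the conversation.
route_message() {
  case "$1" in
    "@chuckbot"*) echo "lex:ChuckBot" ;;
    "@moviebot"*) echo "lex:MovieBot" ;;
    *)            echo "user" ;;
  esac
}
route_message "@chuckbot Give me a Chuck Norris fact"   # → lex:ChuckBot
route_message "Hi, how are you?"                        # → user
```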
Alternatively you can start a chatbot conversation from the message drop-down menu:
- Just selecting `ChuckBot` will display options for further interaction
- Sending a message with nothing but a movie name and then selecting `MovieBot` will retrieve the details about the movie
Interacting with AWS AI Services
The `./backend/ai-lambda/index.js` function contains the code that calls the higher-order AI services on AWS. Take a look at this function to see how it calls and interacts with those services.
The chat app integrates with this function either through the drop-down menu on a message or by clicking on an image.
To have Amazon Rekognition detect the contents of a photo, either upload a new photo inside of a chat and then click on the photo or select an existing photo by clicking on it.
To have Amazon Polly convert text to speech, from the drop-down menu under Listen, select Text to Speech to trigger Amazon Polly and listen to messages read out loud.
The voice is determined by the automatically detected source language (supported languages: English, Mandarin, Portuguese, French and Spanish); the mappings can be found in the Lambda function code.
To have Amazon Translate convert the message to another language, select the desired language under Translate in the drop-down menu (supported languages: English, Mandarin, Portuguese, French and Spanish). In the translation pane, click on the microphone icon to listen to the translated message.
To have the chatbot perform sentiment and entity analysis on a message using Amazon Comprehend, select Sentiment under the Analyze section on the drop-down menu.
Building, Deploying and Publishing with the Amplify CLI
To have your new chat/chatbot service hosted on AWS, follow the steps below.
Run `amplify add hosting` from the project's root folder and follow the prompts to create an S3 bucket (DEV) and/or a CloudFront distribution (PROD).
Build, deploy, upload and publish the application with a single command:

```shell
amplify publish
```

Note: if you are deploying a CloudFront distribution, be mindful that it needs to be replicated across all points of presence globally, which might take up to 15 minutes.
Access your public chat application using the S3 Website Endpoint URL or the CloudFront URL returned by the `amplify publish` command.
Share the link with friends, sign up some users, and start creating conversations, uploading images, translating, executing text-to-speech in different languages, performing sentiment analysis and exchanging messages. Be mindful that PWAs require SSL; in order to test PWA functionality, access the CloudFront URL (HTTPS) from a mobile device and add the site to the mobile home screen.
AWS Amplify Console
The AWS Amplify Console is a continuous deployment and hosting service for mobile web applications that lets you build a complete CI/CD pipeline directly from your GitHub repository. It makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of simultaneously updating the frontend and backend of your applications.
In order to get up and running using the AWS Amplify Console:
Fork this repository into your own GitHub account and clone it
Install the Amplify CLI with multienv support:

```shell
npm install -g @aws-amplify/cli@multienv
```
Repeat Steps 3 to 6 from the Back-end Setup Instructions in the previous section.
Do not perform step 7 (`amplify push`).
Commit the changes to your forked repository. A new folder `amplify` will be committed with the project details.
Connect your repository to the Amplify Console as per the instructions here, making sure the name of the branch in your repository matches the name of the environment configured on `amplify init` (i.e. master).
When prompted with "We detected a backend created with the Amplify Framework. Would you like Amplify Console to deploy these resources with your frontend?", select "YES" and provide or create an IAM role with appropriate permissions to build the backend resources
- Wait for the build, deployment and verification steps
At this point you have a usable serverless chat application with no AI features. The next steps are only needed to deploy and configure the integration with services that provide image recognition, text-to-speech, language translation, sentiment analysis and conversational chatbots. You can skip to step 8 if you don't want to set up the AI integration.
Now perform steps 7 to 12 from the Back-end Setup Instructions and continue with the instructions to setup the Lex chatbot.
Access your app from the hosted site generated by the Amplify Console (https://master.xxxxxxxx.amplifyapp.com).
To clean up the project, delete the bots and the stack created by the SAM CLI:

```shell
aws lex-models delete-bot --name `jq -r .name backend/ChuckBot/bot.json` --region $AWS_REGION
aws lex-models delete-bot --name `jq -r .name backend/MovieBot/bot.json` --region $AWS_REGION
aws lex-models delete-intent --name `jq -r .name backend/ChuckBot/intent.json` --region $AWS_REGION
aws lex-models delete-intent --name `jq -r .name backend/MovieBot/intent.json` --region $AWS_REGION
aws lex-models delete-slot-type --name `jq -r .name backend/ChuckBot/slot-type.json` --region $AWS_REGION
aws lex-models delete-slot-type --name `jq -r .name backend/MovieBot/slot-type.json` --region $AWS_REGION
aws cloudformation delete-stack --stack-name $STACK_NAME_AIML --region $AWS_REGION
```

Then run `amplify delete` to delete the resources created by the Amplify CLI.
We hope you have enjoyed this workshop!
If you have any suggestions on how we can make it better please let us know.