AI, AWS, API, Python, streamlit, UI. Claude 3 Generative AI App via Amazon Bedrock. A research assistant for analyzing a large number of PDF documents.

tim-andes/uap-playground

Thanks to Trevor Spires for the POC template we're starting with!

Amazon-Bedrock-Claude3-Multi-Modal-Sample

This is sample code demonstrating the use of Amazon Bedrock and Anthropic Claude 3 for multi-modal use cases. The application is built around a simple streamlit frontend where users can submit zero-shot requests covering a broad range of tasks, including image-to-text multi-modal use cases.


Goal of this Repo:

The goal of this repo is to let users leverage the multi-modal capabilities of Amazon Bedrock (specifically Claude 3) and generative AI: users can submit text questions, images, or both, and get a comprehensive description and/or answer based on the image and/or question that was passed in. This repo comes with a basic frontend to help users stand up a proof of concept in just a few minutes.

The architecture and flow of the sample application is shown in the diagram below:

[Architecture diagram]

When a user interacts with the GenAI app, the flow is as follows:

  1. (1a) The user uploads an image file to the streamlit app, with or without a text question (app.py). (1b) The user inserts a text question into the streamlit app, with or without an image (app.py).
  2. The streamlit app takes the image file and/or text and saves it. The image and/or text is passed into Amazon Bedrock (Anthropic Claude 3) (llm_multi_modal_invoke.py); a sketch of this invocation appears after this list.
  3. A natural language response is returned to the end user, either describing the image, answering a question about the image, or answering a general question (app.py).
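
To make step 2 concrete, here is a minimal sketch of what the Bedrock invocation inside llm_multi_modal_invoke.py might look like. The function name, model ID, and media type are assumptions for illustration, not the repo's exact code:

import base64
import json
import boto3

def image_to_text(image_path=None, question=None, profile_name="default"):
    # Hypothetical helper; the repo's actual function name and signature may differ.
    session = boto3.Session(profile_name=profile_name)
    client = session.client("bedrock-runtime")

    content = []
    if image_path:
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode("utf-8")
        content.append({
            "type": "image",
            # media_type should match the uploaded file type; JPEG is assumed here.
            "source": {"type": "base64", "media_type": "image/jpeg", "data": image_b64},
        })
    if question:
        content.append({"type": "text", "text": question})

    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1000,
        "messages": [{"role": "user", "content": content}],
    })
    response = client.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # any Claude 3 model ID works
        body=body,
    )
    return json.loads(response["body"].read())["content"][0]["text"]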

How to use this Repo:

Prerequisites:

  1. Amazon Bedrock access and AWS CLI credentials (a quick way to confirm access is sketched after this list).
  2. Ensure Python 3.10 is installed on your machine; it is the most stable version of Python for the packages we will be using, and it can be downloaded here.
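
If you want to confirm that your CLI profile can reach Bedrock before going further, a quick check like the following should work (this is a standalone sketch, not part of the repo; substitute your own profile name):

import boto3

session = boto3.Session(profile_name="<AWS_CLI_PROFILE_NAME>")  # your CLI profile
bedrock = session.client("bedrock")  # control-plane client, distinct from "bedrock-runtime"
models = bedrock.list_foundation_models()["modelSummaries"]
print([m["modelId"] for m in models if "claude-3" in m["modelId"]])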

Step 1:

The first step of utilizing this repo is performing a git clone of the repository.

git clone https://github.com/aws-samples/genai-quickstart-pocs.git

After cloning the repo onto your local machine, open it up in your favorite code editor. The file structure of this repo is broken into 3 key files: app.py, llm_multi_modal_invoke.py, and requirements.txt. The app.py file houses the frontend application (a streamlit app). The llm_multi_modal_invoke.py file houses the logic of the application, including the image encoding and Amazon Bedrock API invocations. The requirements.txt file contains all the dependencies this sample application needs.
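
For orientation, here is a minimal sketch of the kind of streamlit frontend app.py implements. The imported helper image_to_text is the assumed name from the sketch above; the repo's actual names and layout may differ:

import streamlit as st
from llm_multi_modal_invoke import image_to_text  # assumed helper name

st.title("Claude 3 Multi-Modal Playground")

uploaded_file = st.file_uploader("Upload an image (optional)", type=["png", "jpg", "jpeg"])
question = st.text_input("Ask a question (optional)")

if st.button("Submit") and (uploaded_file or question):
    image_path = None
    if uploaded_file is not None:
        # Save the upload locally so it can be base64-encoded before the Bedrock call.
        image_path = uploaded_file.name
        with open(image_path, "wb") as f:
            f.write(uploaded_file.getbuffer())
    st.write(image_to_text(image_path, question))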

Step 2:

Set up a python virtual environment in the root directory of the repository and ensure that you are using Python 3.10 (matching the prerequisite above). This can be done by running the following commands:

pip install virtualenv
python3.10 -m venv venv

The virtual environment will be extremely useful when you begin installing the requirements. If you need more clarification on the creation of the virtual environment, please refer to this blog. After the virtual environment is created, ensure that it is activated. With the venv created above, this is typically:

source venv/bin/activate

(on Windows: venv\Scripts\activate)

After your virtual environment has been created and activated, you can install all the requirements found in the requirements.txt file by running this command in the root of this repo's directory in your terminal:

pip install -r requirements.txt

Step 3:

Now that the requirements have been successfully installed in your virtual environment, we can begin configuring environment variables. Create a .env file in the root of this repo and configure it to contain:

profile_name=<AWS_CLI_PROFILE_NAME>
save_folder=<PATH_TO_ROOT_OF_THIS_REPO>

Please ensure that your AWS CLI Profile has access to Amazon Bedrock!
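
For reference, here is a sketch of how these two variables are likely consumed inside the app, assuming the python-dotenv package is used to load the .env file; the exact loading code in the repo may differ:

import os
import boto3
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the repo root

profile_name = os.getenv("profile_name")  # AWS CLI profile with Bedrock access
save_folder = os.getenv("save_folder")    # where uploaded images get saved

session = boto3.Session(profile_name=profile_name)
bedrock_runtime = session.client("bedrock-runtime")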

Step 4:

Once you have successfully cloned the repo, created and activated a virtual environment, installed the requirements.txt, and created a .env file, your application should be ready to go. To start up the application with its basic frontend, simply run the following command in your terminal while in the root of the repository's directory:

streamlit run app.py

As soon as the application is up and running in your browser of choice, you can begin uploading images and/or text questions and generating natural language responses describing the images or answering the specific questions you asked.
