diff --git a/content/dynamodb-opensearch-zetl/integrations/index.en.md b/content/dynamodb-opensearch-zetl/integrations/index.en.md index d4c203d5..4a21617d 100644 --- a/content/dynamodb-opensearch-zetl/integrations/index.en.md +++ b/content/dynamodb-opensearch-zetl/integrations/index.en.md @@ -4,4 +4,6 @@ menuTitle: "Integrations" date: 2024-02-23T00:00:00-00:00 weight: 30 --- -In this section, you will configure integrations between services. You'll first set up ML and Pipeline connectors in OpenSearch Service followed by a zero ETL connector to move data written to DynamoDB to OpenSearch. Once these integrations are set up, you'll be able to write records to DynamoDB as your source of truth and then automatically have that data available to query in other services. \ No newline at end of file +In this section, you will configure integrations between services. First you will set up machine learning (ML) and Pipeline connectors in OpenSearch Service. Then you will set up a zero-ETL connector to move data stored in DynamoDB into OpenSearch for indexing. Once both of these integrations are set up, you'll be able to write records to DynamoDB as your source of truth and then automatically have that data available to query in the other services. + +![Integrations](/static/images/connectionsandpipelines.png) \ No newline at end of file diff --git a/content/dynamodb-opensearch-zetl/integrations/os-connectors.en.md b/content/dynamodb-opensearch-zetl/integrations/os-connectors.en.md index d54b5d8e..28fb34c0 100644 --- a/content/dynamodb-opensearch-zetl/integrations/os-connectors.en.md +++ b/content/dynamodb-opensearch-zetl/integrations/os-connectors.en.md @@ -4,19 +4,21 @@ menuTitle: "Load DynamoDB Data" date: 2024-02-23T00:00:00-00:00 weight: 20 --- -In this section you'll configure ML and Pipeline connectors in OpenSearch Service. These configurations are set up by a series of POST and PUT requests that are authenticated with AWS Signature Version 4 (sig-v4). Sigv4 is the standard authentication mechanism used by AWS services. While in most cases an SDK abstracts away sig-v4 but in this case we will be building the requests ourselves with curl. +In this section you'll configure OpenSearch so it will preprocess and enrich data as it is written to its indexes by connecting to an externally hosted machine learning embeddings model. This is a simpler application design than having your application write the embeddings as an attribute in the Item within DynamoDB. Instead, the data is kept as text in DynamoDB, and when it arrives in OpenSearch, OpenSearch will connect out to Bedrock to generate and store the embeddings. -Building a sig-v4 signed request requires a session token, access key, and secret access key. You'll first retrieve these from your Cloud9 Instance metadata with the provided "credentials.sh" script which exports required values to environmental variables. In the following steps, you'll also export other values to environmental variables to allow for easy substitution into listed commands. +More information on this design can be found at [ML and Pipeline connectors in OpenSearch Service](https://opensearch.org/docs/latest/ml-commons-plugin/remote-models/index/). - 1. Run the credentials.sh script to retrieve and export credentials. These credentials will be used to sign API requests to the OpenSearch cluster. Note the leading "." before "./credentials.sh", this must be included to ensure that the exported credentials are available in the currently running shell. - ```bash - . 
./credentials.sh - ``` - 1. Next, export an environmental variable with the OpenSearch endpoint URL. This URL is listed in the CloudFormation Stack Outputs tab as "OSDomainEndpoint". This variable will be used in subsequent commands. - ```bash - export OPENSEARCH_ENDPOINT="https://search-ddb-os-xxxx-xxxxxxxxxxxxx.us-west-2.es.amazonaws.com" - ``` - 1. Execute the following curl command to create the OpenSearch ML model connector. +We will perform these configurations using a series of POST and PUT requests made to OpenSearch endpoints. The calls will be made using the IAM role that was previously mapped to the OpenSearch "all_access" role. + +The calls are authenticated with AWS Signature Version 4 (sig-v4). Sigv4 is the standard authentication mechanism used by AWS services. In most cases an SDK abstracts away the sig-v4 details, but in this case we will be building the requests ourselves with curl. + +Building a sig-v4 signed request requires a session token, access key, and secret access key. These are available to your VS Code Instance as metadata. These values were retrieved by the "credentials.sh" script you ran during setup. It pulled the required values and then exported them as environmental variables for your use. In the following steps, you'll also export other values to environmental variables to allow for easy substitution into the various commands. + +If any of the following commands fail, try re-running the credentials.sh script in the :link[Environment Setup]{href="/setup/step1"} step. + +As you run these steps, be very careful about typos. Also remember that you can use the Copy icon in the corner of each code block. + + 1. Execute the following curl command to **create the OpenSearch ML model connector**. You can use ML connectors to connect OpenSearch Service to a model hosted on Bedrock or a model hosted on a third-party platform. Here we are connecting to the Titan embedding model hosted on Bedrock. ```bash curl --request POST \ ${OPENSEARCH_ENDPOINT}'/_plugins/_ml/connectors/_create' \ @@ -53,11 +55,11 @@ Building a sig-v4 signed request requires a session token, access key, and secre ] }' ``` - 1. Note the "connector_id" returned in the previous command. Export it to an environmental variable for convenient substitution in future commands. + 1. Note the **"connector_id"** returned in the previous command. **Export it to an environmental variable** for convenient substitution in future commands. ```bash export CONNECTOR_ID='xxxxxxxxxxxxxx' ``` - 1. Run the next curl command to create the model group. + 1. Run the next curl command to **create the model group**. ```bash curl --request POST \ ${OPENSEARCH_ENDPOINT}'/_plugins/_ml/model_groups/_register' \ @@ -71,7 +73,7 @@ Building a sig-v4 signed request requires a session token, access key, and secre "description": "This is an example description" }' ``` - 1. Note the "model_group_id" returned in the previous command. Export it to an environmental variable for later substitution. + 1. Note the **"model_group_id"** returned in the previous command. **Export it to an environmental variable** for later substitution. ```bash export MODEL_GROUP_ID='xxxxxxxxxxxxx' ``` @@ -92,15 +94,17 @@ Building a sig-v4 signed request requires a session token, access key, and secre "connector_id": "'${CONNECTOR_ID}'" }' ``` - 1. Note the "model_id" and export it. + 1. Note the **"model_id"** (NOT the task_id) and export it. ```bash export MODEL_ID='xxxxxxxxxxxxx' ``` - 1. 
Run the following command to verify that you have successfully exported the connector, model group, and model id. + 1. Run the following command to **verify that you have successfully exported the connector, model group, and model id**. ```bash echo -e "CONNECTOR_ID=${CONNECTOR_ID}\nMODEL_GROUP_ID=${MODEL_GROUP_ID}\nMODEL_ID=${MODEL_ID}" ``` - 1. Next, we'll deploy the model with the following curl. + + ::alert[_Make sure the environment variables are exported correctly. Otherwise, the next commands will fail._] + 1. Next, we'll **deploy the model** with the following curl. ```bash curl --request POST \ ${OPENSEARCH_ENDPOINT}'/_plugins/_ml/models/'${MODEL_ID}'/_deploy' \ @@ -111,11 +115,13 @@ Building a sig-v4 signed request requires a session token, access key, and secre --user "${METADATA_AWS_ACCESS_KEY_ID}:${METADATA_AWS_SECRET_ACCESS_KEY}" ``` - With the model created, OpenSearch can now use Bedrock's Titan embedding model for processing text. An embeddings model is a type of machine learning model that transforms high-dimensional data (like text or images) into lower-dimensional vectors, known as embeddings. These vectors capture the semantic or contextual relationships between the data points in a more compact, dense representation. + With the model created, **OpenSearch can now use Bedrock's Titan embedding model** for processing text. - The embeddings represent the semantic meaning of the input data, in this case product descriptions. Words with similar meanings are represented by vectors that are close to each other in the vector space. For example, the vectors for "sturdy" and "strong" would be closer to each other than to "warm". + **An embeddings model** is a type of machine learning model that transforms high-dimensional data (like text or images) into lower-dimensional vectors, known as embeddings. These vectors capture the semantic or contextual relationships between the data points in a more compact, dense representation. - 1. Now we can test the model. If you recieve results back with a "200" status code, everything is working properly. + The embeddings represent the semantic meaning of the input data, in this case product descriptions. Words with similar meanings are represented by vectors that are close to each other in the vector space. For example, the vectors for "sturdy" and "strong" would be closer to each other than to "stringy". + + 1. Now we can *test the model*. With the command below, we are sending some text to OpenSearch and asking it to return the vector embeddings using the configured "MODEL_ID". If you receive results back with a "200" status code, everything is working properly. ```bash curl --request POST \ ${OPENSEARCH_ENDPOINT}'/_plugins/_ml/models/'${MODEL_ID}'/_predict' \ @@ -130,7 +136,9 @@ Building a sig-v4 signed request requires a session token, access key, and secre } }' ``` - 1. Next, we'll create the Details table mapping pipeline. + ::alert[_The output will include the vector embeddings as well, so scroll through it to find the status code._] + + 1. Next, we'll create the **ProductDetails table mapping ingest pipeline**. An **ingest pipeline** is a sequence of processors that are applied to documents as they are ingested into an index. This uses the configured model to generate the embeddings. Once this is created, as new data arrives in OpenSearch from the DynamoDB "ProductDetails" table, the embeddings will be created and indexed. 
```bash curl --request PUT \ ${OPENSEARCH_ENDPOINT}'/_ingest/pipeline/product-en-nlp-ingest-pipeline' \ @@ -158,7 +166,8 @@ Building a sig-v4 signed request requires a session token, access key, and secre ] }' ``` - 1. Followed by the Reviews table mapping pipeline. We won't use this in this version of the lab, but in a real system you will want to keep your embeddings indexes separate for different queries. + ::alert[_Here, we have created a processor that takes the source text and creates an embedding, which will be stored under 'product_embedding'_] + 1. Followed by the **Reviews table mapping pipeline**. We won't use this in this version of the lab, but in a real system you will want to keep your embeddings indexes separate for different queries. Note the different pipeline path in the endpoint. ```bash curl --request PUT \ ${OPENSEARCH_ENDPOINT}'/_ingest/pipeline/product-reviews-nlp-ingest-pipeline' \ @@ -177,7 +186,7 @@ Building a sig-v4 signed request requires a session token, access key, and secre }, { "text_embedding": { - "model_id": "m6jIgowBXLzE-9O0CcNs", + "model_id": "'${MODEL_ID}'", "field_map": { "combined_field": "product_reviews_embedding" } @@ -187,4 +196,4 @@ Building a sig-v4 signed request requires a session token, access key, and secre }' ``` - These pipelines allow OpenSearch to preprocess and enrich data as it is written to the index by adding embeddings through the Bedrock connector. \ No newline at end of file +**These pipelines allow OpenSearch to preprocess and enrich data as it is written to the index by adding embeddings through the Bedrock connector**. diff --git a/content/dynamodb-opensearch-zetl/integrations/zetl.en.md b/content/dynamodb-opensearch-zetl/integrations/zetl.en.md index 63d6b9a7..6d29dc27 100644 --- a/content/dynamodb-opensearch-zetl/integrations/zetl.en.md +++ b/content/dynamodb-opensearch-zetl/integrations/zetl.en.md @@ -6,112 +6,96 @@ weight: 30 --- Amazon DynamoDB offers a zero-ETL integration with Amazon OpenSearch Service through the DynamoDB plugin for OpenSearch Ingestion. Amazon OpenSearch Ingestion offers a fully managed, no-code experience for ingesting data into Amazon OpenSearch Service. - 1. Open [OpenSearch Service Ingestion Pipelines](https://us-west-2.console.aws.amazon.com/aos/home?region=us-west-2#opensearch/ingestion-pipelines) - 1. Click "Create pipeline" - - ![Create pipeline](/static/images/ddb-os-zetl13.jpg) - - 1. Name your pipeline, and include the following for your pipeline configuration. The configuration contains multiple values that need to be updated. The needed values are provided in the CloudFormation Stack Outputs as "Region", "Role", "S3Bucket", "DdbTableArn", and "OSDomainEndpoint". - ```yaml - version: "2" - dynamodb-pipeline: - source: - dynamodb: - acknowledgments: true - tables: - # REQUIRED: Supply the DynamoDB table ARN - - table_arn: "{DDB_TABLE_ARN}" - stream: - start_position: "LATEST" - export: - # REQUIRED: Specify the name of an existing S3 bucket for DynamoDB to write export data files to - s3_bucket: "{S3BUCKET}" - # REQUIRED: Specify the region of the S3 bucket - s3_region: "{REGION}" - # Optionally set the name of a prefix that DynamoDB export data files are written to in the bucket. - s3_prefix: "pipeline" - aws: - # REQUIRED: Provide the role to assume that has the necessary permissions to DynamoDB, OpenSearch, and S3. 
- sts_role_arn: "{ROLE}" - # REQUIRED: Provide the region - region: "{REGION}" - sink: - - opensearch: - hosts: - # REQUIRED: Provide an AWS OpenSearch endpoint, including https:// - [ - "{OS_DOMAIN_ENDPOINT}" - ] - index: "product-details-index-en" - index_type: custom - template_type: "index-template" - template_content: | - { - "template": { - "settings": { - "index.knn": true, - "default_pipeline": "product-en-nlp-ingest-pipeline" - }, - "mappings": { - "properties": { - "ProductID": { - "type": "keyword" - }, - "ProductName": { - "type": "text" - }, - "Category": { - "type": "text" - }, - "Description": { - "type": "text" - }, - "Image": { - "type": "text" - }, - "combined_field": { - "type": "text" - }, - "product_embedding": { - "type": "knn_vector", - "dimension": 1536, - "method": { - "engine": "nmslib", - "name": "hnsw", - "space_type": "l2" - } - } - } - } - } - } - aws: - # REQUIRED: Provide the role to assume that has the necessary permissions to DynamoDB, OpenSearch, and S3. - sts_role_arn: "{ROLE}" - # REQUIRED: Provide the region - region: "{REGION}" - ``` - 1. Under Network, select "Public access", then click "Next". - - ![Create pipeline](/static/images/ddb-os-zetl14.jpg) - - 1. Click "Create pipeline". +Please follow these steps to set up zero-ETL. Here we use the AWS Console instead of curl commands: + + 1. Open [OpenSearch Service](https://us-west-2.console.aws.amazon.com/aos/home?region=us-west-2#opensearch) within the Console + + 2. Select **Pipelines** from the left pane and click on **"Create pipeline"**. +![Create pipeline](/static/images/ddb-os-zetl13.jpg) + + 3. Select **"Blank"** from the Ingestion pipeline blueprints. +![BluePrint Selection](/static/images/CreatePipeline.png) + + 4. Configure the source by selecting **"Amazon DynamoDB"** as the source and filling in the details as shown below. Once done, click "Next". +![Configure source](/static/images/configure_source.png) + + 5. Skip the **Processor** configuration + +![Skip processor](/static/images/processor_blank.png) + + 6. Configure the sink by filling in the OpenSearch details as shown below: +![Configure Sink](/static/images/configure_sink.png) + + 7. Use the following content under **Schema mapping**: + +```yaml { "template": { "settings": { "index.knn": true, "default_pipeline": "product-en-nlp-ingest-pipeline" }, "mappings": { "properties": { "ProductID": { "type": "keyword" }, "ProductName": { "type": "text" }, "Category": { "type": "text" }, "Description": { "type": "text" }, "Image": { "type": "text" }, "combined_field": { "type": "text" }, "product_embedding": { "type": "knn_vector", "dimension": 1536 } } } } } ``` + +Once done, click **"Next"**. + + 8. Configure the pipeline and then click "Next". + + ![Configure pipeline](/static/images/ddb-os-zetl14.jpg) + + + 9. Click "Create pipeline". ![Create pipeline](/static/images/ddb-os-zetl15.jpg) - 1. **Wait until the pipeline has finished creating**. This will take 5 minutes or more. + 10. **Wait until the pipeline has finished creating and its status is "Active"**. This will take 5 minutes or more. - After the pipeline is created, it will take some additional time for the initial export from DynamoDB and import into OpenSearch Service. After you have waited several more minutes, you can check if items have replicated into OpenSearch by making a query in Dev Tools in the OpenSearch Dashboards. 
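If you prefer to watch the pipeline status from the terminal instead of the console, the OpenSearch Ingestion (OSIS) CLI can report it. A minimal sketch, assuming you named your pipeline `dynamodb-pipeline` during configuration (substitute whatever name you chose):

```bash
# Print the current status of the OpenSearch Ingestion pipeline; repeat until it reports ACTIVE
aws osis get-pipeline \
  --pipeline-name dynamodb-pipeline \
  --query 'Pipeline.Status' \
  --output text
```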
+ After the pipeline is created, it will take some additional time for the initial export from DynamoDB and import into OpenSearch Service. After you have waited several more minutes, you can check if items have replicated into OpenSearch by making a query using the OpenSearch Dashboards feature called Dev Tools. - To open Dev Tools, click on the menu in the top left of OpenSearch Dashboards, scroll down to the `Management` section, then click on `Dev Tools`. Enter the following query in the left pane, then click the "play" arrow. +- To open Dev Tools, click on the menu in the top left of OpenSearch Dashboards, scroll down to the `Management` section, then click on `Dev Tools`. + + ![Devtools](/static/images/Devtools.png) + +- Enter the following query in the left pane, then click the "play" arrow to execute it. ```text GET /product-details-index-en/_search ``` -You may encounter a few types of results: -- If you see a 404 error of type *index_not_found_exception*, then you need to wait until the pipeline is `Active`. Once it is, this exception will go away. -- If your query does not have results, wait a few more minutes for the initial replication to finish and try again. + +- The output will be the list of documents that have all the fields mentioned under the zero-ETL pipeline mapping. + + You may encounter a few types of results: + - If you see a 404 error of type *index_not_found_exception*, then you need to wait until the pipeline is `Active`. Once it is, this exception will go away. + - If your query does not have results, wait a few more minutes for the initial replication to finish and try again. ![Create pipeline](/static/images/ddb-os-zetl16.jpg) diff --git a/content/dynamodb-opensearch-zetl/queries/index.en.md b/content/dynamodb-opensearch-zetl/queries/index.en.md index 6ffea68b..6b031c92 100644 --- a/content/dynamodb-opensearch-zetl/queries/index.en.md +++ b/content/dynamodb-opensearch-zetl/queries/index.en.md @@ -4,13 +4,14 @@ menuTitle: "Query and Conclusion" date: 2024-02-23T00:00:00-00:00 weight: 40 --- -Now that you've created all required connectors and pipelines and data has replicated from DynamoDB into OpenSearch Service, you have quite a few options for how you want to query your data. You can do key/value looksups directly to DynamoDB, execute search queries against OpenSearch, and use Bedrock togther with Opensearch for natural language product recommendation. +At this point you've set up all the required connectors and pipelines, and data has replicated from DynamoDB into OpenSearch Service, so it's time to reap the rewards and query your data in various ways based on the use case. +- Do key/value lookups directly against DynamoDB +- Execute search queries against OpenSearch +- Use Bedrock together with OpenSearch for natural language product recommendations. -This query will use OpenSearch as a vector database to find the product that most closely matches your desired intent.The contents of the OpenSearch index were created through the DynamoDB Zero ETL connector. When records are added to DynamoDB, the connector automatically moves them into OpenSearch. OpenSearch then uses the Titan Embeddings model to decorate that data. +Please follow these steps to test the query system: -The script constructs a query that searches the OpenSearch index for products that are most relevant to your input text. This is done using a "neural" query, which leverages the embeddings stored in OpenSearch to find products with similar textual content. 
After retrieving relevant products, the script uses Bedrock to generate a more sophisticated response through the Claude model. This involves creating a prompt that combines your original query with the retrieved data and sending this prompt to Bedrock for processing. - - 1. Return to the Cloud9 IDE Console. + 1. Return to the VS Code IDE Console. 1. First, let's make a request to DynamoDB directly

 --key '{"ProductID": {"S": "S020"}}' ``` - This is an example of a key/value lookup that DynamoDB excels at. It returns product details for a specific product, identified by its ProductID. + This is an example of a key/value lookup where DynamoDB excels. It returns product details for a specific product, identified by its ProductID. - 1. Next, let's make a search query to OpenSearch. We'll find skirts that include "Spandex" in their description. + 1. Next, let's make a search query to OpenSearch. Ask for skirts that include "Spandex" in their description. Here, we are **not performing a neural search** but a text search against OpenSearch. ```bash curl --request POST \ @@ -55,23 +56,53 @@ The script constructs a query that searches the OpenSearch index for products th }' | jq . ``` - Try changing `Spandex` to `Polyester` and see how the results change. + Try changing `Spandex` to `Polyester` and see how the results change. 1. Finally, let's ask Bedrock to provide some product recommendations using one of the scripts provided with the lab. - This query will use OpenSearch as a vector database to find the product that most closely matches your desired intent. The contents of the OpenSearch index were created through the DynamoDB Zero ETL connector. When records are added to DynamoDB, the connector automatically moves them into OpenSearch. OpenSearch then uses the Titan Embeddings model to decorate that data. - The script constructs a query that searches the OpenSearch index for products that are most relevant to your input text. This is done using a "neural" query, which leverages the embeddings stored in OpenSearch to find products with similar textual content. After retrieving relevant products, the script uses Bedrock to generate a more sophisticated response through the Claude model. This involves creating a prompt that combines your original query with the retrieved data and sending this prompt to Bedrock for processing. + This query will use OpenSearch as a vector database to find the product that most closely matches your desired intent. The contents of the OpenSearch index were created through the DynamoDB Zero-ETL connector. When records are added to DynamoDB, the connector automatically moves them into OpenSearch. OpenSearch then uses the Titan Embeddings model to decorate that data. + + The script constructs a query that searches the OpenSearch index for products that are most relevant to your input text. This is done using a **"neural" query**, which leverages the **embeddings stored in OpenSearch** to find products with similar textual content. After retrieving relevant products, the script uses **Bedrock** to generate a more sophisticated response through the **Claude model**. This involves creating a prompt that combines your original query with the retrieved data and sending this prompt to Bedrock for processing. This is called **Retrieval-Augmented Generation (RAG)** and combines LLMs with external knowledge bases to improve their outputs. 
Here, we are using OpenSearch as our knowledge base to provide the relevant context to the LLM to generate an accurate response. + + - If you open **"bedrock_query.py"** and check the query, you will find the below snippet: + ```python + query = { + "size": 5, + "sort": [ + { + "_score": { + "order": "desc" + } + } + ], + "_source": { + "includes": ["ProductName", "Category", "Description", "ProductID","Image"] + }, + "query": { + "neural": { + "product_embedding": { + "query_text": input_text, + "model_id": model_id, + "k": 10 + } + } + } + } + ``` + - This is a **neural-based retrieval** that finds the most relevant documents with the help of `product_embedding` (which contains the vector embeddings). It also provides the `model_id` and `"k"` to indicate the number of results to return. + + + Let's execute the script. Go to VS Code and execute the Python script like so. The result will have the LLM response and the OpenSearch-retrieved data as well. - In the console, execute the provided python script to make a query to Bedrock and return product results. ```bash python bedrock_query.py product_recommend en "I need a warm winter coat" $METADATA_AWS_REGION $OPENSEARCH_ENDPOINT $MODEL_ID | jq . ``` ![Query results](/static/images/ddb-os-zetl17.jpg) - 1. Try adding a new item to your DynamoDB table. + 1. Try adding a new item to your DynamoDB table, some wool socks. ```bash aws dynamodb put-item \ @@ -85,10 +116,10 @@ The script constructs a query that searches the OpenSearch index for products th }' ``` - 1. Try modifying the DynamoDB get-item above to retrieve your new item. Next, try modifying the OpenSearch query to search for "Socks" that contain "Wool". Finally, tell Bedrock "I need warm socks for hiking in winter". Did it recommend your new item? + 1. Try modifying the DynamoDB get-item above to retrieve your new item. It should appear. Then try modifying the OpenSearch query to search for "Socks" that contain "Wool". Finally, tell Bedrock "I need warm socks for hiking in winter". Did it recommend your new item? ::alert[Don't just stop there with your queries. Try asking for clothing for winter (will it recommend products with wool?) or for bedtime. Note that there is a very small catalog of products to be embedded, so your search terms should be limited based on what you saw when you reviewed the DynamoDB table.]{header="Keep querying!" type="info"} -Congratulations! You have completed the lab. +Congratulations! You have completed the workshop. ::alert[_If running in your own account, remember to delete the CloudFormation Stack after completing the lab to avoid unexpected charges._]{type="warning"} \ No newline at end of file diff --git a/content/dynamodb-opensearch-zetl/service-config/bedrock.en.md b/content/dynamodb-opensearch-zetl/service-config/bedrock.en.md index fd292461..b8694d2e 100644 --- a/content/dynamodb-opensearch-zetl/service-config/bedrock.en.md +++ b/content/dynamodb-opensearch-zetl/service-config/bedrock.en.md @@ -6,15 +6,6 @@ weight: 30 --- Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. -In this application, Bedrock will be used to make natural language product recommendation queries using OpenSearch Service as a vector database. 
+In this application, Bedrock will be used to make natural language product recommendation queries using OpenSearch Service as a vector database. We will use the Titan Embeddings G1 - Text model for text embeddings and Claude Haiku for generative AI. -Bedrock requires different FMs to be enabled before they are used. - - 1. Open [Amazon Bedrock Model Access](https://us-west-2.console.aws.amazon.com/bedrock/home?region=us-west-2#/modelaccess) - 1. Click on "Manage model access" - - ![Manage model access](/static/images/ddb-os-zetl10.jpg) - 1. Select "Titan Embeddings G1 - Text" and "Claude", then click `Request model access` - -1. Wait until you are granted access to both models before continuing. The *Access status* should say *Access granted* before moving on. -::alert[_Do not continue unless the base models "Claude" and "Titan Embeddings G1 - Text" are granted to your account._] +Bedrock no longer requires models to be explicitly enabled, so we have nothing to configure here! diff --git a/content/dynamodb-opensearch-zetl/service-config/ddb.en.md b/content/dynamodb-opensearch-zetl/service-config/ddb.en.md index 798c16d3..5db891be 100644 --- a/content/dynamodb-opensearch-zetl/service-config/ddb.en.md +++ b/content/dynamodb-opensearch-zetl/service-config/ddb.en.md @@ -4,18 +4,22 @@ menuTitle: "Load DynamoDB Data" date: 2024-02-23T00:00:00-00:00 weight: 40 --- -Next, you'll load example product data into your DynamoDB Table. Pipelines will move this data into OpenSearch Service in later steps. +Next, you'll load example product data into your DynamoDB Table. In a later step, we'll set up a pipeline to move this data into OpenSearch Service. ## Load and Review Data -Return to the Cloud9 IDE. If you accidentally closed the IDE, you may search for the service in the AWS Management Console or use the Cloud9IDE URL found in the `Outputs` section of the CloudFormation stack. +Return to the VS Code IDE. If you accidentally closed the IDE, you may find the URL in the `Outputs` section of the CloudFormation stack. + +Load the sample data into your DynamoDB Table. You can look at the JSON file if you want to see the small number of items being loaded. -Load the sample data into your DynamoDB Table. ```bash -cd ~/environment/OpenSearchPipeline +cd ~/workshop/LBED aws dynamodb batch-write-item --request-items=file://product_en.json ``` - ![CloudFormation Outputs](/static/images/ddb-os-zetl11.jpg) + ![CloudFormation Outputs](/static/images/ddb-os-zetl11.png) + +::alert[_You should see an empty **UnprocessedItems** list. If it is not empty, some of the write operations failed and need to be investigated._] + - Next, navigate to the DynamoDB section of the AWS Management Console and click `Explore items` and then select the `ProductDetails` table. This is where the product information for this exercise originates from. Review the product names to get an idea for what kind of natural language searches you might want to provide later at the end of the lab. + Next, navigate to the [DynamoDB section of the AWS Management Console](https://us-west-2.console.aws.amazon.com/dynamodbv2/home?region=us-west-2#dashboard). Click `Explore items` on the left panel and then select the `ProductDetails` table. You can review the product names to get an idea of what kind of natural language searches you might want to test with at the end of the lab. 
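If you would rather stay in the terminal, a quick scan returns a sample of what was just loaded. A minimal sketch; the `ProductDetails` table name comes from the workshop stack:

```bash
# Pull back a few items to confirm the batch write landed in the table
aws dynamodb scan \
  --table-name ProductDetails \
  --max-items 5 \
  --query 'Items[].{ID: ProductID.S, Name: ProductName.S}'
```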
- ![DynamoDB Console](/static/images/ddb-os-zetl19.jpg) \ No newline at end of file + ![DynamoDB Console](/static/images/ddb-os-zetl19-small.jpg) \ No newline at end of file diff --git a/content/dynamodb-opensearch-zetl/service-config/index.en.md b/content/dynamodb-opensearch-zetl/service-config/index.en.md index 58d28f87..23f80149 100644 --- a/content/dynamodb-opensearch-zetl/service-config/index.en.md +++ b/content/dynamodb-opensearch-zetl/service-config/index.en.md @@ -4,17 +4,18 @@ menuTitle: "Service Configuration" date: 2024-02-23T00:00:00-00:00 weight: 20 --- -In this section, you will load data into your DynamoDB table and configure your OpenSearch Service resources. -Before beginning this section, make sure that :link[setup]{href="/dynamodb-opensearch-zetl/setup/"} has been completed for whichever way you're running this lab. Setup will deploy several resources. +At this point, some configuration has already been done by the CloudFormation Template that prepared the environment. -Dependencies from Cloud9 CloudFormation Template: - - S3 Bucket: Used to store the initial export of DynamoDB data for the Zero-ETL Pipeline. - - IAM Role: Used to grant permissions for pipeline integration and queries. - - Cloud9 IDE: Console for executing commands, building integrations, and running sample queries. + - **DynamoDB Table**: We have a DynamoDB table intended to store product descriptions. It has Point-in-time Recovery (PITR) and DynamoDB Streams enabled. + - **Amazon OpenSearch Service Domain**: We have a single-node OpenSearch Service cluster built. It will receive data from DynamoDB and act as a vector database. -zETL CloudFormation Template Resources: - - DynamoDB Table: DynamoDB table to store product descriptions. Has Point-in-time Recovery (PITR) and DynamoDB Streams enabled. - - Amazon OpenSearch Service Domain: Single-node OpenSearch Service cluster to recieve data from DynamoDB and act as a vector database. +![Final Deployment Architecture](/static/images/serviceconf.png) -![Final Deployment Architecture](/static/images/ddb-os-zetl.png) +We also have a few supporting resources that were created from the CloudFormation Template: + + - **Visual Studio Code Server IDE**: Console for executing commands, building integrations, and running sample queries. + - **S3 Bucket**: Used to store the initial export of DynamoDB data for the Zero-ETL Pipeline. + - **IAM Role**: Used to grant permissions for pipeline integration and queries. + +We still have a few tasks to perform. We must load data into DynamoDB and configure OpenSearch permissions before we can begin building the required pipelines. We will start by configuring OpenSearch. \ No newline at end of file diff --git a/content/dynamodb-opensearch-zetl/service-config/os.en.md b/content/dynamodb-opensearch-zetl/service-config/os.en.md index ee3af79b..4340be3a 100644 --- a/content/dynamodb-opensearch-zetl/service-config/os.en.md +++ b/content/dynamodb-opensearch-zetl/service-config/os.en.md @@ -4,38 +4,37 @@ menuTitle: "Configure OpenSearch Service Permissions" date: 2024-02-23T00:00:00-00:00 weight: 20 --- -The OpenSearch Service Domain deployed by the CloudFormation Template uses Fine-grained access control. Fine-grained access control offers additional ways of controlling access to your data on Amazon OpenSearch Service. In order to configure integrations between OpenSearch Service, DynamoDB, and Bedrock certain OpenSearch Service permissions will need to be mapped to the IAM Role being used. 
+The OpenSearch Service Domain deployed by the CloudFormation Template uses Fine-grained access control (FGAC). In order to configure easy integrations between OpenSearch Service, DynamoDB, and Bedrock, we need to map the IAM role that was created by the CloudFormation Template (and that will be used by the integration) to an OpenSearch Service role called "all_access". With that mapping in place, the IAM role will have permission to access OpenSearch. -Links to the OpenSearch Dashboards, credentials, and necessary values are provided in the Outputs of the DynamoDBzETL [CloudFormation](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/) Template. It is recommended that you leave Outputs open in one browser tab to easily refer to while following through the lab. +For this section, we will need to know the names and paths of various resources constructed by the CloudFormation Template, for example, the URL to the OpenSearch Dashboards, the access credentials, and other values. These can be found in the **CloudFormation-Outputs.txt** file created by the credentials.sh script you ran in the previous section. Open this file so that the values are easily available. -In a production environment as a best practice, you would configure roles with the least privilege required. For simplicity in this lab, we will use the "all_access" OpenSearch Service role. +In a production environment as a best practice, you should configure roles with the least privilege required for exactly what's needed by your application. For simplicity in this lab, we will use the "all_access" OpenSearch Service role. -::alert[_Do not continue unless the CloudFormation Template has finished deploying._] + 1. In the **CloudFormation-Outputs.txt** file, find the value for **OpenSearchPassword** and copy it to your clipboard. Next, find the value for **OSDashboardsURL** and hold Ctrl while you click it to follow the link. - 1. Open the "Outputs" tab of the stack named `dynamodb-opensearch-setup` in the CloudFormation Console. + ![OpenSearch Dashboard Login](/static/images/code-cloudformation-opensearch.png) + 1. Type "master-user" as the username, paste the password from your clipboard, then click **Log in**. - ![CloudFormation Outputs](/static/images/ddb-os-zetl3.jpg) - 1. Open the link for SecretConsoleLink in a new tab. This will take you to the AWS Secrets Manager secret which contains the login information for OpenSearch. Click on the `Retrieve secret value` button to see the username and password for the OpenSearch Cluster. - 1. Return to the CloudFormation Console "Outputs" and open the link for **OSDashboardsURL** in a new tab. - 1. Login to Dashboards with the username and password provided in Secrets Manager. + ![OpenSearch Service Dashboards](/static/images/ddb-os-zetl4-small.jpg) + 1. You will see the 'Welcome to OpenSearch Dashboards' page. Select **Explore on my own**. - ![OpenSearch Service Dashboards](/static/images/ddb-os-zetl4.jpg) -1. When prompted to select your tenant, choose *Global* and click **Confirm**. Dismiss any pop ups. + ![Welcome Page OpenSearch](/static/images/Welcome-os.png) + 1. When prompted to select your tenant, choose *Global* and click **Confirm**. Dismiss any pop ups. - ![OpenSearch Service Dashboards](/static/images/ddb-os-zetl18.jpg) + ![OpenSearch Service Dashboards](/static/images/ddb-os-zetl18-small.jpg) 1. Open the top left menu and select **Security** under the *Management* section. - ![Security Settings](/static/images/ddb-os-zetl5.jpg) - 1. 
Open the "Roles" tab, then click on the "all_access" role. + ![Security Settings](/static/images/ddb-os-zetl5-small.jpg) + 1. Open the "Roles" tab, search for "all_access" and then click on the "all_access" role. - ![Roles Settings](/static/images/ddb-os-zetl6.jpg) + ![Roles Settings](/static/images/ddb-os-zetl6-small.jpg) 1. Open the "Mapped users" tab, then select "Manage mapping". - ![Mapping Settings](/static/images/ddb-os-zetl7.jpg) - 1. In the "Backend roles" field, enter the Arn provided in the CloudFormation Stack Outputs. The attribute named "Role" provides the correct Arn. + ![Mapping Settings](/static/images/ddb-os-zetl7-small.jpg) + 1. In the "Backend roles" field, enter the ARN provided in the **CloudFormation-Outputs.txt** file. The attribute named "Role" provides the correct ARN. Be absolutely sure you have removed any white space characters from the start and end of the ARN to ensure you do not have permissions issues later. Click "Map". - ![ Settings](/static/images/ddb-os-zetl8.jpg) + ![ Settings](/static/images/ddb-os-zetl8-small.jpg) 1. Verify that the "all_access" Role now has a "Backend role" listed. - ![ Settings](/static/images/ddb-os-zetl9.jpg) + ![ Settings](/static/images/ddb-os-zetl9-small.jpg) diff --git a/content/dynamodb-opensearch-zetl/setup/Step1.en.md b/content/dynamodb-opensearch-zetl/setup/Step1.en.md index 895df230..5b6d40e2 100644 --- a/content/dynamodb-opensearch-zetl/setup/Step1.en.md +++ b/content/dynamodb-opensearch-zetl/setup/Step1.en.md @@ -8,19 +8,31 @@ description: "To get started, you configure your environment and download code t --- -[AWS Cloud9](https://aws.amazon.com/cloud9/) is a cloud-based integrated development environment (IDE) that lets you write, run, and debug code with just a browser. AWS Cloud9 includes a code editor, debugger, and terminal. It also comes prepackaged with essential tools for popular programming languages and the AWS Command Line Interface (CLI) preinstalled so that you don’t have to install files or configure your laptop for this lab. Your AWS Cloud9 environment will have access to the same AWS resources as the user with which you signed in to the AWS Management Console. +Visual Studio Code (VS Code) is a lightweight, cross-platform source code editor designed for fast, modular software development. VS Code includes a code editor, debugger, and terminal. VS Code Server enables remote instances of VS Code to run on an EC2 instance, while the user interface runs locally in a browser. -### To set up your AWS Cloud9 development environment: +Visual Studio Code Server is the environment where you will be executing the majority of this workshop. Your VS Code instance comes deployed with all the scripts and tools you'll need downloaded and pre-installed. -1. Choose **Services** at the top of the page, and then choose **Cloud9** under **Developer Tools**. +As a first step, you'll need to log into your VS Code instance and familiarize yourself with the user interface. + +### To log in to your VS Code development environment: + +1. Make sure you've logged into your workshop AWS account at least once. If you havn't opened it yet, do so now by clicking on the **Open AWS Console (us-west-2)** link located at the bottom left pane of this guide. -2. There would be an environment ready to use under **Your environments**. +2. 
Click on the following link to navigate to your [CloudFormation Stacks](https://us-west-2.console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks), or browse to the CloudFormation Service on your own from the AWS Console. Next, click on the **dynamodb-opensearch-setup** stack. + +![VS Code Environment](/static/images/cloudformation-stack.png) + +3. Click on the **Outputs** tab. Find the output values for **VSCodeUrl** and **VSCodePassword**. Copy **VSCodePassword** to your clipboard, and then click on **VSCodeUrl**. + +![CloudFormation Outputs](/static/images/cloudformation-outputs.png) -3. Click on **Open IDE**, your IDE should open with a welcome note. +4. Paste your password into the **PASSWORD** field, then click **SUBMIT**. -You should now see your AWS Cloud9 environment. You need to be familiar with the three areas of the AWS Cloud9 console shown in the following screenshot: +![VS Code Login](/static/images/code-login.png) -![Cloud9 Environment](/static/images/zetl-cloud9-environment.png) +You're now logged in to your VS Code environment. Take a moment to get familiar with the user interface. You'll be using three main areas. + +![VS Code Environment](/static/images/zetl-code-environment.png) - **File explorer**: On the left side of the IDE, the file explorer shows a list of the files in your directory. - **Terminal**: On the lower right area of the IDE, this is where you run commands to execute code samples. +You will see a directory in the VS Code file explorer named **LBED**. Click on the **>** to expand it. + +The _LBED_ directory contains three things: +- **product_en.json**: Example items that will be loaded into a DynamoDB table +- **credentials.sh**: Bash script that simplifies managing credentials when signing requests for OpenSearch +- **bedrock_query.py**: Python script that executes a query to Bedrock -In this lab, you use Bash and Python scripts to interact with AWS services. Run the following commands in your AWS Cloud9 terminal to download and unpack this lab’s code. +Let's run the credentials script now to complete setup. + +Run the following command in your VS Code terminal to set up your environment. This script exports multiple environmental variables containing your AWS credentials, IAM Role, and OpenSearch endpoint. These variables will allow you to run Bash and Python scripts throughout the workshop without needing to customize their contents to your specific environment. Since we want the exports to be available in your current shell, make sure you include the "source" command ahead of the script. If you launch a new shell, you'll need to run the script again. (You can copy the commands using the Copy icon in the top right corner of the code block.) ```bash -cd ~/environment -curl -sL https://amazon-dynamodb-labs.com/assets/OpenSearchPipeline.zip -o OpenSearchPipeline.zip && unzip -oq OpenSearchPipeline.zip && rm OpenSearchPipeline.zip +cd LBED +source ./credentials.sh ``` -You should see a directory in the AWS Cloud9 file explorer **OpenSearchPipeline**: +The script has also created a new file called **CloudFormation-Outputs.txt**. This file contains the same outputs you would see in the CloudFormation console. One less tab you need to keep open! 
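To confirm the exports took effect in your current shell, you can spot-check a couple of them. A minimal sketch; the variable names below are the ones used by later commands in this workshop:

```bash
# Echo the endpoint and list only the METADATA_AWS variable names, keeping the secret values off the screen
echo "${OPENSEARCH_ENDPOINT}"
env | grep '^METADATA_AWS' | cut -d= -f1
```

Empty output means the exports are missing and you should re-run `source ./credentials.sh`.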
+ +![VS Code CloudFormation Outputs](/static/images/code-cloudformation.png) -The _OpenSearchPipeline_ directory contains example items that will be loaded into a DynamoDB table, as Bash script to simplify managing credentials when signing requests for OpenSearch, and a python script for executing a query to Bedrock. +You are now ready to start the lab! In the next module, you will complete setup for each of the services used in this lab before moving on to integrate them. -You are now ready to start the lab. In the next module, you will complete setup for each of the three services used in this lab before moving on to integrate them. +You may now move on to the next step: :link[Continue to Service Configuration]{href="/dynamodb-opensearch-zetl/service-config"}. diff --git a/content/dynamodb-opensearch-zetl/setup/index.en.md b/content/dynamodb-opensearch-zetl/setup/index.en.md index 8f05c9be..9bf99f60 100644 --- a/content/dynamodb-opensearch-zetl/setup/index.en.md +++ b/content/dynamodb-opensearch-zetl/setup/index.en.md @@ -19,7 +19,7 @@ In this lab, we will learn how to integrate DynamoDB with Amazon OpenSearch Serv To set up this workshop, choose one of the following paths, depending on whether you are: -::alert[**If following the lab in your own AWS Account, you will create OpenSearch Service clusters, DynamoDB tables, and Secrets Manager resources that will incur a cost that could approach tens of dollars per day. Ensure you delete the CloudFormation stacks as soon as the lab is complete and verify all resources are deleted by checking the DynamoDB console, OpenSearch Service console, and Secrets Manager console. Make sure you [delete the Cloud9 environment](https://docs.aws.amazon.com/cloud9/latest/user-guide/delete-environment.html) as soon as the lab is complete**.]{type="warning"} +::alert[**If following the lab in your own AWS Account, you will create OpenSearch Service clusters, DynamoDB tables, and Secrets Manager resources that will incur a cost that could approach tens of dollars per day. Ensure you delete the CloudFormation stacks as soon as the lab is complete and verify all resources are deleted by checking the DynamoDB console, OpenSearch Service console, and Secrets Manager console. Make sure you delete the environment as soon as the lab is complete**.]{type="warning"} - :link[…running the workshop on your own (in your own AWS account)]{href="/dynamodb-opensearch-zetl/setup/on-your-own"}, which guides you to create resources using CloudFormation @@ -27,4 +27,4 @@ To set up this workshop, choose one of the following paths, depending on whether Once you have completed with either setup, continue on to: -- :link[Step 1: Setup AWS Cloud9 IDE]{href="/dynamodb-opensearch-zetl/setup/step1"} +- :link[Step 1: Setup AWS Visual Studio Code Server IDE]{href="/dynamodb-opensearch-zetl/setup/step1"} diff --git a/content/dynamodb-opensearch-zetl/setup/on-your-own.en.md b/content/dynamodb-opensearch-zetl/setup/on-your-own.en.md index 8e664a92..900d6401 100644 --- a/content/dynamodb-opensearch-zetl/setup/on-your-own.en.md +++ b/content/dynamodb-opensearch-zetl/setup/on-your-own.en.md @@ -5,11 +5,11 @@ weight: 5 chapter: true --- -::alert[The first half of these setup instructions are identitical for LADV, LHOL, LMR, LBED, and LGME - all of which use the same Cloud9 template. Only complete this section once, and only if you're running it on your own account. 
If you have already launched the Cloud9 stack in a different lab, skip to the **Launch the zETL CloudFormation stack** section]{type="warning"} +::alert[The first half of these setup instructions are identical for LADV, LHOL, LMR, LBED, and LGME - all of which use the same Code Server template. Only complete this section once, and only if you're running it on your own account. If you have already launched the Code Server stack in a different lab, skip to the **Launch the zETL CloudFormation stack** section]{type="warning"} ::alert[Only complete this section if you are running the workshop on your own. If you are at an AWS hosted event (such as re\:Invent, Immersion Day, etc), go to :link[At an AWS hosted Event]{href="/dynamodb-opensearch-zetl/setup/aws-ws-event"}] -## Launch the Cloud9 CloudFormation stack +## Launch the Code Server CloudFormation stack ::alert[During the course of the lab, you will create resources that will incur a cost that could approach tens of dollars per day. Ensure you delete the CloudFormation stack as soon as the lab is complete and verify all resources are deleted.] 1. Launch the CloudFormation template in US West 2 to deploy the resources in your account: [![CloudFormation](/static/images/cloudformation-launch-stack.png)](https://console.aws.amazon.com/cloudformation/home?region=us-west-2#/stacks/new?stackName=DynamoDBID&templateURL=:param{key="design_patterns_s3_lab_yaml"}) @@ -17,15 +17,10 @@ chapter: true 1. Click *Next* on the first dialog. -1. In the Parameters section, note the *Timeout* is set to zero. This means the Cloud9 instance will not sleep; you may want to change this manually to a value such as 60 to protect against unexpected charges if you forget to delete the stack at the end. -Leave the *WorkshopZIP* parameter unchanged and click *Next* - -![CloudFormation parameters](/static/images/awsconsole1.png) - 1. Scroll to the bottom and click *Next*, and then review the *Template* and *Parameters*. When you are ready to create the stack, scroll to the bottom, check the box acknowledging the creation of IAM resources, and click *Submit*. ![Acknowledge IAM role capabilities](/static/images/awsconsole2.png) - The stack will create a Cloud9 lab instance, a role for the instance, and a role for the AWS Lambda function used later on in the lab. It will use Systems Manager to configure the Cloud9 instance. + The stack will create a Visual Studio Code Server lab instance, a role for the instance, and a role for the AWS Lambda function used later on in the lab. It will use Systems Manager to configure the Code Server instance. 1. After the CloudFormation stack is `CREATE_COMPLETE`, continue with the next stack. @@ -44,4 +39,4 @@ 1. Scroll to the bottom and click *Next*, and then review the *Template* and *Parameters*. When you are ready to create the stack, scroll to the bottom and click *Submit*. -1. After the CloudFormation stack is `CREATE_COMPLETE`, :link[continue onto connecting to Cloud9]{href="/dynamodb-opensearch-zetl/setup/Step1"}. \ No newline at end of file +1. After the CloudFormation stack is `CREATE_COMPLETE`, :link[continue on to connecting to Visual Studio Code Server]{href="/dynamodb-opensearch-zetl/setup/Step1"}. 
\ No newline at end of file diff --git a/contentspec.yaml b/contentspec.yaml index cd6cdd40..a68c743c 100644 --- a/contentspec.yaml +++ b/contentspec.yaml @@ -6,7 +6,7 @@ localeCodes: - en-US params: latest_rh_design_pattern_yt: "https://www.youtube.com/watch?v=xfxBhvGpoa0" - design_patterns_s3_lab_yaml : "https://s3.amazonaws.com/amazon-dynamodb-labs.com/assets/C9.yaml" + design_patterns_s3_lab_yaml : "https://s3.amazonaws.com/amazon-dynamodb-labs.com/assets/vscode.yaml" lhol_migration_setup_yaml : "https://s3.amazonaws.com/amazon-dynamodb-labs.com/assets/migration-env-setup.yaml" lhol_migration_dms_setup_yaml : "https://s3.amazonaws.com/amazon-dynamodb-labs.com/assets/migration-dms-setup.yaml" lhol_ddb_os_zetl_setup_yaml : "https://s3.amazonaws.com/amazon-dynamodb-labs.com/assets/dynamodb-opensearch-setup.yaml" diff --git a/design-patterns/cloudformation/vscode.yaml b/design-patterns/cloudformation/vscode.yaml index a09b1cea..45af4848 100644 --- a/design-patterns/cloudformation/vscode.yaml +++ b/design-patterns/cloudformation/vscode.yaml @@ -11,6 +11,10 @@ Parameters: Type: String Description: Location of LADV code ZIP Default: https://amazon-dynamodb-labs.com/assets/workshop.zip + LBEDZIP: + Type: String + Description: Location of LBED code ZIP + Default: https://amazon-dynamodb-labs.com/assets/OpenSearchPipeline.zip DBLatestAmiId: Type: 'AWS::SSM::Parameter::Value' Default: '/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-arm64' @@ -1636,6 +1640,8 @@ Resources: - mkdir -p /tmp - !Sub curl -o /tmp/workshop.zip "${WorkshopZIP}" - !Sub unzip -o /tmp/workshop.zip -d ${VSCodeHomeFolder}/LADV + - !Sub curl -o /tmp/lbed.zip "${LBEDZIP}" + - !Sub unzip -jo /tmp/lbed.zip "OpenSearchPipeline/*" -d ${VSCodeHomeFolder}/LBED - rm /tmp/workshop.zip - !Sub echo "${DDBReplicationRole.Arn}" > ${VSCodeHomeFolder}/ddb-replication-role-arn.txt - !Sub chown -R ${VSCodeUser}:${VSCodeUser} ${VSCodeHomeFolder} diff --git a/static/images/CreatePipeline.png b/static/images/CreatePipeline.png new file mode 100644 index 00000000..d36a4bb0 Binary files /dev/null and b/static/images/CreatePipeline.png differ diff --git a/static/images/Devtools.png b/static/images/Devtools.png new file mode 100755 index 00000000..0616d63e Binary files /dev/null and b/static/images/Devtools.png differ diff --git a/static/images/Welcome-os.png b/static/images/Welcome-os.png new file mode 100755 index 00000000..ed172881 Binary files /dev/null and b/static/images/Welcome-os.png differ diff --git a/static/images/awsconsole1.png b/static/images/awsconsole1.png new file mode 100644 index 00000000..3581deeb Binary files /dev/null and b/static/images/awsconsole1.png differ diff --git a/static/images/awsconsole2.png b/static/images/awsconsole2.png new file mode 100644 index 00000000..d8cda045 Binary files /dev/null and b/static/images/awsconsole2.png differ diff --git a/static/images/cloudformation-console.png b/static/images/cloudformation-console.png new file mode 100755 index 00000000..7129b069 Binary files /dev/null and b/static/images/cloudformation-console.png differ diff --git a/static/images/cloudformation-outputs.png b/static/images/cloudformation-outputs.png new file mode 100755 index 00000000..040c62fd Binary files /dev/null and b/static/images/cloudformation-outputs.png differ diff --git a/static/images/cloudformation-stack.png b/static/images/cloudformation-stack.png new file mode 100755 index 00000000..488b4cbe Binary files /dev/null and b/static/images/cloudformation-stack.png differ diff 
--git a/static/images/code-cloudformation-opensearch.png b/static/images/code-cloudformation-opensearch.png new file mode 100755 index 00000000..c20a6b21 Binary files /dev/null and b/static/images/code-cloudformation-opensearch.png differ diff --git a/static/images/code-cloudformation.png b/static/images/code-cloudformation.png new file mode 100755 index 00000000..952eee6a Binary files /dev/null and b/static/images/code-cloudformation.png differ diff --git a/static/images/code-login.png b/static/images/code-login.png new file mode 100755 index 00000000..7705a948 Binary files /dev/null and b/static/images/code-login.png differ diff --git a/static/images/configure_sink.png b/static/images/configure_sink.png new file mode 100644 index 00000000..671274d0 Binary files /dev/null and b/static/images/configure_sink.png differ diff --git a/static/images/configure_source.png b/static/images/configure_source.png new file mode 100755 index 00000000..5119441e Binary files /dev/null and b/static/images/configure_source.png differ diff --git a/static/images/connectionsandpipelines.png b/static/images/connectionsandpipelines.png new file mode 100644 index 00000000..6cc73e2d Binary files /dev/null and b/static/images/connectionsandpipelines.png differ diff --git a/static/images/ddb-os-zetl11.png b/static/images/ddb-os-zetl11.png new file mode 100755 index 00000000..db0d71cd Binary files /dev/null and b/static/images/ddb-os-zetl11.png differ diff --git a/static/images/ddb-os-zetl14.jpg b/static/images/ddb-os-zetl14.jpg old mode 100644 new mode 100755 index d3163a9d..95dcb6f7 Binary files a/static/images/ddb-os-zetl14.jpg and b/static/images/ddb-os-zetl14.jpg differ diff --git a/static/images/ddb-os-zetl15.jpg b/static/images/ddb-os-zetl15.jpg old mode 100644 new mode 100755 index 9b67b38b..e702c48f Binary files a/static/images/ddb-os-zetl15.jpg and b/static/images/ddb-os-zetl15.jpg differ diff --git a/static/images/ddb-os-zetl18-small.jpg b/static/images/ddb-os-zetl18-small.jpg new file mode 100644 index 00000000..e4bc4bac Binary files /dev/null and b/static/images/ddb-os-zetl18-small.jpg differ diff --git a/static/images/ddb-os-zetl19-small.jpg b/static/images/ddb-os-zetl19-small.jpg new file mode 100644 index 00000000..33ffc206 Binary files /dev/null and b/static/images/ddb-os-zetl19-small.jpg differ diff --git a/static/images/ddb-os-zetl2.jpg b/static/images/ddb-os-zetl2.jpg new file mode 100644 index 00000000..5d0e361b Binary files /dev/null and b/static/images/ddb-os-zetl2.jpg differ diff --git a/static/images/ddb-os-zetl4-small.jpg b/static/images/ddb-os-zetl4-small.jpg new file mode 100644 index 00000000..266c6a70 Binary files /dev/null and b/static/images/ddb-os-zetl4-small.jpg differ diff --git a/static/images/ddb-os-zetl5-small.jpg b/static/images/ddb-os-zetl5-small.jpg new file mode 100644 index 00000000..1156d93d Binary files /dev/null and b/static/images/ddb-os-zetl5-small.jpg differ diff --git a/static/images/ddb-os-zetl6-small.jpg b/static/images/ddb-os-zetl6-small.jpg new file mode 100644 index 00000000..a3bd6404 Binary files /dev/null and b/static/images/ddb-os-zetl6-small.jpg differ diff --git a/static/images/ddb-os-zetl7-small.jpg b/static/images/ddb-os-zetl7-small.jpg new file mode 100644 index 00000000..1fb0331b Binary files /dev/null and b/static/images/ddb-os-zetl7-small.jpg differ diff --git a/static/images/ddb-os-zetl8-small.jpg b/static/images/ddb-os-zetl8-small.jpg new file mode 100644 index 00000000..a58a4927 Binary files /dev/null and b/static/images/ddb-os-zetl8-small.jpg 
differ diff --git a/static/images/ddb-os-zetl8.jpg b/static/images/ddb-os-zetl8.jpg old mode 100644 new mode 100755 index 8a8ec772..36fdbdff Binary files a/static/images/ddb-os-zetl8.jpg and b/static/images/ddb-os-zetl8.jpg differ diff --git a/static/images/ddb-os-zetl9-small.jpg b/static/images/ddb-os-zetl9-small.jpg new file mode 100644 index 00000000..82461c0b Binary files /dev/null and b/static/images/ddb-os-zetl9-small.jpg differ diff --git a/static/images/ddb-zetl11.jpg b/static/images/ddb-zetl11.jpg new file mode 100644 index 00000000..0e7d01b6 Binary files /dev/null and b/static/images/ddb-zetl11.jpg differ diff --git a/static/images/event-driven-architecture/lab1-permissions/add-permissions-write.png b/static/images/event-driven-architecture/lab1-permissions/add-permissions-write.png new file mode 100644 index 00000000..83360be1 Binary files /dev/null and b/static/images/event-driven-architecture/lab1-permissions/add-permissions-write.png differ diff --git a/static/images/event-driven-architecture/lab1-permissions/add_permissions.png b/static/images/event-driven-architecture/lab1-permissions/add_permissions.png new file mode 100644 index 00000000..76836c12 Binary files /dev/null and b/static/images/event-driven-architecture/lab1-permissions/add_permissions.png differ diff --git a/static/images/game-player-data/core-usage/basetable-consolev2.png b/static/images/game-player-data/core-usage/basetable-consolev2.png new file mode 100644 index 00000000..61bdab19 Binary files /dev/null and b/static/images/game-player-data/core-usage/basetable-consolev2.png differ diff --git a/static/images/game-player-data/open-games/partiql-consolev2.png b/static/images/game-player-data/open-games/partiql-consolev2.png new file mode 100644 index 00000000..8db795a3 Binary files /dev/null and b/static/images/game-player-data/open-games/partiql-consolev2.png differ diff --git a/static/images/processor_blank.png b/static/images/processor_blank.png new file mode 100644 index 00000000..0fd08766 Binary files /dev/null and b/static/images/processor_blank.png differ diff --git a/static/images/serviceconf.png b/static/images/serviceconf.png new file mode 100644 index 00000000..6f6fa6b6 Binary files /dev/null and b/static/images/serviceconf.png differ diff --git a/static/images/zetl-code-environment.png b/static/images/zetl-code-environment.png new file mode 100755 index 00000000..d557fc2e Binary files /dev/null and b/static/images/zetl-code-environment.png differ