diff --git a/tuts/031-cloudwatch-dynamicdash/README.md b/tuts/031-cloudwatch-dynamicdash/README.md new file mode 100644 index 00000000..7fc55c0a --- /dev/null +++ b/tuts/031-cloudwatch-dynamicdash/README.md @@ -0,0 +1,5 @@ +# Amazon CloudWatch dynamic dashboard tutorial + +This tutorial demonstrates how to create and manage dynamic dashboards in Amazon CloudWatch using the AWS CLI. You'll learn how to set up dashboards that automatically update with metrics from your AWS resources, providing real-time visibility into your infrastructure performance. + +You can either run the provided shell script to automatically create the dynamic dashboard resources, or follow the step-by-step instructions in the tutorial markdown file to understand each component and customize the implementation for your specific monitoring needs. diff --git a/tuts/031-cloudwatch-dynamicdash/cloudwatch-dynamicdash.md b/tuts/031-cloudwatch-dynamicdash/cloudwatch-dynamicdash.md new file mode 100644 index 00000000..e861c4d8 --- /dev/null +++ b/tuts/031-cloudwatch-dynamicdash/cloudwatch-dynamicdash.md @@ -0,0 +1,268 @@ +# Creating a CloudWatch dashboard with function name as a variable + +This tutorial guides you through creating a CloudWatch dashboard that uses a property variable to display metrics for different Lambda functions. You'll learn how to create a dashboard with a dropdown menu that allows you to switch between Lambda functions without creating separate dashboards for each function. + +## Prerequisites + +Before you begin this tutorial, make sure you have the following: + +1. The AWS CLI. If you need to install it, follow the [AWS CLI installation guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). +2. Configured your AWS CLI with appropriate credentials. Run `aws configure` if you haven't set up your credentials yet. +3. At least one Lambda function in your AWS account. 
If you don't have any Lambda functions, this tutorial includes steps to create a simple test function. +4. Sufficient permissions to create and manage CloudWatch dashboards and Lambda functions in your AWS account. + +### Cost considerations + +This tutorial uses AWS resources that are either included in the AWS Free Tier or have minimal costs: + +- CloudWatch Dashboards: First 3 dashboards are free. Additional dashboards cost $3.00 per dashboard per month. +- CloudWatch Metrics: Standard metrics for AWS services like Lambda are included at no additional charge. +- CloudWatch API Calls: First 1 million API calls per month are free. + +If you follow the cleanup instructions at the end of this tutorial, you should incur no charges or minimal charges. + +## Create a CloudWatch dashboard + +First, let's create a basic CloudWatch dashboard that will serve as the foundation for our dynamic dashboard with variables. + +**Create an empty dashboard** + +The following command creates a new empty CloudWatch dashboard: + +```bash +aws cloudwatch put-dashboard --dashboard-name LambdaMetricsDashboard --dashboard-body '{ + "widgets": [] +}' +``` + +This command creates a dashboard named "LambdaMetricsDashboard" with no widgets. The dashboard body is specified as a JSON string that defines the layout and content of the dashboard. + +## Add Lambda metrics widgets with a function name variable + +Now, let's create a more comprehensive dashboard that includes Lambda metrics widgets and a function name variable. We'll define the dashboard body in a JSON file for better readability. + +**Create the dashboard body JSON file** + +First, create a JSON file that defines the dashboard layout, widgets, and variables. 
Replace `us-east-1` in the region fields with your preferred AWS region: + +```bash +cat > dashboard-body.json << EOF +{ + "widgets": [ + { + "type": "metric", + "x": 0, + "y": 0, + "width": 12, + "height": 6, + "properties": { + "metrics": [ + [ "AWS/Lambda", "Invocations", "FunctionName", "\${FunctionName}" ], + [ ".", "Errors", ".", "." ], + [ ".", "Throttles", ".", "." ] + ], + "view": "timeSeries", + "stacked": false, + "region": "us-east-1", + "title": "Lambda Function Metrics for \${FunctionName}", + "period": 300 + } + }, + { + "type": "metric", + "x": 0, + "y": 6, + "width": 12, + "height": 6, + "properties": { + "metrics": [ + [ "AWS/Lambda", "Duration", "FunctionName", "\${FunctionName}", { "stat": "Average" } ] + ], + "view": "timeSeries", + "stacked": false, + "region": "us-east-1", + "title": "Duration for \${FunctionName}", + "period": 300 + } + }, + { + "type": "metric", + "x": 12, + "y": 0, + "width": 12, + "height": 6, + "properties": { + "metrics": [ + [ "AWS/Lambda", "ConcurrentExecutions", "FunctionName", "\${FunctionName}" ] + ], + "view": "timeSeries", + "stacked": false, + "region": "us-east-1", + "title": "Concurrent Executions for \${FunctionName}", + "period": 300 + } + } + ], + "periodOverride": "auto", + "variables": [ + { + "type": "property", + "id": "FunctionName", + "property": "FunctionName", + "label": "Lambda Function", + "inputType": "select", + "values": [ + { + "value": "my-lambda-function", + "label": "my-lambda-function" + } + ] + } + ] +} +EOF +``` + +This JSON file defines a dashboard with three metric widgets that display different Lambda metrics: Invocations, Errors, Throttles, Duration, and Concurrent Executions. The dashboard also includes a variable named "FunctionName" that allows you to select different Lambda functions from a dropdown menu. 
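The `values` array above seeds the dropdown with a single hardcoded function name. As a minimal sketch in pure shell (the function names below are placeholder examples), you can build a larger `values` array from a space-separated list — in practice that list could come from `aws lambda list-functions --query 'Functions[*].FunctionName' --output text`:

```bash
# Build the dashboard "values" array from a list of function names.
# The names here are placeholder examples; substitute your own functions.
FUNCS="my-lambda-function order-processor"
ENTRIES=""
for f in $FUNCS; do
    ENTRIES="$ENTRIES{\"value\":\"$f\",\"label\":\"$f\"},"
done
# Strip the trailing comma and wrap in brackets to form a JSON array.
VALUES="[${ENTRIES%,}]"
echo "$VALUES"
```

You could then substitute this array into the `variables` section of `dashboard-body.json` before running `put-dashboard`, so the dropdown lists every function instead of one.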
+ +**Apply the dashboard configuration** + +Now, apply this dashboard configuration using the following command: + +```bash +aws cloudwatch put-dashboard --dashboard-name LambdaMetricsDashboard --dashboard-body file://dashboard-body.json +``` + +This command creates a dashboard with the specified widgets and variable. The `file://` prefix tells the AWS CLI to read the dashboard body from the specified file rather than treating it as a literal string. + +## Verify the dashboard + +After creating the dashboard, you can verify that it was created successfully and check its configuration. + +**List all dashboards** + +To see a list of all your CloudWatch dashboards, use the following command: + +```bash +aws cloudwatch list-dashboards +``` + +This command returns a list of all dashboards in your account, including the one you just created. + +**Get dashboard details** + +To view the details of your specific dashboard, use the following command: + +```bash +aws cloudwatch get-dashboard --dashboard-name LambdaMetricsDashboard +``` + +This command returns the full configuration of your dashboard, including the dashboard body JSON. You can verify that the variable and widgets are configured correctly. + +## Access and use the dashboard in the console + +While you've created the dashboard using the AWS CLI, you'll need to use the CloudWatch console to interact with the dropdown variable. + +1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/ +2. In the navigation pane, choose **Dashboards** +3. Select your **LambdaMetricsDashboard** +4. You should see a dropdown menu labeled "Lambda Function" at the top of the dashboard +5. Use this dropdown to select different Lambda functions and see their metrics displayed in the dashboard widgets + +The dashboard will automatically update all widgets to show metrics for the selected Lambda function. 
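Because `get-dashboard` returns the dashboard body as a JSON-encoded string, decoding it makes the variable definition easy to inspect. A sketch assuming `jq` is installed; the sample response below stands in for the live API output, which you would normally pipe in from `aws cloudwatch get-dashboard`:

```bash
# Sample get-dashboard response (a stand-in for the live API output).
RESPONSE='{"DashboardName":"LambdaMetricsDashboard","DashboardBody":"{\"variables\":[{\"id\":\"FunctionName\",\"inputType\":\"select\"}]}"}'
# DashboardBody is an escaped JSON string, so decode it with fromjson
# before drilling into the variables array.
echo "$RESPONSE" | jq -r '.DashboardBody | fromjson | .variables[0].id'
```

This prints `FunctionName`, confirming that the variable definition survived the round trip to CloudWatch.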
+ +## Understanding the dashboard configuration + +Let's break down the key components of the dashboard configuration: + +**Widgets** + +Each widget in the dashboard is configured to display specific Lambda metrics. The `${FunctionName}` placeholder in the metrics configuration is replaced with the value selected in the dropdown menu. + +**Variables** + +The `variables` section defines a property variable with the following attributes: + +- `type`: "property" indicates this is a property variable +- `id`: The unique identifier for the variable +- `property`: The CloudWatch metric dimension that will be changed (FunctionName) +- `label`: The display label for the dropdown menu +- `inputType`: "select" creates a dropdown menu +- `values`: An array of values to populate the dropdown menu + +When you select a different function from the dropdown, all widgets that use `${FunctionName}` in their configuration will update to show metrics for the selected function. + +## Troubleshooting + +Here are solutions to common issues you might encounter: + +**Dashboard validation errors** + +If you receive validation errors when creating the dashboard, check: +- The JSON syntax in your dashboard body +- That all required fields are present in the variable definition +- That the region specified in the widgets is valid + +**Lambda functions not appearing in dropdown** + +If Lambda functions don't appear in your dropdown: +- Verify that you have Lambda functions in your account +- Check that the functions have metrics available in CloudWatch +- Ensure you have permissions to view the Lambda metrics + +**Metrics not displaying** + +If metrics don't display for selected functions: +- Confirm the function has been invoked recently (Lambda metrics only appear after function invocation) +- Check that you're looking at the appropriate time range in the dashboard +- Verify that the region in the widget configuration matches the region where your Lambda functions are deployed + +## Going to 
production + +This tutorial demonstrates how to create a CloudWatch dashboard with a function name variable for educational purposes. When implementing this in a production environment, consider these additional best practices: + +**Security considerations:** +- Implement proper IAM permissions to restrict who can view and modify dashboards +- Consider using resource tags to organize and control access to your dashboards +- Implement CloudWatch alarms for critical metrics to receive notifications when issues occur + +**Architecture best practices:** +- For large environments, organize multiple dashboards by application or team +- Implement automated dashboard creation and updates using AWS CloudFormation or other IaC tools +- Consider cross-account and cross-region monitoring for distributed applications +- Implement a tagging strategy for Lambda functions to enable more sophisticated filtering + +For more information on building production-ready monitoring solutions: +- [AWS Well-Architected Framework - Operational Excellence Pillar](https://docs.aws.amazon.com/wellarchitected/latest/operational-excellence-pillar/welcome.html) +- [AWS Well-Architected Framework - Reliability Pillar](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/welcome.html) +- [CloudWatch Best Practices](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html) + +## Clean up resources + +When you're finished with the dashboard, you can delete it to avoid cluttering your CloudWatch console. + +**Delete the dashboard** + +To delete the dashboard, use the following command: + +```bash +aws cloudwatch delete-dashboards --dashboard-names LambdaMetricsDashboard +``` + +This command removes the dashboard from your account. The `delete-dashboards` command accepts multiple dashboard names, allowing you to delete multiple dashboards at once if needed. 
+ +Don't forget to delete the JSON file if you no longer need it: + +```bash +rm dashboard-body.json +``` + +## Next steps + +Now that you've learned how to create a CloudWatch dashboard with a function name variable, you can explore other CloudWatch features: + +1. [Create composite alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Composite_Alarm.html) to monitor multiple metrics and conditions. +2. [Create anomaly detection alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Anomaly_Detection_Alarm.html) to automatically detect unusual behavior in your metrics. +3. [Use metric math](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html) to perform calculations on your metrics and create more advanced visualizations. +4. [Create cross-account dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-crossaccount-dashboard.html) to monitor resources across multiple AWS accounts. +5. [Use CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html) to analyze and visualize your log data alongside your metrics. 
diff --git a/tuts/031-cloudwatch-dynamicdash/cloudwatch-dynamicdash.sh b/tuts/031-cloudwatch-dynamicdash/cloudwatch-dynamicdash.sh new file mode 100755 index 00000000..d4a16708 --- /dev/null +++ b/tuts/031-cloudwatch-dynamicdash/cloudwatch-dynamicdash.sh @@ -0,0 +1,362 @@ +#!/bin/bash + +# Script to create a CloudWatch dashboard with Lambda function name as a variable +# This script creates a CloudWatch dashboard that allows you to switch between different Lambda functions + +# Set up logging +LOG_FILE="cloudwatch-dashboard-script.log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +echo "$(date): Starting CloudWatch dashboard creation script" + +# Function to handle errors +handle_error() { + echo "ERROR: $1" + echo "Resources created:" + echo "- CloudWatch Dashboard: LambdaMetricsDashboard" + echo "" + echo "===========================================" + echo "CLEANUP CONFIRMATION" + echo "===========================================" + echo "An error occurred. Do you want to clean up the created resources? (y/n): " + read -r CLEANUP_CHOICE + + if [[ "${CLEANUP_CHOICE,,}" == "y" ]]; then + echo "Cleaning up resources..." + aws cloudwatch delete-dashboards --dashboard-names LambdaMetricsDashboard + echo "Cleanup complete." + else + echo "Resources were not cleaned up. You can manually delete them later." + fi + exit 1 +} + +# Check if AWS CLI is installed and configured +echo "Checking AWS CLI configuration..." +aws sts get-caller-identity > /dev/null 2>&1 +if [ $? -ne 0 ]; then + handle_error "AWS CLI is not properly configured. Please configure it with 'aws configure' and try again." +fi + +# Get the current region +REGION=$(aws configure get region) +if [ -z "$REGION" ]; then + REGION="us-east-1" + echo "No region found in AWS config, defaulting to $REGION" +fi +echo "Using region: $REGION" + +# Check if there are any Lambda functions in the account +echo "Checking for Lambda functions..." 
+LAMBDA_FUNCTIONS=$(aws lambda list-functions --query "Functions[*].FunctionName" --output text) +if [ -z "$LAMBDA_FUNCTIONS" ]; then + echo "No Lambda functions found in your account. Creating a simple test function..." + + # Create a temporary directory for Lambda function code + TEMP_DIR=$(mktemp -d) + + # Create a simple Lambda function + cat > "$TEMP_DIR/index.js" << EOF +exports.handler = async (event) => { + console.log('Event:', JSON.stringify(event, null, 2)); + return { + statusCode: 200, + body: JSON.stringify('Hello from Lambda!'), + }; +}; +EOF + + # Zip the function code + cd "$TEMP_DIR" || handle_error "Failed to change to temporary directory" + zip -q function.zip index.js + + # Create a role for the Lambda function + ROLE_NAME="LambdaDashboardTestRole" + ROLE_ARN=$(aws iam create-role \ + --role-name "$ROLE_NAME" \ + --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]}' \ + --query "Role.Arn" \ + --output text) + + if [ $? -ne 0 ]; then + handle_error "Failed to create IAM role for Lambda function" + fi + + echo "Waiting for role to be available..." + sleep 10 + + # Attach basic Lambda execution policy + aws iam attach-role-policy \ + --role-name "$ROLE_NAME" \ + --policy-arn "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" + + if [ $? -ne 0 ]; then + aws iam delete-role --role-name "$ROLE_NAME" + handle_error "Failed to attach policy to IAM role" + fi + + # Create the Lambda function + FUNCTION_NAME="DashboardTestFunction" + aws lambda create-function \ + --function-name "$FUNCTION_NAME" \ + --runtime nodejs18.x \ + --role "$ROLE_ARN" \ + --handler index.handler \ + --zip-file fileb://function.zip + + if [ $? 
-ne 0 ]; then
        aws iam detach-role-policy \
            --role-name "$ROLE_NAME" \
            --policy-arn "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
        aws iam delete-role --role-name "$ROLE_NAME"
        handle_error "Failed to create Lambda function"
    fi

    # Invoke the function to generate some metrics
    echo "Invoking Lambda function to generate metrics..."
    for i in {1..5}; do
        # --cli-binary-format is required by AWS CLI v2 to send a raw JSON payload
        aws lambda invoke --function-name "$FUNCTION_NAME" --cli-binary-format raw-in-base64-out --payload '{}' /dev/null > /dev/null
        sleep 1
    done

    # Clean up temporary directory
    cd - > /dev/null
    rm -rf "$TEMP_DIR"

    # Set the function name for the dashboard
    DEFAULT_FUNCTION="$FUNCTION_NAME"
else
    # Use the first Lambda function as default
    DEFAULT_FUNCTION=$(echo "$LAMBDA_FUNCTIONS" | awk '{print $1}')
    echo "Found Lambda functions. Using $DEFAULT_FUNCTION as default."
fi

# Create a dashboard with Lambda metrics and a function name variable
echo "Creating CloudWatch dashboard with Lambda function name variable..."

# Create a JSON file for the dashboard body
cat > dashboard-body.json << EOF
{
    "widgets": [
        {
            "type": "metric",
            "x": 0,
            "y": 0,
            "width": 12,
            "height": 6,
            "properties": {
                "metrics": [
                    [ "AWS/Lambda", "Invocations", "FunctionName", "\${FunctionName}" ],
                    [ ".", "Errors", ".", "." ],
                    [ ".", "Throttles", ".", "."
] + ], + "view": "timeSeries", + "stacked": false, + "region": "$REGION", + "title": "Lambda Function Metrics for \${FunctionName}", + "period": 300 + } + }, + { + "type": "metric", + "x": 0, + "y": 6, + "width": 12, + "height": 6, + "properties": { + "metrics": [ + [ "AWS/Lambda", "Duration", "FunctionName", "\${FunctionName}", { "stat": "Average" } ] + ], + "view": "timeSeries", + "stacked": false, + "region": "$REGION", + "title": "Duration for \${FunctionName}", + "period": 300 + } + }, + { + "type": "metric", + "x": 12, + "y": 0, + "width": 12, + "height": 6, + "properties": { + "metrics": [ + [ "AWS/Lambda", "ConcurrentExecutions", "FunctionName", "\${FunctionName}" ] + ], + "view": "timeSeries", + "stacked": false, + "region": "$REGION", + "title": "Concurrent Executions for \${FunctionName}", + "period": 300 + } + } + ], + "periodOverride": "auto", + "variables": [ + { + "type": "property", + "id": "FunctionName", + "property": "FunctionName", + "label": "Lambda Function", + "inputType": "select", + "values": [ + { + "value": "$DEFAULT_FUNCTION", + "label": "$DEFAULT_FUNCTION" + } + ] + } + ] +} +EOF + +# Create the dashboard using the JSON file +DASHBOARD_RESULT=$(aws cloudwatch put-dashboard --dashboard-name LambdaMetricsDashboard --dashboard-body file://dashboard-body.json) +DASHBOARD_EXIT_CODE=$? + +# Check if there was a fatal error +if [ $DASHBOARD_EXIT_CODE -ne 0 ]; then + # If we created resources, clean them up + if [ -n "${FUNCTION_NAME:-}" ]; then + aws lambda delete-function --function-name "$FUNCTION_NAME" + aws iam detach-role-policy \ + --role-name "$ROLE_NAME" \ + --policy-arn "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" + aws iam delete-role --role-name "$ROLE_NAME" + fi + handle_error "Failed to create CloudWatch dashboard." 
+fi + +# Display any validation messages but continue +if [[ "$DASHBOARD_RESULT" == *"DashboardValidationMessages"* ]]; then + echo "Dashboard created with validation messages:" + echo "$DASHBOARD_RESULT" + echo "These validation messages are warnings and the dashboard should still function." +else + echo "Dashboard created successfully!" +fi + +# Verify the dashboard was created +echo "Verifying dashboard creation..." +DASHBOARD_INFO=$(aws cloudwatch get-dashboard --dashboard-name LambdaMetricsDashboard) +DASHBOARD_INFO_EXIT_CODE=$? + +if [ $DASHBOARD_INFO_EXIT_CODE -ne 0 ]; then + # If we created resources, clean them up + if [ -n "${FUNCTION_NAME:-}" ]; then + aws lambda delete-function --function-name "$FUNCTION_NAME" + aws iam detach-role-policy \ + --role-name "$ROLE_NAME" \ + --policy-arn "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" + aws iam delete-role --role-name "$ROLE_NAME" + fi + handle_error "Failed to verify dashboard creation." +fi + +echo "Dashboard verification successful!" +echo "Dashboard details:" +echo "$DASHBOARD_INFO" + +# List all dashboards to confirm +echo "Listing all dashboards:" +DASHBOARDS=$(aws cloudwatch list-dashboards) +DASHBOARDS_EXIT_CODE=$? + +if [ $DASHBOARDS_EXIT_CODE -ne 0 ]; then + # If we created resources, clean them up + if [ -n "${FUNCTION_NAME:-}" ]; then + aws lambda delete-function --function-name "$FUNCTION_NAME" + aws iam detach-role-policy \ + --role-name "$ROLE_NAME" \ + --policy-arn "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" + aws iam delete-role --role-name "$ROLE_NAME" + fi + handle_error "Failed to list dashboards." +fi +echo "$DASHBOARDS" + +# Show instructions for accessing the dashboard +echo "" +echo "Dashboard created successfully! To access it:" +echo "1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/" +echo "2. In the navigation pane, choose Dashboards" +echo "3. Select LambdaMetricsDashboard" +echo "4. 
You should see a dropdown menu labeled 'Lambda Function' at the top of the dashboard" +echo "5. Use this dropdown to select different Lambda functions and see their metrics" +echo "" + +# Create a list of resources for cleanup +RESOURCES=("- CloudWatch Dashboard: LambdaMetricsDashboard") +if [ -n "${FUNCTION_NAME:-}" ]; then + RESOURCES+=("- Lambda Function: $FUNCTION_NAME") + RESOURCES+=("- IAM Role: $ROLE_NAME") +fi + +# Prompt for cleanup +echo "===========================================" +echo "CLEANUP CONFIRMATION" +echo "===========================================" +echo "Resources created:" +for resource in "${RESOURCES[@]}"; do + echo "$resource" +done +echo "" +echo "Do you want to clean up all created resources? (y/n): " +read -r CLEANUP_CHOICE + +if [[ "${CLEANUP_CHOICE,,}" == "y" ]]; then + echo "Cleaning up resources..." + + # Delete the dashboard + aws cloudwatch delete-dashboards --dashboard-names LambdaMetricsDashboard + if [ $? -ne 0 ]; then + echo "WARNING: Failed to delete dashboard. You may need to delete it manually." + else + echo "Dashboard deleted successfully." + fi + + # If we created a Lambda function, delete it and its role + if [ -n "${FUNCTION_NAME:-}" ]; then + echo "Deleting Lambda function..." + aws lambda delete-function --function-name "$FUNCTION_NAME" + if [ $? -ne 0 ]; then + echo "WARNING: Failed to delete Lambda function. You may need to delete it manually." + else + echo "Lambda function deleted successfully." + fi + + echo "Detaching role policy..." + aws iam detach-role-policy \ + --role-name "$ROLE_NAME" \ + --policy-arn "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" + if [ $? -ne 0 ]; then + echo "WARNING: Failed to detach role policy. You may need to detach it manually." + else + echo "Role policy detached successfully." + fi + + echo "Deleting IAM role..." + aws iam delete-role --role-name "$ROLE_NAME" + if [ $? -ne 0 ]; then + echo "WARNING: Failed to delete IAM role. 
You may need to delete it manually." + else + echo "IAM role deleted successfully." + fi + fi + + # Clean up the JSON file + rm -f dashboard-body.json + + echo "Cleanup complete." +else + echo "Resources were not cleaned up. You can manually delete them later with:" + echo "aws cloudwatch delete-dashboards --dashboard-names LambdaMetricsDashboard" + if [ -n "${FUNCTION_NAME:-}" ]; then + echo "aws lambda delete-function --function-name $FUNCTION_NAME" + echo "aws iam detach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" + echo "aws iam delete-role --role-name $ROLE_NAME" + fi +fi + +echo "Script completed successfully!" diff --git a/tuts/034-eks-gs/README.md b/tuts/034-eks-gs/README.md new file mode 100644 index 00000000..36c41baa --- /dev/null +++ b/tuts/034-eks-gs/README.md @@ -0,0 +1,5 @@ +# Amazon Elastic Kubernetes Service getting started tutorial + +This tutorial provides a comprehensive introduction to Amazon Elastic Kubernetes Service (EKS) using the AWS CLI. You'll learn how to create an EKS cluster, configure node groups, and deploy applications to your Kubernetes environment on AWS. + +You can either run the provided shell script to automatically set up your EKS cluster and supporting infrastructure, or follow the step-by-step instructions in the tutorial markdown file to understand each component and customize the cluster configuration for your specific requirements. diff --git a/tuts/034-eks-gs/eks-gs.md b/tuts/034-eks-gs/eks-gs.md new file mode 100644 index 00000000..8e2e77bb --- /dev/null +++ b/tuts/034-eks-gs/eks-gs.md @@ -0,0 +1,432 @@ +# Getting started with Amazon EKS using the AWS CLI + +This tutorial guides you through creating and managing an Amazon Elastic Kubernetes Service (Amazon EKS) cluster using the AWS Command Line Interface (AWS CLI). 
You'll learn how to create all the required resources for a functional EKS cluster, including a VPC, IAM roles, the cluster itself, and a managed node group. + +## Prerequisites + +Before you begin this tutorial, make sure you have the following: + +1. The AWS CLI version 2 installed and configured. If you need to install it, follow the [AWS CLI installation guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). + +2. The `kubectl` command line tool installed. For installation instructions, see [Installing kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html). + +3. Sufficient IAM permissions to create and manage EKS clusters, IAM roles, CloudFormation stacks, and VPC resources. For more information about the required permissions, see [Amazon EKS IAM permissions](https://docs.aws.amazon.com/eks/latest/userguide/security_iam_service-with-iam.html). + +4. Basic familiarity with Kubernetes concepts and command line interfaces. + +5. **Estimated time**: This tutorial takes approximately 30-45 minutes to complete, not including wait times for resource creation (EKS cluster creation can take 10-15 minutes). + +6. **Estimated cost**: The resources created in this tutorial will cost approximately $0.23 per hour ($166 per month if left running). This includes: + - EKS Cluster: $0.10 per hour + - EC2 Instances (2 x t3.medium): $0.0832 per hour + - NAT Gateway: $0.045 per hour + +Verify that your AWS CLI is properly configured by running the following command: + +``` +aws sts get-caller-identity +``` + +This command returns your AWS account ID, IAM user or role, and AWS account ARN, confirming that your credentials are set up correctly. + +## Create a VPC for your EKS cluster + +Amazon EKS requires a VPC with specific configurations to operate properly. In this section, you'll create a VPC with public and private subnets using an AWS CloudFormation template. 
+ +Run the following command to create a VPC using a CloudFormation template provided by AWS: + +``` +aws cloudformation create-stack \ + --stack-name my-eks-vpc-stack \ + --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml +``` + +This command creates a CloudFormation stack that provisions a VPC with both public and private subnets across multiple Availability Zones, along with the necessary route tables and security groups for an EKS cluster. + +Wait for the stack creation to complete before proceeding: + +``` +aws cloudformation wait stack-create-complete \ + --stack-name my-eks-vpc-stack +``` + +The CloudFormation stack creates all the networking resources required for your EKS cluster, including subnets with the proper tagging for Kubernetes to use them effectively. + +## Create IAM roles for your EKS cluster + +Amazon EKS requires two IAM roles: one for the EKS cluster service and another for the worker nodes. In this section, you'll create both roles with the necessary permissions. + +**Create the EKS cluster IAM role** + +First, create a trust policy file that allows the EKS service to assume the role: + +``` +cat > eks-cluster-role-trust-policy.json << EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "eks.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] +} +EOF +``` + +This trust policy defines that only the EKS service can assume this role. 
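As an optional sanity check before creating the role (assumes `jq` is installed), you can read the trusted principal back out of the policy file. The snippet recreates the policy in a temporary file so it is self-contained:

```
# Recreate the trust policy inline, then read back the trusted principal.
cat > /tmp/eks-trust-check.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
jq -r '.Statement[0].Principal.Service' /tmp/eks-trust-check.json
```

The command prints `eks.amazonaws.com`; anything else means the policy file was mistyped, which would surface later as a confusing `create-role` or `create-cluster` failure.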
+ +Now create the cluster role using the trust policy: + +``` +aws iam create-role \ + --role-name myAmazonEKSClusterRole \ + --assume-role-policy-document file://"eks-cluster-role-trust-policy.json" +``` + +Attach the required EKS cluster policy to the role: + +``` +aws iam attach-role-policy \ + --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy \ + --role-name myAmazonEKSClusterRole +``` + +This policy grants the permissions necessary for EKS to create and manage resources on your behalf. + +**Create the EKS node IAM role** + +Create a trust policy file for the node role that allows EC2 instances to assume the role: + +``` +cat > node-role-trust-policy.json << EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "ec2.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] +} +EOF +``` + +Create the node role using this trust policy: + +``` +aws iam create-role \ + --role-name myAmazonEKSNodeRole \ + --assume-role-policy-document file://"node-role-trust-policy.json" +``` + +Attach the three required policies to the node role: + +``` +aws iam attach-role-policy \ + --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \ + --role-name myAmazonEKSNodeRole + +aws iam attach-role-policy \ + --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \ + --role-name myAmazonEKSNodeRole + +aws iam attach-role-policy \ + --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \ + --role-name myAmazonEKSNodeRole +``` + +These policies allow the worker nodes to connect to the EKS cluster, download container images, and configure networking. + +## Create your EKS cluster + +Now that you have the necessary networking and IAM resources, you can create your EKS cluster. In this section, you'll retrieve information from your VPC and create the cluster. 
+ +First, retrieve the VPC ID, subnet IDs, and security group ID from the CloudFormation stack: + +``` +VPC_ID=$(aws cloudformation describe-stacks \ + --stack-name my-eks-vpc-stack \ + --query "Stacks[0].Outputs[?OutputKey=='VpcId'].OutputValue" \ + --output text) + +SUBNET_IDS=$(aws cloudformation describe-stacks \ + --stack-name my-eks-vpc-stack \ + --query "Stacks[0].Outputs[?OutputKey=='SubnetIds'].OutputValue" \ + --output text) + +SECURITY_GROUP_ID=$(aws cloudformation describe-stacks \ + --stack-name my-eks-vpc-stack \ + --query "Stacks[0].Outputs[?OutputKey=='SecurityGroups'].OutputValue" \ + --output text) +``` + +These commands extract the necessary resource IDs from the CloudFormation stack outputs. + +Now create the EKS cluster using these resources: + +``` +aws eks create-cluster \ + --name my-cluster \ + --role-arn $(aws iam get-role --role-name myAmazonEKSClusterRole --query "Role.Arn" --output text) \ + --resources-vpc-config subnetIds=$SUBNET_IDS,securityGroupIds=$SECURITY_GROUP_ID +``` + +This command creates an EKS cluster named "my-cluster" using the IAM role and VPC resources you created earlier. + +Creating an EKS cluster takes 10-15 minutes. Wait for the cluster to become active before proceeding: + +``` +aws eks wait cluster-active \ + --name my-cluster +``` + +This command will wait until the cluster is fully provisioned and active. + +## Configure kubectl to communicate with your cluster + +To interact with your Kubernetes cluster, you need to configure the `kubectl` tool. In this section, you'll update your kubeconfig file to connect to your new cluster. + +Run the following command to update your kubeconfig: + +``` +aws eks update-kubeconfig \ + --name my-cluster +``` + +This command adds an entry to your kubeconfig file that contains the necessary information to connect to your EKS cluster. 
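Note that `update-kubeconfig` names the new context after the cluster's ARN. The following is a local illustration only — the file below is a minimal stand-in for the entry written to `~/.kube/config`, with a placeholder account ID and region:

```
# Minimal stand-in for the kubeconfig entry update-kubeconfig writes
# (placeholder account ID and region).
cat > /tmp/demo-kubeconfig << 'EOF'
current-context: arn:aws:eks:us-west-2:111122223333:cluster/my-cluster
EOF
grep '^current-context:' /tmp/demo-kubeconfig | awk '{print $2}'
```

Against a live cluster, `kubectl config current-context` shows the same ARN-style name, which is a quick way to confirm kubectl is pointed at the cluster you just created.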

Test your configuration by retrieving the cluster services:

```
kubectl get svc
```

If successful, you should see output similar to:

```
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   1m
```

This confirms that your kubectl configuration is working correctly and can communicate with your EKS cluster.

## Create a managed node group

Now that your EKS cluster is running, you need to add worker nodes to run your applications. In this section, you'll create a managed node group that automatically provisions and manages EC2 instances for your cluster.

Create a managed node group using the node role you created earlier:

```
aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --node-role $(aws iam get-role --role-name myAmazonEKSNodeRole --query "Role.Arn" --output text) \
  --subnets $(echo $SUBNET_IDS | tr ',' ' ')
```

This command creates a managed node group named "my-nodegroup" in your EKS cluster, using the IAM role and subnets you specified.

Creating a node group takes 5-10 minutes. Wait for the node group to become active:

```
aws eks wait nodegroup-active \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup
```

Once the node group is active, verify that the nodes have joined your cluster:

```
kubectl get nodes
```

You should see a list of nodes that have been provisioned and joined your cluster. If the nodes don't appear immediately, wait a minute or two and try again, as it takes some time for the nodes to register with the Kubernetes control plane.

## View your cluster resources

Now that your cluster is up and running with worker nodes, you can explore the resources that have been created. In this section, you'll use both AWS CLI and kubectl commands to view your cluster resources.
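A note on the `--subnets` value used when creating the node group above: the CloudFormation output is a single comma-separated string, so it has to be split into separate arguments. The `$(echo $SUBNET_IDS | tr ',' ' ')` form works; a pure-bash sketch of the same split (with illustrative subnet IDs) looks like this:

```shell
# Turn a comma-separated subnet list (illustrative values) into an array
# whose elements can be passed as separate --subnets arguments.
SUBNET_IDS="subnet-0a1b2c,subnet-3d4e5f,subnet-6a7b8c"

# ${SUBNET_IDS//,/ } replaces every comma with a space; the unquoted
# expansion then word-splits into array elements.
SUBNET_IDS_ARRAY=(${SUBNET_IDS//,/ })

echo "count: ${#SUBNET_IDS_ARRAY[@]}"
echo "first: ${SUBNET_IDS_ARRAY[0]}"
```

The accompanying script for this tutorial uses the array form for the same purpose.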
+ +View detailed information about your cluster: + +``` +aws eks describe-cluster \ + --name my-cluster +``` + +This command provides comprehensive information about your EKS cluster, including its status, endpoint, and configuration. + +View information about your node group: + +``` +aws eks describe-nodegroup \ + --cluster-name my-cluster \ + --nodegroup-name my-nodegroup +``` + +This command shows details about your managed node group, including the instance types, scaling configuration, and health status. + +View all Kubernetes resources across all namespaces: + +``` +kubectl get all --all-namespaces +``` + +This command lists all Kubernetes resources (pods, services, deployments, etc.) running in your cluster across all namespaces. + +## Troubleshooting + +If you encounter issues during this tutorial, here are some common problems and their solutions: + +**Issue: Insufficient permissions** + +If you receive an error about insufficient permissions, ensure that your IAM user or role has the necessary permissions to create and manage EKS resources. You may need to attach additional policies or create a custom policy. + +**Issue: Cluster creation fails** + +If cluster creation fails, check the error message for details. Common issues include: +- VPC configuration problems: Ensure your VPC has both public and private subnets. +- Service quota limits: You may have reached your account's limit for EKS clusters. +- IAM role issues: Ensure the cluster role has the correct trust relationship and permissions. + +**Issue: Nodes don't join the cluster** + +If nodes don't appear when you run `kubectl get nodes`: +- Wait a few minutes, as it can take time for nodes to register. +- Check the node group status with `aws eks describe-nodegroup`. +- Verify that the node role has all three required policies attached. + +**Issue: kubectl commands fail** + +If kubectl commands return errors: +- Ensure you've run `aws eks update-kubeconfig` with the correct cluster name. 
+- Check that your AWS CLI credentials are valid and have EKS permissions. +- Verify that kubectl is properly installed and in your PATH. + +## Clean up resources + +When you're finished with your EKS cluster, it's important to clean up the resources to avoid incurring unnecessary charges. In this section, you'll delete all the resources you created. + +First, delete the node group: + +``` +aws eks delete-nodegroup \ + --cluster-name my-cluster \ + --nodegroup-name my-nodegroup +``` + +Wait for the node group to be deleted: + +``` +aws eks wait nodegroup-deleted \ + --cluster-name my-cluster \ + --nodegroup-name my-nodegroup +``` + +Next, delete the EKS cluster: + +``` +aws eks delete-cluster \ + --name my-cluster +``` + +Wait for the cluster to be deleted: + +``` +aws eks wait cluster-deleted \ + --name my-cluster +``` + +Delete the CloudFormation stack that created your VPC: + +``` +aws cloudformation delete-stack \ + --stack-name my-eks-vpc-stack +``` + +Finally, delete the IAM roles you created: + +``` +aws iam detach-role-policy \ + --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy \ + --role-name myAmazonEKSClusterRole + +aws iam delete-role \ + --role-name myAmazonEKSClusterRole + +aws iam detach-role-policy \ + --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \ + --role-name myAmazonEKSNodeRole + +aws iam detach-role-policy \ + --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \ + --role-name myAmazonEKSNodeRole + +aws iam detach-role-policy \ + --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \ + --role-name myAmazonEKSNodeRole + +aws iam delete-role \ + --role-name myAmazonEKSNodeRole +``` + +These commands detach the policies from the roles and then delete the roles themselves. + +## Going to production + +This tutorial is designed to help you learn how to create and manage an EKS cluster using the AWS CLI. 
For production environments, consider the following additional best practices:

### Security considerations

1. **Network security**:
   - Place worker nodes in private subnets only
   - Use security groups to restrict traffic between pods
   - Consider using private API server endpoints

2. **IAM and RBAC**:
   - Implement fine-grained access control using Kubernetes RBAC
   - Use IAM roles for service accounts instead of node instance roles when possible
   - Follow the principle of least privilege for all IAM roles

3. **Encryption**:
   - Enable envelope encryption of Kubernetes secrets with AWS KMS
   - Use AWS KMS for encrypting EBS volumes
   - Use network policies to restrict pod-to-pod traffic, and consider a service mesh with mutual TLS if you need pod-to-pod encryption

For more information on EKS security best practices, see [Amazon EKS security](https://docs.aws.amazon.com/eks/latest/userguide/security.html).

### Architecture considerations

1. **High availability**:
   - Deploy across multiple Availability Zones
   - Use multiple node groups for different workload types
   - Implement proper pod disruption budgets

2. **Scaling**:
   - Configure cluster autoscaler for automatic node scaling
   - Use horizontal pod autoscaler for application scaling
   - Consider Karpenter for more efficient node provisioning

3. **Monitoring and logging**:
   - Enable CloudWatch Container Insights
   - Set up Prometheus and Grafana for monitoring
   - Configure Fluentd or Fluent Bit for centralized logging

For more information on EKS architecture best practices, see the [EKS Best Practices Guide](https://aws.github.io/aws-eks-best-practices/).
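As an illustration of the IAM-roles-for-service-accounts (IRSA) recommendation above: such a role uses a web-identity trust policy scoped to the cluster's OIDC provider. The sketch below shows the shape of that document — the account ID, OIDC provider ID, region, namespace, and service account name are all placeholders you would substitute with your own values:

```shell
# Illustrative only: every ARN/ID below is a placeholder. A real policy uses
# your account ID and your cluster's actual OIDC provider URL.
cat > irsa-trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/EXAMPLE1234567890"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-west-2.amazonaws.com/id/EXAMPLE1234567890:sub": "system:serviceaccount:default:my-app"
        }
      }
    }
  ]
}
EOF

python3 -m json.tool irsa-trust-policy.json > /dev/null && echo "irsa-trust-policy.json is valid JSON"
```

The `sub` condition pins the role to one Kubernetes service account, so pods using other service accounts cannot assume it.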
+ +## Next steps + +Now that you've learned how to create and manage an Amazon EKS cluster using the AWS CLI, you can explore more advanced features and use cases: + +* Deploy a [sample application](https://docs.aws.amazon.com/eks/latest/userguide/sample-deployment.html) to your EKS cluster +* Learn how to [manage access to your cluster](https://docs.aws.amazon.com/eks/latest/userguide/grant-k8s-access.html) for other IAM users and roles +* Explore [cluster autoscaling](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html) to automatically adjust the size of your node groups based on demand +* Configure [persistent storage](https://docs.aws.amazon.com/eks/latest/userguide/storage.html) for your applications using Amazon EBS or Amazon EFS +* Set up [monitoring and logging](https://docs.aws.amazon.com/eks/latest/userguide/monitoring.html) for your EKS cluster +* Implement [security best practices](https://docs.aws.amazon.com/eks/latest/userguide/security.html) for your Kubernetes workloads diff --git a/tuts/034-eks-gs/eks-gs.sh b/tuts/034-eks-gs/eks-gs.sh new file mode 100755 index 00000000..d55964c6 --- /dev/null +++ b/tuts/034-eks-gs/eks-gs.sh @@ -0,0 +1,427 @@ +#!/bin/bash + +# Amazon EKS Cluster Creation Script (v2) +# This script creates an Amazon EKS cluster with a managed node group using the AWS CLI + +# Set up logging +LOG_FILE="eks-cluster-creation-v2.log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +echo "Starting Amazon EKS cluster creation script at $(date)" +echo "All commands and outputs will be logged to $LOG_FILE" + +# Error handling function +handle_error() { + echo "ERROR: $1" + echo "Attempting to clean up resources..." + cleanup_resources + exit 1 +} + +# Function to check command success +check_command() { + if [ $? -ne 0 ] || echo "$1" | grep -i "error" > /dev/null; then + handle_error "$1" + fi +} + +# Function to check if kubectl is installed +check_kubectl() { + if ! 
command -v kubectl &> /dev/null; then + echo "WARNING: kubectl is not installed or not in your PATH." + echo "" + echo "To install kubectl, follow these instructions based on your operating system:" + echo "" + echo "For Linux:" + echo " 1. Download the latest release:" + echo " curl -LO \"https://dl.k8s.io/release/\$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl\"" + echo "" + echo " 2. Make the kubectl binary executable:" + echo " chmod +x ./kubectl" + echo "" + echo " 3. Move the binary to your PATH:" + echo " sudo mv ./kubectl /usr/local/bin/kubectl" + echo "" + echo "For macOS:" + echo " 1. Using Homebrew:" + echo " brew install kubectl" + echo " or" + echo " 2. Using curl:" + echo " curl -LO \"https://dl.k8s.io/release/\$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl\"" + echo " chmod +x ./kubectl" + echo " sudo mv ./kubectl /usr/local/bin/kubectl" + echo "" + echo "For Windows:" + echo " 1. Using curl:" + echo " curl -LO \"https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe\"" + echo " Add the binary to your PATH" + echo " or" + echo " 2. 
Using Chocolatey:" + echo " choco install kubernetes-cli" + echo "" + echo "After installation, verify with: kubectl version --client" + echo "" + return 1 + fi + return 0 +} + +# Generate a random identifier for resource names +RANDOM_ID=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | fold -w 6 | head -n 1) +STACK_NAME="eks-vpc-stack-${RANDOM_ID}" +CLUSTER_NAME="eks-cluster-${RANDOM_ID}" +NODEGROUP_NAME="eks-nodegroup-${RANDOM_ID}" +CLUSTER_ROLE_NAME="EKSClusterRole-${RANDOM_ID}" +NODE_ROLE_NAME="EKSNodeRole-${RANDOM_ID}" + +echo "Using the following resource names:" +echo "- VPC Stack: $STACK_NAME" +echo "- EKS Cluster: $CLUSTER_NAME" +echo "- Node Group: $NODEGROUP_NAME" +echo "- Cluster IAM Role: $CLUSTER_ROLE_NAME" +echo "- Node IAM Role: $NODE_ROLE_NAME" + +# Array to track created resources for cleanup +declare -a CREATED_RESOURCES + +# Function to clean up resources +cleanup_resources() { + echo "Cleaning up resources in reverse order..." + + # Check if node group exists and delete it + if aws eks list-nodegroups --cluster-name "$CLUSTER_NAME" --query "nodegroups[?contains(@,'$NODEGROUP_NAME')]" --output text 2>/dev/null | grep -q "$NODEGROUP_NAME"; then + echo "Deleting node group: $NODEGROUP_NAME" + aws eks delete-nodegroup --cluster-name "$CLUSTER_NAME" --nodegroup-name "$NODEGROUP_NAME" + echo "Waiting for node group deletion to complete..." + aws eks wait nodegroup-deleted --cluster-name "$CLUSTER_NAME" --nodegroup-name "$NODEGROUP_NAME" + echo "Node group deleted successfully." + fi + + # Check if cluster exists and delete it + if aws eks describe-cluster --name "$CLUSTER_NAME" 2>/dev/null; then + echo "Deleting cluster: $CLUSTER_NAME" + aws eks delete-cluster --name "$CLUSTER_NAME" + echo "Waiting for cluster deletion to complete (this may take several minutes)..." + aws eks wait cluster-deleted --name "$CLUSTER_NAME" + echo "Cluster deleted successfully." 
+ fi + + # Check if CloudFormation stack exists and delete it + if aws cloudformation describe-stacks --stack-name "$STACK_NAME" 2>/dev/null; then + echo "Deleting CloudFormation stack: $STACK_NAME" + aws cloudformation delete-stack --stack-name "$STACK_NAME" + echo "Waiting for CloudFormation stack deletion to complete..." + aws cloudformation wait stack-delete-complete --stack-name "$STACK_NAME" + echo "CloudFormation stack deleted successfully." + fi + + # Clean up IAM roles + if aws iam get-role --role-name "$NODE_ROLE_NAME" 2>/dev/null; then + echo "Detaching policies from node role: $NODE_ROLE_NAME" + aws iam detach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy --role-name "$NODE_ROLE_NAME" + aws iam detach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly --role-name "$NODE_ROLE_NAME" + aws iam detach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy --role-name "$NODE_ROLE_NAME" + echo "Deleting node role: $NODE_ROLE_NAME" + aws iam delete-role --role-name "$NODE_ROLE_NAME" + echo "Node role deleted successfully." + fi + + if aws iam get-role --role-name "$CLUSTER_ROLE_NAME" 2>/dev/null; then + echo "Detaching policies from cluster role: $CLUSTER_ROLE_NAME" + aws iam detach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy --role-name "$CLUSTER_ROLE_NAME" + echo "Deleting cluster role: $CLUSTER_ROLE_NAME" + aws iam delete-role --role-name "$CLUSTER_ROLE_NAME" + echo "Cluster role deleted successfully." + fi + + echo "Cleanup complete." +} + +# Trap to ensure cleanup on script exit +trap 'echo "Script interrupted. Cleaning up resources..."; cleanup_resources; exit 1' SIGINT SIGTERM + +# Verify AWS CLI configuration +echo "Verifying AWS CLI configuration..." +AWS_ACCOUNT_INFO=$(aws sts get-caller-identity) +check_command "$AWS_ACCOUNT_INFO" +echo "AWS CLI is properly configured." 
+ +# Step 1: Create VPC using CloudFormation +echo "Step 1: Creating VPC with CloudFormation..." +echo "Creating CloudFormation stack: $STACK_NAME" + +# Create the CloudFormation stack +CF_CREATE_OUTPUT=$(aws cloudformation create-stack \ + --stack-name "$STACK_NAME" \ + --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml) +check_command "$CF_CREATE_OUTPUT" +CREATED_RESOURCES+=("CloudFormation Stack: $STACK_NAME") + +echo "Waiting for CloudFormation stack to complete (this may take a few minutes)..." +aws cloudformation wait stack-create-complete --stack-name "$STACK_NAME" +if [ $? -ne 0 ]; then + handle_error "CloudFormation stack creation failed" +fi +echo "CloudFormation stack created successfully." + +# Step 2: Create IAM roles for EKS +echo "Step 2: Creating IAM roles for EKS..." + +# Create cluster role trust policy +echo "Creating cluster role trust policy..." +cat > eks-cluster-role-trust-policy.json << EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "eks.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] +} +EOF + +# Create cluster role +echo "Creating cluster IAM role: $CLUSTER_ROLE_NAME" +CLUSTER_ROLE_OUTPUT=$(aws iam create-role \ + --role-name "$CLUSTER_ROLE_NAME" \ + --assume-role-policy-document file://"eks-cluster-role-trust-policy.json") +check_command "$CLUSTER_ROLE_OUTPUT" +CREATED_RESOURCES+=("IAM Role: $CLUSTER_ROLE_NAME") + +# Attach policy to cluster role +echo "Attaching EKS cluster policy to role..." +ATTACH_CLUSTER_POLICY_OUTPUT=$(aws iam attach-role-policy \ + --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy \ + --role-name "$CLUSTER_ROLE_NAME") +check_command "$ATTACH_CLUSTER_POLICY_OUTPUT" + +# Create node role trust policy +echo "Creating node role trust policy..." 
+cat > node-role-trust-policy.json << EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "ec2.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] +} +EOF + +# Create node role +echo "Creating node IAM role: $NODE_ROLE_NAME" +NODE_ROLE_OUTPUT=$(aws iam create-role \ + --role-name "$NODE_ROLE_NAME" \ + --assume-role-policy-document file://"node-role-trust-policy.json") +check_command "$NODE_ROLE_OUTPUT" +CREATED_RESOURCES+=("IAM Role: $NODE_ROLE_NAME") + +# Attach policies to node role +echo "Attaching EKS node policies to role..." +ATTACH_NODE_POLICY1_OUTPUT=$(aws iam attach-role-policy \ + --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \ + --role-name "$NODE_ROLE_NAME") +check_command "$ATTACH_NODE_POLICY1_OUTPUT" + +ATTACH_NODE_POLICY2_OUTPUT=$(aws iam attach-role-policy \ + --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \ + --role-name "$NODE_ROLE_NAME") +check_command "$ATTACH_NODE_POLICY2_OUTPUT" + +ATTACH_NODE_POLICY3_OUTPUT=$(aws iam attach-role-policy \ + --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \ + --role-name "$NODE_ROLE_NAME") +check_command "$ATTACH_NODE_POLICY3_OUTPUT" + +# Step 3: Get VPC and subnet information +echo "Step 3: Getting VPC and subnet information..." 
+ +VPC_ID=$(aws cloudformation describe-stacks \ + --stack-name "$STACK_NAME" \ + --query "Stacks[0].Outputs[?OutputKey=='VpcId'].OutputValue" \ + --output text) +if [ -z "$VPC_ID" ]; then + handle_error "Failed to get VPC ID from CloudFormation stack" +fi +echo "VPC ID: $VPC_ID" + +SUBNET_IDS=$(aws cloudformation describe-stacks \ + --stack-name "$STACK_NAME" \ + --query "Stacks[0].Outputs[?OutputKey=='SubnetIds'].OutputValue" \ + --output text) +if [ -z "$SUBNET_IDS" ]; then + handle_error "Failed to get Subnet IDs from CloudFormation stack" +fi +echo "Subnet IDs: $SUBNET_IDS" + +SECURITY_GROUP_ID=$(aws cloudformation describe-stacks \ + --stack-name "$STACK_NAME" \ + --query "Stacks[0].Outputs[?OutputKey=='SecurityGroups'].OutputValue" \ + --output text) +if [ -z "$SECURITY_GROUP_ID" ]; then + handle_error "Failed to get Security Group ID from CloudFormation stack" +fi +echo "Security Group ID: $SECURITY_GROUP_ID" + +# Step 4: Create EKS cluster +echo "Step 4: Creating EKS cluster: $CLUSTER_NAME" + +CLUSTER_ROLE_ARN=$(aws iam get-role --role-name "$CLUSTER_ROLE_NAME" --query "Role.Arn" --output text) +if [ -z "$CLUSTER_ROLE_ARN" ]; then + handle_error "Failed to get Cluster Role ARN" +fi + +echo "Creating EKS cluster (this will take 10-15 minutes)..." +CREATE_CLUSTER_OUTPUT=$(aws eks create-cluster \ + --name "$CLUSTER_NAME" \ + --role-arn "$CLUSTER_ROLE_ARN" \ + --resources-vpc-config subnetIds="$SUBNET_IDS",securityGroupIds="$SECURITY_GROUP_ID") +check_command "$CREATE_CLUSTER_OUTPUT" +CREATED_RESOURCES+=("EKS Cluster: $CLUSTER_NAME") + +echo "Waiting for EKS cluster to become active (this may take 10-15 minutes)..." +aws eks wait cluster-active --name "$CLUSTER_NAME" +if [ $? -ne 0 ]; then + handle_error "Cluster creation failed or timed out" +fi +echo "EKS cluster is now active." + +# Step 5: Configure kubectl +echo "Step 5: Configuring kubectl to communicate with the cluster..." + +# Check if kubectl is installed +if ! 
check_kubectl; then + echo "Will skip kubectl configuration steps but continue with the script." + echo "You can manually configure kubectl later with: aws eks update-kubeconfig --name \"$CLUSTER_NAME\"" +else + UPDATE_KUBECONFIG_OUTPUT=$(aws eks update-kubeconfig --name "$CLUSTER_NAME") + check_command "$UPDATE_KUBECONFIG_OUTPUT" + echo "kubectl configured successfully." + + # Test kubectl configuration + echo "Testing kubectl configuration..." + KUBECTL_TEST_OUTPUT=$(kubectl get svc 2>&1) + if [ $? -ne 0 ]; then + echo "Warning: kubectl configuration test failed. This might be due to permissions or network issues." + echo "Error details: $KUBECTL_TEST_OUTPUT" + echo "Continuing with script execution..." + else + echo "$KUBECTL_TEST_OUTPUT" + echo "kubectl configuration test successful." + fi +fi + +# Step 6: Create managed node group +echo "Step 6: Creating managed node group: $NODEGROUP_NAME" + +NODE_ROLE_ARN=$(aws iam get-role --role-name "$NODE_ROLE_NAME" --query "Role.Arn" --output text) +if [ -z "$NODE_ROLE_ARN" ]; then + handle_error "Failed to get Node Role ARN" +fi + +# Convert comma-separated subnet IDs to space-separated for the create-nodegroup command +SUBNET_IDS_ARRAY=(${SUBNET_IDS//,/ }) + +echo "Creating managed node group (this will take 5-10 minutes)..." +CREATE_NODEGROUP_OUTPUT=$(aws eks create-nodegroup \ + --cluster-name "$CLUSTER_NAME" \ + --nodegroup-name "$NODEGROUP_NAME" \ + --node-role "$NODE_ROLE_ARN" \ + --subnets "${SUBNET_IDS_ARRAY[@]}") +check_command "$CREATE_NODEGROUP_OUTPUT" +CREATED_RESOURCES+=("EKS Node Group: $NODEGROUP_NAME") + +echo "Waiting for node group to become active (this may take 5-10 minutes)..." +aws eks wait nodegroup-active --cluster-name "$CLUSTER_NAME" --nodegroup-name "$NODEGROUP_NAME" +if [ $? -ne 0 ]; then + handle_error "Node group creation failed or timed out" +fi +echo "Node group is now active." + +# Step 7: Verify nodes +echo "Step 7: Verifying nodes..." 
+echo "Waiting for nodes to register with the cluster (this may take a few minutes)..." +sleep 60 # Give nodes more time to register + +# Check if kubectl is installed before attempting to use it +if ! check_kubectl; then + echo "Cannot verify nodes without kubectl. Skipping this step." + echo "You can manually verify nodes after installing kubectl with: kubectl get nodes" +else + NODES_OUTPUT=$(kubectl get nodes 2>&1) + if [ $? -ne 0 ]; then + echo "Warning: Unable to get nodes. This might be due to permissions or the nodes are still registering." + echo "Error details: $NODES_OUTPUT" + echo "Continuing with script execution..." + else + echo "$NODES_OUTPUT" + echo "Nodes verified successfully." + fi +fi + +# Step 8: View resources +echo "Step 8: Viewing cluster resources..." + +echo "Cluster information:" +CLUSTER_INFO=$(aws eks describe-cluster --name "$CLUSTER_NAME") +echo "$CLUSTER_INFO" + +echo "Node group information:" +NODEGROUP_INFO=$(aws eks describe-nodegroup --cluster-name "$CLUSTER_NAME" --nodegroup-name "$NODEGROUP_NAME") +echo "$NODEGROUP_INFO" + +echo "Kubernetes resources:" +if ! check_kubectl; then + echo "Cannot list Kubernetes resources without kubectl. Skipping this step." + echo "You can manually list resources after installing kubectl with: kubectl get all --all-namespaces" +else + KUBE_RESOURCES=$(kubectl get all --all-namespaces 2>&1) + if [ $? -ne 0 ]; then + echo "Warning: Unable to get Kubernetes resources. This might be due to permissions." + echo "Error details: $KUBE_RESOURCES" + echo "Continuing with script execution..." 
+ else + echo "$KUBE_RESOURCES" + fi +fi + +# Display summary of created resources +echo "" +echo "===========================================" +echo "RESOURCES CREATED" +echo "===========================================" +for resource in "${CREATED_RESOURCES[@]}"; do + echo "- $resource" +done +echo "===========================================" + +# Prompt for cleanup +echo "" +echo "===========================================" +echo "CLEANUP CONFIRMATION" +echo "===========================================" +echo "Do you want to clean up all created resources? (y/n): " +read -r CLEANUP_CHOICE + +if [[ "${CLEANUP_CHOICE,,}" == "y" ]]; then + cleanup_resources +else + echo "Resources will not be cleaned up. You can manually clean them up later." + echo "To clean up resources, run the following commands:" + echo "1. Delete node group: aws eks delete-nodegroup --cluster-name $CLUSTER_NAME --nodegroup-name $NODEGROUP_NAME" + echo "2. Wait for node group deletion: aws eks wait nodegroup-deleted --cluster-name $CLUSTER_NAME --nodegroup-name $NODEGROUP_NAME" + echo "3. Delete cluster: aws eks delete-cluster --name $CLUSTER_NAME" + echo "4. Wait for cluster deletion: aws eks wait cluster-deleted --name $CLUSTER_NAME" + echo "5. Delete CloudFormation stack: aws cloudformation delete-stack --stack-name $STACK_NAME" + echo "6. Detach and delete IAM roles for the node group and cluster" +fi + +echo "Script completed at $(date)" diff --git a/tuts/035-workspaces-personal/README.md b/tuts/035-workspaces-personal/README.md new file mode 100644 index 00000000..5976bd80 --- /dev/null +++ b/tuts/035-workspaces-personal/README.md @@ -0,0 +1,5 @@ +# Amazon WorkSpaces personal tutorial + +This tutorial demonstrates how to set up and manage personal Amazon WorkSpaces using the AWS CLI. You'll learn how to create virtual desktop environments in the cloud, configure user access, and manage WorkSpaces for individual users or small teams. 
+ +You can either run the provided shell script to automatically provision your WorkSpaces environment and user configurations, or follow the step-by-step instructions in the tutorial markdown file to understand each component and customize the setup for your specific organizational needs. diff --git a/tuts/035-workspaces-personal/workspaces-personal.md b/tuts/035-workspaces-personal/workspaces-personal.md new file mode 100644 index 00000000..796c5da0 --- /dev/null +++ b/tuts/035-workspaces-personal/workspaces-personal.md @@ -0,0 +1,252 @@ +# Creating and managing Amazon WorkSpaces Personal using the AWS CLI + +This tutorial guides you through creating and managing Amazon WorkSpaces Personal using the AWS Command Line Interface (AWS CLI). You'll learn how to register a directory with WorkSpaces, create a WorkSpace for a user, check its status, and perform basic management tasks. + +## Prerequisites + +Before you begin this tutorial, make sure you have the following: + +1. The AWS CLI installed and configured with appropriate credentials. If you need to install it, follow the [AWS CLI installation guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). + +2. An AWS account with permissions to create and manage WorkSpaces resources. + +3. A directory service already set up in a supported AWS Region. WorkSpaces Personal requires one of the following directory types: + - Simple AD directory + - AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) + - AD Connector to connect to an existing Microsoft Active Directory + - A trust relationship between AWS Managed Microsoft AD and your on-premises domain + - A dedicated directory using Microsoft Entra ID or another identity provider through IAM Identity Center + +4. A user account in your directory that will be assigned to the WorkSpace. + +5. 
Sufficient [service quotas](https://docs.aws.amazon.com/workspaces/latest/adminguide/workspaces-limits.html) for creating WorkSpaces in your AWS account. + +6. Basic familiarity with command line interfaces. + +### Cost considerations + +Running resources created in this tutorial will incur costs in your AWS account. Approximate costs include: + +- **WorkSpaces Personal (Standard bundle with Windows)**: + - AlwaysOn mode: ~$35/month + - AutoStop mode: ~$9.75/month + $0.26/hour of usage + +- **Directory Services** (if you need to create one): + - AWS Simple AD (Small): ~$36.50/month + - AWS Managed Microsoft AD (Standard): ~$292/month + - AD Connector: ~$36.50/month + +Additional charges may apply for data transfer, increased storage volumes, and application licensing. For the most current pricing information, see the [Amazon WorkSpaces Pricing page](https://aws.amazon.com/workspaces/pricing/). + +## Verify WorkSpaces availability in your region + +Amazon WorkSpaces is not available in all AWS Regions. Before proceeding, verify that WorkSpaces is available in your chosen region by checking the [WorkSpaces supported regions](https://docs.aws.amazon.com/workspaces/latest/adminguide/workspaces-regions.html) in the documentation. + +Once you've confirmed WorkSpaces availability, set your AWS region: + +``` +export AWS_DEFAULT_REGION=us-west-2 +``` + +Replace `us-west-2` with your preferred region where WorkSpaces is available. + +## Register a directory with WorkSpaces + +Before creating WorkSpaces, you need to register your directory with the WorkSpaces service. First, check if your directory is already registered: + +``` +aws workspaces describe-workspace-directories +``` + +This command lists all directories that are registered with WorkSpaces. If your directory is not listed, you need to register it: + +``` +aws workspaces register-workspace-directory --directory-id d-abcd1234 +``` + +Replace `d-abcd1234` with your actual directory ID. 
The registration process may take a few minutes to complete. You can check the registration status with:

```
aws workspaces describe-workspace-directories --directory-ids d-abcd1234
```

Look for the `"State": "REGISTERED"` field in the output to confirm that registration is complete.

## List available WorkSpaces bundles

A bundle defines the hardware and software configuration for your WorkSpace. To list all available bundles provided by AWS:

```
aws workspaces describe-workspace-bundles --owner AMAZON
```

This command returns detailed information about all available bundles. For a more concise list showing just the bundle names and IDs:

```
aws workspaces describe-workspace-bundles --owner AMAZON --query "Bundles[*].[Name, BundleId]" --output text
```

Note the bundle ID that you want to use for creating your WorkSpace.

## Create a WorkSpace

Now you can create a WorkSpace for a user in your directory. You have two options for the running mode:

**Option 1: Create an AlwaysOn WorkSpace (billed monthly)**

```
aws workspaces create-workspaces --workspaces DirectoryId=d-abcd1234,UserName=jdoe,BundleId=wsb-abcd1234
```

**Option 2: Create an AutoStop WorkSpace (billed hourly)**

```
aws workspaces create-workspaces --workspaces 'DirectoryId=d-abcd1234,UserName=jdoe,BundleId=wsb-abcd1234,WorkspaceProperties={RunningMode=AUTO_STOP}'
```

You can also specify additional properties like timeout duration and add tags. Quote the argument so your shell doesn't brace-expand or split the `{...}` and `[...]` groups before the AWS CLI parses them:

```
aws workspaces create-workspaces --workspaces 'DirectoryId=d-abcd1234,UserName=jdoe,BundleId=wsb-abcd1234,WorkspaceProperties={RunningMode=AUTO_STOP,RunningModeAutoStopTimeoutInMinutes=60},Tags=[{Key=Department,Value=IT}]'
```

Replace the following values with your actual information:
- `d-abcd1234`: Your directory ID
- `jdoe`: The username of the user in your directory
- `wsb-abcd1234`: The bundle ID you selected

The command returns a response that includes the WorkSpace ID.
Note this ID for future management operations. + +## Check the status of your WorkSpace + +Creating a WorkSpace can take 20 minutes or more. To check the status of your WorkSpace: + +``` +aws workspaces describe-workspaces --workspace-ids ws-abcd1234 +``` + +Replace `ws-abcd1234` with your actual WorkSpace ID. Look for the `"State"` field in the output: +- `PENDING`: The WorkSpace is still being created +- `AVAILABLE`: The WorkSpace is ready to use +- `ERROR`: There was a problem creating the WorkSpace + +You can also list all WorkSpaces in a specific directory: + +``` +aws workspaces describe-workspaces --directory-id d-abcd1234 +``` + +## Troubleshooting WorkSpace creation + +If your WorkSpace creation fails or gets stuck in an error state, here are some common issues and solutions: + +1. **Insufficient service quotas**: Check your [WorkSpaces service quotas](https://docs.aws.amazon.com/workspaces/latest/adminguide/workspaces-limits.html) and request an increase if needed. + +2. **Directory issues**: Ensure your directory is properly configured and accessible. + +3. **User not found**: Verify that the username exists in your directory. + +4. **Network connectivity**: Check that your VPC and subnets are properly configured for WorkSpaces. + +5. **Region availability**: Confirm that WorkSpaces is available in your selected region. 
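One shell pitfall worth checking when a scripted `create-workspaces` call fails: the shorthand `--workspaces` argument contains `{...}` and `[...]` characters, and when the braces enclose a comma and the argument isn't quoted, bash brace expansion splits it into multiple words before the AWS CLI ever sees it. You can see the effect directly:

```shell
# Unquoted: bash brace expansion splits the {Key=...,Value=...} group,
# producing two words: Tags=[Key=Department] and Tags=[Value=IT]
printf '%s\n' Tags=[{Key=Department,Value=IT}]

# Quoted: the argument survives exactly as the CLI shorthand parser expects.
printf '%s\n' 'Tags=[{Key=Department,Value=IT}]'
```

Quoting the entire `--workspaces` value avoids the problem.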
+ +If you encounter an error, you can get more details using: + +``` +aws workspaces describe-workspace-errors --workspace-ids ws-abcd1234 +``` + +## Manage your WorkSpace + +After your WorkSpace is created, you can perform various management tasks: + +**Modify WorkSpace properties** + +Change the running mode from AutoStop to AlwaysOn: + +``` +aws workspaces modify-workspace-properties --workspace-id ws-abcd1234 --workspace-properties RunningMode=ALWAYS_ON +``` + +**Reboot a WorkSpace** + +If your WorkSpace becomes unresponsive, you can reboot it: + +``` +aws workspaces reboot-workspaces --reboot-workspace-requests WorkspaceId=ws-abcd1234 +``` + +**Rebuild a WorkSpace** + +If you need to restore the operating system to its original state: + +``` +aws workspaces rebuild-workspaces --rebuild-workspace-requests WorkspaceId=ws-abcd1234 +``` + +## Invitation emails + +When you create a WorkSpace, an invitation email is automatically sent to the user in most cases. However, invitation emails aren't sent automatically if you're using AD Connector or a trust relationship, or if the user already exists in Active Directory. + +In these cases, you need to manually send an invitation email through the AWS Management Console. For more information, see [Send an invitation email](https://docs.aws.amazon.com/workspaces/latest/adminguide/manage-workspaces-users.html#send-invitation). + +## Clean up resources + +When you no longer need your WorkSpace, you can delete it to avoid incurring charges: + +``` +aws workspaces terminate-workspaces --terminate-workspace-requests WorkspaceId=ws-abcd1234 +``` + +If you registered a directory specifically for this tutorial and no longer need it, you can deregister it: + +``` +aws workspaces deregister-workspace-directory --directory-id d-abcd1234 +``` + +Note that deregistering a directory does not delete it. The directory will still exist in AWS Directory Service, but it will no longer be available for use with WorkSpaces. 
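WorkSpace creation and termination are both asynchronous, so it can help to poll until the WorkSpace reaches a terminal state. The following is a minimal polling sketch, not part of the tutorial commands above: it assumes `python3` is available for JSON parsing, and the loop only runs when a `WORKSPACE_ID` environment variable is set.

```shell
#!/bin/sh
# Extract the State of the first WorkSpace from describe-workspaces JSON.
# Prints UNKNOWN when the Workspaces list is empty (e.g., after termination
# completes and the WorkSpace no longer appears in the output).
workspace_state() {
  python3 -c 'import sys, json
ws = json.load(sys.stdin).get("Workspaces", [])
print(ws[0]["State"] if ws else "UNKNOWN")'
}

# Poll every 60 seconds until the WorkSpace is AVAILABLE or in a terminal
# error state. Guarded so nothing runs unless WORKSPACE_ID is set.
if [ -n "${WORKSPACE_ID:-}" ]; then
  while :; do
    state=$(aws workspaces describe-workspaces \
      --workspace-ids "$WORKSPACE_ID" | workspace_state)
    echo "$(date): state=$state"
    case "$state" in
      AVAILABLE|ERROR|TERMINATED|UNKNOWN) break ;;
    esac
    sleep 60
  done
fi
```

A terminated WorkSpace eventually disappears from `describe-workspaces` output, which the helper reports as `UNKNOWN`.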
+ +## Going to production + +This tutorial is designed to help you learn how to use the AWS CLI to create and manage WorkSpaces. For production environments, consider the following additional factors: + +### Security considerations + +1. **Implement least privilege access**: Create IAM policies that grant only the permissions needed for specific roles. + +2. **Configure network security**: Use security groups and IP access control groups to restrict network access to WorkSpaces. + +3. **Enable encryption**: Configure encryption for WorkSpaces volumes and data in transit. + +4. **Implement multi-factor authentication**: Enable MFA for WorkSpaces users. + +5. **Set up monitoring and logging**: Configure CloudTrail and CloudWatch to monitor WorkSpaces activity. + +For more information, see the [Amazon WorkSpaces Security guide](https://docs.aws.amazon.com/workspaces/latest/adminguide/workspaces-security.html). + +### Architecture best practices + +1. **Automation**: Use AWS CloudFormation or Terraform to automate WorkSpaces deployment. + +2. **High availability**: Configure cross-Region redirection for disaster recovery. + +3. **Scalability**: Create custom images and bundles for consistent deployment at scale. + +4. **Cost optimization**: Implement WorkSpaces Savings Plans and choose appropriate running modes. + +5. **Monitoring and management**: Set up proactive monitoring and automated management. 
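To make the cost-optimization point concrete: tagging WorkSpaces lets you break out their costs in Cost Explorer. The sketch below uses the `aws workspaces create-tags` operation with a placeholder WorkSpace ID; the `build_tag` helper and the specific tag keys are illustrative only.

```shell
#!/bin/sh
# Build a Key=...,Value=... shorthand tag argument for the AWS CLI.
build_tag() {
  printf 'Key=%s,Value=%s' "$1" "$2"
}

# Apply cost-allocation tags to an existing WorkSpace. Guarded so the AWS
# call only runs when WORKSPACE_ID is set (e.g., WORKSPACE_ID=ws-abcd1234).
if [ -n "${WORKSPACE_ID:-}" ]; then
  aws workspaces create-tags \
    --resource-id "$WORKSPACE_ID" \
    --tags "$(build_tag Department IT)" "$(build_tag CostCenter 1234)"
fi
```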
+ +For more information on building production-ready WorkSpaces environments, see: + +- [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/) +- [Amazon WorkSpaces Best Practices](https://docs.aws.amazon.com/workspaces/latest/adminguide/best-practices.html) + +## Next steps + +Now that you've learned how to create and manage WorkSpaces using the AWS CLI, you might want to explore: + +- [Customize your WorkSpace](https://docs.aws.amazon.com/workspaces/latest/userguide/customize-workspaces.html) +- [Enable self-service WorkSpace management capabilities for your users](https://docs.aws.amazon.com/workspaces/latest/adminguide/enable-user-self-service-workspace-management.html) +- [Set up cross-Region redirection for your WorkSpaces](https://docs.aws.amazon.com/workspaces/latest/adminguide/cross-region-redirection.html) +- [Implement IP access control groups for your WorkSpaces](https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces-ip-access-control-groups.html) +- [Enable multi-factor authentication for WorkSpaces](https://docs.aws.amazon.com/workspaces/latest/adminguide/configure-workspace-authentication.html) diff --git a/tuts/035-workspaces-personal/workspaces-personal.sh b/tuts/035-workspaces-personal/workspaces-personal.sh new file mode 100755 index 00000000..c00b301a --- /dev/null +++ b/tuts/035-workspaces-personal/workspaces-personal.sh @@ -0,0 +1,445 @@ +#!/bin/bash + +# Script to create a WorkSpace in WorkSpaces Personal +# This script follows the workflow described in the AWS documentation +# https://docs.aws.amazon.com/workspaces/latest/adminguide/create-workspaces-personal.html + +# Set up logging +LOG_FILE="workspaces_creation.log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +echo "$(date): Starting WorkSpaces creation script" +echo "==============================================" + +# Initialize resource tracking array +declare -a CREATED_RESOURCES + +# Function to handle errors +handle_error() { + echo 
"ERROR: $1" + echo "Resources created before error:" + for resource in "${CREATED_RESOURCES[@]}"; do + echo " - $resource" + done + exit 1 +} + +# Function to check if a command succeeded +check_command() { + # Check for ResourceNotFound.User error specifically + if echo "$1" | grep -q "ResourceNotFound.User"; then + echo "" + echo "ERROR: User not found in the directory." + echo "" + echo "This error occurs when the specified username doesn't exist in the directory." + echo "" + echo "To resolve this issue:" + echo "1. Ensure the user exists in the directory before creating a WorkSpace." + echo "2. For Simple AD and AWS Managed Microsoft AD:" + echo " - Connect to a directory-joined instance" + echo " - Use Active Directory tools to create the user" + echo " - See: https://docs.aws.amazon.com/workspaces/latest/adminguide/manage-users.html" + echo "" + echo "3. For AD Connector:" + echo " - Create the user in your on-premises Active Directory" + echo " - Ensure proper synchronization with the AD Connector" + echo "" + echo "4. Alternatively, you can use the AWS Console to create a WorkSpace," + echo " which can create the user automatically in some directory types." 
+ echo "" + handle_error "User '$USERNAME' not found in directory '$DIRECTORY_ID'" + # Check for other errors + elif echo "$1" | grep -i "error" > /dev/null; then + handle_error "$1" + fi +} + +# Step 0: Select AWS region +echo "" +echo "==============================================" +echo "AWS REGION SELECTION" +echo "==============================================" +echo "Enter the AWS region to use (e.g., us-east-1, us-west-2):" +read -r AWS_REGION + +if [ -z "$AWS_REGION" ]; then + handle_error "Region cannot be empty" +fi + +export AWS_DEFAULT_REGION="$AWS_REGION" +echo "Using AWS region: $AWS_REGION" + +# Step 1: Prompt for directory ID +echo "" +echo "==============================================" +echo "DIRECTORY SELECTION" +echo "==============================================" +echo "Listing available directories..." + +DIRECTORIES_OUTPUT=$(aws workspaces describe-workspace-directories --output json) +check_command "$DIRECTORIES_OUTPUT" +echo "$DIRECTORIES_OUTPUT" + +# Extract directory IDs and display them +DIRECTORY_IDS=$(echo "$DIRECTORIES_OUTPUT" | grep -o '"DirectoryId": "[^"]*' | cut -d'"' -f4) + +if [ -z "$DIRECTORY_IDS" ]; then + echo "No directories found. Please create a directory first using AWS Directory Service." + echo "For more information, see: https://docs.aws.amazon.com/workspaces/latest/adminguide/register-deregister-directory.html" + exit 1 +fi + +echo "" +echo "Available directory IDs:" +echo "$DIRECTORY_IDS" +echo "" +echo "Enter the directory ID you want to use:" +read -r DIRECTORY_ID + +# Validate directory ID +if ! echo "$DIRECTORY_IDS" | grep -q "$DIRECTORY_ID"; then + echo "Directory ID $DIRECTORY_ID not found in the list of available directories." + echo "Please check the ID and try again." 
+ exit 1 +fi + +echo "Selected directory ID: $DIRECTORY_ID" + +# Step 2: Check if directory is registered with WorkSpaces +echo "" +echo "==============================================" +echo "CHECKING DIRECTORY REGISTRATION" +echo "==============================================" + +REGISTERED=$(echo "$DIRECTORIES_OUTPUT" | grep -A 5 "\"DirectoryId\": \"$DIRECTORY_ID\"" | grep -c "\"State\": \"REGISTERED\"") + +if [ "$REGISTERED" -eq 0 ]; then + echo "Directory $DIRECTORY_ID is not registered with WorkSpaces. Registering now..." + REGISTER_OUTPUT=$(aws workspaces register-workspace-directory --directory-id "$DIRECTORY_ID") + check_command "$REGISTER_OUTPUT" + echo "Directory registration initiated. This may take a few minutes." + + # Add to resource tracking + CREATED_RESOURCES+=("Directory registration: $DIRECTORY_ID") + + # Wait for directory to be registered + echo "Waiting for directory registration to complete..." + sleep 30 + + # Check registration status + REGISTRATION_CHECK=$(aws workspaces describe-workspace-directories --directory-ids "$DIRECTORY_ID") + check_command "$REGISTRATION_CHECK" + + REGISTRATION_STATE=$(echo "$REGISTRATION_CHECK" | grep -o '"State": "[^"]*' | cut -d'"' -f4) + if [ "$REGISTRATION_STATE" != "REGISTERED" ]; then + echo "Directory registration is still in progress. Current state: $REGISTRATION_STATE" + echo "Please check the AWS console for the final status." + echo "You may need to wait a few minutes before proceeding." + else + echo "Directory successfully registered with WorkSpaces." + fi +else + echo "Directory $DIRECTORY_ID is already registered with WorkSpaces." 
+fi + +# Get directory type to provide appropriate user guidance +DIRECTORY_TYPE=$(echo "$DIRECTORIES_OUTPUT" | grep -A 10 "\"DirectoryId\": \"$DIRECTORY_ID\"" | grep -o '"DirectoryType": "[^"]*' | cut -d'"' -f4) +echo "Directory type: $DIRECTORY_TYPE" + +# Display user creation guidance based on directory type +echo "" +echo "==============================================" +echo "USER CREATION GUIDANCE" +echo "==============================================" +case "$DIRECTORY_TYPE" in + "SimpleAD" | "MicrosoftAD") + echo "For $DIRECTORY_TYPE, users must be created using Active Directory tools." + echo "1. Connect to a directory-joined EC2 instance" + echo "2. Use Active Directory Users and Computers to create users" + echo "3. For detailed instructions, see: https://docs.aws.amazon.com/workspaces/latest/adminguide/manage-users.html" + ;; + "ADConnector") + echo "For AD Connector, users must exist in your on-premises Active Directory." + echo "1. Create the user in your on-premises Active Directory" + echo "2. Ensure the user is in an OU that is within the scope of your AD Connector" + echo "3. For detailed instructions, see: https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ad_connector_management.html" + ;; + *) + echo "For this directory type, ensure users exist before creating WorkSpaces." + echo "For detailed instructions, see: https://docs.aws.amazon.com/workspaces/latest/adminguide/manage-users.html" + ;; +esac +echo "" + +# Step 3: List available bundles +echo "" +echo "==============================================" +echo "BUNDLE SELECTION" +echo "==============================================" +echo "Listing available WorkSpace bundles..." 
+ +# Get bundles with a format that's easier to parse +BUNDLES_OUTPUT=$(aws workspaces describe-workspace-bundles --owner AMAZON --output text --query "Bundles[*].[BundleId,Name,ComputeType.Name,RootStorage.Capacity,UserStorage.Capacity]") +check_command "$BUNDLES_OUTPUT" + +# Extract bundle information and display in a numbered list +echo "Available bundles:" +echo "-----------------" +echo "NUM | BUNDLE ID | NAME | COMPUTE TYPE | ROOT STORAGE | USER STORAGE" +echo "-----------------------------------------------------------------" + +# Create arrays to store bundle information +declare -a BUNDLE_IDS +declare -a BUNDLE_NAMES + +# Process the output to extract bundle information +COUNT=1 +while IFS=$'\t' read -r BUNDLE_ID BUNDLE_NAME COMPUTE_TYPE ROOT_STORAGE USER_STORAGE || [[ -n "$BUNDLE_ID" ]]; do + # Store in arrays + BUNDLE_IDS[$COUNT]="$BUNDLE_ID" + BUNDLE_NAMES[$COUNT]="$BUNDLE_NAME" + + # Display with number + echo "$COUNT | $BUNDLE_ID | $BUNDLE_NAME | $COMPUTE_TYPE | $ROOT_STORAGE GB | $USER_STORAGE GB" + + ((COUNT++)) +done <<< "$BUNDLES_OUTPUT" + +# Prompt for selection +echo "" +echo "Enter the number of the bundle you want to use (1-$((COUNT-1))):" +read -r BUNDLE_SELECTION + +# Validate selection +if ! [[ "$BUNDLE_SELECTION" =~ ^[0-9]+$ ]] || [ "$BUNDLE_SELECTION" -lt 1 ] || [ "$BUNDLE_SELECTION" -ge "$COUNT" ]; then + handle_error "Invalid bundle selection. Please enter a number between 1 and $((COUNT-1))." +fi + +# Get the selected bundle ID +BUNDLE_ID="${BUNDLE_IDS[$BUNDLE_SELECTION]}" +BUNDLE_NAME="${BUNDLE_NAMES[$BUNDLE_SELECTION]}" + +echo "Selected bundle: $BUNDLE_NAME (ID: $BUNDLE_ID)" + +# Step 4: Prompt for username +echo "" +echo "==============================================" +echo "USER INFORMATION" +echo "==============================================" +echo "Enter the username for the WorkSpace:" +read -r USERNAME + +echo "NOTE: The user must already exist in the directory for the WorkSpace creation to succeed." 
+echo "If you're using Simple AD or AWS Managed Microsoft AD, the user must be created using Active Directory tools." +echo "If you're using AD Connector, the user must exist in your on-premises Active Directory." +echo "" + +echo "Enter the user's first name:" +read -r FIRST_NAME + +echo "Enter the user's last name:" +read -r LAST_NAME + +echo "Enter the user's email address:" +read -r EMAIL + +# Step 5: Choose running mode +echo "" +echo "==============================================" +echo "RUNNING MODE SELECTION" +echo "==============================================" +echo "Select running mode:" +echo "1. AlwaysOn (billed monthly)" +echo "2. AutoStop (billed hourly)" +read -r RUNNING_MODE_CHOICE + +if [ "$RUNNING_MODE_CHOICE" = "1" ]; then + RUNNING_MODE="ALWAYS_ON" + AUTO_STOP_TIMEOUT="" +else + RUNNING_MODE="AUTO_STOP" + AUTO_STOP_TIMEOUT=60 +fi + +echo "Selected running mode: $RUNNING_MODE" + +# Step 6: Add tags (optional) +echo "" +echo "==============================================" +echo "TAGS (OPTIONAL)" +echo "==============================================" +echo "Would you like to add tags to your WorkSpace? 
(y/n):" +read -r ADD_TAGS + +TAGS_JSON="" +if [ "$ADD_TAGS" = "y" ] || [ "$ADD_TAGS" = "Y" ]; then + echo "Enter tag key (e.g., Department):" + read -r TAG_KEY + + echo "Enter tag value (e.g., IT):" + read -r TAG_VALUE + + TAGS_JSON="[{\"Key\":\"$TAG_KEY\",\"Value\":\"$TAG_VALUE\"}]" +fi + +# Step 7: Create the WorkSpace +echo "" +echo "==============================================" +echo "CREATING WORKSPACE" +echo "==============================================" +echo "Creating WorkSpace with the following parameters:" +echo "Directory ID: $DIRECTORY_ID" +echo "Username: $USERNAME" +echo "Bundle ID: $BUNDLE_ID" +echo "Running Mode: $RUNNING_MODE" +if [ -n "$TAGS_JSON" ]; then + echo "Tags: $TAG_KEY=$TAG_VALUE" +fi + +# Create JSON for workspace properties +if [ "$RUNNING_MODE" = "AUTO_STOP" ]; then + PROPERTIES_JSON="{\"RunningMode\":\"$RUNNING_MODE\",\"RunningModeAutoStopTimeoutInMinutes\":$AUTO_STOP_TIMEOUT}" +else + PROPERTIES_JSON="{\"RunningMode\":\"$RUNNING_MODE\"}" +fi + +# Create JSON for workspaces parameter +WORKSPACE_JSON="{\"DirectoryId\":\"$DIRECTORY_ID\",\"UserName\":\"$USERNAME\",\"BundleId\":\"$BUNDLE_ID\",\"WorkspaceProperties\":$PROPERTIES_JSON" + +# Add tags if specified +if [ -n "$TAGS_JSON" ]; then + WORKSPACE_JSON="$WORKSPACE_JSON,\"Tags\":$TAGS_JSON" +fi + +# Close the JSON object +WORKSPACE_JSON="$WORKSPACE_JSON}" + +# Construct the create-workspaces command +CREATE_COMMAND="aws workspaces create-workspaces --workspaces '$WORKSPACE_JSON'" + +echo "Executing: $CREATE_COMMAND" +CREATE_OUTPUT=$(eval "$CREATE_COMMAND") +check_command "$CREATE_OUTPUT" +echo "$CREATE_OUTPUT" + +# Extract WorkSpace ID +WORKSPACE_ID=$(echo "$CREATE_OUTPUT" | grep -o '"WorkspaceId": "[^"]*' | head -1 | cut -d'"' -f4) + +if [ -z "$WORKSPACE_ID" ]; then + handle_error "Failed to extract WorkSpace ID from creation output." +fi + +echo "WorkSpace creation initiated. 
WorkSpace ID: $WORKSPACE_ID" +CREATED_RESOURCES+=("WorkSpace: $WORKSPACE_ID") + +# Step 8: Check WorkSpace status +echo "" +echo "==============================================" +echo "CHECKING WORKSPACE STATUS" +echo "==============================================" +echo "Checking status of WorkSpace $WORKSPACE_ID..." + +# Initial status check +STATUS_OUTPUT=$(aws workspaces describe-workspaces --workspace-ids "$WORKSPACE_ID") +check_command "$STATUS_OUTPUT" +echo "$STATUS_OUTPUT" + +WORKSPACE_STATE=$(echo "$STATUS_OUTPUT" | grep -o '"State": "[^"]*' | head -1 | cut -d'"' -f4) +echo "Current WorkSpace state: $WORKSPACE_STATE" + +# Wait for WorkSpace to be available (this can take 20+ minutes) +echo "" +echo "WorkSpace creation is in progress. This can take 20+ minutes." +echo "The script will check the status every 60 seconds." +echo "Press Ctrl+C to exit the script at any time. The WorkSpace will continue to be created." + +while [ "$WORKSPACE_STATE" = "PENDING" ]; do + echo "$(date): WorkSpace state is still PENDING. Waiting 60 seconds before checking again..." + sleep 60 + + STATUS_OUTPUT=$(aws workspaces describe-workspaces --workspace-ids "$WORKSPACE_ID") + check_command "$STATUS_OUTPUT" + + WORKSPACE_STATE=$(echo "$STATUS_OUTPUT" | grep -o '"State": "[^"]*' | head -1 | cut -d'"' -f4) + echo "$(date): Current WorkSpace state: $WORKSPACE_STATE" + + # If state is ERROR or UNHEALTHY, exit + if [ "$WORKSPACE_STATE" = "ERROR" ] || [ "$WORKSPACE_STATE" = "UNHEALTHY" ]; then + handle_error "WorkSpace creation failed. Final state: $WORKSPACE_STATE" + fi + + # If state is AVAILABLE, break the loop + if [ "$WORKSPACE_STATE" = "AVAILABLE" ]; then + break + fi +done + +# Step 9: Display WorkSpace information +echo "" +echo "==============================================" +echo "WORKSPACE CREATION COMPLETE" +echo "==============================================" +echo "WorkSpace has been successfully created!" 
+echo "WorkSpace ID: $WORKSPACE_ID" +echo "Directory ID: $DIRECTORY_ID" +echo "Username: $USERNAME" +echo "Running Mode: $RUNNING_MODE" + +# Step 10: Remind about invitation emails +echo "" +echo "==============================================" +echo "INVITATION EMAILS" +echo "==============================================" +echo "IMPORTANT: If you're using AD Connector or a trust relationship, or if the user already exists in Active Directory," +echo "invitation emails are not sent automatically. You'll need to manually send an invitation email." +echo "For more information, see: https://docs.aws.amazon.com/workspaces/latest/adminguide/manage-workspaces-users.html#send-invitation" + +# Step 11: Cleanup confirmation +echo "" +echo "==============================================" +echo "CLEANUP CONFIRMATION" +echo "==============================================" +echo "Resources created:" +for resource in "${CREATED_RESOURCES[@]}"; do + echo " - $resource" +done + +echo "" +echo "Do you want to clean up all created resources? (y/n):" +read -r CLEANUP_CHOICE + +if [ "$CLEANUP_CHOICE" = "y" ] || [ "$CLEANUP_CHOICE" = "Y" ]; then + echo "" + echo "==============================================" + echo "CLEANING UP RESOURCES" + echo "==============================================" + + # Terminate WorkSpace + if [ -n "$WORKSPACE_ID" ]; then + echo "Terminating WorkSpace $WORKSPACE_ID..." + TERMINATE_OUTPUT=$(aws workspaces terminate-workspaces --terminate-workspace-requests WorkspaceId="$WORKSPACE_ID") + check_command "$TERMINATE_OUTPUT" + echo "$TERMINATE_OUTPUT" + echo "WorkSpace termination initiated. This may take a few minutes." + fi + + # Deregister directory (only if we registered it in this script) + if [[ " ${CREATED_RESOURCES[*]} " == *"Directory registration: $DIRECTORY_ID"* ]]; then + echo "Deregistering directory $DIRECTORY_ID from WorkSpaces..." 
+ DEREGISTER_OUTPUT=$(aws workspaces deregister-workspace-directory --directory-id "$DIRECTORY_ID") + check_command "$DEREGISTER_OUTPUT" + echo "$DEREGISTER_OUTPUT" + echo "Directory deregistration initiated. This may take a few minutes." + fi + + echo "Cleanup completed." +else + echo "Skipping cleanup. Resources will remain in your AWS account." +fi + +echo "" +echo "==============================================" +echo "SCRIPT COMPLETED" +echo "==============================================" +echo "Log file: $LOG_FILE" +echo "Thank you for using the WorkSpaces creation script!" diff --git a/tuts/043-amazon-mq-gs/README.md b/tuts/043-amazon-mq-gs/README.md new file mode 100644 index 00000000..612ad60c --- /dev/null +++ b/tuts/043-amazon-mq-gs/README.md @@ -0,0 +1,5 @@ +# Amazon MQ getting started tutorial + +This tutorial provides a comprehensive introduction to Amazon MQ using the AWS CLI. You'll learn how to create and configure managed message brokers, set up queues and topics, and integrate messaging capabilities into your applications using Apache ActiveMQ or RabbitMQ. + +You can either run the provided shell script to automatically set up your Amazon MQ broker and basic messaging infrastructure, or follow the step-by-step instructions in the tutorial markdown file to understand each component and customize the configuration for your specific messaging requirements. diff --git a/tuts/043-amazon-mq-gs/amazon-mq-gs.md b/tuts/043-amazon-mq-gs/amazon-mq-gs.md new file mode 100644 index 00000000..f8dfa7b9 --- /dev/null +++ b/tuts/043-amazon-mq-gs/amazon-mq-gs.md @@ -0,0 +1,506 @@ +# Getting started with Amazon MQ for ActiveMQ using the AWS CLI and Secrets Manager + +This tutorial guides you through creating an Amazon MQ for ActiveMQ broker and connecting a Java application to it using the AWS CLI. You'll also learn how to securely manage broker credentials using AWS Secrets Manager, which is a best practice for production environments. 
+
+## Prerequisites
+
+Before you begin, make sure you have:
+
+1. **AWS CLI installed and configured** - If you haven't already, install the AWS CLI and configure it with your credentials. For installation instructions, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
+
+2. **Java Development Kit (JDK)** - You need Java 11 or later installed to run the sample application.
+
+3. **Maven** - You need Maven to build the sample Java application.
+
+4. **Required permissions** - Ensure your AWS user has permissions to create and manage Amazon MQ resources, AWS Secrets Manager secrets, and modify security groups.
+
+**Estimated time to complete**: 30-40 minutes (including broker creation time)
+
+**Estimated cost**: Running an Amazon MQ broker with a mq.t3.micro instance type costs approximately $0.068 per hour. AWS Secrets Manager costs $0.40 per secret per month and $0.05 per 10,000 API calls. The total cost for completing this tutorial should be less than $0.10 if you delete the resources immediately after completion. For the most up-to-date pricing information, see [Amazon MQ Pricing](https://aws.amazon.com/amazon-mq/pricing/) and [AWS Secrets Manager Pricing](https://aws.amazon.com/secrets-manager/pricing/).
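Before starting, a quick preflight check can confirm the tools above are installed. This is a minimal sketch; adjust the tool list to your environment.

```shell
#!/bin/sh
# Return success if a command-line tool is available on PATH.
have_tool() {
  command -v "$1" >/dev/null 2>&1
}

# Check each tool this tutorial relies on and report what is missing.
for tool in aws java mvn; do
  if have_tool "$tool"; then
    echo "found: $tool"
  else
    echo "missing: $tool (install it before continuing)"
  fi
done
```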
+ +## Step 1: Store broker credentials in AWS Secrets Manager + +First, let's create a secure password and store it in AWS Secrets Manager: + +```bash +# Generate a random identifier for resource names +RANDOM_ID=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | fold -w 8 | head -n 1) +BROKER_NAME="mq-broker-${RANDOM_ID}" +SECRET_NAME="mq-broker-creds-${RANDOM_ID}" + +# Generate a secure password with special characters, numbers, uppercase and lowercase letters +MQ_PASSWORD=$(LC_ALL=C tr -dc 'A-Za-z0-9!@#$%^&*()_+' < /dev/urandom | fold -w 20 | head -n 1) +MQ_USERNAME="mqadmin" + +# Create a JSON document with the credentials +CREDENTIALS_JSON="{\"username\":\"$MQ_USERNAME\",\"password\":\"$MQ_PASSWORD\"}" + +# Store the credentials in AWS Secrets Manager +SECRET_RESULT=$(aws secretsmanager create-secret \ + --name "$SECRET_NAME" \ + --description "Amazon MQ broker credentials for $BROKER_NAME" \ + --secret-string "$CREDENTIALS_JSON") + +# Extract secret ARN +SECRET_ARN=$(echo "$SECRET_RESULT" | grep -o '"ARN": "[^"]*' | cut -d'"' -f4) +echo "Secret created successfully. ARN: $SECRET_ARN" +``` + +This creates a secret in AWS Secrets Manager containing the username and password for your Amazon MQ broker. 
Using Secrets Manager provides several benefits: + +- Credentials are stored securely and encrypted +- You can rotate credentials automatically +- You can control access to credentials using IAM policies +- You can audit access to credentials + +## Step 2: Create an Amazon MQ broker + +Now, create a single-instance Amazon MQ broker with the ActiveMQ engine: + +```bash +# Create the broker using the credentials from the previous step +BROKER_RESULT=$(aws mq create-broker \ + --broker-name "$BROKER_NAME" \ + --engine-type ACTIVEMQ \ + --engine-version 5.18 \ + --host-instance-type mq.t3.micro \ + --deployment-mode SINGLE_INSTANCE \ + --authentication-strategy SIMPLE \ + --users "Username=$MQ_USERNAME,Password=$MQ_PASSWORD,ConsoleAccess=true" \ + --publicly-accessible \ + --auto-minor-version-upgrade) + +# Extract broker ID +BROKER_ID=$(echo "$BROKER_RESULT" | grep -o '"BrokerId": "[^"]*' | cut -d'"' -f4) +echo "Broker creation initiated. Broker ID: $BROKER_ID" +``` + +This command creates a broker with the following configuration: +- Name: A unique name with a random identifier +- Engine: ActiveMQ version 5.18 +- Instance type: mq.t3.micro (suitable for development) +- Deployment mode: Single-instance (not highly available) +- Authentication: Simple authentication with the username and password stored in Secrets Manager +- Public accessibility: Enabled (for easy access in this tutorial) + +## Step 3: Wait for the broker to be in RUNNING state + +The broker creation process takes about 15-20 minutes. You can check the status with the following command: + +```bash +# Check broker status +aws mq describe-broker --broker-id "$BROKER_ID" --query 'BrokerState' --output text +``` + +Wait until the status shows "RUNNING" before proceeding to the next step. 
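The wait can be scripted as a simple poll. The sketch below assumes the `BROKER_ID` variable from the previous step and only polls when it is set; `CREATION_FAILED` and `DELETION_IN_PROGRESS` are treated as terminal states alongside `RUNNING`.

```shell
#!/bin/sh
# Decide whether a broker state is terminal for polling purposes.
is_terminal_state() {
  case "$1" in
    RUNNING|CREATION_FAILED|DELETION_IN_PROGRESS) return 0 ;;
    *) return 1 ;;
  esac
}

# Poll the broker state every 30 seconds until it reaches a terminal state.
# Guarded so nothing runs unless BROKER_ID is set.
if [ -n "${BROKER_ID:-}" ]; then
  while :; do
    state=$(aws mq describe-broker --broker-id "$BROKER_ID" \
      --query 'BrokerState' --output text)
    echo "$(date): broker state=$state"
    is_terminal_state "$state" && break
    sleep 30
  done
fi
```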
+ +## Step 4: Get broker connection details + +Once the broker is running, retrieve its connection details: + +```bash +# Get broker details +BROKER_DETAILS=$(aws mq describe-broker --broker-id "$BROKER_ID") + +# Extract web console URL +WEB_CONSOLE=$(aws mq describe-broker --broker-id "$BROKER_ID" --query 'BrokerInstances[0].ConsoleURL' --output text) + +# Extract wire-level endpoint for OpenWire +WIRE_ENDPOINT=$(aws mq describe-broker --broker-id "$BROKER_ID" --query 'BrokerInstances[0].Endpoints[0]' --output text) + +echo "Web Console URL: $WEB_CONSOLE" +echo "Wire-level Endpoint: $WIRE_ENDPOINT" +``` + +## Step 5: Configure security group for the broker + +To connect to your broker, you need to configure its security group to allow inbound connections: + +```bash +# Get the security group ID associated with your broker +SECURITY_GROUP_ID=$(aws mq describe-broker --broker-id "$BROKER_ID" --query 'SecurityGroups[0]' --output text) + +# Get current IP address +CURRENT_IP=$(curl -s https://checkip.amazonaws.com) + +# Allow inbound connections to the web console (port 8162) +aws ec2 authorize-security-group-ingress \ + --group-id "$SECURITY_GROUP_ID" \ + --protocol tcp \ + --port 8162 \ + --cidr "${CURRENT_IP}/32" + +# Allow inbound connections to the OpenWire endpoint (port 61617) +aws ec2 authorize-security-group-ingress \ + --group-id "$SECURITY_GROUP_ID" \ + --protocol tcp \ + --port 61617 \ + --cidr "${CURRENT_IP}/32" +``` + +These commands add rules to the security group to allow connections from your current IP address to the web console (port 8162) and the OpenWire endpoint (port 61617). + +## Step 6: Create a Java application to connect to the broker + +Now, let's create a Java application that connects to your Amazon MQ broker, sends a message, and receives it. 
This application will retrieve the broker credentials from AWS Secrets Manager:
+
+```bash
+# Create project directory
+mkdir -p amazon-mq-demo/src/main/java/com/example
+
+# Create pom.xml file with required dependencies
+cat > amazon-mq-demo/pom.xml << 'EOF'
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <groupId>com.example</groupId>
+    <artifactId>amazon-mq-demo</artifactId>
+    <version>1.0-SNAPSHOT</version>
+
+    <properties>
+        <maven.compiler.source>11</maven.compiler.source>
+        <maven.compiler.target>11</maven.compiler.target>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.activemq</groupId>
+            <artifactId>activemq-client</artifactId>
+            <version>5.15.16</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.activemq</groupId>
+            <artifactId>activemq-pool</artifactId>
+            <version>5.15.16</version>
+        </dependency>
+        <dependency>
+            <groupId>software.amazon.awssdk</groupId>
+            <artifactId>secretsmanager</artifactId>
+            <version>2.20.45</version>
+        </dependency>
+        <dependency>
+            <groupId>com.google.code.gson</groupId>
+            <artifactId>gson</artifactId>
+            <version>2.10.1</version>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <version>3.8.1</version>
+            </plugin>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>exec-maven-plugin</artifactId>
+                <version>3.0.0</version>
+                <configuration>
+                    <mainClass>com.example.AmazonMQExample</mainClass>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+</project>
+EOF
+```
+
+This Maven configuration includes:
+- ActiveMQ client and connection pooling dependencies
+- AWS SDK for Secrets Manager to retrieve the broker credentials
+- Gson for parsing the JSON response from Secrets Manager
+
+Now, create the Java application file:
+
+```bash
+# Create the Java application file with the actual endpoint and secret retrieval
+cat > amazon-mq-demo/src/main/java/com/example/AmazonMQExample.java << EOF
+package com.example;
+
+import org.apache.activemq.ActiveMQConnectionFactory;
+import org.apache.activemq.jms.pool.PooledConnectionFactory;
+import software.amazon.awssdk.regions.Region;
+import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
+import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
+import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse;
+import com.google.gson.Gson;
+import com.google.gson.JsonObject;
+
+import javax.jms.*;
+
+public class AmazonMQExample {
+
+    // Broker connection details
+    private final static String WIRE_LEVEL_ENDPOINT = "$WIRE_ENDPOINT";
+    private final static String SECRET_NAME = "$SECRET_NAME";
+
+    // Credentials will be retrieved from AWS Secrets Manager
+    private static String username;
+    private static String password;
+ public static void main(String[] args) throws JMSException { + // Retrieve credentials from AWS Secrets Manager + retrieveCredentials(); + + final ActiveMQConnectionFactory connectionFactory = createActiveMQConnectionFactory(); + final PooledConnectionFactory pooledConnectionFactory = createPooledConnectionFactory(connectionFactory); + + sendMessage(pooledConnectionFactory); + receiveMessage(connectionFactory); + + pooledConnectionFactory.stop(); + } + + private static void retrieveCredentials() { + try { + // Create a Secrets Manager client + SecretsManagerClient client = SecretsManagerClient.builder() + .region(Region.of(System.getenv("AWS_REGION"))) + .build(); + + GetSecretValueRequest getSecretValueRequest = GetSecretValueRequest.builder() + .secretId(SECRET_NAME) + .build(); + + GetSecretValueResponse getSecretValueResponse = client.getSecretValue(getSecretValueRequest); + String secretString = getSecretValueResponse.secretString(); + + // Parse the JSON string + JsonObject jsonObject = new Gson().fromJson(secretString, JsonObject.class); + username = jsonObject.get("username").getAsString(); + password = jsonObject.get("password").getAsString(); + + System.out.println("Successfully retrieved credentials from AWS Secrets Manager"); + } catch (Exception e) { + System.err.println("Error retrieving credentials from AWS Secrets Manager: " + e.getMessage()); + System.exit(1); + } + } + + private static void sendMessage(PooledConnectionFactory pooledConnectionFactory) throws JMSException { + // Establish a connection for the producer + final Connection producerConnection = pooledConnectionFactory.createConnection(); + producerConnection.start(); + + // Create a session + final Session producerSession = producerConnection.createSession(false, Session.AUTO_ACKNOWLEDGE); + + // Create a queue named "MyQueue" + final Destination producerDestination = producerSession.createQueue("MyQueue"); + + // Create a producer from the session to the queue + final MessageProducer 
producer = producerSession.createProducer(producerDestination);
+        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
+
+        // Create a message
+        final String text = "Hello from Amazon MQ!";
+        final TextMessage producerMessage = producerSession.createTextMessage(text);
+
+        // Send the message
+        producer.send(producerMessage);
+        System.out.println("Message sent: " + text);
+
+        // Clean up the producer
+        producer.close();
+        producerSession.close();
+        producerConnection.close();
+    }
+
+    private static void receiveMessage(ActiveMQConnectionFactory connectionFactory) throws JMSException {
+        // Establish a connection for the consumer
+        // Note: Consumers should not use PooledConnectionFactory
+        final Connection consumerConnection = connectionFactory.createConnection();
+        consumerConnection.start();
+
+        // Create a session
+        final Session consumerSession = consumerConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
+
+        // Create a queue named "MyQueue"
+        final Destination consumerDestination = consumerSession.createQueue("MyQueue");
+
+        // Create a message consumer from the session to the queue
+        final MessageConsumer consumer = consumerSession.createConsumer(consumerDestination);
+
+        // Wait up to one second for a message to arrive
+        final Message consumerMessage = consumer.receive(1000);
+
+        // receive() returns null if no message arrived before the timeout,
+        // so check before casting to avoid a NullPointerException
+        if (consumerMessage instanceof TextMessage) {
+            final TextMessage consumerTextMessage = (TextMessage) consumerMessage;
+            System.out.println("Message received: " + consumerTextMessage.getText());
+        } else {
+            System.out.println("No message received within the timeout");
+        }
+
+        // Clean up the consumer
+        consumer.close();
+        consumerSession.close();
+        consumerConnection.close();
+    }
+
+    private static PooledConnectionFactory createPooledConnectionFactory(ActiveMQConnectionFactory connectionFactory) {
+        // Create a pooled connection factory
+        final PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory();
+        pooledConnectionFactory.setConnectionFactory(connectionFactory);
+        pooledConnectionFactory.setMaxConnections(10);
+        return pooledConnectionFactory;
+    }
+ private static ActiveMQConnectionFactory createActiveMQConnectionFactory() { + // Create a connection factory + final ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(WIRE_LEVEL_ENDPOINT); + + // Pass the sign-in credentials + connectionFactory.setUserName(username); + connectionFactory.setPassword(password); + return connectionFactory; + } +} +EOF +``` + +This Java application: +1. Connects to AWS Secrets Manager to retrieve the broker credentials +2. Establishes a connection to your Amazon MQ broker using those credentials +3. Sends a message to a queue named "MyQueue" +4. Receives that message from the queue + +## Step 7: Build and run the application + +Now, build and run the Java application: + +```bash +cd amazon-mq-demo +mvn clean compile +mvn exec:java +``` + +If successful, you should see output similar to: +``` +Successfully retrieved credentials from AWS Secrets Manager +Message sent: Hello from Amazon MQ! +Message received: Hello from Amazon MQ! +``` + +This confirms that your application successfully: +1. Retrieved the credentials from AWS Secrets Manager +2. Connected to the Amazon MQ broker +3. Sent a message to a queue +4. Received that message from the queue + +## Step 8: Access the ActiveMQ web console (Optional) + +You can also access the ActiveMQ web console to monitor your broker: + +1. Open the web console URL in your browser (the URL you retrieved earlier) +2. Log in with the username and password stored in Secrets Manager +3. Navigate to the "Queues" tab to see the "MyQueue" that was created by your application +4. 
You can explore other tabs to monitor connections, topics, and other broker metrics + +## Step 9: Clean up resources + +When you're done with the tutorial, you can delete the resources to avoid incurring additional charges: + +```bash +# Delete the broker +aws mq delete-broker --broker-id "$BROKER_ID" + +# Delete the secret +aws secretsmanager delete-secret --secret-id "$SECRET_ARN" --force-delete-without-recovery +``` + +The broker deletion process takes a few minutes to complete. + +## Benefits of using AWS Secrets Manager + +Using AWS Secrets Manager to store your broker credentials provides several advantages: + +1. **Enhanced security**: Credentials are encrypted at rest and in transit, and access is controlled through IAM policies. + +2. **Centralized management**: You can manage all your credentials in one place, making it easier to track and update them. + +3. **Automatic rotation**: You can configure Secrets Manager to automatically rotate credentials on a schedule. + +4. **Audit and compliance**: Secrets Manager integrates with AWS CloudTrail, allowing you to audit who accessed your secrets and when. + +5. **Reduced risk of credential exposure**: By retrieving credentials programmatically, you avoid hardcoding them in your application code or storing them in environment variables. + +## Going to production + +This tutorial is designed to help you learn how to use Amazon MQ with the AWS CLI and Secrets Manager, not to provide production-ready configurations. If you're planning to use Amazon MQ in a production environment, consider the following best practices: + +### Security considerations + +1. **Use private accessibility**: Instead of making your broker publicly accessible, configure it to be accessible only within your VPC. + +2. **Implement proper IAM policies**: Restrict access to your Secrets Manager secrets using IAM policies. + +3. 
**Use more secure authentication**: Consider using LDAP authentication instead of simple username/password authentication. + +4. **Configure encryption**: Ensure that your data is encrypted both in transit and at rest. + +5. **Implement credential rotation**: Configure Secrets Manager to automatically rotate your broker credentials. + +### Architecture considerations + +1. **Use active/standby deployment**: For high availability, use the active/standby deployment mode instead of single-instance. + +2. **Right-size your broker**: Choose an appropriate instance type based on your workload requirements. + +3. **Implement proper connection pooling**: Follow best practices for connection pooling to optimize performance. + +4. **Configure message persistence**: Configure message persistence to prevent data loss in case of broker failure. + +5. **Set up monitoring and alerting**: Use Amazon CloudWatch to monitor your broker and set up alerts for important metrics. + +For more information on best practices, see: +- [Amazon MQ Best Practices](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/best-practices-activemq.html) +- [AWS Secrets Manager Best Practices](https://docs.aws.amazon.com/secretsmanager/latest/userguide/best-practices.html) +- [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/) +- [AWS Security Best Practices](https://aws.amazon.com/architecture/security-identity-compliance/) + +## Troubleshooting + +### Connection issues + +If you're having trouble connecting to your broker: + +1. **Check security group rules**: Ensure that the security group allows inbound connections from your IP address to the required ports. + +2. **Verify broker status**: Make sure the broker is in the "RUNNING" state. + +3. **Check network connectivity**: Ensure that your network allows outbound connections to the broker's endpoints. + +4. **Verify credentials**: Double-check that the credentials in Secrets Manager are correct. 
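Checks 1 through 3 above can be partly scripted. The snippet below is a sketch: the helper functions only parse the endpoint string, and the commented commands assume `BROKER_ID` and `WIRE_ENDPOINT` are set as earlier in this tutorial (`nc` is assumed to be installed for the port probe).

```bash
# Helpers to split a wire-level endpoint such as "ssl://host:61617"
# into its host and port parts
endpoint_host() { echo "$1" | sed -E 's#^[a-z]+://##; s#:[0-9]+$##'; }
endpoint_port() { echo "$1" | sed -E 's#.*:([0-9]+)$#\1#'; }

# Usage with your own broker:
#   aws mq describe-broker --broker-id "$BROKER_ID" --query 'BrokerState' --output text
#   nc -zv -w 5 "$(endpoint_host "$WIRE_ENDPOINT")" "$(endpoint_port "$WIRE_ENDPOINT")"
```

If the port probe times out, the security group or your network path is the most likely cause; if it connects but the application still fails, check the credentials.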
+
+### Java application issues
+
+If the Java application fails to compile or run:
+
+1. **Check Java version**: Ensure you have Java 11 or later installed.
+
+2. **Verify Maven installation**: Make sure Maven is properly installed and configured.
+
+3. **Check AWS credentials**: Ensure that your AWS credentials are properly configured to allow the application to access Secrets Manager.
+
+4. **Examine error messages**: Look for specific error messages in the output to identify the issue.
+
+### Secrets Manager issues
+
+If you're having trouble with Secrets Manager:
+
+1. **Check IAM permissions**: Ensure that your IAM user or role has the necessary permissions to access the secret.
+
+2. **Verify region**: Make sure you're using the correct AWS region when accessing the secret.
+
+3. **Check secret name**: Verify that you're using the correct secret name or ARN.
+
+## Next steps
+
+Now that you've created an Amazon MQ broker with secure credential management, you can explore more advanced features:
+
+- [Configure automatic credential rotation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) to enhance security
+- [Configure a network of brokers](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/network-of-brokers.html) to connect multiple brokers together
+- [Configure broker storage](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/broker-storage.html) to understand storage options for your broker
+- [Monitor your broker](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/security-logging-monitoring.html) using Amazon CloudWatch metrics and logs
+- [Create an ActiveMQ broker with high availability](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-broker-architecture.html#active-standby-broker-deployment) by using the active/standby deployment mode
diff --git a/tuts/043-amazon-mq-gs/amazon-mq-gs.sh b/tuts/043-amazon-mq-gs/amazon-mq-gs.sh
new file mode 100755
index
00000000..e3321521 --- /dev/null +++ b/tuts/043-amazon-mq-gs/amazon-mq-gs.sh @@ -0,0 +1,508 @@ +#!/bin/bash + +# Amazon MQ Getting Started Script +# This script creates an Amazon MQ broker and demonstrates connecting to it with a Java application + +# FIXES APPLIED: +# - Added checks for Java and Maven installations before creating the Java application +# - Generate secure password and store in AWS Secrets Manager instead of hardcoding + +# Set up logging +LOG_FILE="amazon-mq-tutorial.log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +echo "Starting Amazon MQ tutorial script at $(date)" +echo "All commands and outputs will be logged to $LOG_FILE" + +# Function to handle errors +handle_error() { + echo "ERROR: $1" + echo "Resources created:" + if [ -n "$BROKER_ID" ]; then + echo "- Amazon MQ Broker: $BROKER_ID" + fi + if [ -n "$SECRET_ARN" ]; then + echo "- AWS Secrets Manager Secret: $SECRET_ARN" + fi + + echo "" + echo "===========================================" + echo "CLEANUP CONFIRMATION" + echo "===========================================" + echo "An error occurred. Do you want to clean up all created resources? (y/n): " + read -r CLEANUP_CHOICE + + if [[ "${CLEANUP_CHOICE,,}" == "y" ]]; then + cleanup_resources + else + echo "Resources were not cleaned up. You can manually delete them later." + fi + + exit 1 +} + +# Function to clean up resources +cleanup_resources() { + echo "Cleaning up resources..." + + if [ -n "$BROKER_ID" ]; then + echo "Deleting Amazon MQ broker: $BROKER_ID" + aws mq delete-broker --broker-id "$BROKER_ID" + echo "Broker deletion initiated. It may take several minutes to complete." + fi + + if [ -n "$SECRET_ARN" ]; then + echo "Deleting AWS Secrets Manager secret: $SECRET_ARN" + aws secretsmanager delete-secret --secret-id "$SECRET_ARN" --force-delete-without-recovery + echo "Secret deleted." 
+ fi +} + +# Generate a random identifier for resource names +RANDOM_ID=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | fold -w 8 | head -n 1) +BROKER_NAME="mq-broker-${RANDOM_ID}" +SECRET_NAME="mq-broker-creds-${RANDOM_ID}" +BROKER_ID="" +SECRET_ARN="" + +# Step 1: Generate a secure password and store it in AWS Secrets Manager +echo "Generating secure password and storing in AWS Secrets Manager..." + +# Generate a secure password with special characters, numbers, uppercase and lowercase letters +MQ_PASSWORD=$(LC_ALL=C tr -dc 'A-Za-z0-9!@#$%^&*()_+' < /dev/urandom | fold -w 20 | head -n 1) +MQ_USERNAME="mqadmin" + +# Create a JSON document with the credentials +CREDENTIALS_JSON="{\"username\":\"$MQ_USERNAME\",\"password\":\"$MQ_PASSWORD\"}" + +# Store the credentials in AWS Secrets Manager +SECRET_RESULT=$(aws secretsmanager create-secret \ + --name "$SECRET_NAME" \ + --description "Amazon MQ broker credentials for $BROKER_NAME" \ + --secret-string "$CREDENTIALS_JSON") + +# Check for errors +if echo "$SECRET_RESULT" | grep -i "error" > /dev/null; then + handle_error "Failed to create secret: $SECRET_RESULT" +fi + +# Extract secret ARN +SECRET_ARN=$(echo "$SECRET_RESULT" | grep -o '"ARN": "[^"]*' | cut -d'"' -f4) +if [ -z "$SECRET_ARN" ]; then + handle_error "Failed to extract secret ARN from response" +fi + +echo "Secret created successfully. 
ARN: $SECRET_ARN" + +# Step 2: Create an Amazon MQ broker +echo "Creating Amazon MQ broker: $BROKER_NAME" +# Note: Using publicly-accessible for tutorial purposes only +# In production, you should use private access and proper network controls +BROKER_RESULT=$(aws mq create-broker \ + --broker-name "$BROKER_NAME" \ + --engine-type ACTIVEMQ \ + --engine-version 5.18 \ + --host-instance-type mq.t3.micro \ + --deployment-mode SINGLE_INSTANCE \ + --authentication-strategy SIMPLE \ + --users "Username=$MQ_USERNAME,Password=$MQ_PASSWORD,ConsoleAccess=true" \ + --publicly-accessible \ + --auto-minor-version-upgrade) + +# Check for errors +if echo "$BROKER_RESULT" | grep -i "error" > /dev/null; then + handle_error "Failed to create broker: $BROKER_RESULT" +fi + +# Extract broker ID +BROKER_ID=$(echo "$BROKER_RESULT" | grep -o '"BrokerId": "[^"]*' | cut -d'"' -f4) +if [ -z "$BROKER_ID" ]; then + handle_error "Failed to extract broker ID from response" +fi + +echo "Broker creation initiated. Broker ID: $BROKER_ID" + +# Step 3: Wait for the broker to be in RUNNING state +echo "Waiting for broker to be in RUNNING state. This may take 15-20 minutes..." +while true; do + BROKER_STATE=$(aws mq describe-broker --broker-id "$BROKER_ID" --query 'BrokerState' --output text) + + if echo "$BROKER_STATE" | grep -i "error" > /dev/null; then + handle_error "Error checking broker state: $BROKER_STATE" + fi + + echo "Current broker state: $BROKER_STATE" + + if [ "$BROKER_STATE" == "RUNNING" ]; then + echo "Broker is now in RUNNING state" + break + elif [ "$BROKER_STATE" == "CREATION_FAILED" ]; then + handle_error "Broker creation failed" + fi + + echo "Waiting 60 seconds before checking again..." + sleep 60 +done + +# Step 4: Get broker connection details +echo "Retrieving broker connection details..." 
+BROKER_DETAILS=$(aws mq describe-broker --broker-id "$BROKER_ID") + +if echo "$BROKER_DETAILS" | grep -i "error" > /dev/null; then + handle_error "Failed to get broker details: $BROKER_DETAILS" +fi + +# Extract web console URL +WEB_CONSOLE=$(aws mq describe-broker --broker-id "$BROKER_ID" --query 'BrokerInstances[0].ConsoleURL' --output text) +if [ -z "$WEB_CONSOLE" ] || [ "$WEB_CONSOLE" == "None" ]; then + handle_error "Failed to get web console URL" +fi + +# Extract wire-level endpoint for OpenWire +WIRE_ENDPOINT=$(aws mq describe-broker --broker-id "$BROKER_ID" --query 'BrokerInstances[0].Endpoints[0]' --output text) +if [ -z "$WIRE_ENDPOINT" ] || [ "$WIRE_ENDPOINT" == "None" ]; then + handle_error "Failed to get wire-level endpoint" +fi + +echo "Web Console URL: $WEB_CONSOLE" +echo "Wire-level Endpoint: $WIRE_ENDPOINT" + +# Step 5: Configure security group for the broker +echo "Configuring security group for the broker..." +SECURITY_GROUP_ID=$(aws mq describe-broker --broker-id "$BROKER_ID" --query 'SecurityGroups[0]' --output text) + +if [ -z "$SECURITY_GROUP_ID" ] || [ "$SECURITY_GROUP_ID" == "None" ]; then + handle_error "Failed to get security group ID" +fi + +echo "Security Group ID: $SECURITY_GROUP_ID" + +# Get current IP address +CURRENT_IP=$(curl -s https://checkip.amazonaws.com) +if [ -z "$CURRENT_IP" ]; then + handle_error "Failed to get current IP address" +fi + +echo "Your current IP address: $CURRENT_IP" + +# Allow inbound connections to the web console (port 8162) +echo "Adding inbound rule for web console access (port 8162)..." +SG_RESULT=$(aws ec2 authorize-security-group-ingress \ + --group-id "$SECURITY_GROUP_ID" \ + --protocol tcp \ + --port 8162 \ + --cidr "${CURRENT_IP}/32") + +if echo "$SG_RESULT" | grep -i "error" > /dev/null; then + echo "Warning: Failed to add security group rule for port 8162. It might already exist or you may not have permissions." 
+fi + +# Allow inbound connections to the OpenWire endpoint (port 61617) +echo "Adding inbound rule for OpenWire access (port 61617)..." +SG_RESULT=$(aws ec2 authorize-security-group-ingress \ + --group-id "$SECURITY_GROUP_ID" \ + --protocol tcp \ + --port 61617 \ + --cidr "${CURRENT_IP}/32") + +if echo "$SG_RESULT" | grep -i "error" > /dev/null; then + echo "Warning: Failed to add security group rule for port 61617. It might already exist or you may not have permissions." +fi + +# Step 6: Create Java application to connect to the broker +echo "Creating Java application to connect to the broker..." + +# Check for Java and Maven installations before creating the Java application +echo "Checking for required dependencies..." +JAVA_INSTALLED=false +MAVEN_INSTALLED=false + +if command -v java &> /dev/null; then + JAVA_VERSION=$(java -version 2>&1 | head -n 1) + echo "Java is installed: $JAVA_VERSION" + JAVA_INSTALLED=true +else + echo "Java is not installed. You will need to install Java to run the sample application." +fi + +if command -v mvn &> /dev/null; then + MAVEN_VERSION=$(mvn --version 2>&1 | head -n 1) + echo "Maven is installed: $MAVEN_VERSION" + MAVEN_INSTALLED=true +else + echo "Maven is not installed. You will need to install Maven to build and run the sample application." 
+fi
+
+# Create project directory
+mkdir -p amazon-mq-demo/src/main/java/com/example
+
+# Create pom.xml file
+cat > amazon-mq-demo/pom.xml << 'EOF'
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <groupId>com.example</groupId>
+    <artifactId>amazon-mq-demo</artifactId>
+    <version>1.0-SNAPSHOT</version>
+
+    <properties>
+        <maven.compiler.source>11</maven.compiler.source>
+        <maven.compiler.target>11</maven.compiler.target>
+    </properties>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.activemq</groupId>
+            <artifactId>activemq-client</artifactId>
+            <version>5.15.16</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.activemq</groupId>
+            <artifactId>activemq-pool</artifactId>
+            <version>5.15.16</version>
+        </dependency>
+        <dependency>
+            <groupId>software.amazon.awssdk</groupId>
+            <artifactId>secretsmanager</artifactId>
+            <version>2.20.45</version>
+        </dependency>
+        <dependency>
+            <groupId>com.google.code.gson</groupId>
+            <artifactId>gson</artifactId>
+            <version>2.10.1</version>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <version>3.8.1</version>
+            </plugin>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>exec-maven-plugin</artifactId>
+                <version>3.0.0</version>
+                <configuration>
+                    <mainClass>com.example.AmazonMQExample</mainClass>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+</project>
+EOF
+
+# Create Java application file with the actual endpoint and secret retrieval
+cat > amazon-mq-demo/src/main/java/com/example/AmazonMQExample.java << EOF
+package com.example;
+
+import org.apache.activemq.ActiveMQConnectionFactory;
+import org.apache.activemq.jms.pool.PooledConnectionFactory;
+import software.amazon.awssdk.regions.Region;
+import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
+import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
+import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse;
+import com.google.gson.Gson;
+import com.google.gson.JsonObject;
+
+import javax.jms.*;
+
+public class AmazonMQExample {
+
+    // Broker connection details
+    private final static String WIRE_LEVEL_ENDPOINT = "$WIRE_ENDPOINT";
+    private final static String SECRET_NAME = "$SECRET_NAME";
+
+    // Credentials will be retrieved from AWS Secrets Manager
+    private static String username;
+    private static String password;
+
+    public static void main(String[] args) throws JMSException {
+        // Retrieve credentials from AWS Secrets Manager
+        retrieveCredentials();
+
+        final ActiveMQConnectionFactory connectionFactory = createActiveMQConnectionFactory();
+        final PooledConnectionFactory pooledConnectionFactory = createPooledConnectionFactory(connectionFactory);
+
+        sendMessage(pooledConnectionFactory);
receiveMessage(connectionFactory); + + pooledConnectionFactory.stop(); + } + + private static void retrieveCredentials() { + try { + // Create a Secrets Manager client + SecretsManagerClient client = SecretsManagerClient.builder() + .region(Region.of(System.getenv("AWS_REGION"))) + .build(); + + GetSecretValueRequest getSecretValueRequest = GetSecretValueRequest.builder() + .secretId(SECRET_NAME) + .build(); + + GetSecretValueResponse getSecretValueResponse = client.getSecretValue(getSecretValueRequest); + String secretString = getSecretValueResponse.secretString(); + + // Parse the JSON string + JsonObject jsonObject = new Gson().fromJson(secretString, JsonObject.class); + username = jsonObject.get("username").getAsString(); + password = jsonObject.get("password").getAsString(); + + System.out.println("Successfully retrieved credentials from AWS Secrets Manager"); + } catch (Exception e) { + System.err.println("Error retrieving credentials from AWS Secrets Manager: " + e.getMessage()); + System.exit(1); + } + } + + private static void sendMessage(PooledConnectionFactory pooledConnectionFactory) throws JMSException { + // Establish a connection for the producer + final Connection producerConnection = pooledConnectionFactory.createConnection(); + producerConnection.start(); + + // Create a session + final Session producerSession = producerConnection.createSession(false, Session.AUTO_ACKNOWLEDGE); + + // Create a queue named "MyQueue" + final Destination producerDestination = producerSession.createQueue("MyQueue"); + + // Create a producer from the session to the queue + final MessageProducer producer = producerSession.createProducer(producerDestination); + producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT); + + // Create a message + final String text = "Hello from Amazon MQ!"; + final TextMessage producerMessage = producerSession.createTextMessage(text); + + // Send the message + producer.send(producerMessage); + System.out.println("Message sent: " + text); + + 
// Clean up the producer
+        producer.close();
+        producerSession.close();
+        producerConnection.close();
+    }
+
+    private static void receiveMessage(ActiveMQConnectionFactory connectionFactory) throws JMSException {
+        // Establish a connection for the consumer
+        // Note: Consumers should not use PooledConnectionFactory
+        final Connection consumerConnection = connectionFactory.createConnection();
+        consumerConnection.start();
+
+        // Create a session
+        final Session consumerSession = consumerConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
+
+        // Create a queue named "MyQueue"
+        final Destination consumerDestination = consumerSession.createQueue("MyQueue");
+
+        // Create a message consumer from the session to the queue
+        final MessageConsumer consumer = consumerSession.createConsumer(consumerDestination);
+
+        // Wait up to one second for a message to arrive
+        final Message consumerMessage = consumer.receive(1000);
+
+        // receive() returns null if no message arrived before the timeout,
+        // so check before casting to avoid a NullPointerException
+        if (consumerMessage instanceof TextMessage) {
+            final TextMessage consumerTextMessage = (TextMessage) consumerMessage;
+            System.out.println("Message received: " + consumerTextMessage.getText());
+        } else {
+            System.out.println("No message received within the timeout");
+        }
+
+        // Clean up the consumer
+        consumer.close();
+        consumerSession.close();
+        consumerConnection.close();
+    }
+
+    private static PooledConnectionFactory createPooledConnectionFactory(ActiveMQConnectionFactory connectionFactory) {
+        // Create a pooled connection factory
+        final PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory();
+        pooledConnectionFactory.setConnectionFactory(connectionFactory);
+        pooledConnectionFactory.setMaxConnections(10);
+        return pooledConnectionFactory;
+    }
+
+    private static ActiveMQConnectionFactory createActiveMQConnectionFactory() {
+        // Create a connection factory
+        final ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(WIRE_LEVEL_ENDPOINT);
+
+        // Pass the sign-in credentials
+        connectionFactory.setUserName(username);
+        connectionFactory.setPassword(password);
+        return connectionFactory;
+    }
+}
+EOF
+ +echo "Java application created successfully" +echo "Project location: $(pwd)/amazon-mq-demo" + +# Step 7: Instructions for building and running the application +echo "" +echo "To build and run the Java application, execute the following commands:" +echo "cd amazon-mq-demo" +echo "mvn clean compile" +echo "mvn exec:java" +echo "" + +# Provide installation instructions if dependencies are missing +if [ "$JAVA_INSTALLED" = false ] || [ "$MAVEN_INSTALLED" = false ]; then + echo "===========================================" + echo "DEPENDENCY INSTALLATION INSTRUCTIONS" + echo "===========================================" + + if [ "$JAVA_INSTALLED" = false ]; then + echo "To install Java:" + echo " - Ubuntu/Debian: sudo apt-get install default-jdk" + echo " - Amazon Linux/RHEL/CentOS: sudo yum install java-11-amazon-corretto" + echo " - macOS: brew install openjdk@11" + echo "" + fi + + if [ "$MAVEN_INSTALLED" = false ]; then + echo "To install Maven:" + echo " - Ubuntu/Debian: sudo apt-get install maven" + echo " - Amazon Linux/RHEL/CentOS: sudo yum install maven" + echo " - macOS: brew install maven" + echo "" + fi + + echo "After installing the required dependencies, you can proceed with building and running the application." 
+ echo "" +fi + +# Display summary of created resources +echo "" +echo "===========================================" +echo "RESOURCE SUMMARY" +echo "===========================================" +echo "Amazon MQ Broker Name: $BROKER_NAME" +echo "Amazon MQ Broker ID: $BROKER_ID" +echo "Web Console URL: $WEB_CONSOLE" +echo "Wire-level Endpoint: $WIRE_ENDPOINT" +echo "Username: $MQ_USERNAME" +echo "Password: Stored in AWS Secrets Manager" +echo "Secret Name: $SECRET_NAME" +echo "Secret ARN: $SECRET_ARN" +echo "" + +# Ask if user wants to clean up resources +echo "" +echo "===========================================" +echo "CLEANUP CONFIRMATION" +echo "===========================================" +echo "Do you want to clean up all created resources? (y/n): " +read -r CLEANUP_CHOICE + +if [[ "${CLEANUP_CHOICE,,}" == "y" ]]; then + cleanup_resources +else + echo "Resources were not cleaned up. You can manually delete them later using:" + echo "aws mq delete-broker --broker-id $BROKER_ID" + echo "aws secretsmanager delete-secret --secret-id $SECRET_ARN --force-delete-without-recovery" +fi + +echo "Script completed at $(date)" diff --git a/tuts/044-amazon-managed-grafana-gs/README.md b/tuts/044-amazon-managed-grafana-gs/README.md new file mode 100644 index 00000000..07f61c43 --- /dev/null +++ b/tuts/044-amazon-managed-grafana-gs/README.md @@ -0,0 +1,5 @@ +# Amazon Managed Grafana getting started tutorial + +This tutorial provides a comprehensive introduction to Amazon Managed Grafana using the AWS CLI. You'll learn how to create and configure a managed Grafana workspace, set up data sources, create dashboards, and visualize metrics from your AWS services and applications. 
+ +You can either run the provided shell script to automatically set up your Amazon Managed Grafana workspace and basic configurations, or follow the step-by-step instructions in the tutorial markdown file to understand each component and customize the setup for your specific monitoring and visualization needs. diff --git a/tuts/044-amazon-managed-grafana-gs/amazon-managed-grafana-gs.md b/tuts/044-amazon-managed-grafana-gs/amazon-managed-grafana-gs.md new file mode 100644 index 00000000..7648ad1f --- /dev/null +++ b/tuts/044-amazon-managed-grafana-gs/amazon-managed-grafana-gs.md @@ -0,0 +1,525 @@ +# Creating an Amazon Managed Grafana workspace using the AWS CLI + +This tutorial guides you through creating and configuring an Amazon Managed Grafana workspace using the AWS Command Line Interface (AWS CLI). Amazon Managed Grafana is a fully managed service that makes it easy to deploy, operate, and scale Grafana, a popular open-source analytics platform. + +## Topics + +* [Prerequisites](#prerequisites) +* [Create an IAM role for your workspace](#create-an-iam-role-for-your-workspace) +* [Create a Grafana workspace](#create-a-grafana-workspace) +* [Configure authentication](#configure-authentication) +* [Configure optional settings](#configure-optional-settings) +* [Access your Grafana workspace](#access-your-grafana-workspace) +* [Clean up resources](#clean-up-resources) +* [Going to production](#going-to-production) +* [Next steps](#next-steps) + +## Prerequisites + +Before you begin this tutorial, make sure you have the following: + +1. The AWS CLI. If you need to install it, follow the [AWS CLI installation guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). +2. Configured your AWS CLI with appropriate credentials. Run `aws configure` if you haven't set up your credentials yet. +3. The necessary permissions to create and manage Amazon Managed Grafana workspaces and IAM roles. 
At minimum, you need the **AWSGrafanaAccountAdministrator** policy attached to your IAM principal. +4. If you plan to use IAM Identity Center for authentication, you also need the **AWSSSOMemberAccountAdministrator** and **AWSSSODirectoryAdministrator** policies. + +### Cost considerations + +Amazon Managed Grafana is priced based on active users per workspace per month: +- Standard Edition: $9.00 per active user per workspace per month +- Enterprise Edition: $19.00 per active user per workspace per month + +For this tutorial with 1 admin user, the cost would be approximately $0.0125 per hour (prorated from the monthly rate). If you follow the cleanup instructions promptly after completing the tutorial, the actual cost incurred would be minimal. + +Additional costs may apply if you use the workspace to query data from other AWS services like CloudWatch, Prometheus, or X-Ray, or if you enable VPC connectivity. + +## Create an IAM role for your workspace + +Before creating a Grafana workspace, you need to create an IAM role that grants permissions to the AWS resources that the workspace will access. This role allows Amazon Managed Grafana to read data from services like CloudWatch, Prometheus, and X-Ray. + +**Create a trust policy document** + +First, create a trust policy document that allows the Grafana service to assume the role: + +``` +cat > trust-policy.json << EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "grafana.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] +} +EOF +``` + +This trust policy enables the Amazon Managed Grafana service to assume this role when accessing AWS resources on behalf of your workspace. 
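If the role creation later fails with a `MalformedPolicyDocument` error, the JSON file is usually the culprit. A quick optional sanity check (this sketch uses `python3` purely as a convenient local JSON validator; any JSON tool works):

```
python3 -m json.tool trust-policy.json > /dev/null \
  && echo "trust-policy.json is valid JSON" \
  || echo "trust-policy.json has a JSON syntax error"
```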
+ +**Create the IAM role** + +Now, create the IAM role using the trust policy: + +``` +aws iam create-role \ + --role-name GrafanaWorkspaceRole \ + --assume-role-policy-document file://trust-policy.json \ + --description "Role for Amazon Managed Grafana workspace" +``` + +The command returns details about the newly created role, including its ARN, which you'll need when creating the workspace: + +``` +{ + "Role": { + "Path": "/", + "RoleName": "GrafanaWorkspaceRole", + "RoleId": "AROAEXAMPLEID", + "Arn": "arn:aws:iam::123456789012:role/GrafanaWorkspaceRole", + "CreateDate": "2025-01-13T12:00:00Z", + "AssumeRolePolicyDocument": { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "grafana.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] + } + } +} +``` + +**Create and attach a policy for CloudWatch access** + +Create a policy that grants permissions to access CloudWatch metrics: + +``` +cat > cloudwatch-policy.json << EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "cloudwatch:DescribeAlarmsForMetric", + "cloudwatch:DescribeAlarmHistory", + "cloudwatch:DescribeAlarms", + "cloudwatch:ListMetrics", + "cloudwatch:GetMetricStatistics", + "cloudwatch:GetMetricData" + ], + "Resource": "*" + } + ] +} +EOF + +aws iam create-policy \ + --policy-name GrafanaCloudWatchPolicy \ + --policy-document file://cloudwatch-policy.json +``` + +The command returns details about the newly created policy: + +``` +{ + "Policy": { + "PolicyName": "GrafanaCloudWatchPolicy", + "PolicyId": "ANPAEXAMPLEID", + "Arn": "arn:aws:iam::123456789012:policy/GrafanaCloudWatchPolicy", + "Path": "/", + "DefaultVersionId": "v1", + "AttachmentCount": 0, + "PermissionsBoundaryUsageCount": 0, + "IsAttachable": true, + "CreateDate": "2025-01-13T12:00:00Z", + "UpdateDate": "2025-01-13T12:00:00Z" + } +} +``` + +After creating the policy, attach it to the role: + +``` +aws iam attach-role-policy \ + --role-name 
GrafanaWorkspaceRole \ + --policy-arn arn:aws:iam::123456789012:policy/GrafanaCloudWatchPolicy +``` + +Replace `123456789012` with your AWS account ID. This policy allows your Grafana workspace to read CloudWatch metrics and alarms. + +## Create a Grafana workspace + +Now that you have created the necessary IAM role, you can create your Amazon Managed Grafana workspace. + +**Create the workspace** + +Use the following command to create a new workspace: + +``` +aws grafana create-workspace \ + --workspace-name MyGrafanaWorkspace \ + --authentication-providers "SAML" \ + --permission-type "CUSTOMER_MANAGED" \ + --account-access-type "CURRENT_ACCOUNT" \ + --workspace-role-arn "arn:aws:iam::123456789012:role/GrafanaWorkspaceRole" \ + --workspace-data-sources "CLOUDWATCH" "PROMETHEUS" "XRAY" \ + --grafana-version "10.4" \ + --tags Environment=Development +``` + +Replace `123456789012` with your AWS account ID. This command creates a workspace with the following configuration: + +- Name: MyGrafanaWorkspace +- Authentication: SAML (Security Assertion Markup Language) +- Permission type: Customer managed (you manage the IAM roles and permissions) +- Account access: Current account only +- Data sources: CloudWatch, Prometheus, and X-Ray +- Grafana version: 10.4 + +The response includes details about the workspace: + +``` +{ + "workspace": { + "id": "g-abcd1234", + "name": "MyGrafanaWorkspace", + "accountAccessType": "CURRENT_ACCOUNT", + "authentication": { + "providers": [ + "SAML" + ], + "samlConfigurationStatus": "NOT_CONFIGURED" + }, + "created": 1673596800.000, + "dataSources": [ + "CLOUDWATCH", + "PROMETHEUS", + "XRAY" + ], + "description": "", + "grafanaVersion": "10.4", + "permissionType": "CUSTOMER_MANAGED", + "status": "CREATING", + "tags": { + "Environment": "Development" + }, + "workspaceRoleArn": "arn:aws:iam::123456789012:role/GrafanaWorkspaceRole" + } +} +``` + +Note the workspace ID (e.g., `g-abcd1234`), as you'll need it for subsequent operations. 
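The two follow-up steps — capturing the workspace ID and waiting for it to become active — are easy to script. The sketch below is self-contained (the status source is stubbed so it runs without AWS calls); against a real workspace you would fetch the status with `describe-workspace` as shown in the comments:

```
# Capture the workspace ID from a saved create-workspace response.
# Live alternative that avoids parsing entirely:
#   aws grafana create-workspace ... --query 'workspace.id' --output text
RESPONSE='{"workspace": {"id": "g-abcd1234", "status": "CREATING"}}'
WORKSPACE_ID=$(echo "$RESPONSE" | grep -o '"id": "[^"]*' | cut -d'"' -f4)
echo "Workspace ID: $WORKSPACE_ID"

# Poll until the workspace reports ACTIVE, giving up after MAX_ATTEMPTS.
# The stub below fakes two CREATING responses; against a real workspace,
# fetch the status with:
#   aws grafana describe-workspace --workspace-id "$WORKSPACE_ID" \
#       --query 'workspace.status' --output text
# and sleep ~30 seconds between attempts.
ATTEMPT=0
MAX_ATTEMPTS=30
STATUS=""
while [ "$STATUS" != "ACTIVE" ] && [ "$ATTEMPT" -lt "$MAX_ATTEMPTS" ]; do
    ATTEMPT=$((ATTEMPT + 1))
    if [ "$ATTEMPT" -ge 3 ]; then STATUS="ACTIVE"; else STATUS="CREATING"; fi   # stub
    echo "Attempt $ATTEMPT: status $STATUS"
done
[ "$STATUS" = "ACTIVE" ] && echo "Workspace $WORKSPACE_ID is ACTIVE"
```

The companion script for this tutorial uses the same pattern with the real `describe-workspace` call in place of the stub.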
+ +**Check workspace status** + +After creating the workspace, check its status to ensure it becomes active: + +``` +aws grafana describe-workspace --workspace-id g-abcd1234 +``` + +Replace `g-abcd1234` with your workspace ID. The workspace status will initially be "CREATING". Wait until the status changes to "ACTIVE" before proceeding: + +``` +{ + "workspace": { + "id": "g-abcd1234", + "name": "MyGrafanaWorkspace", + "accountAccessType": "CURRENT_ACCOUNT", + "authentication": { + "providers": [ + "SAML" + ], + "samlConfigurationStatus": "NOT_CONFIGURED" + }, + "created": 1673596800.000, + "dataSources": [ + "CLOUDWATCH", + "PROMETHEUS", + "XRAY" + ], + "description": "", + "endpoint": "g-abcd1234.grafana-workspace.us-east-1.amazonaws.com", + "grafanaVersion": "10.4", + "permissionType": "CUSTOMER_MANAGED", + "status": "ACTIVE", + "tags": { + "Environment": "Development" + }, + "workspaceRoleArn": "arn:aws:iam::123456789012:role/GrafanaWorkspaceRole" + } +} +``` + +## Configure authentication + +Amazon Managed Grafana supports two authentication methods: SAML and IAM Identity Center. This section covers how to configure each method. + +**Configure SAML authentication** + +If you selected SAML as your authentication provider, you need to configure it: + +``` +aws grafana update-workspace-authentication \ + --workspace-id g-abcd1234 \ + --authentication-providers "SAML" \ + --saml-configuration '{ + "idpMetadata": { + "url": "https://your-idp-metadata-url" + }, + "assertionAttributes": { + "role": "role", + "name": "name", + "login": "login", + "email": "email" + }, + "roleValues": { + "admin": ["admin-role"] + } + }' +``` + +Replace `g-abcd1234` with your workspace ID and `https://your-idp-metadata-url` with the URL of your identity provider's metadata. This configuration maps SAML attributes to Grafana user properties and assigns admin roles. 
+ +The response confirms the authentication configuration: + +``` +{ + "authentication": { + "providers": [ + "SAML" + ], + "samlConfigurationStatus": "CONFIGURED" + } +} +``` + +**Configure IAM Identity Center authentication** + +If you're using IAM Identity Center, first list the available users: + +``` +aws identitystore list-users --identity-store-id d-abcd1234 +``` + +Replace `d-abcd1234` with your Identity Store ID. The command returns a list of users: + +``` +{ + "Users": [ + { + "UserId": "abcd1234-efgh-5678-ijkl-9012mnop3456", + "UserName": "jdoe", + "Name": { + "Formatted": "John Doe", + "FamilyName": "Doe", + "GivenName": "John" + }, + "DisplayName": "John Doe", + "Emails": [ + { + "Value": "jdoe@example.com", + "Type": "Work", + "Primary": true + } + ] + } + ] +} +``` + +Then, assign a user as an admin: + +``` +aws grafana update-permissions \ + --workspace-id g-abcd1234 \ + --update-instruction-batch '[{ + "action": "ADD", + "role": "ADMIN", + "users": [{ + "id": "abcd1234-efgh-5678-ijkl-9012mnop3456", + "type": "SSO_USER" + }] + }]' +``` + +Replace `g-abcd1234` with your workspace ID and `abcd1234-efgh-5678-ijkl-9012mnop3456` with the user ID you want to assign as admin. + +## Configure optional settings + +Amazon Managed Grafana offers several optional configurations to enhance your workspace. + +**Enable network access control** + +To restrict access to your workspace to specific IP addresses or VPC endpoints: + +``` +aws grafana update-workspace \ + --workspace-id g-abcd1234 \ + --network-access-control '{ + "prefixListIds": ["pl-abcd1234"], + "vpceIds": ["vpce-abcd1234"] + }' +``` + +Replace `g-abcd1234` with your workspace ID, `pl-abcd1234` with your prefix list ID, and `vpce-abcd1234` with your VPC endpoint ID. This configuration restricts access to the specified IP ranges and VPC endpoints. 
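The JSON payloads for these update commands are easy to get subtly wrong when assembled from shell variables. One way to catch mistakes is to build the document with `printf` and validate it locally before calling the API — a sketch with placeholder IDs, assuming `python3` is available:

```
PREFIX_LIST_ID="pl-abcd1234"    # placeholder: your managed prefix list ID
VPCE_ID="vpce-abcd1234"         # placeholder: your Grafana VPC endpoint ID

# Assemble the network access control document from the variables
NAC_JSON=$(printf '{"prefixListIds": ["%s"], "vpceIds": ["%s"]}' \
    "$PREFIX_LIST_ID" "$VPCE_ID")

# Validate before any API call; json.tool exits non-zero on bad JSON
echo "$NAC_JSON" | python3 -m json.tool > /dev/null && echo "payload OK"

# Then pass the validated payload:
#   aws grafana update-workspace --workspace-id "$WORKSPACE_ID" \
#       --network-access-control "$NAC_JSON"
echo "$NAC_JSON"
```

The same build-then-validate pattern applies to the `--vpc-configuration` and `--configuration` payloads used in the following steps.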

**Configure VPC connection**

To connect your workspace to resources in a VPC:

```
aws grafana update-workspace \
    --workspace-id g-abcd1234 \
    --vpc-configuration '{
        "securityGroupIds": ["sg-abcd1234"],
        "subnetIds": ["subnet-abcd1234", "subnet-efgh5678"]
    }'
```

Replace `g-abcd1234` with your workspace ID, `sg-abcd1234` with your security group ID, and the subnet IDs with the IDs of subnets in your VPC. This allows your workspace to connect to data sources in your VPC.

**Enable Grafana alerting**

To enable Grafana's unified alerting feature:

```
aws grafana update-workspace-configuration \
    --workspace-id g-abcd1234 \
    --configuration '{
        "unifiedAlerting": {
            "enabled": true
        }
    }'
```

Replace `g-abcd1234` with your workspace ID. This enables Grafana's unified alerting system, which lets you view and manage alerts from multiple sources in one interface.

**Enable plugin management**

To allow Grafana administrators to install and manage plugins, set `pluginAdminEnabled` under the `plugins` key of the configuration document:

```
aws grafana update-workspace-configuration \
    --workspace-id g-abcd1234 \
    --configuration '{
        "plugins": {
            "pluginAdminEnabled": true
        }
    }'
```

Replace `g-abcd1234` with your workspace ID. This allows workspace administrators to install, update, and remove plugins.

## Access your Grafana workspace

Once your workspace is active and configured, you can access it using the URL provided in the workspace details.

**Get the workspace URL**

```
aws grafana describe-workspace --workspace-id g-abcd1234
```

Replace `g-abcd1234` with your workspace ID. Look for the `endpoint` value in the output, which is your Grafana workspace URL.

**Sign in to your workspace**

Open the workspace URL in your web browser. Depending on your authentication method:

- For SAML: Choose "Sign in with SAML" and enter your credentials on your identity provider's login page.
- For IAM Identity Center: Choose "Sign in with AWS IAM Identity Center" and enter your email address and password.
+ +Once signed in, you can start adding data sources, creating dashboards, and visualizing your data. + +## Clean up resources + +When you no longer need your Grafana workspace, you should delete it to avoid incurring charges. + +**Delete the workspace** + +``` +aws grafana delete-workspace --workspace-id g-abcd1234 +``` + +Replace `g-abcd1234` with your workspace ID. This command deletes your Grafana workspace. + +Wait for the workspace to be deleted before proceeding with the next steps. You can check the status by running: + +``` +aws grafana describe-workspace --workspace-id g-abcd1234 +``` + +If the workspace has been deleted, you'll receive an error message indicating that the workspace doesn't exist. + +**Clean up IAM resources** + +After deleting the workspace, clean up the IAM resources: + +``` +aws iam detach-role-policy \ + --role-name GrafanaWorkspaceRole \ + --policy-arn arn:aws:iam::123456789012:policy/GrafanaCloudWatchPolicy + +aws iam delete-policy \ + --policy-arn arn:aws:iam::123456789012:policy/GrafanaCloudWatchPolicy + +aws iam delete-role \ + --role-name GrafanaWorkspaceRole +``` + +Replace `123456789012` with your AWS account ID. These commands detach and delete the policy, then delete the role. + +**Clean up JSON files** + +Finally, remove the JSON files created during the tutorial: + +``` +rm trust-policy.json cloudwatch-policy.json +``` + +## Going to production + +This tutorial is designed to help you learn how to use the Amazon Managed Grafana API through the AWS CLI. When moving to a production environment, consider the following security and architecture best practices: + +### Security considerations + +1. **Least privilege access**: The CloudWatch policy in this tutorial uses a wildcard resource (`"Resource": "*"`). In production, restrict access to only the specific resources needed. + +2. **Network access control**: Implement network access control to restrict access to your workspace from specific IP ranges or VPC endpoints. + +3. 
**Encryption**: Configure encryption settings for sensitive data in your Grafana workspace. + +4. **Monitoring and auditing**: Set up AWS CloudTrail to monitor API calls to your Grafana workspace for security auditing. + +### Architecture considerations + +1. **High availability**: Consider implementing backup strategies for your workspace configurations. + +2. **Multi-workspace architecture**: For larger organizations, design a multi-workspace architecture to separate concerns between teams or departments. + +3. **Authentication at scale**: Implement group-based access control for managing large numbers of users. + +4. **Cost management**: Monitor usage costs and implement strategies to optimize costs, such as managing the number of active users. + +For more information on AWS security and architecture best practices, refer to: +- [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html) +- [AWS Security Best Practices](https://docs.aws.amazon.com/whitepapers/latest/aws-security-best-practices/welcome.html) +- [Amazon Managed Grafana Security](https://docs.aws.amazon.com/grafana/latest/userguide/security.html) + +## Next steps + +Now that you've created an Amazon Managed Grafana workspace, explore these additional features: + +1. [Add data sources to your workspace](https://docs.aws.amazon.com/grafana/latest/userguide/AMG-data-sources.html) to start visualizing your data. +2. [Create dashboards](https://docs.aws.amazon.com/grafana/latest/userguide/AMG-dashboards.html) to monitor your applications and infrastructure. +3. [Set up alerts](https://docs.aws.amazon.com/grafana/latest/userguide/alerts-overview.html) to get notified when metrics cross thresholds. +4. [Configure user access](https://docs.aws.amazon.com/grafana/latest/userguide/AMG-manage-users.html) to control who can view and edit your dashboards. +5. 
[Connect to Amazon VPC](https://docs.aws.amazon.com/grafana/latest/userguide/AMG-configure-vpc.html) to access data sources in your private network. diff --git a/tuts/044-amazon-managed-grafana-gs/amazon-managed-grafana-gs.sh b/tuts/044-amazon-managed-grafana-gs/amazon-managed-grafana-gs.sh new file mode 100755 index 00000000..53fa7040 --- /dev/null +++ b/tuts/044-amazon-managed-grafana-gs/amazon-managed-grafana-gs.sh @@ -0,0 +1,287 @@ +#!/bin/bash + +# Amazon Managed Grafana Workspace Creation Script +# This script creates an Amazon Managed Grafana workspace and configures it + +# Set up logging +LOG_FILE="grafana-workspace-creation.log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +echo "Starting Amazon Managed Grafana workspace creation script at $(date)" +echo "All commands and outputs will be logged to $LOG_FILE" + +# Function to check for errors in command output +check_error() { + local output=$1 + local cmd=$2 + + if echo "$output" | grep -i "error\|exception\|fail" > /dev/null; then + echo "ERROR: Command '$cmd' failed with output:" + echo "$output" + cleanup_on_error + exit 1 + fi +} + +# Function to clean up resources on error +cleanup_on_error() { + echo "Error encountered. Attempting to clean up resources..." + + if [ -n "$WORKSPACE_ID" ]; then + echo "Deleting workspace $WORKSPACE_ID..." + aws grafana delete-workspace --workspace-id "$WORKSPACE_ID" + fi + + if [ -n "$ROLE_NAME" ]; then + echo "Detaching policies from role $ROLE_NAME..." + if [ -n "$POLICY_ARN" ]; then + aws iam detach-role-policy --role-name "$ROLE_NAME" --policy-arn "$POLICY_ARN" + fi + + echo "Deleting role $ROLE_NAME..." + aws iam delete-role --role-name "$ROLE_NAME" + fi + + if [ -n "$POLICY_ARN" ]; then + echo "Deleting policy..." + aws iam delete-policy --policy-arn "$POLICY_ARN" + fi + + # Clean up JSON files + rm -f trust-policy.json cloudwatch-policy.json + + echo "Cleanup completed. See $LOG_FILE for details." 
+} + +# Generate a random identifier for resource names +RANDOM_ID=$(openssl rand -hex 4) +WORKSPACE_NAME="GrafanaWorkspace-${RANDOM_ID}" +ROLE_NAME="GrafanaWorkspaceRole-${RANDOM_ID}" + +echo "Using workspace name: $WORKSPACE_NAME" +echo "Using role name: $ROLE_NAME" + +# Step 1: Get AWS account ID +echo "Getting AWS account ID..." +ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text) +check_error "$ACCOUNT_ID" "get-caller-identity" +echo "AWS Account ID: $ACCOUNT_ID" + +# Step 2: Create IAM role for Grafana workspace +echo "Creating IAM role for Grafana workspace..." + +# Create trust policy document +cat > trust-policy.json << EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "grafana.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] +} +EOF + +# Create IAM role +ROLE_OUTPUT=$(aws iam create-role \ + --role-name "$ROLE_NAME" \ + --assume-role-policy-document file://trust-policy.json \ + --description "Role for Amazon Managed Grafana workspace") + +check_error "$ROLE_OUTPUT" "create-role" +echo "IAM role created successfully" + +# Extract role ARN +ROLE_ARN=$(echo "$ROLE_OUTPUT" | grep -o '"Arn": "[^"]*' | cut -d'"' -f4) +echo "Role ARN: $ROLE_ARN" + +# Attach policies to the role +echo "Attaching policies to the role..." 
+ +# CloudWatch policy +cat > cloudwatch-policy.json << EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "cloudwatch:DescribeAlarmsForMetric", + "cloudwatch:DescribeAlarmHistory", + "cloudwatch:DescribeAlarms", + "cloudwatch:ListMetrics", + "cloudwatch:GetMetricStatistics", + "cloudwatch:GetMetricData" + ], + "Resource": "*" + } + ] +} +EOF + +POLICY_OUTPUT=$(aws iam create-policy \ + --policy-name "GrafanaCloudWatchPolicy-${RANDOM_ID}" \ + --policy-document file://cloudwatch-policy.json) + +check_error "$POLICY_OUTPUT" "create-policy" + +POLICY_ARN=$(echo "$POLICY_OUTPUT" | grep -o '"Arn": "[^"]*' | cut -d'"' -f4) +echo "CloudWatch policy ARN: $POLICY_ARN" + +ATTACH_OUTPUT=$(aws iam attach-role-policy \ + --role-name "$ROLE_NAME" \ + --policy-arn "$POLICY_ARN") + +check_error "$ATTACH_OUTPUT" "attach-role-policy" +echo "CloudWatch policy attached to role" + +# Step 3: Create the Grafana workspace +echo "Creating Amazon Managed Grafana workspace..." +WORKSPACE_OUTPUT=$(aws grafana create-workspace \ + --workspace-name "$WORKSPACE_NAME" \ + --authentication-providers "SAML" \ + --permission-type "CUSTOMER_MANAGED" \ + --account-access-type "CURRENT_ACCOUNT" \ + --workspace-role-arn "$ROLE_ARN" \ + --workspace-data-sources "CLOUDWATCH" "PROMETHEUS" "XRAY" \ + --grafana-version "10.4" \ + --tags Environment=Development) + +check_error "$WORKSPACE_OUTPUT" "create-workspace" + +echo "Workspace creation initiated:" +echo "$WORKSPACE_OUTPUT" + +# Extract workspace ID +WORKSPACE_ID=$(echo "$WORKSPACE_OUTPUT" | grep -o '"id": "[^"]*' | cut -d'"' -f4) + +if [ -z "$WORKSPACE_ID" ]; then + echo "ERROR: Failed to extract workspace ID from output" + exit 1 +fi + +echo "Workspace ID: $WORKSPACE_ID" + +# Step 4: Wait for workspace to become active +echo "Waiting for workspace to become active. This may take several minutes..." 
+ACTIVE=false +MAX_ATTEMPTS=30 +ATTEMPT=0 + +while [ $ACTIVE = false ] && [ $ATTEMPT -lt $MAX_ATTEMPTS ]; do + ATTEMPT=$((ATTEMPT+1)) + echo "Checking workspace status (attempt $ATTEMPT of $MAX_ATTEMPTS)..." + + DESCRIBE_OUTPUT=$(aws grafana describe-workspace --workspace-id "$WORKSPACE_ID") + check_error "$DESCRIBE_OUTPUT" "describe-workspace" + + STATUS=$(echo "$DESCRIBE_OUTPUT" | grep -o '"status": "[^"]*' | cut -d'"' -f4) + echo "Current status: $STATUS" + + if [ "$STATUS" = "ACTIVE" ]; then + ACTIVE=true + echo "Workspace is now ACTIVE" + elif [ "$STATUS" = "FAILED" ]; then + echo "ERROR: Workspace creation failed" + cleanup_on_error + exit 1 + else + echo "Workspace is still being created. Waiting 30 seconds..." + sleep 30 + fi +done + +if [ $ACTIVE = false ]; then + echo "ERROR: Workspace did not become active within the expected time" + cleanup_on_error + exit 1 +fi + +# Extract workspace endpoint URL +WORKSPACE_URL=$(echo "$DESCRIBE_OUTPUT" | grep -o '"endpoint": "[^"]*' | cut -d'"' -f4) +echo "Workspace URL: https://$WORKSPACE_URL" + +# Step 5: Display workspace information +echo "" +echo "===========================================" +echo "WORKSPACE INFORMATION" +echo "===========================================" +echo "Workspace ID: $WORKSPACE_ID" +echo "Workspace URL: https://$WORKSPACE_URL" +echo "Workspace Name: $WORKSPACE_NAME" +echo "IAM Role: $ROLE_NAME" +echo "" +echo "Note: Since SAML authentication is used, you need to configure SAML settings" +echo "using the AWS Management Console or the update-workspace-authentication command." 
+echo "===========================================" + +# Step 6: Prompt for cleanup +echo "" +echo "===========================================" +echo "CLEANUP CONFIRMATION" +echo "===========================================" +echo "Resources created:" +echo "- Amazon Managed Grafana workspace: $WORKSPACE_ID" +echo "- IAM Role: $ROLE_NAME" +echo "- IAM Policy: GrafanaCloudWatchPolicy-${RANDOM_ID}" +echo "" +echo "Do you want to clean up all created resources? (y/n): " +read -r CLEANUP_CHOICE + +if [[ "$CLEANUP_CHOICE" =~ ^[Yy] ]]; then + echo "Cleaning up resources..." + + echo "Deleting workspace $WORKSPACE_ID..." + DELETE_OUTPUT=$(aws grafana delete-workspace --workspace-id "$WORKSPACE_ID") + check_error "$DELETE_OUTPUT" "delete-workspace" + + echo "Waiting for workspace to be deleted..." + DELETED=false + ATTEMPT=0 + + while [ $DELETED = false ] && [ $ATTEMPT -lt $MAX_ATTEMPTS ]; do + ATTEMPT=$((ATTEMPT+1)) + echo "Checking deletion status (attempt $ATTEMPT of $MAX_ATTEMPTS)..." + + if aws grafana describe-workspace --workspace-id "$WORKSPACE_ID" 2>&1 | grep -i "not found\|does not exist" > /dev/null; then + DELETED=true + echo "Workspace has been deleted" + else + echo "Workspace is still being deleted. Waiting 30 seconds..." + sleep 30 + fi + done + + if [ $DELETED = false ]; then + echo "WARNING: Workspace deletion is taking longer than expected. It may still be in progress." + fi + + # Detach policy from role + echo "Detaching policy from role..." + aws iam detach-role-policy \ + --role-name "$ROLE_NAME" \ + --policy-arn "$POLICY_ARN" + + # Delete policy + echo "Deleting IAM policy..." + aws iam delete-policy \ + --policy-arn "$POLICY_ARN" + + # Delete role + echo "Deleting IAM role..." + aws iam delete-role \ + --role-name "$ROLE_NAME" + + # Clean up JSON files + rm -f trust-policy.json cloudwatch-policy.json + + echo "Cleanup completed" +else + echo "Skipping cleanup. Resources will remain in your AWS account." +fi + +echo "Script completed at $(date)"