diff --git a/tuts/013-ec2-basics/README.md b/tuts/013-ec2-basics/README.md new file mode 100644 index 00000000..c7374e3e --- /dev/null +++ b/tuts/013-ec2-basics/README.md @@ -0,0 +1,5 @@ +# Amazon EC2 basics + +This tutorial demonstrates the basic operations for working with Amazon Elastic Compute Cloud (EC2) instances, including creating, configuring, and managing virtual servers in the AWS cloud. + +You can either run the automated script `ec2-basics.sh` to execute all the steps automatically, or follow the step-by-step instructions in the `ec2-basics.md` tutorial to understand each operation in detail. diff --git a/tuts/013-ec2-basics/ec2-basics.md b/tuts/013-ec2-basics/ec2-basics.md new file mode 100644 index 00000000..213cabd8 --- /dev/null +++ b/tuts/013-ec2-basics/ec2-basics.md @@ -0,0 +1,513 @@ +# Getting started with Amazon EC2 using the AWS CLI + +This tutorial guides you through the process of creating and managing Amazon EC2 instances using the AWS Command Line Interface (AWS CLI). You'll learn how to create key pairs, set up security groups, launch instances, and manage Elastic IP addresses. + +## Prerequisites + +Before you begin this tutorial, make sure you have the following: + +1. The AWS CLI installed and configured with appropriate credentials. If you need to install it, follow the [AWS CLI installation guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). +2. Basic familiarity with command line interfaces and SSH concepts. +3. [Sufficient permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_ec2_instances.html) to create and manage EC2 resources in your AWS account. + +**Cost Information**: Completing this tutorial will incur minimal costs (approximately $0.01-$0.02) if you follow all steps including cleanup. The tutorial uses t2.micro instances which are Free Tier eligible (750 hours/month). Elastic IP addresses are free when associated with running instances but cost $0.005/hour when not associated. Following the cleanup instructions will help you avoid ongoing charges. + +## Create a key pair + +SSH key pairs allow you to securely connect to your EC2 instances without using passwords. In this section, you'll create a new key pair and save the private key to your local machine. + +**Create a new key pair** + +The following command creates a new SSH key pair named "my-ec2-key" and saves the private key to your local machine. + +```bash +aws ec2 create-key-pair \ + --key-name "my-ec2-key" \ + --query 'KeyMaterial' \ + --output text > my-ec2-key.pem +``` + +After running this command, the private key is saved to a file named `my-ec2-key.pem` in your current directory. + +**Set proper permissions on the key file** + +SSH requires that private key files are not readable by others. Use the following command to set the correct permissions: + +```bash +chmod 400 my-ec2-key.pem +``` + +This command ensures that only you can read the private key file, which is a security requirement for SSH. + +**Verify your key pair** + +You can list your key pairs to verify that the new key pair was created successfully: + +```bash +aws ec2 describe-key-pairs + +{ + "KeyPairs": [ + { + "KeyPairId": "key-abcd1234", + "KeyFingerprint": "1a:2b:3c:4d:5e:6f:7g:8h:9i:0j:1k:2l:3m:4n:5o:6p", + "KeyName": "my-ec2-key", + "Tags": [] + } + ] +} +``` + +The output shows details about your key pair, including its name, ID, and fingerprint. 
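+
+If your account contains many key pairs, you can scope the query to just the one you created. The following sketch assumes the key name used above:
+
+```bash
+aws ec2 describe-key-pairs \
+    --key-names "my-ec2-key" \
+    --query 'KeyPairs[0].KeyFingerprint' \
+    --output text
+```
+
+This returns only the fingerprint of `my-ec2-key`, which is handy in scripts that need to confirm the key pair exists before launching instances.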
+ +## Create a security group + +Security groups act as virtual firewalls for your EC2 instances to control inbound and outbound traffic. In this section, you'll create a security group and configure it to allow SSH access from your IP address. + +**Create a security group** + +The following command creates a new security group: + +```bash +aws ec2 create-security-group \ + --group-name "my-ec2-sg" \ + --description "Security group for EC2 tutorial" \ + --query "GroupId" \ + --output text + +sg-abcd1234 +``` + +The output is the ID of your new security group. Make note of this ID as you'll need it in subsequent commands. + +**Add a rule to allow SSH access** + +To connect to your instance via SSH, you need to add an inbound rule to your security group. For security reasons, it's best to restrict SSH access to your current IP address: + +```bash +# Get your current public IP address +MY_IP=$(curl -s http://checkip.amazonaws.com) + +# Add a rule to allow SSH access only from your IP address +aws ec2 authorize-security-group-ingress \ + --group-id "sg-abcd1234" \ + --protocol tcp \ + --port 22 \ + --cidr "$MY_IP/32" + +{ + "Return": true, + "SecurityGroupRules": [ + { + "SecurityGroupRuleId": "sgr-abcd1234", + "GroupId": "sg-abcd1234", + "IpProtocol": "tcp", + "FromPort": 22, + "ToPort": 22, + "CidrIpv4": "203.0.113.75/32", + "Description": "" + } + ] +} +``` + +The response confirms that the rule was added successfully. This rule allows SSH connections (port 22) only from your current IP address. + +**Verify security group configuration** + +You can check the security group's configuration with the following command: + +```bash +aws ec2 describe-security-groups \ + --group-ids "sg-abcd1234" + +{ + "SecurityGroups": [ + { + "Description": "Security group for EC2 tutorial", + "GroupName": "my-ec2-sg", + "IpPermissions": [ + { + "FromPort": 22, + "IpProtocol": "tcp", + "IpRanges": [ + { + "CidrIp": "203.0.113.75/32" + } + ], + "ToPort": 22 + } + ], + "OwnerId": "123456789012", + "GroupId": "sg-abcd1234", + "IpPermissionsEgress": [ + { + "IpProtocol": "-1", + "IpRanges": [ + { + "CidrIp": "0.0.0.0/0" + } + ] + } + ], + "VpcId": "vpc-abcd1234" + } + ] +} +``` + +The output shows the security group's inbound rules (IpPermissions), which include the SSH rule you just added. + +## Launch an EC2 instance + +Now that you have a key pair and security group, you can launch an EC2 instance. In this section, you'll find a suitable Amazon Machine Image (AMI) and launch an instance. + +**Find an Amazon Linux 2023 AMI** + +Amazon Linux 2023 is the recommended Linux distribution for EC2. You can find the latest Amazon Linux 2023 AMI using the AWS Systems Manager Parameter Store: + +```bash +aws ssm get-parameters-by-path \ + --path "/aws/service/ami-amazon-linux-latest" \ + --query "Parameters[?contains(Name, 'al2023-ami-kernel-default-x86_64')].Value" \ + --output text | head -1 + +ami-abcd1234 +``` + +The output is the ID of the latest Amazon Linux 2023 AMI. Make note of this ID as you'll need it to launch your instance. 
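+
+Rather than copying IDs by hand, you can capture them in shell variables and reference those in later commands. A minimal sketch, assuming the security group name and AMI query used above (the variable names are illustrative):
+
+```bash
+# Capture the latest Amazon Linux 2023 AMI ID
+AMI_ID=$(aws ssm get-parameters-by-path \
+    --path "/aws/service/ami-amazon-linux-latest" \
+    --query "Parameters[?contains(Name, 'al2023-ami-kernel-default-x86_64')].Value" \
+    --output text | head -1)
+
+# Look up the security group ID by its name
+SG_ID=$(aws ec2 describe-security-groups \
+    --filters "Name=group-name,Values=my-ec2-sg" \
+    --query 'SecurityGroups[0].GroupId' \
+    --output text)
+
+echo "AMI: $AMI_ID  Security group: $SG_ID"
+```
+
+The accompanying `ec2-basics.sh` script takes the same approach.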
+ +**Launch an instance with IMDSv2 and encryption enabled** + +Now you can launch an EC2 instance using the AMI ID, key pair, and security group you created earlier: + +```bash +aws ec2 run-instances \ + --image-id "ami-abcd1234" \ + --instance-type "t2.micro" \ + --key-name "my-ec2-key" \ + --security-group-ids "sg-abcd1234" \ + --metadata-options "HttpTokens=required,HttpEndpoint=enabled" \ + --block-device-mappings "DeviceName=/dev/xvda,Ebs={Encrypted=true}" \ + --count 1 \ + --query 'Instances[0].InstanceId' \ + --output text + +i-abcd1234 +``` + +This command includes two important security enhancements: +- `--metadata-options "HttpTokens=required"` enforces IMDSv2, which provides additional protection against SSRF attacks +- `--block-device-mappings "DeviceName=/dev/xvda,Ebs={Encrypted=true}"` ensures that the EBS volume is encrypted + +The output is the ID of your new instance. Make note of this ID as you'll need it in subsequent commands. + +**Wait for the instance to be running** + +After launching an instance, it takes a few moments to initialize. You can wait for the instance to reach the "running" state: + +```bash +aws ec2 wait instance-running --instance-ids "i-abcd1234" +``` + +This command will wait until the instance is running before returning. + +**Get instance details** + +Once your instance is running, you can retrieve its details: + +```bash +aws ec2 describe-instances \ + --instance-ids "i-abcd1234" \ + --query 'Reservations[0].Instances[0].{ID:InstanceId,Type:InstanceType,State:State.Name,PublicIP:PublicIpAddress}' \ + --output table + +--------------------------------------------------------- +| DescribeInstances | ++---------------+------------+----------+---------------+ +| ID | PublicIP | State | Type | ++---------------+------------+----------+---------------+ +| i-abcd1234 | 203.0.113.75 | running | t2.micro | ++---------------+------------+----------+---------------+ +``` + +The output shows details about your instance, including its public IP address, which you'll need to connect via SSH. + +## Connect to your instance + +Now that your instance is running, you can connect to it using SSH with the key pair you created earlier. + +**Connect via SSH** + +Use the following command to connect to your instance, replacing the IP address with your instance's public IP: + +```bash +ssh -i my-ec2-key.pem ec2-user@203.0.113.75 +``` + +If the connection is successful, you'll see a welcome message and a command prompt for your instance: + +``` + , #_ + ~\_ ####_ Amazon Linux 2023 + ~~ \_#####\ + ~~ \###| + ~~ \#/ ___ https://aws.amazon.com/linux/amazon-linux-2023 + ~~ V~' '-> + ~~~ / + ~~._. _/ + _/ _/ + _/m/' +``` + +You can now run commands on your instance. When you're done, type `exit` to close the SSH connection. + +## Stop and start your instance + +You can stop and start your EC2 instance as needed. When you stop an instance, it remains in your account but doesn't incur compute charges. When you start it again, it will have a new public IP address (unless you use an Elastic IP, which we'll cover next). + +**Stop your instance** + +To stop your instance, use the following command: + +```bash +aws ec2 stop-instances --instance-ids "i-abcd1234" + +{ + "StoppingInstances": [ + { + "CurrentState": { + "Code": 64, + "Name": "stopping" + }, + "InstanceId": "i-abcd1234", + "PreviousState": { + "Code": 16, + "Name": "running" + } + } + ] +} +``` + +The response shows that the instance is transitioning from "running" to "stopping" state. 
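+
+If you want to check the state yourself instead of relying on a waiter, you can query it directly. A small sketch using the same placeholder instance ID:
+
+```bash
+aws ec2 describe-instances \
+    --instance-ids "i-abcd1234" \
+    --query 'Reservations[0].Instances[0].State.Name' \
+    --output text
+
+stopping
+```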
+ +**Wait for the instance to stop** + +You can wait for the instance to reach the "stopped" state: + +```bash +aws ec2 wait instance-stopped --instance-ids "i-abcd1234" +``` + +**Start your instance** + +To start your instance again, use the following command: + +```bash +aws ec2 start-instances --instance-ids "i-abcd1234" + +{ + "StartingInstances": [ + { + "CurrentState": { + "Code": 0, + "Name": "pending" + }, + "InstanceId": "i-abcd1234", + "PreviousState": { + "Code": 80, + "Name": "stopped" + } + } + ] +} +``` + +The response shows that the instance is transitioning from "stopped" to "pending" state. + +**Wait for the instance to start** + +You can wait for the instance to reach the "running" state: + +```bash +aws ec2 wait instance-running --instance-ids "i-abcd1234" +``` + +**Get the new public IP address** + +After restarting, your instance will have a new public IP address: + +```bash +aws ec2 describe-instances \ + --instance-ids "i-abcd1234" \ + --query 'Reservations[0].Instances[0].PublicIpAddress' \ + --output text + +203.0.113.80 +``` + +Note that the IP address has changed. This is normal behavior when stopping and starting an EC2 instance. + +## Allocate and associate an Elastic IP address + +If you need a consistent IP address for your instance, you can use an Elastic IP address. An Elastic IP is a static IPv4 address that you can associate with your instance, and it remains the same even when you stop and start the instance. + +**Allocate an Elastic IP address** + +The following command allocates a new Elastic IP address: + +```bash +aws ec2 allocate-address \ + --domain vpc \ + --query '[PublicIp,AllocationId]' \ + --output text + +203.0.113.85 eipalloc-abcd1234 +``` + +The output shows the Elastic IP address and its allocation ID. Make note of both as you'll need them in subsequent commands. + +**Associate the Elastic IP with your instance** + +Now you can associate the Elastic IP with your instance: + +```bash +aws ec2 associate-address \ + --instance-id "i-abcd1234" \ + --allocation-id "eipalloc-abcd1234" \ + --query "AssociationId" \ + --output text + +eipassoc-abcd1234 +``` + +The output is the association ID, which you'll need if you want to disassociate the Elastic IP later. + +**Connect using the Elastic IP** + +You can now connect to your instance using the Elastic IP: + +```bash +ssh -i my-ec2-key.pem ec2-user@203.0.113.85 +``` + +## Test Elastic IP persistence + +Let's verify that the Elastic IP remains associated with your instance even after stopping and starting it. + +**Stop your instance** + +```bash +aws ec2 stop-instances --instance-ids "i-abcd1234" +aws ec2 wait instance-stopped --instance-ids "i-abcd1234" +``` + +**Start your instance** + +```bash +aws ec2 start-instances --instance-ids "i-abcd1234" +aws ec2 wait instance-running --instance-ids "i-abcd1234" +``` + +**Verify the Elastic IP is still associated** + +```bash +aws ec2 describe-instances \ + --instance-ids "i-abcd1234" \ + --query 'Reservations[0].Instances[0].PublicIpAddress' \ + --output text + +203.0.113.85 +``` + +The output shows that the instance still has the same Elastic IP address, confirming that the Elastic IP remains associated even after stopping and starting the instance. + +## Going to production + +This tutorial is designed to teach you the basics of EC2 instance management using the AWS CLI. For production environments, consider these additional best practices: + +1. **High Availability**: Deploy instances across multiple Availability Zones to improve resilience. + +2. 
**Auto Scaling**: Use [Auto Scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html) to automatically adjust capacity based on demand. + +3. **Load Balancing**: Distribute traffic across multiple instances using [Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html). + +4. **Infrastructure as Code**: Manage infrastructure using [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) or [AWS CDK](https://docs.aws.amazon.com/cdk/latest/guide/home.html). + +5. **Security Hardening**: + - Restrict outbound traffic in security groups + - Use private subnets for instances that don't need direct internet access + - Implement [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html) for AWS services + - Follow the [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/welcome.html) security pillar + +6. **Monitoring and Logging**: Implement [CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) monitoring and centralized logging. + +7. **Backup and Recovery**: Implement regular [EBS snapshots](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html) and a disaster recovery strategy. + +## Clean up resources + +When you're finished with this tutorial, you should clean up the resources you created to avoid incurring additional charges. + +**Disassociate the Elastic IP** + +```bash +aws ec2 disassociate-address --association-id "eipassoc-abcd1234" +``` + +**Release the Elastic IP** + +```bash +aws ec2 release-address --allocation-id "eipalloc-abcd1234" +``` + +**Terminate the instance** + +```bash +aws ec2 terminate-instances --instance-ids "i-abcd1234" + +{ + "TerminatingInstances": [ + { + "CurrentState": { + "Code": 32, + "Name": "shutting-down" + }, + "InstanceId": "i-abcd1234", + "PreviousState": { + "Code": 16, + "Name": "running" + } + } + ] +} +``` + +**Wait for the instance to terminate** + +```bash +aws ec2 wait instance-terminated --instance-ids "i-abcd1234" +``` + +**Delete the security group** + +```bash +aws ec2 delete-security-group --group-id "sg-abcd1234" +``` + +**Delete the key pair** + +```bash +aws ec2 delete-key-pair --key-name "my-ec2-key" +rm -f my-ec2-key.pem +``` + +## Next steps + +Now that you've learned the basics of managing EC2 instances using the AWS CLI, explore other EC2 features: + +1. **Auto Scaling** – [Automatically adjust capacity](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html) based on demand. +2. **Load Balancing** – [Distribute traffic](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) across multiple instances. +3. **EBS Volumes** – [Add additional storage](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes.html) to your instances. +4. **AMI Creation** – [Create your own AMIs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) with your applications pre-installed. +5. **Instance Metadata** – [Access instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html) from within your instances. 
diff --git a/tuts/013-ec2-basics/ec2-basics.sh b/tuts/013-ec2-basics/ec2-basics.sh new file mode 100755 index 00000000..d02a5904 --- /dev/null +++ b/tuts/013-ec2-basics/ec2-basics.sh @@ -0,0 +1,389 @@ +#!/bin/bash + +# EC2 Basics Tutorial Script - Revised +# This script demonstrates the basics of working with EC2 instances using AWS CLI +# Updated to use Amazon Linux 2023 and enhanced security settings + +# Set up logging +LOG_FILE="ec2_tutorial_$(date +%Y%m%d_%H%M%S).log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +# Function to log messages +log() { + echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" +} + +# Function to handle errors +handle_error() { + log "ERROR: $1" + log "Cleaning up resources..." + cleanup + exit 1 +} + +# Function to clean up resources +cleanup() { + log "Resources created:" + + if [ -n "$ASSOCIATION_ID" ]; then + log "- Elastic IP Association: $ASSOCIATION_ID" + fi + + if [ -n "$ALLOCATION_ID" ]; then + log "- Elastic IP Allocation: $ALLOCATION_ID (IP: $ELASTIC_IP)" + fi + + if [ -n "$INSTANCE_ID" ]; then + log "- EC2 Instance: $INSTANCE_ID" + fi + + if [ -n "$SECURITY_GROUP_ID" ]; then + log "- Security Group: $SECURITY_GROUP_ID" + fi + + if [ -n "$KEY_NAME" ]; then + log "- Key Pair: $KEY_NAME (File: $KEY_FILE)" + fi + + read -p "Do you want to delete these resources? (y/n): " -n 1 -r + echo + + if [[ $REPLY =~ ^[Yy]$ ]]; then + log "Starting cleanup..." + + # Track cleanup failures + CLEANUP_FAILURES=0 + + # Disassociate Elastic IP if it exists + if [ -n "$ASSOCIATION_ID" ]; then + log "Disassociating Elastic IP..." + if ! aws ec2 disassociate-address --association-id "$ASSOCIATION_ID"; then + log "Failed to disassociate Elastic IP" + ((CLEANUP_FAILURES++)) + fi + fi + + # Release Elastic IP if it exists + if [ -n "$ALLOCATION_ID" ]; then + log "Releasing Elastic IP..." + if ! aws ec2 release-address --allocation-id "$ALLOCATION_ID"; then + log "Failed to release Elastic IP" + ((CLEANUP_FAILURES++)) + fi + fi + + # Terminate instance if it exists + if [ -n "$INSTANCE_ID" ]; then + log "Terminating instance $INSTANCE_ID..." + if ! aws ec2 terminate-instances --instance-ids "$INSTANCE_ID" > /dev/null; then + log "Failed to terminate instance" + ((CLEANUP_FAILURES++)) + else + log "Waiting for instance to terminate..." + if ! aws ec2 wait instance-terminated --instance-ids "$INSTANCE_ID"; then + log "Failed while waiting for instance to terminate" + ((CLEANUP_FAILURES++)) + fi + fi + fi + + # Delete security group if it exists + if [ -n "$SECURITY_GROUP_ID" ]; then + log "Deleting security group..." + if ! aws ec2 delete-security-group --group-id "$SECURITY_GROUP_ID"; then + log "Failed to delete security group" + ((CLEANUP_FAILURES++)) + fi + fi + + # Delete key pair if it exists + if [ -n "$KEY_NAME" ]; then + log "Deleting key pair..." + if ! aws ec2 delete-key-pair --key-name "$KEY_NAME"; then + log "Failed to delete key pair" + ((CLEANUP_FAILURES++)) + fi + + # Remove key file + if [ -f "$KEY_FILE" ]; then + log "Removing key file..." + if ! rm -f "$KEY_FILE"; then + log "Failed to remove key file" + ((CLEANUP_FAILURES++)) + fi + fi + fi + + # Report cleanup status + if [ $CLEANUP_FAILURES -eq 0 ]; then + log "Cleanup completed successfully." + else + log "WARNING: Cleanup completed with $CLEANUP_FAILURES failures. Some resources may not have been deleted properly." + fi + else + log "Resources were not deleted." 
+ fi +} + +# Generate random identifier for resource names +RANDOM_ID=$(openssl rand -hex 4) +KEY_NAME="ec2-tutorial-key-$RANDOM_ID" +SG_NAME="ec2-tutorial-sg-$RANDOM_ID" + +# Create a directory for the key file +KEY_DIR=$(mktemp -d) +KEY_FILE="$KEY_DIR/$KEY_NAME.pem" + +log "Starting EC2 basics tutorial script" +log "Random identifier: $RANDOM_ID" +log "Key name: $KEY_NAME" +log "Security group name: $SG_NAME" + +# Step 1: Create a key pair +log "Creating key pair..." +KEY_RESULT=$(aws ec2 create-key-pair --key-name "$KEY_NAME" --query 'KeyMaterial' --output text) + +if [ $? -ne 0 ] || [ -z "$KEY_RESULT" ]; then + handle_error "Failed to create key pair" +fi + +echo "$KEY_RESULT" > "$KEY_FILE" +chmod 400 "$KEY_FILE" +log "Created key pair and saved to $KEY_FILE" + +# Step 2: Create a security group +log "Creating security group..." +SECURITY_GROUP_ID=$(aws ec2 create-security-group \ + --group-name "$SG_NAME" \ + --description "Security group for EC2 tutorial" \ + --query "GroupId" \ + --output text) + +if [ $? -ne 0 ] || [ -z "$SECURITY_GROUP_ID" ]; then + handle_error "Failed to create security group" +fi + +log "Created security group: $SECURITY_GROUP_ID" + +# Get current public IP address for SSH access +MY_IP=$(curl -s http://checkip.amazonaws.com) +if [ $? -ne 0 ] || [ -z "$MY_IP" ]; then + handle_error "Failed to get current IP address" +fi + +log "Adding SSH ingress rule for IP $MY_IP..." +aws ec2 authorize-security-group-ingress \ + --group-id "$SECURITY_GROUP_ID" \ + --protocol tcp \ + --port 22 \ + --cidr "$MY_IP/32" > /dev/null + +if [ $? -ne 0 ]; then + handle_error "Failed to add security group ingress rule" +fi + +log "Added SSH ingress rule for IP $MY_IP" + +# Step 3: Find an Amazon Linux 2023 AMI (updated from AL2) +log "Finding latest Amazon Linux 2023 AMI..." +AMI_ID=$(aws ssm get-parameters-by-path \ + --path "/aws/service/ami-amazon-linux-latest" \ + --query "Parameters[?contains(Name, 'al2023-ami-kernel-default-x86_64')].Value" \ + --output text | head -1) + +if [ $? -ne 0 ] || [ -z "$AMI_ID" ]; then + handle_error "Failed to find Amazon Linux 2023 AMI" +fi + +log "Selected AMI: $AMI_ID" + +# Get the architecture of the AMI +log "Getting AMI architecture..." +AMI_ARCH=$(aws ec2 describe-images \ + --image-ids "$AMI_ID" \ + --query "Images[0].Architecture" \ + --output text) + +if [ $? -ne 0 ] || [ -z "$AMI_ARCH" ]; then + handle_error "Failed to get AMI architecture" +fi + +log "AMI architecture: $AMI_ARCH" + +# Find a compatible instance type +log "Finding compatible instance type..." +# Directly use t2.micro for simplicity +INSTANCE_TYPE="t2.micro" +log "Using instance type: $INSTANCE_TYPE" + +# Step 4: Launch an EC2 instance with enhanced security +log "Launching EC2 instance with IMDSv2 and encryption enabled..." +INSTANCE_ID=$(aws ec2 run-instances \ + --image-id "$AMI_ID" \ + --instance-type "$INSTANCE_TYPE" \ + --key-name "$KEY_NAME" \ + --security-group-ids "$SECURITY_GROUP_ID" \ + --metadata-options "HttpTokens=required,HttpEndpoint=enabled" \ + --block-device-mappings "DeviceName=/dev/xvda,Ebs={Encrypted=true}" \ + --count 1 \ + --query 'Instances[0].InstanceId' \ + --output text) + +if [ $? -ne 0 ] || [ -z "$INSTANCE_ID" ]; then + handle_error "Failed to launch EC2 instance" +fi + +log "Launched instance $INSTANCE_ID. Waiting for it to start..." + +# Wait for the instance to be running +aws ec2 wait instance-running --instance-ids "$INSTANCE_ID" +if [ $? 
-ne 0 ]; then + handle_error "Failed while waiting for instance to start" +fi + +# Get instance details +INSTANCE_DETAILS=$(aws ec2 describe-instances \ + --instance-ids "$INSTANCE_ID" \ + --query 'Reservations[0].Instances[0].{ID:InstanceId,Type:InstanceType,State:State.Name,PublicIP:PublicIpAddress}' \ + --output json) + +if [ $? -ne 0 ]; then + handle_error "Failed to get instance details" +fi + +log "Instance details: $INSTANCE_DETAILS" + +# Get the public IP address +PUBLIC_IP=$(echo "$INSTANCE_DETAILS" | grep -oP '"PublicIP": "\K[^"]+') +if [ -z "$PUBLIC_IP" ]; then + handle_error "Failed to get instance public IP" +fi + +log "Instance public IP: $PUBLIC_IP" +log "To connect to your instance, run: ssh -i $KEY_FILE ec2-user@$PUBLIC_IP" + +# Pause to allow user to connect if desired +read -p "Press Enter to continue to the next step (stopping and starting the instance)..." + +# Step 6: Stop and Start the Instance +log "Stopping instance $INSTANCE_ID..." +aws ec2 stop-instances --instance-ids "$INSTANCE_ID" > /dev/null +if [ $? -ne 0 ]; then + handle_error "Failed to stop instance" +fi + +log "Waiting for instance to stop..." +aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID" +if [ $? -ne 0 ]; then + handle_error "Failed while waiting for instance to stop" +fi + +log "Instance stopped. Starting instance again..." +aws ec2 start-instances --instance-ids "$INSTANCE_ID" > /dev/null +if [ $? -ne 0 ]; then + handle_error "Failed to start instance" +fi + +log "Waiting for instance to start..." +aws ec2 wait instance-running --instance-ids "$INSTANCE_ID" +if [ $? -ne 0 ]; then + handle_error "Failed while waiting for instance to start" +fi + +# Get the new public IP address +NEW_PUBLIC_IP=$(aws ec2 describe-instances \ + --instance-ids "$INSTANCE_ID" \ + --query 'Reservations[0].Instances[0].PublicIpAddress' \ + --output text) + +if [ $? -ne 0 ] || [ -z "$NEW_PUBLIC_IP" ]; then + handle_error "Failed to get new public IP" +fi + +log "Instance restarted with new public IP: $NEW_PUBLIC_IP" +log "To connect to your instance, run: ssh -i $KEY_FILE ec2-user@$NEW_PUBLIC_IP" + +# Step 7: Allocate and Associate an Elastic IP Address +log "Allocating Elastic IP address..." +ALLOCATION_RESULT=$(aws ec2 allocate-address \ + --domain vpc \ + --query '[PublicIp,AllocationId]' \ + --output text) + +if [ $? -ne 0 ] || [ -z "$ALLOCATION_RESULT" ]; then + handle_error "Failed to allocate Elastic IP" +fi + +ELASTIC_IP=$(echo "$ALLOCATION_RESULT" | awk '{print $1}') +ALLOCATION_ID=$(echo "$ALLOCATION_RESULT" | awk '{print $2}') + +log "Allocated Elastic IP: $ELASTIC_IP with ID: $ALLOCATION_ID" + +log "Associating Elastic IP with instance..." +ASSOCIATION_ID=$(aws ec2 associate-address \ + --instance-id "$INSTANCE_ID" \ + --allocation-id "$ALLOCATION_ID" \ + --query "AssociationId" \ + --output text) + +if [ $? -ne 0 ] || [ -z "$ASSOCIATION_ID" ]; then + handle_error "Failed to associate Elastic IP" +fi + +log "Associated Elastic IP with instance. Association ID: $ASSOCIATION_ID" +log "To connect to your instance using the Elastic IP, run: ssh -i $KEY_FILE ec2-user@$ELASTIC_IP" + +# Pause to allow user to connect if desired +read -p "Press Enter to continue to the next step (testing Elastic IP persistence)..." + +# Step 8: Test the Elastic IP by Stopping and Starting the Instance +log "Stopping instance $INSTANCE_ID to test Elastic IP persistence..." +aws ec2 stop-instances --instance-ids "$INSTANCE_ID" > /dev/null +if [ $? 
-ne 0 ]; then + handle_error "Failed to stop instance" +fi + +log "Waiting for instance to stop..." +aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID" +if [ $? -ne 0 ]; then + handle_error "Failed while waiting for instance to stop" +fi + +log "Instance stopped. Starting instance again..." +aws ec2 start-instances --instance-ids "$INSTANCE_ID" > /dev/null +if [ $? -ne 0 ]; then + handle_error "Failed to start instance" +fi + +log "Waiting for instance to start..." +aws ec2 wait instance-running --instance-ids "$INSTANCE_ID" +if [ $? -ne 0 ]; then + handle_error "Failed while waiting for instance to start" +fi + +# Verify the Elastic IP is still associated +CURRENT_IP=$(aws ec2 describe-instances \ + --instance-ids "$INSTANCE_ID" \ + --query 'Reservations[0].Instances[0].PublicIpAddress' \ + --output text) + +if [ $? -ne 0 ] || [ -z "$CURRENT_IP" ]; then + handle_error "Failed to get current public IP" +fi + +log "Current public IP address: $CURRENT_IP" +log "Elastic IP address: $ELASTIC_IP" + +if [ "$CURRENT_IP" = "$ELASTIC_IP" ]; then + log "Success! The Elastic IP is still associated with your instance." +else + log "Something went wrong. The Elastic IP is not associated with your instance." +fi + +log "To connect to your instance, run: ssh -i $KEY_FILE ec2-user@$ELASTIC_IP" + +# Step 9: Clean up resources +log "Tutorial completed successfully!" +cleanup + +exit 0 diff --git a/tuts/019-lambda-gettingstarted/README.md b/tuts/019-lambda-gettingstarted/README.md new file mode 100644 index 00000000..cf5b11c4 --- /dev/null +++ b/tuts/019-lambda-gettingstarted/README.md @@ -0,0 +1,5 @@ +# AWS Lambda getting started + +This tutorial provides a comprehensive introduction to AWS Lambda, covering how to create, deploy, and manage serverless functions that run your code without provisioning or managing servers. + +You can either run the automated script `lambda-gettingstarted.sh` to execute all the steps automatically, or follow the step-by-step instructions in the `lambda-gettingstarted.md` tutorial to understand each operation in detail. diff --git a/tuts/019-lambda-gettingstarted/lambda-gettingstarted.md b/tuts/019-lambda-gettingstarted/lambda-gettingstarted.md new file mode 100644 index 00000000..1a8c37e2 --- /dev/null +++ b/tuts/019-lambda-gettingstarted/lambda-gettingstarted.md @@ -0,0 +1,382 @@ +# Creating your first Lambda function with the AWS CLI + +This tutorial guides you through creating and testing your first AWS Lambda function using the AWS Command Line Interface (AWS CLI). You'll learn how to create a simple function that calculates the area of a rectangle, test it with sample input, and view the execution results. + +## Prerequisites + +Before you begin this tutorial, make sure you have the following: + +1. The AWS CLI. If you need to install it, follow the [AWS CLI installation guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). +2. Configured your AWS CLI with appropriate credentials. Run `aws configure` if you haven't set up your credentials yet. +3. Basic understanding of JSON formatting. +4. [Sufficient permissions](https://docs.aws.amazon.com/lambda/latest/dg/security_iam_service-with-iam.html) to create and manage Lambda functions, IAM roles, and CloudWatch logs in your AWS account. + +**Time to complete**: Approximately 15-20 minutes + +**Cost**: This tutorial uses AWS services that are included in the AWS Free Tier. 
If you follow the cleanup instructions at the end of the tutorial, you should incur no costs for completing this tutorial. For more information about AWS Free Tier, see [AWS Free Tier](https://aws.amazon.com/free/). + +## Create an IAM role for Lambda + +Before creating a Lambda function, you need to create an IAM role that grants your function permission to access AWS services and resources. In this case, the role will allow your function to write logs to CloudWatch. + +**Create a trust policy document** + +First, create a JSON file that defines the trust relationship for your Lambda role. This policy allows the Lambda service to assume the role. + +```bash +cat > trust-policy.json << EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "lambda.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] +} +EOF +``` + +This command creates a file named `trust-policy.json` with the necessary trust policy for Lambda. + +**Create the IAM role** + +Now, create the IAM role using the trust policy document you just created. + +```bash +ROLE_NAME="lambda-tutorial-role" +ROLE_ARN=$(aws iam create-role \ + --role-name "$ROLE_NAME" \ + --assume-role-policy-document file://trust-policy.json \ + --query 'Role.Arn' \ + --output text) + +echo "Created IAM role: $ROLE_ARN" +``` + +This command creates an IAM role named `lambda-tutorial-role` and captures its ARN (Amazon Resource Name) in the `ROLE_ARN` variable. + +**Attach permissions to the role** + +Attach the `AWSLambdaBasicExecutionRole` managed policy to your role. This policy grants permissions for your Lambda function to write logs to CloudWatch. + +```bash +aws iam attach-role-policy \ + --role-name "$ROLE_NAME" \ + --policy-arn "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" +``` + +After attaching the policy, wait a few seconds for the permissions to propagate through the AWS system. + +```bash +echo "Waiting for IAM role to propagate..." +sleep 10 +``` + +## Create function code + +Next, you'll create the code for your Lambda function. You can choose between Node.js or Python for this tutorial. + +**For Node.js** + +Create a file named `index.mjs` with the following content: + +```javascript +export const handler = async (event, context) => { + + const length = event.length; + const width = event.width; + let area = calculateArea(length, width); + console.log(`The area is ${area}`); + + console.log('CloudWatch log group: ', context.logGroupName); + + let data = { + "area": area, + }; + return JSON.stringify(data); + + function calculateArea(length, width) { + return length * width; + } +}; +``` + +This Node.js function takes an event object containing `length` and `width` parameters, calculates the area, and returns the result as a JSON string. 
+ +**For Python** + +Create a file named `lambda_function.py` with the following content: + +```python +import json +import logging + +logger = logging.getLogger() +logger.setLevel(logging.INFO) + +def lambda_handler(event, context): + + # Get the length and width parameters from the event object + length = event['length'] + width = event['width'] + + area = calculate_area(length, width) + print(f"The area is {area}") + + logger.info(f"CloudWatch logs group: {context.log_group_name}") + + # return the calculated area as a JSON string + data = {"area": area} + return json.dumps(data) + +def calculate_area(length, width): + return length*width +``` + +This Python function performs the same calculation as the Node.js version, taking an event object with `length` and `width` parameters and returning the calculated area. + +**Create a deployment package** + +Lambda requires your code to be packaged as a ZIP file. Create a deployment package containing your function code: + +```bash +# For Node.js +zip function.zip index.mjs + +# For Python +zip function.zip lambda_function.py +``` + +This command creates a ZIP file containing your function code. + +## Create a Lambda function + +Now you'll create the Lambda function using the deployment package and IAM role you created earlier. + +**For Node.js** + +```bash +FUNCTION_NAME="myLambdaFunction" +aws lambda create-function \ + --function-name "$FUNCTION_NAME" \ + --runtime nodejs22.x \ + --handler index.handler \ + --role "$ROLE_ARN" \ + --zip-file fileb://function.zip \ + --architectures x86_64 +``` + +**For Python** + +```bash +FUNCTION_NAME="myLambdaFunction" +aws lambda create-function \ + --function-name "$FUNCTION_NAME" \ + --runtime python3.13 \ + --handler lambda_function.lambda_handler \ + --role "$ROLE_ARN" \ + --zip-file fileb://function.zip \ + --architectures x86_64 +``` + +This command creates a Lambda function with the specified runtime, handler, and role. The `--zip-file` parameter specifies the deployment package containing your function code. + +After creating the function, wait for it to become active before proceeding to the next step. + +```bash +echo "Waiting for Lambda function to become active..." +sleep 10 +``` + +You can verify the function's status with the following command: + +```bash +aws lambda get-function --function-name "$FUNCTION_NAME" --query 'Configuration.State' --output text +``` + +The output should be "Active" before you proceed. + +## Test your Lambda function + +Now that your function is created, you'll create a test event and invoke the function. + +**Create a test event** + +Create a JSON file containing the test event data: + +```bash +cat > test-event.json << EOF +{ + "length": 6, + "width": 7 +} +EOF +``` + +This creates a file named `test-event.json` with the test event data. + +**Invoke the function** + +Invoke your Lambda function with the test event: + +```bash +aws lambda invoke \ + --function-name "$FUNCTION_NAME" \ + --payload fileb://test-event.json \ + output.json +``` + +This command invokes your Lambda function with the test event and saves the response to a file named `output.json`. + +**View the function response** + +Examine the function's response: + +```bash +cat output.json +``` + +You should see output similar to: + +```json +{"area": 42} +``` + +This confirms that your function successfully calculated the area of the rectangle (6 × 7 = 42). + +## View CloudWatch logs + +When your Lambda function executes, it generates logs that are sent to CloudWatch Logs. 
You can view these logs to monitor your function's execution and troubleshoot any issues. + +**Get the log group name** + +The log group for your Lambda function follows the naming pattern `/aws/lambda/[function-name]`: + +```bash +LOG_GROUP_NAME="/aws/lambda/$FUNCTION_NAME" +``` + +**List log streams** + +List the log streams for your function: + +```bash +aws logs describe-log-streams \ + --log-group-name "$LOG_GROUP_NAME" \ + --order-by LastEventTime \ + --descending \ + --limit 1 +``` + +This command lists the most recent log stream for your function. + +**View log events** + +View the log events from the most recent log stream: + +```bash +LOG_STREAM=$(aws logs describe-log-streams \ + --log-group-name "$LOG_GROUP_NAME" \ + --order-by LastEventTime \ + --descending \ + --limit 1 \ + --query 'logStreams[0].logStreamName' \ + --output text) + +aws logs get-log-events \ + --log-group-name "$LOG_GROUP_NAME" \ + --log-stream-name "$LOG_STREAM" +``` + +The log events will show details about your function's execution, including: +- The calculated area (42) +- The CloudWatch log group name +- Execution metrics like duration and memory usage + +## Clean up resources + +When you're finished with this tutorial, you should clean up the resources you created to avoid incurring additional charges. + +**Delete the Lambda function** + +```bash +aws lambda delete-function --function-name "$FUNCTION_NAME" +``` + +This command deletes your Lambda function. + +**Delete the CloudWatch log group** + +```bash +aws logs delete-log-group --log-group-name "$LOG_GROUP_NAME" +``` + +This command deletes the CloudWatch log group associated with your function. + +**Delete the IAM role** + +First, detach the policy from the role: + +```bash +aws iam detach-role-policy \ + --role-name "$ROLE_NAME" \ + --policy-arn "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" +``` + +Then delete the role: + +```bash +aws iam delete-role --role-name "$ROLE_NAME" +``` + +These commands clean up the IAM role you created for your Lambda function. + +**Remove temporary files** + +```bash +rm -f function.zip test-event.json output.json trust-policy.json +``` + +This command removes the temporary files created during this tutorial. + +## Going to production + +This tutorial is designed to help you learn the basics of AWS Lambda and the AWS CLI. If you're planning to use Lambda in a production environment, consider the following best practices: + +### Security considerations + +1. **Use custom IAM policies**: Instead of the managed `AWSLambdaBasicExecutionRole` policy, create a custom policy that grants only the specific permissions your function needs. + +2. **Implement input validation**: Add validation to your function code to handle unexpected or malicious inputs. + +3. **Set CloudWatch Logs retention**: Configure a retention policy for your CloudWatch Logs to manage storage costs and reduce exposure of potentially sensitive information. + +4. **Use environment variables**: Store configuration values as environment variables and encrypt sensitive values. + +For more information on Lambda security best practices, see [Security in AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/lambda-security.html). + +### Architecture considerations + +1. **Error handling**: Implement robust error handling in your function code. + +2. **Optimize memory allocation**: Test different memory configurations to find the optimal balance between performance and cost. + +3. 
**Consider cold starts**: Implement strategies to mitigate cold start latency, such as provisioned concurrency. + +4. **Implement monitoring and alerting**: Set up CloudWatch alarms to monitor your function's performance and errors. + +For more information on building production-ready serverless applications, see the [AWS Well-Architected Framework - Serverless Applications Lens](https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/welcome.html). + +## Next steps + +Now that you've created your first Lambda function using the AWS CLI, you can explore more advanced Lambda features: + +1. [Deploy Node.js Lambda functions with .zip file archives](https://docs.aws.amazon.com/lambda/latest/dg/nodejs-package.html) - Learn how to include dependencies in your function. +2. [Using an Amazon S3 trigger to invoke a Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html) - Configure your function to respond to S3 events. +3. [Using Lambda with API Gateway](https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway-tutorial.html) - Create a REST API that invokes your Lambda function. +4. [Using a Lambda function to access an Amazon RDS database](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-lambda-tutorial.html) - Connect your Lambda function to a database. +5. [Using an Amazon S3 trigger to create thumbnail images](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-tutorial.html) - Build a more complex application with Lambda and S3. diff --git a/tuts/019-lambda-gettingstarted/lambda-gettingstarted.sh b/tuts/019-lambda-gettingstarted/lambda-gettingstarted.sh new file mode 100755 index 00000000..e4410bac --- /dev/null +++ b/tuts/019-lambda-gettingstarted/lambda-gettingstarted.sh @@ -0,0 +1,330 @@ +#!/bin/bash + +# Lambda Getting Started Tutorial Script - Version 3 +# This script creates a Lambda function, tests it, and cleans up resources + +# Set up logging +LOG_FILE="lambda_tutorial_$(date +%Y%m%d_%H%M%S).log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +echo "Starting Lambda Getting Started Tutorial Script" +echo "Logging to $LOG_FILE" +echo "==============================================" + +# Function to handle errors +handle_error() { + echo "ERROR: $1" + echo "Resources created:" + if [ -n "$ROLE_NAME" ]; then echo "- IAM Role: $ROLE_NAME"; fi + if [ -n "$FUNCTION_NAME" ]; then echo "- Lambda Function: $FUNCTION_NAME"; fi + if [ -n "$LOG_GROUP_NAME" ]; then echo "- CloudWatch Log Group: $LOG_GROUP_NAME"; fi + + echo "Attempting to clean up resources..." + cleanup + exit 1 +} + +# Function to clean up resources +cleanup() { + echo "Cleaning up resources..." + + # Delete Lambda function if it exists + if [ -n "$FUNCTION_NAME" ]; then + echo "Deleting Lambda function: $FUNCTION_NAME" + aws lambda delete-function --function-name "$FUNCTION_NAME" || echo "Failed to delete Lambda function" + fi + + # Wait for Lambda function to be deleted before deleting the role + if [ -n "$FUNCTION_NAME" ]; then + echo "Waiting for Lambda function to be deleted..." + aws lambda get-function --function-name "$FUNCTION_NAME" 2>/dev/null + while [ $? 
-eq 0 ]; do + sleep 2 + aws lambda get-function --function-name "$FUNCTION_NAME" 2>/dev/null + done + fi + + # Delete CloudWatch log group if it exists + if [ -n "$LOG_GROUP_NAME" ]; then + echo "Deleting CloudWatch log group: $LOG_GROUP_NAME" + aws logs delete-log-group --log-group-name "$LOG_GROUP_NAME" 2>/dev/null || echo "Log group not found or already deleted" + fi + + # Delete IAM role if it exists + if [ -n "$ROLE_NAME" ]; then + echo "Detaching policy from role: $ROLE_NAME" + aws iam detach-role-policy --role-name "$ROLE_NAME" --policy-arn "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" || echo "Failed to detach policy" + + echo "Deleting IAM role: $ROLE_NAME" + aws iam delete-role --role-name "$ROLE_NAME" || echo "Failed to delete IAM role" + fi + + # Remove temporary files + rm -f function.zip test-event.json output.json trust-policy.json 2>/dev/null + + echo "Cleanup completed" +} + +# Function to prompt for runtime choice +choose_runtime() { + echo "" + echo "==============================================" + echo "CHOOSE RUNTIME" + echo "==============================================" + echo "Select a runtime for your Lambda function:" + echo "1) Node.js 22.x" + echo "2) Python 3.13" + echo "Enter your choice (1 or 2): " + read -r RUNTIME_CHOICE + + if [ "$RUNTIME_CHOICE" = "1" ]; then + RUNTIME="nodejs22.x" + HANDLER="index.handler" + CODE_FILE="index.mjs" + echo "You selected Node.js 22.x" + elif [ "$RUNTIME_CHOICE" = "2" ]; then + RUNTIME="python3.13" + HANDLER="lambda_function.lambda_handler" + CODE_FILE="lambda_function.py" + echo "You selected Python 3.13" + else + echo "Invalid choice. Defaulting to Node.js 22.x" + RUNTIME="nodejs22.x" + HANDLER="index.handler" + CODE_FILE="index.mjs" + fi +} + +# Function to wait for Lambda function to be active +wait_for_function_active() { + local function_name=$1 + local max_attempts=30 + local attempt=1 + local state="" + + echo "Waiting for Lambda function to become active..." + + while [ $attempt -le $max_attempts ]; do + state=$(aws lambda get-function --function-name "$function_name" --query 'Configuration.State' --output text 2>/dev/null) + + if [ "$state" = "Active" ]; then + echo "Lambda function is now active" + return 0 + fi + + echo "Function state: $state (attempt $attempt/$max_attempts)" + sleep 2 + ((attempt++)) + done + + echo "Timed out waiting for function to become active" + return 1 +} + +# Set variables +FUNCTION_NAME="myLambdaFunction" +ROLE_NAME="lambda-tutorial-role-$(date +%s)" +LOG_GROUP_NAME="/aws/lambda/$FUNCTION_NAME" + +# Choose runtime +choose_runtime + +echo "Creating resources for Lambda tutorial..." 
+ +# Step 1: Create IAM role for Lambda +echo "Creating IAM role: $ROLE_NAME" + +# Create trust policy document +cat > trust-policy.json << EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "lambda.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] +} +EOF + +# Create IAM role +ROLE_ARN=$(aws iam create-role \ + --role-name "$ROLE_NAME" \ + --assume-role-policy-document file://trust-policy.json \ + --query 'Role.Arn' \ + --output text) + +if [ -z "$ROLE_ARN" ]; then + handle_error "Failed to create IAM role" +fi + +echo "Created IAM role: $ROLE_ARN" + +# Attach Lambda basic execution policy to the role +echo "Attaching Lambda basic execution policy to role" +aws iam attach-role-policy \ + --role-name "$ROLE_NAME" \ + --policy-arn "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" || handle_error "Failed to attach policy to role" + +# Wait for role to propagate +echo "Waiting for IAM role to propagate..." +sleep 10 + +# Step 2: Create function code +echo "Creating function code for $RUNTIME" + +if [ "$RUNTIME" = "nodejs22.x" ]; then + # Create Node.js function code + cat > index.mjs << EOF +export const handler = async (event, context) => { + + const length = event.length; + const width = event.width; + let area = calculateArea(length, width); + console.log(\`The area is \${area}\`); + + console.log('CloudWatch log group: ', context.logGroupName); + + let data = { + "area": area, + }; + return JSON.stringify(data); + + function calculateArea(length, width) { + return length * width; + } +}; +EOF +else + # Create Python function code + cat > lambda_function.py << EOF +import json +import logging + +logger = logging.getLogger() +logger.setLevel(logging.INFO) + +def lambda_handler(event, context): + + # Get the length and width parameters from the event object + length = event['length'] + width = event['width'] + + area = calculate_area(length, width) + print(f"The area is {area}") + + logger.info(f"CloudWatch logs group: {context.log_group_name}") + + # return the calculated area as a JSON string + data = {"area": area} + return json.dumps(data) + +def calculate_area(length, width): + return length*width +EOF +fi + +# Create ZIP deployment package +echo "Creating deployment package" +zip function.zip "$CODE_FILE" || handle_error "Failed to create ZIP file" + +# Step 3: Create Lambda function +echo "Creating Lambda function: $FUNCTION_NAME" +FUNCTION_ARN=$(aws lambda create-function \ + --function-name "$FUNCTION_NAME" \ + --runtime "$RUNTIME" \ + --handler "$HANDLER" \ + --role "$ROLE_ARN" \ + --zip-file fileb://function.zip \ + --architectures x86_64 \ + --query 'FunctionArn' \ + --output text) + +if [ -z "$FUNCTION_ARN" ]; then + handle_error "Failed to create Lambda function" +fi + +echo "Created Lambda function: $FUNCTION_ARN" + +# Wait for the function to become active +wait_for_function_active "$FUNCTION_NAME" || handle_error "Function did not become active in time" + +# Step 4: Create test event +echo "Creating test event" +cat > test-event.json << EOF +{ + "length": 6, + "width": 7 +} +EOF + +# Step 5: Invoke the function +echo "Invoking Lambda function with test event" +aws lambda invoke \ + --function-name "$FUNCTION_NAME" \ + --payload fileb://test-event.json \ + output.json || handle_error "Failed to invoke Lambda function" + +echo "Function response:" +cat output.json +echo "" + +# Step 6: Wait for logs to be available +echo "Waiting for logs to be available..." 
+sleep 10 + +echo "Getting CloudWatch logs for function" +LOG_STREAMS=$(aws logs describe-log-streams \ + --log-group-name "$LOG_GROUP_NAME" \ + --order-by LastEventTime \ + --descending \ + --limit 1 \ + --query 'logStreams[0].logStreamName' \ + --output text 2>/dev/null) + +if [ -n "$LOG_STREAMS" ] && [ "$LOG_STREAMS" != "None" ]; then + echo "Log stream found: $LOG_STREAMS" + echo "Log events:" + aws logs get-log-events \ + --log-group-name "$LOG_GROUP_NAME" \ + --log-stream-name "$LOG_STREAMS" \ + --query 'events[*].message' \ + --output text +else + echo "No log streams found yet. Logs may take a moment to appear." + echo "You can check logs later in the CloudWatch console at:" + echo "https://console.aws.amazon.com/cloudwatch/home#logsV2:log-groups/log-group/$LOG_GROUP_NAME" +fi + +# Display summary of created resources +echo "" +echo "==============================================" +echo "RESOURCES CREATED" +echo "==============================================" +echo "- IAM Role: $ROLE_NAME" +echo "- Lambda Function: $FUNCTION_NAME" +echo "- CloudWatch Log Group: $LOG_GROUP_NAME" + +# Prompt for cleanup +echo "" +echo "==============================================" +echo "CLEANUP CONFIRMATION" +echo "==============================================" +echo "Do you want to clean up all created resources? (y/n): " +read -r CLEANUP_CHOICE + +if [[ "$CLEANUP_CHOICE" =~ ^[Yy] ]]; then + cleanup +else + echo "Resources were not cleaned up. You can manually delete them later." + echo "To clean up resources manually:" + echo "1. Delete Lambda function: aws lambda delete-function --function-name $FUNCTION_NAME" + echo "2. Delete CloudWatch log group: aws logs delete-log-group --log-group-name $LOG_GROUP_NAME" + echo "3. Detach policy: aws iam detach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" + echo "4. Delete IAM role: aws iam delete-role --role-name $ROLE_NAME" +fi + +echo "Script completed successfully" diff --git a/tuts/021-cloudformation-gs/README.md b/tuts/021-cloudformation-gs/README.md new file mode 100644 index 00000000..be80d11f --- /dev/null +++ b/tuts/021-cloudformation-gs/README.md @@ -0,0 +1,5 @@ +# AWS CloudFormation getting started + +This tutorial introduces AWS CloudFormation, demonstrating how to use infrastructure as code to provision and manage AWS resources using templates in a predictable and repeatable way. + +You can either run the automated script `cloudformation-gs.sh` to execute all the steps automatically, or follow the step-by-step instructions in the `cloudformation-gs.md` tutorial to understand each operation in detail. diff --git a/tuts/021-cloudformation-gs/cloudformation-gs.md b/tuts/021-cloudformation-gs/cloudformation-gs.md new file mode 100644 index 00000000..32a95039 --- /dev/null +++ b/tuts/021-cloudformation-gs/cloudformation-gs.md @@ -0,0 +1,414 @@ +# Creating your first CloudFormation stack using the AWS CLI + +This tutorial walks you through creating your first CloudFormation stack using the AWS Command Line Interface (AWS CLI). By following this tutorial, you'll learn how to provision basic AWS resources, monitor stack events, and generate outputs. 
+ +**Alternative title:** Getting started with AWS CloudFormation and the AWS CLI + +## Topics + +* [Prerequisites](#prerequisites) +* [Create a CloudFormation template](#create-a-cloudformation-template) +* [Validate and deploy the template](#validate-and-deploy-the-template) +* [Monitor stack creation](#monitor-stack-creation) +* [View stack resources and outputs](#view-stack-resources-and-outputs) +* [Test the web server](#test-the-web-server) +* [Troubleshoot common issues](#troubleshoot-common-issues) +* [Clean up resources](#clean-up-resources) +* [Going to production](#going-to-production) +* [Next steps](#next-steps) + +## Prerequisites + +Before you begin this tutorial, make sure you have the following: + +1. The AWS CLI. If you need to install it, follow the [AWS CLI installation guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). +2. Configured your AWS CLI with appropriate credentials. Run `aws configure` if you haven't set up your credentials yet. +3. Access to an AWS account with an IAM user or role that has permissions to use Amazon EC2, Amazon S3, and CloudFormation, or administrative user access. +4. A Virtual Private Cloud (VPC) that has access to the internet. This walkthrough requires a default VPC, which comes automatically with newer AWS accounts. + +**Time to complete:** Approximately 30 minutes + +**Cost estimate:** The resources created in this tutorial will cost approximately $0.0116 per hour for the t2.micro EC2 instance. If you're within your first 12 months of AWS account creation and haven't exhausted your Free Tier benefits, the t2.micro instance would be free (up to 750 hours per month). The tutorial includes cleanup instructions to delete all resources after completion to minimize or eliminate any charges. + +## Create a CloudFormation template + +CloudFormation uses templates to define the resources you want to provision. In this tutorial, you'll create a template that provisions an EC2 instance running a simple web server and a security group to control access to it. + +**Create the template file** + +Create a file named `webserver-template.yaml` with the following content: + +```yaml +AWSTemplateFormatVersion: 2010-09-09 +Description: CloudFormation Template for WebServer with Security Group and EC2 Instance + +Parameters: + LatestAmiId: + Description: The latest Amazon Linux 2023 AMI from the Parameter Store + Type: 'AWS::SSM::Parameter::Value' + Default: '/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64' + + InstanceType: + Description: WebServer EC2 instance type + Type: String + Default: t2.micro + AllowedValues: + - t3.micro + - t2.micro + ConstraintDescription: must be a valid EC2 instance type. + + MyIP: + Description: Your IP address in CIDR format (e.g. 203.0.113.1/32). + Type: String + MinLength: '9' + MaxLength: '18' + Default: 0.0.0.0/0 + AllowedPattern: '^(\d{1,3}\.){3}\d{1,3}/\d{1,2}$' + ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x. 
+ +Resources: + WebServerSecurityGroup: + Type: AWS::EC2::SecurityGroup + Properties: + GroupDescription: Allow HTTP access via my IP address + SecurityGroupIngress: + - IpProtocol: tcp + FromPort: 80 + ToPort: 80 + CidrIp: !Ref MyIP + + WebServer: + Type: AWS::EC2::Instance + Properties: + ImageId: !Ref LatestAmiId + InstanceType: !Ref InstanceType + SecurityGroupIds: + - !Ref WebServerSecurityGroup + UserData: !Base64 | + #!/bin/bash + dnf update -y + dnf install -y httpd + systemctl start httpd + systemctl enable httpd + echo "
<html><body><h1>Hello World!</h1></body></html>
" > /var/www/html/index.html + +Outputs: + WebsiteURL: + Value: !Join + - '' + - - http:// + - !GetAtt WebServer.PublicDnsName + Description: Website URL +``` + +This template defines a simple web server infrastructure with the following components: + +* **Parameters**: Values that can be passed to the template when creating the stack, including the AMI ID, instance type, and your IP address. +* **Resources**: The AWS resources to create, including a security group that allows HTTP access from your IP address and an EC2 instance running Apache HTTP Server. +* **Outputs**: Values that are returned after the stack is created, including the URL of the web server. + +Note that we're using Amazon Linux 2023, the latest version of Amazon Linux, which includes several improvements over Amazon Linux 2. + +## Validate and deploy the template + +Before deploying your template, it's a good practice to validate it to ensure it's correctly formatted and doesn't contain any errors. + +**Validate the template** + +Run the following command to validate your template: + +```bash +aws cloudformation validate-template --template-body file://webserver-template.yaml +``` + +If the template is valid, you'll see output showing the parameters defined in the template. If there are any errors, the command will display error messages to help you fix them. + +**Get your public IP address** + +To restrict access to your web server, you'll need to specify your public IP address. Run the following command to get your IP address: + +```bash +MY_IP=$(curl -s https://checkip.amazonaws.com) +MY_IP="${MY_IP}/32" +echo "Your public IP address: $MY_IP" +``` + +This command retrieves your public IP address and formats it with a `/32` suffix, which in CIDR notation means a single IP address. + +**Create the CloudFormation stack** + +Now you can create the stack using the AWS CLI: + +```bash +aws cloudformation create-stack \ + --stack-name MyTestStack \ + --template-body file://webserver-template.yaml \ + --parameters \ + ParameterKey=InstanceType,ParameterValue=t2.micro \ + ParameterKey=MyIP,ParameterValue="$MY_IP" +``` + +The command returns a stack ID, which is the Amazon Resource Name (ARN) that uniquely identifies the stack. It will look something like this: + +```json +{ + "StackId": "arn:aws:cloudformation:us-east-2:123456789012:stack/MyTestStack/abcd1234-56a0-11f0-96d7-02f9abcd1234" +} +``` + +## Monitor stack creation + +After you create the stack, CloudFormation begins creating the resources specified in the template. You can monitor the progress of the stack creation using the AWS CLI. + +**Check stack status** + +To check the status of your stack, run the following command: + +```bash +aws cloudformation describe-stacks --stack-name MyTestStack +``` + +The output includes detailed information about the stack, including its status. Look for the `StackStatus` field, which will be `CREATE_IN_PROGRESS` while the stack is being created. + +**View stack events** + +To see detailed events during the stack creation process, run: + +```bash +aws cloudformation describe-stack-events --stack-name MyTestStack +``` + +This command returns a list of events in reverse chronological order, with the most recent events first. You'll see events for the start of the stack creation process and for the beginning and completion of the creation of each resource. 
+ +**Wait for stack creation to complete** + +You can use the `wait` command to pause execution until the stack creation is complete: + +```bash +aws cloudformation wait stack-create-complete --stack-name MyTestStack +``` + +This command doesn't produce any output but will return only when the stack creation is complete or has failed. + +## View stack resources and outputs + +Once the stack is created, you can view the resources that were created and the outputs that were generated. + +**List stack resources** + +To see the resources created by the stack, run: + +```bash +aws cloudformation list-stack-resources --stack-name MyTestStack +``` + +The output will show the logical ID, physical ID, type, and status of each resource in the stack. It will look something like this: + +``` +-------------------------------------------------------------------------- +| ListStackResources | ++------------------------+-------------------+---------------------------+ +| LogicalID | Status | Type | ++------------------------+-------------------+---------------------------+ +| WebServer | CREATE_COMPLETE | AWS::EC2::Instance | +| WebServerSecurityGroup| CREATE_COMPLETE | AWS::EC2::SecurityGroup | ++------------------------+-------------------+---------------------------+ +``` + +**Get stack outputs** + +To retrieve the outputs from the stack, including the WebsiteURL, run: + +```bash +aws cloudformation describe-stacks --stack-name MyTestStack --query "Stacks[0].Outputs" +``` + +The output will include the WebsiteURL, which you'll use to access your web server: + +```json +[ + { + "OutputKey": "WebsiteURL", + "OutputValue": "http://ec2-203-0-113-75.us-east-2.compute.amazonaws.com", + "Description": "Website URL" + } +] +``` + +You can extract just the WebsiteURL value using this command: + +```bash +WEBSITE_URL=$(aws cloudformation describe-stacks --stack-name MyTestStack --query "Stacks[0].Outputs[?OutputKey=='WebsiteURL'].OutputValue" --output text) +echo "WebsiteURL: $WEBSITE_URL" +``` + +## Test the web server + +Now that your stack is created and you have the WebsiteURL, you can test the web server. + +**Access the web server** + +Open a web browser and navigate to the WebsiteURL you obtained in the previous step. You should see a simple "Hello World!" message displayed in the browser. + +You can also test the connection using the command line: + +```bash +curl -s $WEBSITE_URL +``` + +This command should return the HTML content of the web page: + +```html +
<h1>Hello World!</h1>
+``` + +If the web server isn't responding immediately, wait a few minutes for the EC2 instance to finish initializing and for the Apache HTTP Server to start. + +## Troubleshoot common issues + +If you encounter issues during the stack creation or when accessing the web server, here are some common problems and solutions. + +**No default VPC available** + +The template in this walkthrough requires a default VPC. If your stack creation fails because of VPC or subnet availability errors, you might not have a default VPC in your account. You have the following options: + +1. Create a new default VPC: + +```bash +aws ec2 create-default-vpc +``` + +2. Modify the template to specify a subnet. Add the following parameter to the template: + +```yaml +SubnetId: + Description: The subnet ID to launch the instance into + Type: AWS::EC2::Subnet::Id +``` + +Then, update the `WebServer` resource to include the subnet ID: + +```yaml +WebServer: + Type: AWS::EC2::Instance + Properties: + ImageId: !Ref LatestAmiId + InstanceType: !Ref InstanceType + SecurityGroupIds: + - !Ref WebServerSecurityGroup + SubnetId: !Ref SubnetId + UserData: !Base64 | + #!/bin/bash + dnf update -y + dnf install -y httpd + systemctl start httpd + systemctl enable httpd + echo "
        <h1>Hello World!</h1>
" > /var/www/html/index.html +``` + +When creating the stack, you'll need to specify a subnet that has internet access: + +```bash +# List available subnets +aws ec2 describe-subnets --query "Subnets[*].{SubnetId:SubnetId,VpcId:VpcId,AvailabilityZone:AvailabilityZone,CidrBlock:CidrBlock}" + +# Create stack with subnet specified +aws cloudformation create-stack \ + --stack-name MyTestStack \ + --template-body file://webserver-template-with-subnet.yaml \ + --parameters \ + ParameterKey=InstanceType,ParameterValue=t2.micro \ + ParameterKey=MyIP,ParameterValue="$MY_IP" \ + ParameterKey=SubnetId,ParameterValue=subnet-1234abcd +``` + +## Clean up resources + +To avoid incurring charges for resources you no longer need, you should delete the stack and its resources. + +**Delete the stack** + +Run the following command to delete the stack: + +```bash +aws cloudformation delete-stack --stack-name MyTestStack +``` + +This command doesn't produce any output. To verify that the stack is being deleted, you can check its status: + +```bash +aws cloudformation describe-stacks --stack-name MyTestStack +``` + +The `StackStatus` field will show `DELETE_IN_PROGRESS` while the stack is being deleted. + +**Wait for stack deletion to complete** + +You can use the `wait` command to pause execution until the stack deletion is complete: + +```bash +aws cloudformation wait stack-delete-complete --stack-name MyTestStack +``` + +Once the stack is deleted, the `describe-stacks` command will return an error indicating that the stack doesn't exist, which confirms it has been successfully deleted. + +**Clean up local files** + +Finally, you can remove the template file you created: + +```bash +rm -f webserver-template.yaml +``` + +## Going to production + +This tutorial is designed to help you learn the basics of AWS CloudFormation using the AWS CLI. The architecture and configuration used in this tutorial are intentionally simple and are not suitable for production environments. If you're planning to deploy a similar solution in a production environment, consider the following improvements: + +### Security Improvements + +1. **Use HTTPS instead of HTTP** + - Set up HTTPS using AWS Certificate Manager + - Configure the security group to allow traffic on port 443 + - Redirect HTTP traffic to HTTPS + +2. **Implement proper IAM roles** + - Create an IAM role for the EC2 instance with least-privilege permissions + - Use IAM roles instead of access keys for AWS service access + +3. **Enhance network security** + - Use private subnets for instances that don't need direct internet access + - Implement network ACLs for additional network security + - Consider using AWS WAF to protect against common web exploits + +### Architecture Improvements + +1. **Implement high availability** + - Deploy instances across multiple Availability Zones + - Use an Application Load Balancer to distribute traffic + - Implement Auto Scaling to handle varying loads + +2. **Add monitoring and logging** + - Set up Amazon CloudWatch for monitoring and alerting + - Configure CloudWatch Logs for centralized logging + - Implement AWS X-Ray for distributed tracing + +3. 
**Optimize for performance and cost** + - Use Amazon CloudFront for content delivery + - Consider using Amazon S3 for static content + - Implement caching strategies + +For more information on building production-ready architectures, refer to: + +- [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/) +- [AWS Security Best Practices](https://aws.amazon.com/architecture/security-identity-compliance/) +- [AWS Architecture Center](https://aws.amazon.com/architecture/) + +## Next steps + +Congratulations! You've successfully created a CloudFormation stack, monitored its creation, and used its output. Here are some suggestions for continuing your CloudFormation journey: + +1. Learn more about templates so that you can create your own. For more information, see [Working with CloudFormation templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html). +2. Explore [CloudFormation template parameters](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html) to make your templates more flexible and reusable. +3. Learn about [CloudFormation resource attributes](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-product-attribute-reference.html) to control resource behavior and dependencies. +4. Discover how to use [CloudFormation change sets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html) to preview and manage stack updates. +5. Explore [CloudFormation stack policies](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html) to protect resources from unintended updates or deletions. diff --git a/tuts/021-cloudformation-gs/cloudformation-gs.sh b/tuts/021-cloudformation-gs/cloudformation-gs.sh new file mode 100755 index 00000000..9196263e --- /dev/null +++ b/tuts/021-cloudformation-gs/cloudformation-gs.sh @@ -0,0 +1,263 @@ +#!/bin/bash + +# CloudFormation Getting Started Script +# This script creates a CloudFormation stack with a web server and security group, +# monitors the stack creation, and provides cleanup options. + +# Set up logging +LOG_FILE="cloudformation-tutorial.log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +echo "===================================================" +echo "AWS CloudFormation Getting Started Tutorial" +echo "===================================================" +echo "This script will create a CloudFormation stack with:" +echo "- An EC2 instance running a simple web server" +echo "- A security group allowing HTTP access from your IP" +echo "" +echo "Starting at: $(date)" +echo "" + +# Function to clean up resources +cleanup() { + echo "" + echo "===================================================" + echo "CLEANING UP RESOURCES" + echo "===================================================" + + if [ -n "$STACK_NAME" ]; then + echo "Deleting CloudFormation stack: $STACK_NAME" + aws cloudformation delete-stack --stack-name "$STACK_NAME" + + echo "Waiting for stack deletion to complete..." + aws cloudformation wait stack-delete-complete --stack-name "$STACK_NAME" + + echo "Stack deletion complete." 
    fi

    if [ -f "$TEMPLATE_FILE" ]; then
        echo "Removing local template file: $TEMPLATE_FILE"
        rm -f "$TEMPLATE_FILE"
    fi

    echo "Cleanup completed at: $(date)"
}

# Function to handle errors
handle_error() {
    echo ""
    echo "==================================================="
    echo "ERROR: $1"
    echo "==================================================="
    echo "Resources created before error:"
    if [ -n "$STACK_NAME" ]; then
        echo "- CloudFormation stack: $STACK_NAME"
    fi
    echo ""

    echo "Would you like to clean up these resources? (y/n): "
    read -r CLEANUP_CHOICE

    if [[ "$CLEANUP_CHOICE" =~ ^[Yy]$ ]]; then
        cleanup
    else
        echo "Resources were not cleaned up. You may need to delete them manually."
    fi

    exit 1
}

# Set up trap for script interruption
trap 'handle_error "Script interrupted"' INT TERM

# Set the stack name and template file name
STACK_NAME="MyTestStack"
TEMPLATE_FILE="webserver-template.yaml"

# Step 1: Create the CloudFormation template file
echo "Creating CloudFormation template file: $TEMPLATE_FILE"
cat > "$TEMPLATE_FILE" << 'EOF'
AWSTemplateFormatVersion: 2010-09-09
Description: CloudFormation Template for WebServer with Security Group and EC2 Instance

Parameters:
  LatestAmiId:
    Description: The latest Amazon Linux 2 AMI from the Parameter Store
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'

  InstanceType:
    Description: WebServer EC2 instance type
    Type: String
    Default: t2.micro
    AllowedValues:
      - t3.micro
      - t2.micro
    ConstraintDescription: must be a valid EC2 instance type.

  MyIP:
    Description: Your IP address in CIDR format (e.g. 203.0.113.1/32).
    Type: String
    MinLength: '9'
    MaxLength: '18'
    Default: 0.0.0.0/0
    AllowedPattern: '^(\d{1,3}\.){3}\d{1,3}/\d{1,2}$'
    ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.

Resources:
  WebServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP access via my IP address
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: !Ref MyIP

  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref LatestAmiId
      InstanceType: !Ref InstanceType
      SecurityGroupIds:
        - !Ref WebServerSecurityGroup
      UserData: !Base64 |
        #!/bin/bash
        yum update -y
        yum install -y httpd
        systemctl start httpd
        systemctl enable httpd
        echo "
        <h1>Hello World!</h1>
" > /var/www/html/index.html + +Outputs: + WebsiteURL: + Value: !Join + - '' + - - http:// + - !GetAtt WebServer.PublicDnsName + Description: Website URL +EOF + +if [ ! -f "$TEMPLATE_FILE" ]; then + handle_error "Failed to create template file" +fi + +# Step 2: Validate the template +echo "" +echo "Validating CloudFormation template..." +VALIDATION_RESULT=$(aws cloudformation validate-template --template-body "file://$TEMPLATE_FILE" 2>&1) +if [ $? -ne 0 ]; then + handle_error "Template validation failed: $VALIDATION_RESULT" +fi +echo "Template validation successful." + +# Step 3: Get the user's public IP address +echo "" +echo "Retrieving your public IP address..." +MY_IP=$(curl -s https://checkip.amazonaws.com) +if [ -z "$MY_IP" ]; then + handle_error "Failed to retrieve public IP address" +fi +MY_IP="${MY_IP}/32" +echo "Your public IP address: $MY_IP" + +# Step 4: Create the CloudFormation stack +echo "" +echo "Creating CloudFormation stack: $STACK_NAME" +echo "This will create an EC2 instance and security group." +CREATE_RESULT=$(aws cloudformation create-stack \ + --stack-name "$STACK_NAME" \ + --template-body "file://$TEMPLATE_FILE" \ + --parameters \ + ParameterKey=InstanceType,ParameterValue=t2.micro \ + ParameterKey=MyIP,ParameterValue="$MY_IP" \ + --output text 2>&1) + +if [ $? -ne 0 ]; then + handle_error "Stack creation failed: $CREATE_RESULT" +fi + +STACK_ID=$(echo "$CREATE_RESULT" | tr -d '\r\n') +echo "Stack creation initiated. Stack ID: $STACK_ID" + +# Step 5: Monitor stack creation +echo "" +echo "Monitoring stack creation..." +echo "This may take a few minutes." + +# Wait for stack creation to complete +aws cloudformation wait stack-create-complete --stack-name "$STACK_NAME" +if [ $? -ne 0 ]; then + # Check if the stack exists and get its status + STACK_STATUS=$(aws cloudformation describe-stacks --stack-name "$STACK_NAME" --query "Stacks[0].StackStatus" --output text 2>/dev/null) + if [ $? -ne 0 ] || [ "$STACK_STATUS" == "ROLLBACK_COMPLETE" ] || [ "$STACK_STATUS" == "ROLLBACK_IN_PROGRESS" ]; then + handle_error "Stack creation failed. Status: $STACK_STATUS" + fi +fi + +echo "Stack creation completed successfully." + +# Step 6: List stack resources +echo "" +echo "Resources created by the stack:" +aws cloudformation list-stack-resources --stack-name "$STACK_NAME" --query "StackResourceSummaries[*].{LogicalID:LogicalResourceId, Type:ResourceType, Status:ResourceStatus}" --output table + +# Step 7: Get stack outputs +echo "" +echo "Stack outputs:" +OUTPUTS=$(aws cloudformation describe-stacks --stack-name "$STACK_NAME" --query "Stacks[0].Outputs" --output json) +if [ $? -ne 0 ]; then + handle_error "Failed to retrieve stack outputs" +fi + +# Extract the WebsiteURL +WEBSITE_URL=$(aws cloudformation describe-stacks --stack-name "$STACK_NAME" --query "Stacks[0].Outputs[?OutputKey=='WebsiteURL'].OutputValue" --output text) +if [ -z "$WEBSITE_URL" ]; then + handle_error "Failed to extract WebsiteURL from stack outputs" +fi + +echo "WebsiteURL: $WEBSITE_URL" +echo "" +echo "You can access the web server by opening the above URL in your browser." +echo "You should see a simple 'Hello World!' message." + +# Step 8: Test the connection via CLI +echo "" +echo "Testing connection to the web server..." +HTTP_RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" "$WEBSITE_URL") +if [ "$HTTP_RESPONSE" == "200" ]; then + echo "Connection successful! 
HTTP status code: $HTTP_RESPONSE" +else + echo "Warning: Connection test returned HTTP status code: $HTTP_RESPONSE" + echo "The web server might not be ready yet or there might be connectivity issues." +fi + +# Step 9: Prompt for cleanup +echo "" +echo "===================================================" +echo "CLEANUP CONFIRMATION" +echo "===================================================" +echo "Resources created:" +echo "- CloudFormation stack: $STACK_NAME" +echo " - EC2 instance" +echo " - Security group" +echo "" +echo "Do you want to clean up all created resources? (y/n): " +read -r CLEANUP_CHOICE + +if [[ "$CLEANUP_CHOICE" =~ ^[Yy]$ ]]; then + cleanup +else + echo "" + echo "Resources were not cleaned up. You can delete them later with:" + echo "aws cloudformation delete-stack --stack-name $STACK_NAME" + echo "" + echo "Note: You may be charged for AWS resources as long as they exist." +fi + +echo "" +echo "===================================================" +echo "Tutorial completed at: $(date)" +echo "Log file: $LOG_FILE" +echo "===================================================" diff --git a/tuts/038-redshift-serverless/README.md b/tuts/038-redshift-serverless/README.md new file mode 100644 index 00000000..4629a017 --- /dev/null +++ b/tuts/038-redshift-serverless/README.md @@ -0,0 +1,5 @@ +# Amazon Redshift serverless + +This tutorial demonstrates how to set up and use Amazon Redshift Serverless, a serverless data warehouse service that automatically scales compute capacity and eliminates the need to manage infrastructure. + +You can either run the automated script `redshift-serverless.sh` to execute all the steps automatically, or follow the step-by-step instructions in the `redshift-serverless.md` tutorial to understand each operation in detail. diff --git a/tuts/038-redshift-serverless/redshift-serverless.md b/tuts/038-redshift-serverless/redshift-serverless.md new file mode 100644 index 00000000..35f6bdf8 --- /dev/null +++ b/tuts/038-redshift-serverless/redshift-serverless.md @@ -0,0 +1,492 @@ +# Getting started with Amazon Redshift Serverless using the AWS CLI + +This tutorial guides you through setting up and using Amazon Redshift Serverless with the AWS Command Line Interface (AWS CLI). You'll learn how to create serverless resources, load sample data, and run queries against your data warehouse. + +## Topics + +* [Prerequisites](#prerequisites) +* [Creating an IAM role for Amazon S3 access](#creating-an-iam-role-for-amazon-s3-access) +* [Creating a Redshift Serverless namespace and workgroup](#creating-a-redshift-serverless-namespace-and-workgroup) +* [Creating tables and loading sample data](#creating-tables-and-loading-sample-data) +* [Running queries on your data](#running-queries-on-your-data) +* [Cleaning up resources](#cleaning-up-resources) +* [Going to production](#going-to-production) +* [Next steps](#next-steps) + +## Prerequisites + +Before you begin this tutorial, make sure you have the following: + +1. The AWS CLI. If you need to install it, follow the [AWS CLI installation guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). +2. Configured your AWS CLI with appropriate credentials. Run `aws configure` if you haven't set up your credentials yet. +3. Basic familiarity with SQL and database concepts. +4. Sufficient permissions to create and manage Redshift Serverless resources, IAM roles, and access Amazon S3 in your AWS account. 
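
Before creating any resources, you may want to confirm that your VPC meets the subnet requirements described in the note below. One way to list the subnets and Availability Zones visible to your account:

```
aws ec2 describe-subnets \
  --query "Subnets[].{Subnet:SubnetId, AZ:AvailabilityZone, VPC:VpcId}" \
  --output table
```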

Amazon Redshift Serverless requires an Amazon VPC with at least three subnets in three different Availability Zones, and at least three free IP addresses. Make sure your AWS account has a VPC that meets these requirements before proceeding.

This tutorial will take approximately 30-45 minutes to complete.

### Cost information

The resources you create in this tutorial will incur costs while they exist. The primary cost driver is the Redshift Serverless compute capacity:

- Redshift Serverless with 8 RPUs: Approximately $3.00 per hour
- Storage costs: Minimal for this tutorial (approximately $0.024 per GB-month)

The total cost for completing this tutorial should be less than $3.00 if you follow the cleanup instructions. If you leave the resources running, you could incur charges of approximately $72.00 per day.

For current pricing information, see [Amazon Redshift Serverless pricing](https://aws.amazon.com/redshift/serverless/pricing/).

## Creating an IAM role for Amazon S3 access

To load data from Amazon S3 into Redshift Serverless, you need to create an IAM role with the necessary permissions. This role allows Redshift Serverless to access objects in the S3 bucket.

First, let's create a trust policy document that allows Redshift Serverless to assume the role:

```
cat > redshift-trust-policy.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "redshift-serverless.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
```

This trust policy specifies that the Redshift Serverless service can assume this role.

Next, create a policy document that grants access to the S3 bucket containing the sample data:

```
cat > redshift-s3-policy.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::redshift-downloads",
        "arn:aws:s3:::redshift-downloads/*"
      ]
    }
  ]
}
EOF
```

This policy grants read-only access to `redshift-downloads`, the public bucket that hosts the Amazon Redshift sample data. The same bucket name is used in the COPY commands later in this tutorial, so the role can read exactly the objects it needs and nothing more.

Now, create the IAM role using the trust policy:

```
aws iam create-role --role-name RedshiftServerlessS3Role --assume-role-policy-document file://redshift-trust-policy.json
```

The command returns details about the newly created role, including its Amazon Resource Name (ARN).

Attach the S3 access policy to the role:

```
aws iam put-role-policy --role-name RedshiftServerlessS3Role --policy-name S3Access --policy-document file://redshift-s3-policy.json
```

Finally, store the role ARN in a variable for later use:

```
ROLE_ARN=$(aws iam get-role --role-name RedshiftServerlessS3Role --query 'Role.Arn' --output text)
echo "Role ARN: $ROLE_ARN"
```

The role ARN will be used when loading data from S3 into your Redshift Serverless database.

## Creating a Redshift Serverless namespace and workgroup

Amazon Redshift Serverless organizes resources into namespaces and workgroups:

- A namespace is a collection of database objects and users
- A workgroup is a collection of compute resources

Let's create a namespace first.
For security purposes, we'll generate a strong password instead of hardcoding one: + +``` +ADMIN_PASSWORD=$(openssl rand -base64 12) +echo "Generated password: $ADMIN_PASSWORD" +``` + +Make sure to save this password securely, as you'll need it to connect to your database. + +Now create the namespace: + +``` +aws redshift-serverless create-namespace \ + --namespace-name default-namespace \ + --admin-username admin \ + --admin-user-password "$ADMIN_PASSWORD" \ + --db-name dev +``` + +This command creates a namespace named "default-namespace" with an admin user and a database named "dev". + +Wait a few moments for the namespace to be available: + +``` +echo "Waiting for namespace to be available..." +sleep 10 +``` + +Now, associate the IAM role we created earlier with the namespace: + +``` +aws redshift-serverless update-namespace \ + --namespace-name default-namespace \ + --iam-roles "$ROLE_ARN" +``` + +Next, create a workgroup associated with the namespace: + +``` +aws redshift-serverless create-workgroup \ + --workgroup-name default-workgroup \ + --namespace-name default-namespace \ + --base-capacity 8 +``` + +The base-capacity parameter specifies the compute capacity for the workgroup in Redshift Processing Units (RPUs). Each RPU provides 16 GB of memory. + +Wait for the workgroup to be available: + +``` +echo "Waiting for workgroup to be available..." +sleep 60 +``` + +Once the workgroup is available, you can retrieve its endpoint: + +``` +WORKGROUP_ENDPOINT=$(aws redshift-serverless get-workgroup \ + --workgroup-name default-workgroup \ + --query 'workgroup.endpoint.address' \ + --output text) +echo "Workgroup endpoint: $WORKGROUP_ENDPOINT" +``` + +The endpoint is the connection point for your SQL client tools to connect to your Redshift Serverless database. + +## Creating tables and loading sample data + +Now that your Redshift Serverless resources are set up, you can create tables and load sample data. We'll use the Redshift Data API to execute SQL statements. + +First, let's create three tables for the sample data: + +``` +aws redshift-data execute-statement \ + --database dev \ + --workgroup-name default-workgroup \ + --sql "CREATE TABLE users( + userid INTEGER NOT NULL DISTKEY SORTKEY, + username CHAR(8), + firstname VARCHAR(30), + lastname VARCHAR(30), + city VARCHAR(30), + state CHAR(2), + email VARCHAR(100), + phone CHAR(14), + likesports BOOLEAN, + liketheatre BOOLEAN, + likeconcerts BOOLEAN, + likejazz BOOLEAN, + likeclassical BOOLEAN, + likeopera BOOLEAN, + likerock BOOLEAN, + likevegas BOOLEAN, + likebroadway BOOLEAN, + likemusicals BOOLEAN + );" +``` + +This command creates a "users" table with various columns to store user information. 
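
The Data API runs every statement asynchronously, and the fixed `sleep` calls in this tutorial are only an approximation of waiting for completion. If you want to confirm that a statement actually finished, one option is to poll `describe-statement` with the ID returned by `execute-statement`. The loop below is a minimal sketch of that pattern:

```
STATEMENT_ID=$(aws redshift-data execute-statement \
  --database dev \
  --workgroup-name default-workgroup \
  --sql "SELECT 1;" \
  --query 'Id' --output text)

# Poll until the statement reaches a terminal state
while true; do
  STATUS=$(aws redshift-data describe-statement --id "$STATEMENT_ID" --query 'Status' --output text)
  case "$STATUS" in
    FINISHED) echo "Statement finished."; break ;;
    FAILED|ABORTED) echo "Statement ended with status: $STATUS"; break ;;
    *) sleep 2 ;;
  esac
done
```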
+ +Next, create an "event" table: + +``` +aws redshift-data execute-statement \ + --database dev \ + --workgroup-name default-workgroup \ + --sql "CREATE TABLE event( + eventid INTEGER NOT NULL DISTKEY, + venueid SMALLINT NOT NULL, + catid SMALLINT NOT NULL, + dateid SMALLINT NOT NULL SORTKEY, + eventname VARCHAR(200), + starttime TIMESTAMP + );" +``` + +Finally, create a "sales" table: + +``` +aws redshift-data execute-statement \ + --database dev \ + --workgroup-name default-workgroup \ + --sql "CREATE TABLE sales( + salesid INTEGER NOT NULL, + listid INTEGER NOT NULL DISTKEY, + sellerid INTEGER NOT NULL, + buyerid INTEGER NOT NULL, + eventid INTEGER NOT NULL, + dateid SMALLINT NOT NULL SORTKEY, + qtysold SMALLINT NOT NULL, + pricepaid DECIMAL(8,2), + commission DECIMAL(8,2), + saletime TIMESTAMP + );" +``` + +Wait a moment for the tables to be created: + +``` +echo "Waiting for tables to be created..." +sleep 10 +``` + +Now, let's load data into these tables from the public Amazon Redshift sample data bucket using the COPY command: + +``` +aws redshift-data execute-statement \ + --database dev \ + --workgroup-name default-workgroup \ + --sql "COPY users + FROM 's3://redshift-downloads/tickit/allusers_pipe.txt' + DELIMITER '|' + TIMEFORMAT 'YYYY-MM-DD HH:MI:SS' + IGNOREHEADER 1 + IAM_ROLE '$ROLE_ARN';" +``` + +This command loads data into the "users" table from an S3 file with pipe-delimited values. + +Load data into the "event" table: + +``` +aws redshift-data execute-statement \ + --database dev \ + --workgroup-name default-workgroup \ + --sql "COPY event + FROM 's3://redshift-downloads/tickit/allevents_pipe.txt' + DELIMITER '|' + TIMEFORMAT 'YYYY-MM-DD HH:MI:SS' + IGNOREHEADER 1 + IAM_ROLE '$ROLE_ARN';" +``` + +Finally, load data into the "sales" table: + +``` +aws redshift-data execute-statement \ + --database dev \ + --workgroup-name default-workgroup \ + --sql "COPY sales + FROM 's3://redshift-downloads/tickit/sales_tab.txt' + DELIMITER '\t' + TIMEFORMAT 'MM/DD/YYYY HH:MI:SS' + IGNOREHEADER 1 + IAM_ROLE '$ROLE_ARN';" +``` + +Note that the sales data uses tab-delimited values, so we specify `\t` as the delimiter. + +Wait for the data loading to complete: + +``` +echo "Waiting for data loading to complete..." +sleep 30 +``` + +### Verifying data was loaded correctly + +Let's verify that our data was loaded correctly by running some simple COUNT queries: + +``` +USERS_COUNT_QUERY_ID=$(aws redshift-data execute-statement \ + --database dev \ + --workgroup-name default-workgroup \ + --sql "SELECT COUNT(*) FROM users;" \ + --query 'Id' --output text) + +echo "Waiting for query to complete..." +sleep 5 + +aws redshift-data get-statement-result --id "$USERS_COUNT_QUERY_ID" +``` + +This should return the number of rows in the users table. Similarly, check the event and sales tables: + +``` +EVENT_COUNT_QUERY_ID=$(aws redshift-data execute-statement \ + --database dev \ + --workgroup-name default-workgroup \ + --sql "SELECT COUNT(*) FROM event;" \ + --query 'Id' --output text) + +sleep 5 +aws redshift-data get-statement-result --id "$EVENT_COUNT_QUERY_ID" + +SALES_COUNT_QUERY_ID=$(aws redshift-data execute-statement \ + --database dev \ + --workgroup-name default-workgroup \ + --sql "SELECT COUNT(*) FROM sales;" \ + --query 'Id' --output text) + +sleep 5 +aws redshift-data get-statement-result --id "$SALES_COUNT_QUERY_ID" +``` + +If these queries return non-zero counts, your data was loaded successfully. 
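
As one more sanity check, you can enumerate the tables in the database with `list-tables`, using the same workgroup-based authentication as the other Data API calls in this tutorial. The `--query` here assumes the Data API's lowercase `name` field:

```
aws redshift-data list-tables \
  --database dev \
  --workgroup-name default-workgroup \
  --query 'Tables[].name' \
  --output text
```

The output should include the `users`, `event`, and `sales` tables.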
+ +## Running queries on your data + +Now that you have data loaded into your tables, you can run queries to analyze it. Let's run a couple of example queries. + +First, let's find the top 10 buyers by quantity: + +``` +QUERY1_ID=$(aws redshift-data execute-statement \ + --database dev \ + --workgroup-name default-workgroup \ + --sql "SELECT firstname, lastname, total_quantity + FROM (SELECT buyerid, sum(qtysold) total_quantity + FROM sales + GROUP BY buyerid + ORDER BY total_quantity desc limit 10) Q, users + WHERE Q.buyerid = userid + ORDER BY Q.total_quantity desc;" \ + --query 'Id' --output text) +``` + +The Redshift Data API executes queries asynchronously, so we need to wait for the query to complete: + +``` +echo "Waiting for query to complete..." +sleep 10 +``` + +Now, retrieve the query results: + +``` +aws redshift-data get-statement-result --id "$QUERY1_ID" +``` + +This command returns the results of the query, showing the top 10 buyers by quantity. + +Let's run another query to find events in the 99.9 percentile in terms of all-time gross sales: + +``` +QUERY2_ID=$(aws redshift-data execute-statement \ + --database dev \ + --workgroup-name default-workgroup \ + --sql "SELECT eventname, total_price + FROM (SELECT eventid, total_price, ntile(1000) over(order by total_price desc) as percentile + FROM (SELECT eventid, sum(pricepaid) total_price + FROM sales + GROUP BY eventid)) Q, event E + WHERE Q.eventid = E.eventid + AND percentile = 1 + ORDER BY total_price desc;" \ + --query 'Id' --output text) +``` + +Wait for the query to complete: + +``` +echo "Waiting for query to complete..." +sleep 10 +``` + +Retrieve the results: + +``` +aws redshift-data get-statement-result --id "$QUERY2_ID" +``` + +This query shows the events with the highest gross sales, representing the top 0.1% of all events. + +## Cleaning up resources + +When you're done experimenting with Redshift Serverless, you should clean up the resources to avoid incurring charges: + +``` +# Delete the workgroup +aws redshift-serverless delete-workgroup --workgroup-name default-workgroup + +# Wait for workgroup to be deleted before deleting namespace +echo "Waiting for workgroup to be deleted..." +sleep 60 + +# Delete the namespace +aws redshift-serverless delete-namespace --namespace-name default-namespace + +# Delete the IAM role policy +aws iam delete-role-policy --role-name RedshiftServerlessS3Role --policy-name S3Access + +# Delete the IAM role +aws iam delete-role --role-name RedshiftServerlessS3Role + +# Clean up temporary files +rm -f redshift-trust-policy.json redshift-s3-policy.json +``` + +These commands delete all the resources created during this tutorial, including the workgroup, namespace, and IAM role. + +## Going to production + +This tutorial is designed to help you learn the basics of Amazon Redshift Serverless using the AWS CLI. For production environments, consider the following additional best practices: + +### Security considerations + +1. **Password management**: Use AWS Secrets Manager to store and manage database credentials instead of generating them in scripts. + +2. **Network security**: Configure VPC security groups to restrict access to your Redshift Serverless resources. Consider using VPC endpoints for enhanced security. + +3. **Encryption**: Use customer-managed KMS keys for enhanced control over data encryption. + +4. **IAM permissions**: Further restrict IAM permissions based on the principle of least privilege. + +5. **Audit logging**: Enable audit logging to track database activities. 
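
As an illustration of the first point, the companion script for this tutorial stores the generated admin password in AWS Secrets Manager instead of echoing it to the terminal. A minimal version of that pattern, with an example secret name, looks like this:

```
aws secretsmanager create-secret \
  --name "redshift-serverless-admin" \
  --description "Admin credentials for the Redshift Serverless namespace" \
  --secret-string "{\"username\":\"admin\",\"password\":\"$ADMIN_PASSWORD\"}"
```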
+ +For more information, see [Security in Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-security.html). + +### Architecture best practices + +1. **Infrastructure as Code**: Use AWS CloudFormation or AWS CDK to define and provision resources. + +2. **Monitoring and observability**: Set up CloudWatch dashboards and alarms to monitor performance and costs. + +3. **Workload management**: Configure workload management to optimize resource utilization. + +4. **Backup and recovery**: Implement a backup strategy using snapshots. + +5. **Cost optimization**: Use usage limits to control costs and monitor usage with AWS Cost Explorer. + +For more information, see the [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/) and [Amazon Redshift best practices](https://docs.aws.amazon.com/redshift/latest/dg/best-practices.html). + +## Next steps + +Now that you've learned how to set up and use Amazon Redshift Serverless with the AWS CLI, you can explore more advanced features: + +* [Connect to Amazon Redshift Serverless using JDBC and ODBC drivers](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-connecting.html) +* [Use the Amazon Redshift Data API for programmatic access](https://docs.aws.amazon.com/redshift/latest/mgmt/data-api.html) +* [Build machine learning models with Amazon Redshift ML](https://docs.aws.amazon.com/redshift/latest/dg/getting-started-machine-learning.html) +* [Query data directly from an Amazon S3 data lake](https://docs.aws.amazon.com/redshift/latest/dg/c-getting-started-using-spectrum.html) +* [Manage Amazon Redshift Serverless workgroups and namespaces](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-workgroups-and-namespaces.html) + +You can also explore the [Amazon Redshift Serverless pricing](https://aws.amazon.com/redshift/serverless/pricing/) to understand the cost structure for your specific workloads. 
diff --git a/tuts/038-redshift-serverless/redshift-serverless.sh b/tuts/038-redshift-serverless/redshift-serverless.sh new file mode 100755 index 00000000..c102c42b --- /dev/null +++ b/tuts/038-redshift-serverless/redshift-serverless.sh @@ -0,0 +1,645 @@ +#!/bin/bash + +# Amazon Redshift Serverless Tutorial Script with Secrets Manager (No jq dependency) +# This script creates a Redshift Serverless environment, loads sample data, and runs queries +# Uses AWS Secrets Manager for secure password management without requiring jq + +# Set up logging +LOG_FILE="redshift-serverless-tutorial-v4.log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +echo "Starting Amazon Redshift Serverless tutorial script at $(date)" +echo "All commands and outputs will be logged to $LOG_FILE" + +# Function to check for errors in command output +check_error() { + local output=$1 + local cmd=$2 + + if echo "$output" | grep -i "error\|exception\|fail" > /dev/null; then + echo "ERROR: Command failed: $cmd" + echo "Output: $output" + cleanup_resources + exit 1 + fi +} + +# Function to generate a secure password that meets Redshift requirements +generate_secure_password() { + # Redshift password requirements: + # - 8-64 characters + # - At least one uppercase letter + # - At least one lowercase letter + # - At least one decimal digit + # - Can contain printable ASCII characters except /, ", ', \, @, space + + local password="" + local valid=false + local attempts=0 + local max_attempts=10 + + while [[ "$valid" == false && $attempts -lt $max_attempts ]]; do + # Generate base password with safe characters + local base=$(openssl rand -base64 12 | tr -d '/+=' | head -c 12) + + # Ensure we have at least one of each required character type + local upper=$(echo "ABCDEFGHIJKLMNOPQRSTUVWXYZ" | fold -w1 | shuf -n1) + local lower=$(echo "abcdefghijklmnopqrstuvwxyz" | fold -w1 | shuf -n1) + local digit=$(echo "0123456789" | fold -w1 | shuf -n1) + local special=$(echo "!#$%&*()_+-=[]{}|;:,.<>?" | fold -w1 | shuf -n1) + + # Combine and shuffle + password="${base}${upper}${lower}${digit}${special}" + password=$(echo "$password" | fold -w1 | shuf | tr -d '\n') + + # Validate password meets requirements + if [[ ${#password} -ge 8 && ${#password} -le 64 ]] && \ + [[ "$password" =~ [A-Z] ]] && \ + [[ "$password" =~ [a-z] ]] && \ + [[ "$password" =~ [0-9] ]] && \ + [[ ! 
"$password" =~ [/\"\'\\@[:space:]] ]]; then + valid=true + fi + + ((attempts++)) + done + + if [[ "$valid" == false ]]; then + echo "ERROR: Failed to generate valid password after $max_attempts attempts" + exit 1 + fi + + echo "$password" +} + +# Function to create secret in AWS Secrets Manager +create_secret() { + local secret_name=$1 + local username=$2 + local password=$3 + local description=$4 + + echo "Creating secret in AWS Secrets Manager: $secret_name" + + # Create the secret using AWS CLI without jq + local secret_output=$(aws secretsmanager create-secret \ + --name "$secret_name" \ + --description "$description" \ + --secret-string "{\"username\":\"$username\",\"password\":\"$password\"}" 2>&1) + + if echo "$secret_output" | grep -i "error\|exception\|fail" > /dev/null; then + echo "ERROR: Failed to create secret: $secret_output" + return 1 + fi + + echo "Secret created successfully: $secret_name" + return 0 +} + +# Function to retrieve password from AWS Secrets Manager +get_password_from_secret() { + local secret_name=$1 + + # Get the secret value and extract password using sed/grep instead of jq + local secret_value=$(aws secretsmanager get-secret-value \ + --secret-id "$secret_name" \ + --query 'SecretString' \ + --output text 2>/dev/null) + + if [[ $? -eq 0 ]]; then + # Extract password from JSON using sed + echo "$secret_value" | sed -n 's/.*"password":"\([^"]*\)".*/\1/p' + else + echo "" + fi +} + +# Function to wait for a resource to be available +wait_for_resource() { + local resource_type=$1 + local resource_name=$2 + local max_attempts=$3 + local wait_seconds=$4 + local check_cmd=$5 + + echo "Waiting for $resource_type $resource_name to be available..." + + for ((i=1; i<=$max_attempts; i++)); do + local output=$($check_cmd 2>/dev/null) + local status=$(echo "$output" | grep -o '"Status": "[^"]*' | cut -d'"' -f4 || echo "") + + if [[ "$status" == "AVAILABLE" ]]; then + echo "$resource_type $resource_name is now available" + return 0 + fi + + echo "Attempt $i/$max_attempts: $resource_type $resource_name status: $status. Waiting $wait_seconds seconds..." + sleep $wait_seconds + done + + echo "ERROR: Timed out waiting for $resource_type $resource_name to be available" + return 1 +} + +# Function to wait for a resource to be deleted +wait_for_resource_deletion() { + local resource_type=$1 + local resource_name=$2 + local max_attempts=$3 + local wait_seconds=$4 + local check_cmd=$5 + + echo "Waiting for $resource_type $resource_name to be deleted..." + + for ((i=1; i<=$max_attempts; i++)); do + local output=$($check_cmd 2>&1) + + if echo "$output" | grep -i "not found\|does not exist" > /dev/null; then + echo "$resource_type $resource_name has been deleted" + return 0 + fi + + echo "Attempt $i/$max_attempts: $resource_type $resource_name is still being deleted. Waiting $wait_seconds seconds..." + sleep $wait_seconds + done + + echo "ERROR: Timed out waiting for $resource_type $resource_name to be deleted" + return 1 +} + +# Function to clean up resources +cleanup_resources() { + echo "" + echo "===========================================" + echo "CLEANUP CONFIRMATION" + echo "===========================================" + echo "The following resources were created:" + echo "- Redshift Serverless Workgroup: $WORKGROUP_NAME" + echo "- Redshift Serverless Namespace: $NAMESPACE_NAME" + echo "- IAM Role: $ROLE_NAME" + echo "- Secrets Manager Secret: $SECRET_NAME" + echo "" + echo "Do you want to clean up all created resources? 
(y/n): " + read -r CLEANUP_CHOICE + + if [[ "${CLEANUP_CHOICE,,}" == "y" ]]; then + echo "Cleaning up resources..." + + # Delete the workgroup + echo "Deleting Redshift Serverless workgroup $WORKGROUP_NAME..." + WORKGROUP_DELETE_OUTPUT=$(aws redshift-serverless delete-workgroup --workgroup-name "$WORKGROUP_NAME" 2>&1) + echo "$WORKGROUP_DELETE_OUTPUT" + + # Wait for workgroup to be deleted before deleting namespace + wait_for_resource_deletion "workgroup" "$WORKGROUP_NAME" 20 30 "aws redshift-serverless get-workgroup --workgroup-name $WORKGROUP_NAME" + + # Delete the namespace + echo "Deleting Redshift Serverless namespace $NAMESPACE_NAME..." + NAMESPACE_DELETE_OUTPUT=$(aws redshift-serverless delete-namespace --namespace-name "$NAMESPACE_NAME" 2>&1) + echo "$NAMESPACE_DELETE_OUTPUT" + + # Wait for namespace to be deleted + wait_for_resource_deletion "namespace" "$NAMESPACE_NAME" 20 30 "aws redshift-serverless get-namespace --namespace-name $NAMESPACE_NAME" + + # Delete the IAM role policy + echo "Deleting IAM role policy..." + POLICY_DELETE_OUTPUT=$(aws iam delete-role-policy --role-name "$ROLE_NAME" --policy-name S3Access 2>&1) + echo "$POLICY_DELETE_OUTPUT" + + # Delete the IAM role + echo "Deleting IAM role $ROLE_NAME..." + ROLE_DELETE_OUTPUT=$(aws iam delete-role --role-name "$ROLE_NAME" 2>&1) + echo "$ROLE_DELETE_OUTPUT" + + # Delete the secret + echo "Deleting Secrets Manager secret $SECRET_NAME..." + SECRET_DELETE_OUTPUT=$(aws secretsmanager delete-secret --secret-id "$SECRET_NAME" --force-delete-without-recovery 2>&1) + echo "$SECRET_DELETE_OUTPUT" + + echo "Cleanup completed." + else + echo "Cleanup skipped. Resources will remain in your AWS account." + fi +} + +# Check if required tools are available +if ! command -v openssl &> /dev/null; then + echo "ERROR: openssl is required but not installed. Please install openssl to continue." + exit 1 +fi + +# Generate unique names for resources +RANDOM_SUFFIX=$(cat /dev/urandom | tr -dc 'a-z0-9' | head -c 6) +NAMESPACE_NAME="rs-namespace-${RANDOM_SUFFIX}" +WORKGROUP_NAME="rs-workgroup-${RANDOM_SUFFIX}" +ROLE_NAME="RedshiftServerlessS3Role-${RANDOM_SUFFIX}" +SECRET_NAME="redshift-serverless-admin-${RANDOM_SUFFIX}" +DB_NAME="dev" +ADMIN_USERNAME="admin" + +# Generate secure password +echo "Generating secure password..." +ADMIN_PASSWORD=$(generate_secure_password) + +# Create secret in AWS Secrets Manager +create_secret "$SECRET_NAME" "$ADMIN_USERNAME" "$ADMIN_PASSWORD" "Admin credentials for Redshift Serverless namespace $NAMESPACE_NAME" +if [[ $? -ne 0 ]]; then + echo "ERROR: Failed to create secret in AWS Secrets Manager" + exit 1 +fi + +# Track created resources +CREATED_RESOURCES=() + +echo "Using the following resource names:" +echo "- Namespace: $NAMESPACE_NAME" +echo "- Workgroup: $WORKGROUP_NAME" +echo "- IAM Role: $ROLE_NAME" +echo "- Secret: $SECRET_NAME" +echo "- Database: $DB_NAME" +echo "- Admin Username: $ADMIN_USERNAME" +echo "- Admin Password: [STORED IN SECRETS MANAGER]" + +# Step 1: Create IAM role for S3 access +echo "Creating IAM role for Redshift Serverless S3 access..." 
+ +# Create trust policy document +cat > redshift-trust-policy.json << EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "redshift-serverless.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] +} +EOF + +# Create S3 access policy document +cat > redshift-s3-policy.json << EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "s3:GetObject", + "s3:ListBucket" + ], + "Resource": [ + "arn:aws:s3:::redshift-downloads", + "arn:aws:s3:::redshift-downloads/*" + ] + } + ] +} +EOF + +# Create IAM role +echo "Creating IAM role $ROLE_NAME..." +ROLE_OUTPUT=$(aws iam create-role --role-name "$ROLE_NAME" --assume-role-policy-document file://redshift-trust-policy.json 2>&1) +echo "$ROLE_OUTPUT" +check_error "$ROLE_OUTPUT" "aws iam create-role" +CREATED_RESOURCES+=("IAM Role: $ROLE_NAME") + +# Attach S3 policy to the role +echo "Attaching S3 access policy to role $ROLE_NAME..." +POLICY_OUTPUT=$(aws iam put-role-policy --role-name "$ROLE_NAME" --policy-name S3Access --policy-document file://redshift-s3-policy.json 2>&1) +echo "$POLICY_OUTPUT" +check_error "$POLICY_OUTPUT" "aws iam put-role-policy" + +# Get the role ARN +ROLE_ARN=$(aws iam get-role --role-name "$ROLE_NAME" --query 'Role.Arn' --output text) +echo "Role ARN: $ROLE_ARN" + +# Step 2: Create a namespace +echo "Creating Redshift Serverless namespace $NAMESPACE_NAME..." +NAMESPACE_OUTPUT=$(aws redshift-serverless create-namespace \ + --namespace-name "$NAMESPACE_NAME" \ + --admin-username "$ADMIN_USERNAME" \ + --admin-user-password "$ADMIN_PASSWORD" \ + --db-name "$DB_NAME" 2>&1) +echo "$NAMESPACE_OUTPUT" +check_error "$NAMESPACE_OUTPUT" "aws redshift-serverless create-namespace" +CREATED_RESOURCES+=("Redshift Serverless Namespace: $NAMESPACE_NAME") + +# Wait for namespace to be available +wait_for_resource "namespace" "$NAMESPACE_NAME" 10 30 "aws redshift-serverless get-namespace --namespace-name $NAMESPACE_NAME" + +# Associate IAM role with namespace +echo "Associating IAM role with namespace..." +UPDATE_NAMESPACE_OUTPUT=$(aws redshift-serverless update-namespace \ + --namespace-name "$NAMESPACE_NAME" \ + --iam-roles "$ROLE_ARN" 2>&1) +echo "$UPDATE_NAMESPACE_OUTPUT" +check_error "$UPDATE_NAMESPACE_OUTPUT" "aws redshift-serverless update-namespace" + +# Step 3: Create a workgroup +echo "Creating Redshift Serverless workgroup $WORKGROUP_NAME..." +WORKGROUP_OUTPUT=$(aws redshift-serverless create-workgroup \ + --workgroup-name "$WORKGROUP_NAME" \ + --namespace-name "$NAMESPACE_NAME" \ + --base-capacity 8 2>&1) +echo "$WORKGROUP_OUTPUT" +check_error "$WORKGROUP_OUTPUT" "aws redshift-serverless create-workgroup" +CREATED_RESOURCES+=("Redshift Serverless Workgroup: $WORKGROUP_NAME") + +# Wait for workgroup to be available +wait_for_resource "workgroup" "$WORKGROUP_NAME" 20 30 "aws redshift-serverless get-workgroup --workgroup-name $WORKGROUP_NAME" + +# Get workgroup endpoint +WORKGROUP_ENDPOINT=$(aws redshift-serverless get-workgroup \ + --workgroup-name "$WORKGROUP_NAME" \ + --query 'workgroup.endpoint.address' \ + --output text) +echo "Workgroup endpoint: $WORKGROUP_ENDPOINT" + +# Wait additional time for the endpoint to be fully operational +echo "Waiting for endpoint to be fully operational..." +sleep 60 + +# Step 4: Create tables for sample data +echo "Creating tables for sample data..." + +# Create users table +echo "Creating users table..." 
+USERS_TABLE_OUTPUT=$(aws redshift-data execute-statement \ + --database "$DB_NAME" \ + --workgroup-name "$WORKGROUP_NAME" \ + --sql "CREATE TABLE users( + userid INTEGER NOT NULL DISTKEY SORTKEY, + username CHAR(8), + firstname VARCHAR(30), + lastname VARCHAR(30), + city VARCHAR(30), + state CHAR(2), + email VARCHAR(100), + phone CHAR(14), + likesports BOOLEAN, + liketheatre BOOLEAN, + likeconcerts BOOLEAN, + likejazz BOOLEAN, + likeclassical BOOLEAN, + likeopera BOOLEAN, + likerock BOOLEAN, + likevegas BOOLEAN, + likebroadway BOOLEAN, + likemusicals BOOLEAN + );" 2>&1) +echo "$USERS_TABLE_OUTPUT" +check_error "$USERS_TABLE_OUTPUT" "aws redshift-data execute-statement (users table)" +USERS_QUERY_ID=$(echo "$USERS_TABLE_OUTPUT" | grep -o '"Id": "[^"]*' | cut -d'"' -f4) + +# Wait for query to complete +echo "Waiting for users table creation to complete..." +sleep 5 + +# Create event table +echo "Creating event table..." +EVENT_TABLE_OUTPUT=$(aws redshift-data execute-statement \ + --database "$DB_NAME" \ + --workgroup-name "$WORKGROUP_NAME" \ + --sql "CREATE TABLE event( + eventid INTEGER NOT NULL DISTKEY, + venueid SMALLINT NOT NULL, + catid SMALLINT NOT NULL, + dateid SMALLINT NOT NULL SORTKEY, + eventname VARCHAR(200), + starttime TIMESTAMP + );" 2>&1) +echo "$EVENT_TABLE_OUTPUT" +check_error "$EVENT_TABLE_OUTPUT" "aws redshift-data execute-statement (event table)" +EVENT_QUERY_ID=$(echo "$EVENT_TABLE_OUTPUT" | grep -o '"Id": "[^"]*' | cut -d'"' -f4) + +# Wait for query to complete +echo "Waiting for event table creation to complete..." +sleep 5 + +# Create sales table +echo "Creating sales table..." +SALES_TABLE_OUTPUT=$(aws redshift-data execute-statement \ + --database "$DB_NAME" \ + --workgroup-name "$WORKGROUP_NAME" \ + --sql "CREATE TABLE sales( + salesid INTEGER NOT NULL, + listid INTEGER NOT NULL DISTKEY, + sellerid INTEGER NOT NULL, + buyerid INTEGER NOT NULL, + eventid INTEGER NOT NULL, + dateid SMALLINT NOT NULL SORTKEY, + qtysold SMALLINT NOT NULL, + pricepaid DECIMAL(8,2), + commission DECIMAL(8,2), + saletime TIMESTAMP + );" 2>&1) +echo "$SALES_TABLE_OUTPUT" +check_error "$SALES_TABLE_OUTPUT" "aws redshift-data execute-statement (sales table)" +SALES_QUERY_ID=$(echo "$SALES_TABLE_OUTPUT" | grep -o '"Id": "[^"]*' | cut -d'"' -f4) + +# Wait for tables to be created +echo "Waiting for tables to be created..." +sleep 10 + +# Step 5: Load sample data from Amazon S3 +echo "Loading sample data from Amazon S3..." + +# Load data into users table +echo "Loading data into users table..." +USERS_LOAD_OUTPUT=$(aws redshift-data execute-statement \ + --database "$DB_NAME" \ + --workgroup-name "$WORKGROUP_NAME" \ + --sql "COPY users + FROM 's3://redshift-downloads/tickit/allusers_pipe.txt' + DELIMITER '|' + TIMEFORMAT 'YYYY-MM-DD HH:MI:SS' + IGNOREHEADER 1 + IAM_ROLE '$ROLE_ARN';" 2>&1) +echo "$USERS_LOAD_OUTPUT" +check_error "$USERS_LOAD_OUTPUT" "aws redshift-data execute-statement (load users)" +USERS_LOAD_QUERY_ID=$(echo "$USERS_LOAD_OUTPUT" | grep -o '"Id": "[^"]*' | cut -d'"' -f4) + +# Wait for data loading to complete +echo "Waiting for users data loading to complete..." +sleep 10 + +# Load data into event table +echo "Loading data into event table..." 
+EVENT_LOAD_OUTPUT=$(aws redshift-data execute-statement \ + --database "$DB_NAME" \ + --workgroup-name "$WORKGROUP_NAME" \ + --sql "COPY event + FROM 's3://redshift-downloads/tickit/allevents_pipe.txt' + DELIMITER '|' + TIMEFORMAT 'YYYY-MM-DD HH:MI:SS' + IGNOREHEADER 1 + IAM_ROLE '$ROLE_ARN';" 2>&1) +echo "$EVENT_LOAD_OUTPUT" +check_error "$EVENT_LOAD_OUTPUT" "aws redshift-data execute-statement (load event)" +EVENT_LOAD_QUERY_ID=$(echo "$EVENT_LOAD_OUTPUT" | grep -o '"Id": "[^"]*' | cut -d'"' -f4) + +# Wait for data loading to complete +echo "Waiting for event data loading to complete..." +sleep 10 + +# Load data into sales table +echo "Loading data into sales table..." +SALES_LOAD_OUTPUT=$(aws redshift-data execute-statement \ + --database "$DB_NAME" \ + --workgroup-name "$WORKGROUP_NAME" \ + --sql "COPY sales + FROM 's3://redshift-downloads/tickit/sales_tab.txt' + DELIMITER '\t' + TIMEFORMAT 'MM/DD/YYYY HH:MI:SS' + IGNOREHEADER 1 + IAM_ROLE '$ROLE_ARN';" 2>&1) +echo "$SALES_LOAD_OUTPUT" +check_error "$SALES_LOAD_OUTPUT" "aws redshift-data execute-statement (load sales)" +SALES_LOAD_QUERY_ID=$(echo "$SALES_LOAD_OUTPUT" | grep -o '"Id": "[^"]*' | cut -d'"' -f4) + +# Wait for data loading to complete +echo "Waiting for sales data loading to complete..." +sleep 30 + +# Step 6: Run sample queries +echo "Running sample queries..." + +# Query 1: Find top 10 buyers by quantity +echo "Running query: Find top 10 buyers by quantity..." +QUERY1_OUTPUT=$(aws redshift-data execute-statement \ + --database "$DB_NAME" \ + --workgroup-name "$WORKGROUP_NAME" \ + --sql "SELECT firstname, lastname, total_quantity + FROM (SELECT buyerid, sum(qtysold) total_quantity + FROM sales + GROUP BY buyerid + ORDER BY total_quantity desc limit 10) Q, users + WHERE Q.buyerid = userid + ORDER BY Q.total_quantity desc;" 2>&1) +echo "$QUERY1_OUTPUT" +check_error "$QUERY1_OUTPUT" "aws redshift-data execute-statement (query 1)" +QUERY1_ID=$(echo "$QUERY1_OUTPUT" | grep -o '"Id": "[^"]*' | cut -d'"' -f4) + +# Wait for query to complete +echo "Waiting for query 1 to complete..." +sleep 10 + +# Get query 1 results +echo "Getting results for query 1..." +QUERY1_STATUS_OUTPUT=$(aws redshift-data describe-statement --id "$QUERY1_ID" 2>&1) +echo "$QUERY1_STATUS_OUTPUT" +check_error "$QUERY1_STATUS_OUTPUT" "aws redshift-data describe-statement (query 1)" + +QUERY1_STATUS=$(echo "$QUERY1_STATUS_OUTPUT" | grep -o '"Status": "[^"]*' | cut -d'"' -f4) +if [ "$QUERY1_STATUS" == "FINISHED" ]; then + QUERY1_RESULTS=$(aws redshift-data get-statement-result --id "$QUERY1_ID" 2>&1) + echo "Query 1 Results:" + echo "$QUERY1_RESULTS" +else + echo "Query 1 is not yet complete. Status: $QUERY1_STATUS" + echo "Waiting additional time for query to complete..." + sleep 20 + + # Check again + QUERY1_STATUS_OUTPUT=$(aws redshift-data describe-statement --id "$QUERY1_ID" 2>&1) + QUERY1_STATUS=$(echo "$QUERY1_STATUS_OUTPUT" | grep -o '"Status": "[^"]*' | cut -d'"' -f4) + + if [ "$QUERY1_STATUS" == "FINISHED" ]; then + QUERY1_RESULTS=$(aws redshift-data get-statement-result --id "$QUERY1_ID" 2>&1) + echo "Query 1 Results:" + echo "$QUERY1_RESULTS" + else + echo "Query 1 is still not complete. Status: $QUERY1_STATUS" + fi +fi + +# Query 2: Find events in the 99.9 percentile in terms of all time gross sales +echo "Running query: Find events in the 99.9 percentile in terms of all time gross sales..." 
+QUERY2_OUTPUT=$(aws redshift-data execute-statement \ + --database "$DB_NAME" \ + --workgroup-name "$WORKGROUP_NAME" \ + --sql "SELECT eventname, total_price + FROM (SELECT eventid, total_price, ntile(1000) over(order by total_price desc) as percentile + FROM (SELECT eventid, sum(pricepaid) total_price + FROM sales + GROUP BY eventid)) Q, event E + WHERE Q.eventid = E.eventid + AND percentile = 1 + ORDER BY total_price desc;" 2>&1) +echo "$QUERY2_OUTPUT" +check_error "$QUERY2_OUTPUT" "aws redshift-data execute-statement (query 2)" +QUERY2_ID=$(echo "$QUERY2_OUTPUT" | grep -o '"Id": "[^"]*' | cut -d'"' -f4) + +# Wait for query to complete +echo "Waiting for query 2 to complete..." +sleep 10 + +# Get query 2 results +echo "Getting results for query 2..." +QUERY2_STATUS_OUTPUT=$(aws redshift-data describe-statement --id "$QUERY2_ID" 2>&1) +echo "$QUERY2_STATUS_OUTPUT" +check_error "$QUERY2_STATUS_OUTPUT" "aws redshift-data describe-statement (query 2)" + +QUERY2_STATUS=$(echo "$QUERY2_STATUS_OUTPUT" | grep -o '"Status": "[^"]*' | cut -d'"' -f4) +if [ "$QUERY2_STATUS" == "FINISHED" ]; then + QUERY2_RESULTS=$(aws redshift-data get-statement-result --id "$QUERY2_ID" 2>&1) + echo "Query 2 Results:" + echo "$QUERY2_RESULTS" +else + echo "Query 2 is not yet complete. Status: $QUERY2_STATUS" + echo "Waiting additional time for query to complete..." + sleep 20 + + # Check again + QUERY2_STATUS_OUTPUT=$(aws redshift-data describe-statement --id "$QUERY2_ID" 2>&1) + QUERY2_STATUS=$(echo "$QUERY2_STATUS_OUTPUT" | grep -o '"Status": "[^"]*' | cut -d'"' -f4) + + if [ "$QUERY2_STATUS" == "FINISHED" ]; then + QUERY2_RESULTS=$(aws redshift-data get-statement-result --id "$QUERY2_ID" 2>&1) + echo "Query 2 Results:" + echo "$QUERY2_RESULTS" + else + echo "Query 2 is still not complete. Status: $QUERY2_STATUS" + fi +fi + +# Summary +echo "" +echo "===========================================" +echo "TUTORIAL SUMMARY" +echo "===========================================" +echo "You have successfully:" +echo "1. Created a Redshift Serverless namespace and workgroup" +echo "2. Created an IAM role with S3 access permissions" +echo "3. Stored admin credentials securely in AWS Secrets Manager" +echo "4. Created tables for sample data" +echo "5. Loaded sample data from Amazon S3" +echo "6. 
Run sample queries on the data" +echo "" +echo "Redshift Serverless Resources:" +echo "- Namespace: $NAMESPACE_NAME" +echo "- Workgroup: $WORKGROUP_NAME" +echo "- Database: $DB_NAME" +echo "- Endpoint: $WORKGROUP_ENDPOINT" +echo "- Credentials Secret: $SECRET_NAME" +echo "" +echo "To connect to your Redshift Serverless database using SQL tools:" +echo "- Host: $WORKGROUP_ENDPOINT" +echo "- Database: $DB_NAME" +echo "- Username: $ADMIN_USERNAME" +echo "- Password: Retrieve from AWS Secrets Manager secret '$SECRET_NAME'" +echo "" +echo "To retrieve the password from Secrets Manager (without jq):" +echo "aws secretsmanager get-secret-value --secret-id $SECRET_NAME --query 'SecretString' --output text | sed -n 's/.*\"password\":\"\([^\"]*\)\".*/\1/p'" +echo "" + +# Clean up temporary files +rm -f redshift-trust-policy.json redshift-s3-policy.json + +# Clean up resources +cleanup_resources + +echo "Tutorial completed at $(date)" diff --git a/tuts/073-aws-secrets-manager-gs/README.md b/tuts/073-aws-secrets-manager-gs/README.md new file mode 100644 index 00000000..cee012f1 --- /dev/null +++ b/tuts/073-aws-secrets-manager-gs/README.md @@ -0,0 +1,5 @@ +# AWS Secrets Manager getting started + +This tutorial introduces AWS Secrets Manager, showing how to securely store, retrieve, and manage sensitive information such as database credentials, API keys, and other secrets used by your applications. + +You can either run the automated script `aws-secrets-manager-gs.sh` to execute all the steps automatically, or follow the step-by-step instructions in the `aws-secrets-manager-gs.md` tutorial to understand each operation in detail. diff --git a/tuts/073-aws-secrets-manager-gs/aws-secrets-manager-gs.md b/tuts/073-aws-secrets-manager-gs/aws-secrets-manager-gs.md new file mode 100644 index 00000000..ba146684 --- /dev/null +++ b/tuts/073-aws-secrets-manager-gs/aws-secrets-manager-gs.md @@ -0,0 +1,362 @@ +# Moving hardcoded secrets to AWS Secrets Manager + +This tutorial guides you through the process of moving hardcoded secrets from your code to AWS Secrets Manager. By storing your secrets in Secrets Manager, you improve security by eliminating plaintext secrets in your code and gain the ability to rotate secrets without changing your code. + +## Prerequisites + +Before you begin this tutorial, you need: + +* An AWS account with permissions to create IAM roles and use AWS Secrets Manager +* The AWS Command Line Interface (AWS CLI) installed and configured +* Basic knowledge of the AWS CLI and IAM +* Approximately 15 minutes to complete the tutorial + +### Costs + +This tutorial creates IAM roles and a secret in AWS Secrets Manager. The IAM roles are free, and AWS Secrets Manager costs approximately $0.40 per secret per month. If you complete this tutorial in one hour and then delete the resources, the cost will be less than $0.01. To avoid ongoing charges, follow the cleanup steps at the end of this tutorial. + +## Create IAM roles + +In this tutorial, you'll use two IAM roles to manage permissions to your secret: + +* A role for managing secrets (SecretsManagerAdmin) +* A role for retrieving secrets at runtime (RoleToRetrieveSecretAtRuntime) + +First, create the SecretsManagerAdmin role. This role will have permissions to create and manage secrets. 
+ +```bash +aws iam create-role \ + --role-name SecretsManagerAdmin \ + --assume-role-policy-document '{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "ec2.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] + }' +``` + +The command returns information about the newly created role: + +```json +{ + "Role": { + "Path": "/", + "RoleName": "SecretsManagerAdmin", + "RoleId": "AROAEXAMPLEXAMPLE", + "Arn": "arn:aws:iam::123456789012:role/SecretsManagerAdmin", + "CreateDate": "2025-01-13T00:20:27Z", + "AssumeRolePolicyDocument": { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "ec2.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] + } + } +} +``` + +Next, attach the SecretsManagerReadWrite policy to the admin role. This policy grants permissions to create and manage secrets in AWS Secrets Manager. + +```bash +aws iam attach-role-policy \ + --role-name SecretsManagerAdmin \ + --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite +``` + +Now, create the RoleToRetrieveSecretAtRuntime role. This role will be used by your application to retrieve secrets at runtime. + +```bash +aws iam create-role \ + --role-name RoleToRetrieveSecretAtRuntime \ + --assume-role-policy-document '{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "ec2.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] + }' +``` + +The command returns information about the newly created role: + +```json +{ + "Role": { + "Path": "/", + "RoleName": "RoleToRetrieveSecretAtRuntime", + "RoleId": "AROAEXAMPLEXAMPLE", + "Arn": "arn:aws:iam::123456789012:role/RoleToRetrieveSecretAtRuntime", + "CreateDate": "2025-01-13T00:20:29Z", + "AssumeRolePolicyDocument": { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "ec2.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] + } + } +} +``` + +Wait a few moments for the IAM roles to be fully created and propagated throughout the AWS system. + +## Create a secret in AWS Secrets Manager + +Now that you have the necessary IAM roles, you can create a secret in AWS Secrets Manager. In this example, you'll create a secret for an API key with a client ID and client secret. + +```bash +aws secretsmanager create-secret \ + --name "MyAPIKey" \ + --description "API key for my application" \ + --secret-string '{"ClientID":"my_client_id","ClientSecret":"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"}' +``` + +The command returns information about the newly created secret: + +```json +{ + "ARN": "arn:aws:secretsmanager:us-east-1:123456789012:secret:MyAPIKey-abcd1234", + "Name": "MyAPIKey", + "VersionId": "abcd1234-xmpl-4321-abcd-1234567890ab" +} +``` + +Next, you need to get your AWS account ID to use in the resource policy: + +```bash +ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) +``` + +Now, add a resource policy to the secret to allow the RoleToRetrieveSecretAtRuntime role to access it. 
Store the ARN of your secret in a variable to use in the resource policy: + +```bash +SECRET_ARN=$(aws secretsmanager describe-secret --secret-id "MyAPIKey" --query "ARN" --output text) + +aws secretsmanager put-resource-policy \ + --secret-id "MyAPIKey" \ + --resource-policy '{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "arn:aws:iam::'$ACCOUNT_ID':role/RoleToRetrieveSecretAtRuntime" + }, + "Action": "secretsmanager:GetSecretValue", + "Resource": "'$SECRET_ARN'" + } + ] + }' \ + --block-public-policy +``` + +The command returns information about the secret: + +```json +{ + "ARN": "arn:aws:secretsmanager:us-east-1:123456789012:secret:MyAPIKey-abcd1234", + "Name": "MyAPIKey" +} +``` + +## Update your application code + +Now that you've stored your secret in AWS Secrets Manager, you need to update your application code to retrieve the secret instead of using hardcoded values. Here's an example using Python: + +```python +# Before: Hardcoded secrets (insecure) +# client_id = "my_client_id" +# client_secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" + +# After: Retrieve secrets from AWS Secrets Manager +import boto3 +import json +import base64 +from botocore.exceptions import ClientError + +def get_secret(): + secret_name = "MyAPIKey" + region_name = "us-east-1" # Replace with your region + + # Create a Secrets Manager client + session = boto3.session.Session() + client = session.client( + service_name='secretsmanager', + region_name=region_name + ) + + try: + get_secret_value_response = client.get_secret_value( + SecretId=secret_name + ) + except ClientError as e: + # Handle exceptions like ResourceNotFoundException, InvalidParameterException, etc. + print(f"Error retrieving secret: {e}") + raise e + else: + # Decrypts secret using the associated KMS key + if 'SecretString' in get_secret_value_response: + secret = get_secret_value_response['SecretString'] + return json.loads(secret) + else: + decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary']) + return json.loads(decoded_binary_secret) + +# Use the secret in your application +try: + secret_dict = get_secret() + client_id = secret_dict['ClientID'] + client_secret = secret_dict['ClientSecret'] + + # Now use client_id and client_secret in your application + print(f"Successfully retrieved secret for client ID: {client_id}") +except Exception as e: + # Implement appropriate error handling for your application + print(f"Failed to retrieve secret: {e}") +``` + +To test that your application can retrieve the secret, you can use the AWS CLI: + +```bash +aws secretsmanager get-secret-value \ + --secret-id "MyAPIKey" \ + --query "{ARN:ARN,Name:Name,VersionId:VersionId,VersionStages:VersionStages,CreatedDate:CreatedDate}" +``` + +The command returns metadata about the secret (without showing the actual secret value): + +```json +{ + "ARN": "arn:aws:secretsmanager:us-east-1:123456789012:secret:MyAPIKey-abcd1234", + "Name": "MyAPIKey", + "VersionId": "abcd1234-xmpl-4321-abcd-1234567890ab", + "VersionStages": [ + "AWSCURRENT" + ], + "CreatedDate": 1673596840.114 +} +``` + +## Update the secret + +After updating your application to retrieve secrets from Secrets Manager, you can update the secret with new values when needed. This is particularly useful when rotating credentials. 
+ +```bash +aws secretsmanager update-secret \ + --secret-id "MyAPIKey" \ + --secret-string '{"ClientID":"my_new_client_id","ClientSecret":"bPxRfiCYEXAMPLEKEY/wJalrXUtnFEMI/K7MDENG"}' +``` + +The command returns information about the updated secret: + +```json +{ + "ARN": "arn:aws:secretsmanager:us-east-1:123456789012:secret:MyAPIKey-abcd1234", + "Name": "MyAPIKey", + "VersionId": "abcd1234-xmpl-5678-abcd-1234567890cd" +} +``` + +Verify that the secret was updated by retrieving it again: + +```bash +aws secretsmanager get-secret-value \ + --secret-id "MyAPIKey" \ + --query "{ARN:ARN,Name:Name,VersionId:VersionId,VersionStages:VersionStages,CreatedDate:CreatedDate}" +``` + +The command returns metadata about the updated secret: + +```json +{ + "ARN": "arn:aws:secretsmanager:us-east-1:123456789012:secret:MyAPIKey-abcd1234", + "Name": "MyAPIKey", + "VersionId": "abcd1234-xmpl-5678-abcd-1234567890cd", + "VersionStages": [ + "AWSCURRENT" + ], + "CreatedDate": 1673596843.522 +} +``` + +Notice that the VersionId has changed, indicating that this is a new version of the secret. + +## Going to production + +This tutorial demonstrates the basic functionality of AWS Secrets Manager, but there are additional considerations for production environments: + +### Security best practices + +1. **Use specific resource ARNs**: The resource policy should specify the exact ARN of the secret rather than using wildcards. + +2. **Implement secret rotation**: Set up automatic rotation for your secrets using Lambda functions to enhance security. + +3. **Use appropriate trust policies**: Customize IAM role trust policies based on the service that needs to access the secret (Lambda, ECS, etc.) rather than using EC2 as a generic service principal. + +4. **Add condition keys**: Use condition keys in your policies to further restrict access based on factors like source IP or requiring MFA. + +5. **Avoid plaintext secrets in commands**: When creating or updating secrets, consider using files or environment variables instead of typing secrets directly in the command line. + +### Architecture considerations + +1. **Implement caching**: To improve performance and reduce costs, implement client-side caching of secrets with appropriate TTL values. + +2. **Consider multi-region deployments**: For applications that operate in multiple regions, replicate secrets across regions to improve availability and reduce latency. + +3. **Set up monitoring**: Configure CloudTrail and CloudWatch to monitor and alert on suspicious access to your secrets. + +4. **Use infrastructure as code**: For production environments, manage your secrets using AWS CloudFormation or AWS CDK rather than manual CLI commands. + +For more information on AWS security best practices, see the [AWS Well-Architected Framework Security Pillar](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/welcome.html). 
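The caching consideration above is straightforward to prototype. The following is a minimal sketch that assumes the same `MyAPIKey` secret created earlier; the `SecretCache` class name and the 300-second TTL are illustrative choices, not part of any AWS API. For production workloads, the AWS-provided caching libraries (for example, the `aws-secretsmanager-caching` package for Python) are a more robust starting point.

```python
import json
import time
import boto3

class SecretCache:
    """Illustrative client-side cache: refetches a secret only after the TTL expires."""

    def __init__(self, secret_id, region_name, ttl_seconds=300):
        self._client = boto3.session.Session().client(
            service_name="secretsmanager", region_name=region_name
        )
        self._secret_id = secret_id
        self._ttl = ttl_seconds
        self._value = None
        self._fetched_at = 0.0

    def get(self):
        # Serve the cached value while it is still fresh
        if self._value is not None and (time.time() - self._fetched_at) < self._ttl:
            return self._value
        # Otherwise fetch the current version from Secrets Manager
        response = self._client.get_secret_value(SecretId=self._secret_id)
        self._value = json.loads(response["SecretString"])
        self._fetched_at = time.time()
        return self._value

# Usage: repeated calls within the TTL hit the cache instead of the API
cache = SecretCache("MyAPIKey", region_name="us-east-1", ttl_seconds=300)
credentials = cache.get()
client_id = credentials["ClientID"]
```

Choose a TTL short enough that rotated credentials propagate quickly; a common refinement is to invalidate the cache and refetch once when the downstream service rejects the cached credentials.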
+ +## Clean up resources + +To avoid ongoing charges, delete the resources you created in this tutorial: + +```bash +# Delete the secret +aws secretsmanager delete-secret \ + --secret-id "MyAPIKey" \ + --force-delete-without-recovery + +# Delete the IAM roles +aws iam delete-role --role-name "RoleToRetrieveSecretAtRuntime" + +aws iam detach-role-policy \ + --role-name "SecretsManagerAdmin" \ + --policy-arn "arn:aws:iam::aws:policy/SecretsManagerReadWrite" + +aws iam delete-role --role-name "SecretsManagerAdmin" +``` + +## Next steps + +Now that you've learned how to move hardcoded secrets to AWS Secrets Manager, consider these next steps: + +* Implement [automatic rotation for your secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) to enhance security +* Learn how to [cache secrets in your application](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets.html) to improve performance and reduce costs +* For multi-region applications, explore [replicating secrets across regions](https://docs.aws.amazon.com/secretsmanager/latest/userguide/replicate-secrets.html) to improve latency +* Use [Amazon CodeGuru Reviewer](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/welcome.html) to find hardcoded secrets in your Java and Python applications +* Learn about different ways to [grant permissions to secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_resource-policies.html) using resource-based policies diff --git a/tuts/073-aws-secrets-manager-gs/aws-secrets-manager-gs.sh b/tuts/073-aws-secrets-manager-gs/aws-secrets-manager-gs.sh new file mode 100755 index 00000000..977095e6 --- /dev/null +++ b/tuts/073-aws-secrets-manager-gs/aws-secrets-manager-gs.sh @@ -0,0 +1,252 @@ +#!/bin/bash + +# Script to move hardcoded secrets to AWS Secrets Manager +# This script demonstrates how to create IAM roles, store a secret in AWS Secrets Manager, +# and set up appropriate permissions + +# Set up logging +LOG_FILE="secrets_manager_tutorial.log" +exec > >(tee -a "$LOG_FILE") 2>&1 + +echo "Starting AWS Secrets Manager tutorial script at $(date)" +echo "======================================================" + +# Function to check for errors in command output +check_error() { + local output=$1 + local cmd=$2 + + if echo "$output" | grep -i "error" > /dev/null; then + echo "ERROR: Command failed: $cmd" + echo "$output" + cleanup_resources + exit 1 + fi +} + +# Function to generate a random identifier +generate_random_id() { + echo "sm$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1)" +} + +# Function to clean up resources +cleanup_resources() { + echo "" + echo "===========================================" + echo "RESOURCES CREATED" + echo "===========================================" + + if [ -n "$SECRET_NAME" ]; then + echo "Secret: $SECRET_NAME" + fi + + if [ -n "$RUNTIME_ROLE_NAME" ]; then + echo "IAM Role: $RUNTIME_ROLE_NAME" + fi + + if [ -n "$ADMIN_ROLE_NAME" ]; then + echo "IAM Role: $ADMIN_ROLE_NAME" + fi + + echo "" + echo "===========================================" + echo "CLEANUP CONFIRMATION" + echo "===========================================" + echo "Do you want to clean up all created resources? (y/n): " + read -r CLEANUP_CHOICE + + if [[ "$CLEANUP_CHOICE" =~ ^[Yy]$ ]]; then + echo "Cleaning up resources..." 
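        # Deletion order is the reverse of creation: the secret first, then the
        # runtime role (which has no attached policies), and finally the admin
        # role, whose managed policy must be detached before the role can be deleted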
+ + # Delete secret if it exists + if [ -n "$SECRET_NAME" ]; then + echo "Deleting secret: $SECRET_NAME" + aws secretsmanager delete-secret --secret-id "$SECRET_NAME" --force-delete-without-recovery + fi + + # Detach policies and delete runtime role if it exists + if [ -n "$RUNTIME_ROLE_NAME" ]; then + echo "Deleting IAM role: $RUNTIME_ROLE_NAME" + aws iam delete-role --role-name "$RUNTIME_ROLE_NAME" + fi + + # Detach policies and delete admin role if it exists + if [ -n "$ADMIN_ROLE_NAME" ]; then + echo "Detaching policy from role: $ADMIN_ROLE_NAME" + aws iam detach-role-policy --role-name "$ADMIN_ROLE_NAME" --policy-arn "arn:aws:iam::aws:policy/SecretsManagerReadWrite" + + echo "Deleting IAM role: $ADMIN_ROLE_NAME" + aws iam delete-role --role-name "$ADMIN_ROLE_NAME" + fi + + echo "Cleanup completed." + else + echo "Resources will not be deleted." + fi +} + +# Trap to ensure cleanup on script exit +trap 'echo "Script interrupted. Running cleanup..."; cleanup_resources' INT TERM + +# Generate random identifiers for resources +ADMIN_ROLE_NAME="SecretsManagerAdmin-$(generate_random_id)" +RUNTIME_ROLE_NAME="RoleToRetrieveSecretAtRuntime-$(generate_random_id)" +SECRET_NAME="MyAPIKey-$(generate_random_id)" + +echo "Using the following resource names:" +echo "Admin Role: $ADMIN_ROLE_NAME" +echo "Runtime Role: $RUNTIME_ROLE_NAME" +echo "Secret Name: $SECRET_NAME" +echo "" + +# Step 1: Create IAM roles +echo "Creating IAM roles..." + +# Create the SecretsManagerAdmin role +echo "Creating admin role: $ADMIN_ROLE_NAME" +ADMIN_ROLE_OUTPUT=$(aws iam create-role \ + --role-name "$ADMIN_ROLE_NAME" \ + --assume-role-policy-document '{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "ec2.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] + }') + +check_error "$ADMIN_ROLE_OUTPUT" "create-role for admin" +echo "$ADMIN_ROLE_OUTPUT" + +# Attach the SecretsManagerReadWrite policy to the admin role +echo "Attaching SecretsManagerReadWrite policy to admin role" +ATTACH_POLICY_OUTPUT=$(aws iam attach-role-policy \ + --role-name "$ADMIN_ROLE_NAME" \ + --policy-arn "arn:aws:iam::aws:policy/SecretsManagerReadWrite") + +check_error "$ATTACH_POLICY_OUTPUT" "attach-role-policy for admin" +echo "$ATTACH_POLICY_OUTPUT" + +# Create the RoleToRetrieveSecretAtRuntime role +echo "Creating runtime role: $RUNTIME_ROLE_NAME" +RUNTIME_ROLE_OUTPUT=$(aws iam create-role \ + --role-name "$RUNTIME_ROLE_NAME" \ + --assume-role-policy-document '{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": "ec2.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] + }') + +check_error "$RUNTIME_ROLE_OUTPUT" "create-role for runtime" +echo "$RUNTIME_ROLE_OUTPUT" + +# Wait for roles to be fully created +echo "Waiting for IAM roles to be fully created..." +sleep 10 + +# Step 2: Create a secret in AWS Secrets Manager +echo "Creating secret in AWS Secrets Manager..." + +CREATE_SECRET_OUTPUT=$(aws secretsmanager create-secret \ + --name "$SECRET_NAME" \ + --description "API key for my application" \ + --secret-string '{"ClientID":"my_client_id","ClientSecret":"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"}') + +check_error "$CREATE_SECRET_OUTPUT" "create-secret" +echo "$CREATE_SECRET_OUTPUT" + +# Get AWS account ID +echo "Getting AWS account ID..." 
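# The account ID is needed to build the runtime role's ARN, which becomes the
# principal in the secret's resource policy constructed below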
+ACCOUNT_ID_OUTPUT=$(aws sts get-caller-identity --query "Account" --output text) +check_error "$ACCOUNT_ID_OUTPUT" "get-caller-identity" +ACCOUNT_ID=$ACCOUNT_ID_OUTPUT +echo "Account ID: $ACCOUNT_ID" + +# Add resource policy to the secret +echo "Adding resource policy to secret..." +RESOURCE_POLICY=$(cat <