Add http -> https redirect to load balancer #5

Draft
wants to merge 9 commits into base: main
15 changes: 15 additions & 0 deletions .github/workflows/shellcheck.yml
@@ -0,0 +1,15 @@
name: 'Lint Jobs'

on:
  push:
    branches:
      - master

jobs:
  shellcheck:
    name: Shellcheck
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run ShellCheck
        uses: ludeeus/action-shellcheck@master
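For local review, a rough equivalent of this job can be run before pushing, assuming the shellcheck binary is installed (e.g. from your distribution's package manager) and you are at the repository root:

# Lint every tracked shell script, roughly what ludeeus/action-shellcheck does in CI.
git ls-files '*.sh' | xargs -r shellcheck

The action scans the whole checkout by default, so a clean local run should generally mean a green CI job.
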
16 changes: 8 additions & 8 deletions clean.sh
@@ -4,7 +4,7 @@
# This is an unfinished script that will delete the cluster cloudformation along with any other loose ends
# Execute it with a tag to identify which cluster to delete such as "./clean.sh tag"

if grep '^[-0-9a-zA-Z]*$' <<<$1 && [ ! -z "$1" ];
if grep '^[-0-9a-zA-Z]*$' <<< "$1" && [ -n "$1" ];
then echo "Tag is valid";
else echo "Tag must be alphanumeric." && exit 1;
fi
@@ -13,14 +13,14 @@ TAG=$1
PROJECT="umsi-easy-hub"

aws ec2 delete-key-pair --key-name "$PROJECT-$TAG"
rm "$PROJECT-$TAG.pem"
rm -f "$PROJECT-$TAG.pem"
rm -rf dist
aws cloudformation delete-stack --stack-name "umsi-easy-hub-${TAG}-cluster"
# aws cloudformation wait stack-delete-complete --stack-name "umsi-easy-hub-${TAG}-cluster"

# Step 2: manually delete the loadbalancer that was automatically generated by the helm chart
# otherwise, the control node cloudformation delete will fail because there are dependent resources still active

# Step 3: manually delete the control node cloudformation from the AWS console.
echo "Deleting cluster and waiting for deletion"
aws cloudformation delete-stack --stack-name "umsi-easy-hub-${TAG}-cluster"
aws cloudformation wait stack-delete-complete --stack-name "umsi-easy-hub-${TAG}-cluster"
echo "Deleting control node and waiting for deletion"
aws cloudformation delete-stack --stack-name "umsi-easy-hub-${TAG}-control-node"
aws cloudformation wait stack-delete-complete --stack-name "umsi-easy-hub-${TAG}-control-node"
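
For context, the tightened guard above only accepts tags made of letters, digits, and hyphens; hypothetical invocations of the updated script:

./clean.sh prod-01      # passes validation; deletes the cluster and control-node stacks and waits on each
./clean.sh 'bad tag!'   # fails validation; prints "Tag must be alphanumeric." and exits 1 before touching AWS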


14 changes: 13 additions & 1 deletion deploy.py
100644 → 100755
@@ -1,3 +1,5 @@
#!/usr/bin/env python

# This script deploys the control node CloudFormation, which will then
# automatically deploy and configure the cluster CloudFormation and kubernetes
# deployment.
@@ -21,6 +23,7 @@
s3 = boto3.client('s3')
cloudformation = boto3.client('cloudformation')


def generate_ssh_key(config):
"""Generate an SSH key pair from EC2."""
name = "{}-{}".format(config["project"], config["tag"])
@@ -74,7 +77,7 @@ def create_control_node(config):
template_data = template_fileobj.read()
cloudformation.validate_template(TemplateBody=template_data)

response = cloudformation.create_stack(
cloudformation.create_stack(
TemplateBody=template_data,
StackName=stack_name(config),
Parameters=[{'ParameterKey': 'BillingTag',
@@ -87,6 +90,9 @@
{'ParameterKey': 'KeyName',
'ParameterValue': config['ssh_key_name'],
'UsePreviousValue': False},
{'ParameterKey': 'Domain',
'ParameterValue': config['domain'],
'UsePreviousValue': False},
{'ParameterKey': 'Tag',
'ParameterValue': config['tag'],
'UsePreviousValue': False}],
@@ -125,6 +131,11 @@ def fail(msg):
default="umsi-easy-hub",
help="name of project, used in all AWS resources")

parser.add_argument(
"--domain",
required=True,
help="The FQDN which will host the hub")

args = parser.parse_args()

# We plan to allow different names, but this project name is hard coded all
@@ -145,6 +156,7 @@ def fail(msg):
config = {}
config['tag'] = args.tag
config['project'] = args.project
config['domain'] = args.domain
config['account_id'] = boto3.client(
'sts').get_caller_identity().get('Account')
config['ssh_key_name'] = generate_ssh_key(config)
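
With --domain now required, a deployment run would look something like the following; hub.example.com is a placeholder for a domain you actually control, since the certificate added in cluster_cf.yaml below is DNS-validated:

# Hypothetical invocation; --tag is the script's existing deployment identifier.
./deploy.py --tag demo --domain hub.example.com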
62 changes: 47 additions & 15 deletions src/cluster_cf.yaml
@@ -105,6 +105,14 @@ Parameters:
    Default: ""
    Type: String

  Domain:
    Description: FQDN which this hub will be hosted on
    Type: String

  HostedZoneId:
    Description: ID of the hosted zone which contains the Domain
    Type: String

Resources:

EFS:
@@ -148,6 +156,25 @@
- Ref: ControlNodeSecurityGroup
- Ref: NodeSecurityGroup

  DNSRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      Name: !Ref Domain
      HostedZoneId: !Ref HostedZoneId
      Type: A
      AliasTarget:
        HostedZoneId: !GetAtt Alb.CanonicalHostedZoneID
        DNSName: !GetAtt Alb.DNSName

  DomainCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Ref Domain
      ValidationMethod: DNS
      DomainValidationOptions:
        - DomainName: !Ref Domain
          HostedZoneId: !Ref HostedZoneId

AlbSg:
Type: AWS::EC2::SecurityGroup
Properties:
@@ -168,28 +195,33 @@
ToPort: 65535
CidrIp: 0.0.0.0/0

# AlbListenerHttps:
# Type: AWS::ElasticLoadBalancingV2::Listener
# Properties:
# Certificates: [ CertificateArn: !Ref DomainCertificateArn]
# DefaultActions:
# - Type: forward
# TargetGroupArn:
# Ref: AlbTargetGroupHttps
# LoadBalancerArn:
# Ref: Alb
# Port: 443
# Protocol: HTTPS

AlbListenerHttp:
AlbListenerHttps:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
Certificates: [ CertificateArn: !Ref DomainCertificate]
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: AlbTargetGroupHttp
Ref: AlbTargetGroupHttps
LoadBalancerArn:
Ref: Alb
Port: 443
Protocol: HTTPS

AlbListenerHttp:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: redirect
RedirectConfig:
Protocol: "HTTPS"
Port: 443
Host: "#{host}"
Path: "/#{path}"
Query: "#{query}"
StatusCode: "HTTP_301"
LoadBalancerArn:
Ref: Alb
Port: 80
Protocol: HTTP
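
Once the stack is live, the listener pair can be spot-checked from any workstation; a minimal check, with hub.example.com standing in for the configured Domain:

# The HTTP listener should answer with a 301 whose Location points at the https:// URL...
curl -sI http://hub.example.com/ | grep -iE '^(HTTP|Location)'
# ...and the HTTPS listener should present the ACM certificate issued for the domain.
curl -sv https://hub.example.com/ -o /dev/null 2>&1 | grep -i 'subject:'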

8 changes: 8 additions & 0 deletions src/control_node_cf.yaml
@@ -20,6 +20,10 @@
Description: The EC2 Key Pair to allow SSH access to the master and node instances
Type: AWS::EC2::KeyPair::KeyName

  Domain:
    Description: FQDN which this hub will be hosted on
    Type: String

Resources:
VPC:
Type: AWS::EC2::VPC
@@ -256,3 +260,7 @@ Outputs:
  Instance:
    Description: The control node instance
    Value: !Ref ControlNode

  Domain:
    Description: FQDN which this hub will be hosted on
    Value: !Ref Domain
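
Exporting the domain also makes it easy to recover later; a sketch of reading it back with the AWS CLI, using the stack naming pattern from clean.sh and "demo" as a placeholder tag:

aws cloudformation describe-stacks \
  --stack-name umsi-easy-hub-demo-control-node \
  --query "Stacks[0].Outputs[?OutputKey=='Domain'].OutputValue" \
  --output text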
90 changes: 56 additions & 34 deletions src/control_node_startup_script.sh
@@ -4,36 +4,51 @@
set -x
exec > >(tee ~/user-data.log|logger -t user-data ) 2>&1

# Quit on error
set -e

# Sanity check of args
cd /home/ec2-user/
for X in "$@"
do
echo $X >> args.txt
for X in "$@"; do
echo "$X" >> args.txt
done

# Gather args passed to script
STACK_NAME=$1
TAG=$2
SCRIPT_BUCKET=$3
export STACK_NAME=$1
export TAG=$2
export SCRIPT_BUCKET=$3

# Ensure you are in the home directory of ec2-user
cd /home/ec2-user/
cd /home/ec2-user/ || exit
export HOME=/home/ec2-user/

# Include the local binaries in the path (this is where we will put the aws and kubectl binaries)
export PATH=/usr/local/bin/:$PATH && echo "export PATH=/usr/local/bin/:$PATH" >> ~/.bashrc

# Install aws cli v2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Install kubectl binary which will expose control plane configuration options
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo cp ./kubectl /usr/local/bin/kubectl

# Install eksctl for AWS-specific operations inside an EKS cluster
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

# Download files from s3
aws s3 cp --recursive s3://${SCRIPT_BUCKET}/ .
aws s3 cp --recursive "s3://$SCRIPT_BUCKET/" .

# Fetch the SSH key from the secret store
aws secretsmanager get-secret-value --secret-id umsi-easy-hub-${TAG}.pem \
--query SecretString --output text --region us-east-1 > umsi-easy-hub-${TAG}.pem
# export KEY_NAME="umsi-easy-hub-$TAG.pem"
# aws secretsmanager get-secret-value --secret-id $KEY_NAME \
# --query SecretString --output text > $KEY_NAME

# Install packages
sudo yum install python37 python37-pip -y
sudo pip3 install boto3
sudo pip3 install pyyaml
sudo pip3 install boto3 pyyaml

# Configure aws cli region
mkdir /home/ec2-user/.aws
@@ -42,40 +57,45 @@ sudo chown -R 1000:100 /home/ec2-user/.aws/

# Deploy cluster cloudformation stack. This includes the EKS, EFS, Autoscaler, and Loadbalancer
# This script needs output from the control node cloudformation stack
python3 deploy_cluster_cf.py --control-node-stackname ${STACK_NAME}
python3 deploy_cluster_cf.py --control-node-stackname "$STACK_NAME"

# Wait for the cluster cloudformation stack to complete before continuing...
aws cloudformation wait stack-create-complete --stack-name "umsi-easy-hub-${TAG}-cluster"
aws cloudformation wait stack-create-complete --stack-name "umsi-easy-hub-$TAG-cluster"

# Get output of cloudformation stack
output=($(python3 get_cluster_cf_output.py --cluster-stackname "umsi-easy-hub-${TAG}-cluster") )
IFS=" " read -r -a output <<< "$(python3 get_cluster_cf_output.py --cluster-stackname "umsi-easy-hub-$TAG-cluster")"
echo "${output[*]}"
export EKS_NAME="${output[0]}"
export NODE_ROLE_ARN="${output[1]}"
export ASG_ARN="${output[2]}"
# ${output[0]} = Tag
# ${output[1]} = EksName
# ${output[2]} = NodeRoleArn
# ${output[3]} = Asg

# Get kubectl binary which will expose control plane configuration options
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo cp ./kubectl /usr/local/bin/kubectl

# Get aws-iam-authenticator to authenticate kubectl binary with our EKS backplane
curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
sudo cp ./aws-iam-authenticator /usr/local/bin/aws-iam-authenticator

# Install aws cli
# echo yes | sudo amazon-linux-extras install python3
sudo rm /usr/bin/aws
pip3 install --upgrade awscli --user
sudo cp ~/.local/bin/aws /usr/bin/aws

# Sync kubectl with the EKS we want
aws eks update-kubeconfig --name ${output[1]}
aws eks update-kubeconfig --name "$EKS_NAME"
curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/aws-auth-cm.yaml
sed -i -e "s;<ARN of instance role (not instance profile)>;${output[2]};g" aws-auth-cm.yaml
sed -i -e "s;<ARN of instance role (not instance profile)>;$NODE_ROLE_ARN;g" aws-auth-cm.yaml
kubectl apply -f aws-auth-cm.yaml

# Upgrade some internal components of the cluster
eksctl utils update-kube-proxy --cluster "$EKS_NAME" --approve
eksctl utils update-aws-node --cluster "$EKS_NAME" --approve
eksctl utils update-coredns --cluster "$EKS_NAME" --approve

Collaborator comment:

Do you really need to update these components before installing the addon?
Since you are installing the latest Kube (1.19, right?), I would expect everything to be up to date...


aws eks create-addon \
--cluster-name "$EKS_NAME" \
--addon-name vpc-cni \
--addon-version v1.9.0 \
--service-account-role-arn "$NODE_ROLE_ARN" \
--resolve-conflicts OVERWRITE

# Install Helm per https://helm.sh/docs/intro/install/
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

@@ -90,7 +110,7 @@ helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
export RELEASE=jhub
export NAMESPACE=jhub
JUPYTERHUB_IMAGE="jupyterhub/jupyterhub"
export JUPYTERHUB_IMAGE="jupyterhub/jupyterhub"

# Create namespace because helm expects it to exist already.
kubectl create namespace $NAMESPACE
@@ -99,8 +119,10 @@ helm upgrade --install $RELEASE $JUPYTERHUB_IMAGE --namespace $NAMESPACE --versi
# Add in autoscaler
sudo touch /etc/cron.d/autoscale_daemon
sudo chmod 777 /etc/cron.d/autoscale_daemon
sudo echo "* * * * * ec2-user python3 /home/ec2-user/autoscale_daemon.py --asg=${output[3]}" >> /etc/cron.d/autoscale_daemon
sudo echo "* * * * * ec2-user sleep 15 && python3 /home/ec2-user/autoscale_daemon.py --asg=${output[3]}" >> /etc/cron.d/autoscale_daemon
sudo echo "* * * * * ec2-user sleep 30 && python3 /home/ec2-user/autoscale_daemon.py --asg=${output[3]}" >> /etc/cron.d/autoscale_daemon
sudo echo "* * * * * ec2-user sleep 45 && python3 /home/ec2-user/autoscale_daemon.py --asg=${output[3]}" >> /etc/cron.d/autoscale_daemon
echo "* * * * * ec2-user python3 /home/ec2-user/autoscale_daemon.py --asg=$ASG_ARN
* * * * * ec2-user sleep 15 && python3 /home/ec2-user/autoscale_daemon.py --asg=$ASG_ARN
* * * * * ec2-user sleep 30 && python3 /home/ec2-user/autoscale_daemon.py --asg=$ASG_ARN
* * * * * ec2-user sleep 45 && python3 /home/ec2-user/autoscale_daemon.py --asg=$ASG_ARN" | sudo tee -a /etc/cron.d/autoscale_daemon
sudo chmod 644 /etc/cron.d/autoscale_daemon

env
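
A side note on the cron change above: with the old form, sudo applies to echo but the >> append is performed by the calling (non-root) shell, which is presumably why the script previously widened the file to 777 before writing; piping through sudo tee -a lets a root-owned, mode-644 file be appended to directly. A minimal illustration (hypothetical file path):

# Fails on a root-owned, mode-644 file: the redirection is opened by the unprivileged shell, not by sudo.
sudo echo "* * * * * root /usr/bin/true" >> /etc/cron.d/example    # -> Permission denied
# Works: tee runs as root and opens the file itself.
echo "* * * * * root /usr/bin/true" | sudo tee -a /etc/cron.d/example > /dev/null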