setup pulumi sample for eks #2

Merged · 2 commits · May 15, 2023
2 changes: 2 additions & 0 deletions elastic-kubernetes-service/.gitignore
@@ -0,0 +1,2 @@
*.pyc
venv/
68 changes: 68 additions & 0 deletions elastic-kubernetes-service/Pulumi.localstack.yaml
@@ -0,0 +1,68 @@
config:
  aws:accessKey: test
  aws:endpoints:
    - acm: http://localhost:4566
      amplify: http://localhost:4566
      apigateway: http://localhost:4566
      apigatewayv2: http://localhost:4566
      applicationautoscaling: http://localhost:4566
      appsync: http://localhost:4566
      athena: http://localhost:4566
      autoscaling: http://localhost:4566
      batch: http://localhost:4566
      cloudformation: http://localhost:4566
      cloudfront: http://localhost:4566
      cloudsearch: http://localhost:4566
      cloudtrail: http://localhost:4566
      cloudwatch: http://localhost:4566
      cloudwatchevents: http://localhost:4566
      cloudwatchlogs: http://localhost:4566
      codecommit: http://localhost:4566
      cognitoidentity: http://localhost:4566
      cognitoidp: http://localhost:4566
      docdb: http://localhost:4566
      dynamodb: http://localhost:4566
      ec2: http://localhost:4566
      ecr: http://localhost:4566
      ecs: http://localhost:4566
      eks: http://localhost:4566
      elasticache: http://localhost:4566
      elasticbeanstalk: http://localhost:4566
      elb: http://localhost:4566
      emr: http://localhost:4566
      es: http://localhost:4566
      firehose: http://localhost:4566
      glacier: http://localhost:4566
      glue: http://localhost:4566
      iam: http://localhost:4566
      iot: http://localhost:4566
      kafka: http://localhost:4566
      kinesis: http://localhost:4566
      kinesisanalytics: http://localhost:4566
      kms: http://localhost:4566
      lambda: http://localhost:4566
      mediastore: http://localhost:4566
      neptune: http://localhost:4566
      organizations: http://localhost:4566
      qldb: http://localhost:4566
      rds: http://localhost:4566
      redshift: http://localhost:4566
      route53: http://localhost:4566
      s3: http://localhost:4566
      sagemaker: http://localhost:4566
      secretsmanager: http://localhost:4566
      servicediscovery: http://localhost:4566
      ses: http://localhost:4566
      sns: http://localhost:4566
      sqs: http://localhost:4566
      ssm: http://localhost:4566
      stepfunctions: http://localhost:4566
      sts: http://localhost:4566
      swf: http://localhost:4566
      transfer: http://localhost:4566
      xray: http://localhost:4566
  aws:region: us-east-1
  aws:s3ForcePathStyle: 'true'
  aws:secretKey: test
  aws:skipCredentialsValidation: 'true'
  aws:skipRequestingAccountId: 'true'
11 changes: 11 additions & 0 deletions elastic-kubernetes-service/Pulumi.yaml
@@ -0,0 +1,11 @@
name: aws-py-eks
runtime:
  name: python
  options:
    virtualenv: venv
description: A minimal AWS Python EKS example cluster
template:
  config:
    aws:region:
      description: The AWS region to deploy into
      default: us-east-1
60 changes: 60 additions & 0 deletions elastic-kubernetes-service/README.md
@@ -0,0 +1,60 @@
# Deploying an EKS Cluster using Pulumi on LocalStack

In this example, we will demonstrate how to deploy an AWS EKS cluster using Pulumi on LocalStack. With the help of the Pulumi Python SDK, we declaratively provision AWS resources and infrastructure locally on LocalStack; the same program can also be deployed against the real AWS cloud.

## Prerequisites

- LocalStack
- Pulumi & `pulumilocal` CLI
- Docker
- `awslocal` CLI
- `kubectl`

## Starting up

Start LocalStack via:

```bash
localstack start -d
```

Create a new Pulumi stack via:

```bash
pulumilocal stack init python-eks-testing
```

Set the AWS region to `us-east-1` via:

```bash
pulumilocal config set aws:region us-east-1
```

## Deploying the stack

To preview and deploy the stack, run:

```bash
pulumilocal up
```

You can view the deployed EKS cluster by running:

```bash
awslocal eks list-clusters
```

## Authenticating with the cluster

Next, update your kubeconfig, authenticate to the Kubernetes cluster, and verify that you have API access and running nodes with the following commands:

```bash
awslocal eks update-kubeconfig --name <CLUSTER_NAME>
kubectl get nodes
```

Replace `<CLUSTER_NAME>` with the name of your EKS cluster.
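
If you prefer not to look the name up manually, a small sketch like the following should also work, assuming the stack exports `cluster-name` as `__main__.py` does (`pulumilocal` forwards subcommands such as `stack output` to the Pulumi CLI):

```bash
CLUSTER_NAME=$(pulumilocal stack output cluster-name)
awslocal eks update-kubeconfig --name "$CLUSTER_NAME"
kubectl get nodes
```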

## License

This code is available under the Apache 2.0 license.
39 changes: 39 additions & 0 deletions elastic-kubernetes-service/__main__.py
@@ -0,0 +1,39 @@
import iam
import vpc
import utils
import pulumi
from pulumi_aws import eks

## EKS Cluster

eks_cluster = eks.Cluster(
    'eks-cluster',
    role_arn=iam.eks_role.arn,
    tags={
        'Name': 'pulumi-eks-cluster',
    },
    vpc_config=eks.ClusterVpcConfigArgs(
        public_access_cidrs=['0.0.0.0/0'],
        security_group_ids=[vpc.eks_security_group.id],
        subnet_ids=vpc.subnet_ids,
    ),
)

## EKS Node Group

eks_node_group = eks.NodeGroup(
    'eks-node-group',
    cluster_name=eks_cluster.name,
    node_group_name='pulumi-eks-nodegroup',
    node_role_arn=iam.ec2_role.arn,
    subnet_ids=vpc.subnet_ids,
    tags={
        'Name': 'pulumi-cluster-nodeGroup',
    },
    scaling_config=eks.NodeGroupScalingConfigArgs(
        desired_size=2,
        max_size=2,
        min_size=1,
    ),
)

pulumi.export('cluster-name', eks_cluster.name)
pulumi.export('kubeconfig', utils.generate_kube_config(eks_cluster))
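
The two exports at the bottom make the cluster name and kubeconfig available outside the program. As a rough sketch, assuming the stack has already been deployed with `pulumilocal up`, the kubeconfig export can be consumed directly (kubeconfig files are YAML, and YAML is a superset of JSON, so the JSON output is accepted by `kubectl`):

```bash
pulumilocal stack output kubeconfig > kubeconfig.json
KUBECONFIG=./kubeconfig.json kubectl get nodes
```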
72 changes: 72 additions & 0 deletions elastic-kubernetes-service/iam.py
@@ -0,0 +1,72 @@
import json

from pulumi_aws import iam

## EKS Cluster Role

eks_role = iam.Role(
    'eks-iam-role',
    assume_role_policy=json.dumps({
        'Version': '2012-10-17',
        'Statement': [
            {
                'Action': 'sts:AssumeRole',
                'Principal': {
                    'Service': 'eks.amazonaws.com'
                },
                'Effect': 'Allow',
                'Sid': ''
            }
        ],
    }),
)

iam.RolePolicyAttachment(
    'eks-service-policy-attachment',
    role=eks_role.id,
    policy_arn='arn:aws:iam::aws:policy/AmazonEKSServicePolicy',
)

iam.RolePolicyAttachment(
    'eks-cluster-policy-attachment',
    role=eks_role.id,
    policy_arn='arn:aws:iam::aws:policy/AmazonEKSClusterPolicy',
)

## EC2 NodeGroup Role

ec2_role = iam.Role(
    'ec2-nodegroup-iam-role',
    assume_role_policy=json.dumps({
        'Version': '2012-10-17',
        'Statement': [
            {
                'Action': 'sts:AssumeRole',
                'Principal': {
                    'Service': 'ec2.amazonaws.com'
                },
                'Effect': 'Allow',
                'Sid': ''
            }
        ],
    }),
)

iam.RolePolicyAttachment(
    'eks-workernode-policy-attachment',
    role=ec2_role.id,
    policy_arn='arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy',
)

iam.RolePolicyAttachment(
    'eks-cni-policy-attachment',
    role=ec2_role.id,
    policy_arn='arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy',
)

iam.RolePolicyAttachment(
    'ec2-container-ro-policy-attachment',
    role=ec2_role.id,
    policy_arn='arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly',
)
2 changes: 2 additions & 0 deletions elastic-kubernetes-service/requirements.txt
@@ -0,0 +1,2 @@
pulumi>=3.5.1,<4.0.0
pulumi-aws>=5.0.0,<6.0.0
41 changes: 41 additions & 0 deletions elastic-kubernetes-service/utils.py
@@ -0,0 +1,41 @@
import pulumi


def generate_kube_config(eks_cluster):
    """Build a kubeconfig for the given EKS cluster as a Pulumi Output."""
    kubeconfig = pulumi.Output.json_dumps({
        "apiVersion": "v1",
        "clusters": [{
            "cluster": {
                "server": eks_cluster.endpoint,
                "certificate-authority-data": eks_cluster.certificate_authority.apply(lambda v: v.data),
            },
            "name": "kubernetes",
        }],
        "contexts": [{
            "context": {
                "cluster": "kubernetes",
                "user": "aws",
            },
            "name": "aws",
        }],
        "current-context": "aws",
        "kind": "Config",
        "users": [{
            "name": "aws",
            "user": {
                "exec": {
                    "apiVersion": "client.authentication.k8s.io/v1beta1",
                    "command": "aws-iam-authenticator",
                    "args": [
                        "token",
                        "-i",
                        eks_cluster.endpoint,
                    ],
                },
            },
        }],
    })
    return kubeconfig
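
`Output.json_dumps` waits for nested `Output` values (such as `eks_cluster.endpoint`) to resolve before serializing, so the exported kubeconfig is a plain JSON string. A minimal, hypothetical illustration of that behavior, standalone and not part of the sample itself, which would run inside a Pulumi program:

```python
import pulumi

# Wrap a plain value as an Output, then serialize a structure containing it.
name = pulumi.Output.from_input("demo-cluster")
doc = pulumi.Output.json_dumps({"cluster": name})

# Once resolved during deployment, this logs: {"cluster": "demo-cluster"}
doc.apply(lambda s: pulumi.log.info(s))
```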
85 changes: 85 additions & 0 deletions elastic-kubernetes-service/vpc.py
@@ -0,0 +1,85 @@
from pulumi_aws import ec2, get_availability_zones

## VPC

vpc = ec2.Vpc(
    'eks-vpc',
    cidr_block='10.100.0.0/16',
    instance_tenancy='default',
    enable_dns_hostnames=True,
    enable_dns_support=True,
    tags={
        'Name': 'pulumi-eks-vpc',
    },
)

igw = ec2.InternetGateway(
    'vpc-ig',
    vpc_id=vpc.id,
    tags={
        'Name': 'pulumi-vpc-ig',
    },
)

eks_route_table = ec2.RouteTable(
    'vpc-route-table',
    vpc_id=vpc.id,
    routes=[ec2.RouteTableRouteArgs(
        cidr_block='0.0.0.0/0',
        gateway_id=igw.id,
    )],
    tags={
        'Name': 'pulumi-vpc-rt',
    },
)

## Subnets, one for each AZ in the region

zones = get_availability_zones()
subnet_ids = []

for zone in zones.names:
    vpc_subnet = ec2.Subnet(
        f'vpc-subnet-{zone}',
        assign_ipv6_address_on_creation=False,
        vpc_id=vpc.id,
        map_public_ip_on_launch=True,
        cidr_block=f'10.100.{len(subnet_ids)}.0/24',
        availability_zone=zone,
        tags={
            'Name': f'pulumi-sn-{zone}',
        },
    )
    ec2.RouteTableAssociation(
        f'vpc-route-table-assoc-{zone}',
        route_table_id=eks_route_table.id,
        subnet_id=vpc_subnet.id,
    )
    subnet_ids.append(vpc_subnet.id)

## Security Group

eks_security_group = ec2.SecurityGroup(
    'eks-cluster-sg',
    vpc_id=vpc.id,
    description='Allow all HTTP(s) traffic to EKS Cluster',
    tags={
        'Name': 'pulumi-cluster-sg',
    },
    ingress=[
        ec2.SecurityGroupIngressArgs(
            cidr_blocks=['0.0.0.0/0'],
            from_port=443,
            to_port=443,
            protocol='tcp',
            description='Allow pods to communicate with the cluster API Server.',
        ),
        ec2.SecurityGroupIngressArgs(
            cidr_blocks=['0.0.0.0/0'],
            from_port=80,
            to_port=80,
            protocol='tcp',
            description='Allow internet access to pods',
        ),
    ],
)
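
Since one subnet is created per availability zone, the subnet CIDRs step through `10.100.0.0/24`, `10.100.1.0/24`, and so on. After `pulumilocal up`, the networking pieces can be inspected against LocalStack; a sketch, assuming the default `us-east-1` region and the `Name` tags used above:

```bash
awslocal ec2 describe-vpcs --filters Name=tag:Name,Values=pulumi-eks-vpc
awslocal ec2 describe-subnets --filters Name=tag:Name,Values='pulumi-sn-*'
```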