
Add support for Milan (eu-south-1) and Cape Town (af-south-1) regions #2490

Merged · 1 commit · Jul 30, 2020

Conversation

@martina-if (Contributor) commented on Jul 29, 2020

  • Manually tested
  • Added labels for change area (e.g. area/nodegroup), target version (e.g. version/0.12.0) and kind (e.g. kind/improvement)
  • Made sure the title of the PR is a good description that can go into the release notes

@martina-if added the kind/feature (New feature or request) label on Jul 29, 2020
@martina-if (Contributor, Author) commented:

The following was run after enabling the regions in the AWS console (My Account -> AWS Regions).
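
As a side note (not part of this change), the opt-in status of the two regions can also be confirmed from the AWS CLI before running eksctl, assuming the CLI is configured for the same account:

$ aws ec2 describe-regions --all-regions \
    --query "Regions[?RegionName=='eu-south-1' || RegionName=='af-south-1'].[RegionName,OptInStatus]" \
    --output table

Both rows should report opted-in before the clusters below can be created.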

Create a cluster in Milan:
$ eksctl create cluster --region eu-south-1 -N 1
[ℹ]  eksctl version 0.26.0-dev+2b347d92.2020-07-29T15:38:30Z
[ℹ]  using region eu-south-1
[ℹ]  setting availability zones to [eu-south-1b eu-south-1a eu-south-1c]
[ℹ]  subnets for eu-south-1b - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for eu-south-1a - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for eu-south-1c - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "ng-1adb0959" will use "ami-0331b4c1ee09d17ff" [AmazonLinux2/1.17]
[ℹ]  using Kubernetes version 1.17
[ℹ]  creating EKS cluster "amazing-creature-1596030828" in "eu-south-1" region with un-managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-south-1 --cluster=amazing-creature-1596030828'
[ℹ]  CloudWatch logging will not be enabled for cluster "amazing-creature-1596030828" in "eu-south-1"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=eu-south-1 --cluster=amazing-creature-1596030828'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "amazing-creature-1596030828" in "eu-south-1"
[ℹ]  2 sequential tasks: { create cluster control plane "amazing-creature-1596030828", 2 sequential sub-tasks: { no tasks, create nodegroup "ng-1adb0959" } }
[ℹ]  building cluster stack "eksctl-amazing-creature-1596030828-cluster"
[ℹ]  deploying stack "eksctl-amazing-creature-1596030828-cluster"
[ℹ]  building nodegroup stack "eksctl-amazing-creature-1596030828-nodegroup-ng-1adb0959"
[ℹ]  --nodes-min=1 was set automatically for nodegroup ng-1adb0959
[ℹ]  --nodes-max=1 was set automatically for nodegroup ng-1adb0959
[ℹ]  deploying stack "eksctl-amazing-creature-1596030828-nodegroup-ng-1adb0959"
[ℹ]  waiting for the control plane availability...
[✔]  saved kubeconfig as "/home/martina/.kube/config"
[ℹ]  no tasks
[✔]  all EKS cluster resources for "amazing-creature-1596030828" have been created
[ℹ]  adding identity "arn:aws:iam::083751696308:role/eksctl-amazing-creature-159603082-NodeInstanceRole-RBJCHEHTD6BT" to auth ConfigMap
[ℹ]  nodegroup "ng-1adb0959" has 0 node(s)
[ℹ]  waiting for at least 1 node(s) to become ready in "ng-1adb0959"
[ℹ]  nodegroup "ng-1adb0959" has 1 node(s)
[ℹ]  node "ip-192-168-22-55.eu-south-1.compute.internal" is ready
[ℹ]  kubectl command should work with "/home/martina/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "amazing-creature-1596030828" in "eu-south-1" region is ready

$ eksctl get cluster --region eu-south-1
NAME				REGION
amazing-creature-1596030828	eu-south-1
$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          3m
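
For reference, roughly the same invocation expressed as an eksctl ClusterConfig file (the names here are made up; the CLI run above generated random ones), which eksctl would accept via eksctl create cluster -f milan-cluster.yaml:

# milan-cluster.yaml -- hypothetical equivalent of the CLI flags above, not part of this PR
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: milan-test           # assumed cluster name
  region: eu-south-1

nodeGroups:
  - name: ng-1               # assumed nodegroup name
    desiredCapacity: 1       # matches -N 1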

Create a cluster in Cape Town:
$ eksctl create cluster --region af-south-1 -N 1
[ℹ]  eksctl version 0.26.0-dev+2b347d92.2020-07-29T15:38:30Z
[ℹ]  using region af-south-1
[ℹ]  setting availability zones to [af-south-1a af-south-1c af-south-1b]
[ℹ]  subnets for af-south-1a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for af-south-1c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for af-south-1b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "ng-1194ce12" will use "ami-05d5868477b7dffd6" [AmazonLinux2/1.17]
[ℹ]  using Kubernetes version 1.17
[ℹ]  creating EKS cluster "ridiculous-mongoose-1596032464" in "af-south-1" region with un-managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=af-south-1 --cluster=ridiculous-mongoose-1596032464'
[ℹ]  CloudWatch logging will not be enabled for cluster "ridiculous-mongoose-1596032464" in "af-south-1"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=af-south-1 --cluster=ridiculous-mongoose-1596032464'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "ridiculous-mongoose-1596032464" in "af-south-1"
[ℹ]  2 sequential tasks: { create cluster control plane "ridiculous-mongoose-1596032464", 2 sequential sub-tasks: { no tasks, create nodegroup "ng-1194ce12" } }
[ℹ]  building cluster stack "eksctl-ridiculous-mongoose-1596032464-cluster"
[ℹ]  deploying stack "eksctl-ridiculous-mongoose-1596032464-cluster"
[ℹ]  building nodegroup stack "eksctl-ridiculous-mongoose-1596032464-nodegroup-ng-1194ce12"
[ℹ]  --nodes-min=1 was set automatically for nodegroup ng-1194ce12
[ℹ]  --nodes-max=1 was set automatically for nodegroup ng-1194ce12
[ℹ]  deploying stack "eksctl-ridiculous-mongoose-1596032464-nodegroup-ng-1194ce12"
[ℹ]  waiting for the control plane availability...
[✔]  saved kubeconfig as "/home/martina/.kube/config"
[ℹ]  no tasks
[✔]  all EKS cluster resources for "ridiculous-mongoose-1596032464" have been created
[ℹ]  adding identity "arn:aws:iam::083751696308:role/eksctl-ridiculous-mongoose-159603-NodeInstanceRole-1PO8JTAZX6TD7" to auth ConfigMap
[ℹ]  nodegroup "ng-1194ce12" has 1 node(s)
[ℹ]  node "ip-192-168-54-169.af-south-1.compute.internal" is not ready
[ℹ]  waiting for at least 1 node(s) to become ready in "ng-1194ce12"
[ℹ]  nodegroup "ng-1194ce12" has 1 node(s)
[ℹ]  node "ip-192-168-54-169.af-south-1.compute.internal" is ready
[ℹ]  kubectl command should work with "/home/martina/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "ridiculous-mongoose-1596032464" in "af-south-1" region is ready

$ eksctl get cluster --region af-south-1
NAME				REGION
ridiculous-mongoose-1596032464	af-south-1

$ kubectl config current-context
martina@weave.works@ridiculous-mongoose-1596032464.af-south-1.eksctl.io


$ kubectl apply -f ../kube_resources/busybox_deployment.yaml
pod/busybox created

$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          8s
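
The busybox_deployment.yaml used above is not included in the PR; judging by the pod/busybox created output it creates a single Pod, so a minimal manifest along these lines (image tag and command are assumptions) would reproduce the smoke test:

# busybox_deployment.yaml -- hypothetical reconstruction for illustration
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - name: busybox
      image: busybox:1.32          # assumed tag
      command: ["sleep", "3600"]   # keep the container running for the check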
