
Output status messages are missing info #912

Closed
derrickburns opened this issue Jun 19, 2019 · 1 comment · Fixed by #913

Comments

@derrickburns

commented Jun 19, 2019

What happened?
When I create a cluster using a config file, the region name and cluster name are missing from the status messages.

What you expected to happen?
I expect to see the region name and the cluster name, just as if I had specified them on the command line.

How to reproduce it?
Create a cluster from a config file and look at the output.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: qae
  region: us-west-2
  version: "1.13"

nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 5
    ssh:
      publicKeyPath: ~/.ssh/aws-tidepool-derrickburns.pub
    iam:
      withAddonPolicies:
        certManager: true

Versions

$ eksctl version
 version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.1.37"}
$ uname -a
Darwin TP-MBP 18.6.0 Darwin Kernel Version 18.6.0: Thu Apr 25 23:16:27 PDT 2019; root:xnu-4903.261.4~2/RELEASE_X86_64 x86_64
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-07T09:55:27Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-eks-d69f1b", GitCommit:"d69f1bf3669bf00b7f4a758e978e0e7a1e3a68f7", GitTreeState:"clean", BuildDate:"2019-02-28T20:26:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

Logs

+ eksctl create cluster --config-file=config.yaml --kubeconfig kubeconfig.yaml
[ℹ]  using region
[ℹ]  setting availability zones to [us-west-2a us-west-2d us-west-2b]
[ℹ]  subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for us-west-2d - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for us-west-2b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "ng-1" will use "ami-0f11fd98b02f12a4c" [AmazonLinux2/1.13]
[ℹ]  using SSH public key "/Users/derrickburns/.ssh/aws-tidepool-derrickburns.pub" as "eksctl--nodegroup-ng-1-5a:13:38:5e:a3:a6:20:54:78:52:90:65:02:da:9a:38"
[ℹ]  creating EKS cluster "" in "" region
[ℹ]  1 nodegroup (ng-1) was included
[ℹ]  will create a CloudFormation stack for cluster itself and 1 nodegroup stack(s)
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region= --name='
[ℹ]  2 sequential tasks: { create cluster control plane "qae", create nodegroup "ng-1" }
[ℹ]  building cluster stack "eksctl-qae-cluster"
[ℹ]  deploying stack "eksctl-qae-cluster"
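
For reference, the expected behaviour is that these messages interpolate the name and region parsed from the config file's metadata section, rather than only values supplied via CLI flags. A minimal sketch of that idea in Go, using hypothetical ClusterMeta/ClusterConfig types for illustration, not eksctl's actual API:

package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

// ClusterMeta and ClusterConfig mirror the metadata section of the config
// file above; they are illustrative types, not eksctl's actual internals.
type ClusterMeta struct {
	Name    string `yaml:"name"`
	Region  string `yaml:"region"`
	Version string `yaml:"version"`
}

type ClusterConfig struct {
	Metadata ClusterMeta `yaml:"metadata"`
}

func main() {
	raw := []byte("metadata:\n  name: qae\n  region: us-west-2\n  version: \"1.13\"\n")

	var cfg ClusterConfig
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}

	// Status messages should use the values parsed from the config file,
	// not only CLI flags, so they are never empty when --config-file is used.
	fmt.Printf("[ℹ]  using region %s\n", cfg.Metadata.Region)
	fmt.Printf("[ℹ]  creating EKS cluster %q in %q region\n", cfg.Metadata.Name, cfg.Metadata.Region)
}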

@errordeveloper errordeveloper self-assigned this Jun 20, 2019

@errordeveloper errordeveloper referenced this issue Jun 20, 2019

@errordeveloper errordeveloper added this to the 0.1.39 milestone Jun 20, 2019

@braderhart


commented Jun 21, 2019

Whatever you did here also fixed an issue for me where the wrong node AMI version was being used with version 1.13.

@errordeveloper errordeveloper modified the milestones: 0.2.0, 0.1.38 Jul 2, 2019
