Refactor the main API struct #283

Merged: 1 commit merged into master from refactor-cluster-config on Oct 26, 2018

Conversation

@errordeveloper (Member) commented Oct 24, 2018

Description

These changes will allow for:

  • multiple nodegroups
  • public and private subnets
  • taskmanager tasks for nodegroup(s) deletion
  • cleanup of nodegroup terminology

This is a big change, but it shouldn't affect user-visible functionality at all.

Checklist

  • Code compiles correctly (i.e. make build)
  • All tests passing (i.e. make test)
@errordeveloper (Member, Author) commented Oct 24, 2018

I'm running integration tests now. Would be great to see some reviews, as it's a big change, and we can use the opportunity to rename/restructure things etc.

@@ -32,7 +32,8 @@ var (
 )

 func createClusterCmd() *cobra.Command {
-	cfg := &api.ClusterConfig{}
+	cfg := api.NewClusterConfig()
+	ng := &cfg.NodeGroups[0]

@errordeveloper (Member, Author) commented Oct 24, 2018

This is not ideal, but what would be better?

Some options:

  • ng := cfg.GetDefaultNodeGroup()
  • cfg, ng := api.NewClusterConfig()
  • cfg.DefaultNodeGroup as a pointer to first element of cfg.NodeGroups

Hard to decide.

@errordeveloper (Member, Author) commented Oct 24, 2018

This is only needed here and in a few other places where default nodegroup fields need to be set, so something simple that looks like a proper API would do; cfg.GetDefaultNodeGroup() seems just fine. What do people think?

@Lazyshot (Collaborator) commented Oct 24, 2018

I agree: having a GetDefaultNodeGroup() would be ideal, or at the very least a GetNodeGroup(id) (NodeGroup, error).

Or perhaps remove the creation of the default node group from the NewClusterConfig call and have ng := cfg.CreateDefaultNodeGroup()

@errordeveloper (Member, Author) commented Oct 24, 2018

@Lazyshot you are right, let's make it a distinct step!
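
To make that "distinct step" concrete, here is a rough sketch in Go of what a NewClusterConfig / NewNodeGroup constructor pair could look like; the types, fields, and naming scheme below are assumptions for illustration, not eksctl's actual api package:

```go
package api

import "fmt"

// NodeGroup holds per-nodegroup settings (illustrative fields only).
type NodeGroup struct {
	Name            string
	DesiredCapacity int
}

// ClusterConfig is the main API struct (illustrative fields only).
type ClusterConfig struct {
	ClusterName string
	NodeGroups  []*NodeGroup
}

// NewClusterConfig returns a config with no nodegroups, so creating the
// first ("default") nodegroup becomes an explicit, separate step.
func NewClusterConfig() *ClusterConfig {
	return &ClusterConfig{}
}

// NewNodeGroup appends a new nodegroup to the config and returns it.
func (c *ClusterConfig) NewNodeGroup() *NodeGroup {
	ng := &NodeGroup{Name: fmt.Sprintf("ng-%d", len(c.NodeGroups))}
	c.NodeGroups = append(c.NodeGroups, ng)
	return ng
}
```

A caller would then write cfg := api.NewClusterConfig() followed by ng := cfg.NewNodeGroup(), which is the shape the final diff further down settles on.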

@errordeveloper changed the title from "Refactor the main API struct" to "WIP: Refactor the main API struct" on Oct 24, 2018

@errordeveloper (Member, Author) commented Oct 24, 2018

Marked WIP as integration tests failed.

@errordeveloper force-pushed the refactor-cluster-config branch from aff744a to 216a104 on Oct 24, 2018

@errordeveloper force-pushed the refactor-cluster-config branch from 216a104 to 5579070 on Oct 24, 2018

@errordeveloper (Member, Author) commented Oct 24, 2018

I think there is still a fair bit of refactoring needed to consistently represent multiple nodegroups. I'll try to have a go at it tomorrow, but it might be a thing for another PR.

@errordeveloper force-pushed the refactor-cluster-config branch from 5579070 to 91d685d on Oct 24, 2018

@errordeveloper (Member, Author) commented Oct 24, 2018

Actually, it turned out to be easy enough to remove the single "initial" nodegroup assumption, but comments and log messages still need to be updated.

@errordeveloper force-pushed the refactor-cluster-config branch 2 times, most recently from 92d32e4 to d0a6007 on Oct 24, 2018

@errordeveloper force-pushed the refactor-cluster-config branch 5 times, most recently from 48d8cec to 7f6bb22 on Oct 25, 2018

@errordeveloper (Member, Author) commented Oct 25, 2018

Need to re-run integration tests; if all is well, I'll remove the WIP tag.

@errordeveloper (Member, Author) commented Oct 25, 2018

Not so happy yet:

=== RUN   TestCreateIntegration
Running Suite: (Integration) Create, Get, Scale & Delete
========================================================
Random Seed: 1540463467
Will run 12 of 12 specs

• [SLOW TEST:956.681 seconds]
(Integration) Create, Get, Scale & Delete
/Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:51
  when creating a cluster with 1 node
  /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:70
    should not return an error
    /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:71
------------------------------
•••
------------------------------
• [SLOW TEST:23.726 seconds]
(Integration) Create, Get, Scale & Delete
/Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:51
  when creating a cluster with 1 node
  /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:70
    and we create a deployment using kubectl
    /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:126
      should deploy the service to the cluster
      /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:142
------------------------------
••
------------------------------
• [SLOW TEST:114.749 seconds]
(Integration) Create, Get, Scale & Delete
/Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:51
  when creating a cluster with 1 node
  /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:70
    and scale the cluster
    /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:187
      should not return an error
      /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:188
------------------------------
• [SLOW TEST:33.347 seconds]
(Integration) Create, Get, Scale & Delete
/Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:51
  when creating a cluster with 1 node
  /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:70
    and scale the cluster
    /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:187
      should make it 2 nodes total
      /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:206
------------------------------
• [SLOW TEST:22.701 seconds]
(Integration) Create, Get, Scale & Delete
/Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:51
  when creating a cluster with 1 node
  /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:70
    and deleting the cluster
    /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:219
      should not return an error
      /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:220
------------------------------
•
------------------------------
• Failure [0.651 seconds]
(Integration) Create, Get, Scale & Delete
/Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:51
  when creating a cluster with 1 node
  /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:70
    and deleting the cluster
    /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:219
      should have deleted the required cloudformation stacks [It]
      /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:252

      Expected NOT to find a Cloudformation stack named eksctl-hilarious-unicorn-1540463468-cluster but it found

      /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:257
------------------------------


Summarizing 1 Failure:

[Fail] (Integration) Create, Get, Scale & Delete when creating a cluster with 1 node and deleting the cluster [It] should have deleted the required cloudformation stacks
/Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:257

Ran 12 of 12 Specs in 1156.546 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 0 Skipped
--- FAIL: TestCreateIntegration (1156.55s)
FAIL
FAIL    github.com/weaveworks/eksctl/integration        1156.576s
make: *** [integration-test] Error 1

@errordeveloper force-pushed the refactor-cluster-config branch from 7f6bb22 to efad73a on Oct 25, 2018

@errordeveloper (Member, Author) commented Oct 25, 2018

Turns out ListReadyStacks was the issue. It didn't return the nodegroup stack because its status was UPDATE_COMPLETE, which ListReadyStacks doesn't handle yet - not sure if it should.
Here we need stacks of any kind (even ones with a dodgy status, e.g. failed updates), so we want to list all stacks and filter out only the ones that have been deleted. I've applied a fix (basically, use ListStacks) and am re-running the integration tests now.
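
For illustration, a minimal sketch of that kind of fix using the AWS SDK for Go: list stacks in every status and skip only the ones that have already been deleted. The function name and the interface it takes are assumptions for the example, not eksctl's actual stack manager code:

```go
package manager

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudformation"
	"github.com/aws/aws-sdk-go/service/cloudformation/cloudformationiface"
)

// listAllNonDeletedStacks pages through ListStacks and keeps stacks in any
// status (including failed updates), filtering out only deleted stacks.
func listAllNonDeletedStacks(cfn cloudformationiface.CloudFormationAPI) ([]*cloudformation.StackSummary, error) {
	var stacks []*cloudformation.StackSummary
	err := cfn.ListStacksPages(&cloudformation.ListStacksInput{},
		func(page *cloudformation.ListStacksOutput, _ bool) bool {
			for _, s := range page.StackSummaries {
				if aws.StringValue(s.StackStatus) == cloudformation.StackStatusDeleteComplete {
					continue // skip stacks that have already been deleted
				}
				stacks = append(stacks, s)
			}
			return true // keep paging
		})
	return stacks, err
}
```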

@errordeveloper force-pushed the refactor-cluster-config branch from efad73a to dfccefb on Oct 25, 2018

@errordeveloper (Member, Author) commented Oct 25, 2018

=== RUN   TestCreateIntegration
Running Suite: (Integration) Create, Get, Scale & Delete
========================================================
Random Seed: 1540465964
Will run 12 of 12 specs

• [SLOW TEST:891.016 seconds]
(Integration) Create, Get, Scale & Delete
/Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:51
  when creating a cluster with 1 node
  /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:70
    should not return an error
    /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:71
------------------------------
•••
------------------------------
• [SLOW TEST:12.745 seconds]
(Integration) Create, Get, Scale & Delete
/Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:51
  when creating a cluster with 1 node
  /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:70
    and we create a deployment using kubectl
    /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:126
      should deploy the service to the cluster
      /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:142
------------------------------
••
------------------------------
• [SLOW TEST:96.492 seconds]
(Integration) Create, Get, Scale & Delete
/Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:51
  when creating a cluster with 1 node
  /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:70
    and scale the cluster
    /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:187
      should not return an error
      /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:188
------------------------------
• [SLOW TEST:33.326 seconds]
(Integration) Create, Get, Scale & Delete
/Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:51
  when creating a cluster with 1 node
  /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:70
    and scale the cluster
    /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:187
      should make it 2 nodes total
      /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:206
------------------------------
• [SLOW TEST:718.169 seconds]
(Integration) Create, Get, Scale & Delete
/Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:51
  when creating a cluster with 1 node
  /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:70
    and deleting the cluster
    /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:219
      should not return an error
      /Users/ilya/Code/eksctl/src/github.com/weaveworks/eksctl/integration/creategetdelete_test.go:220
------------------------------
••
Ran 12 of 12 Specs in 1757.606 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestCreateIntegration (1757.61s)
PASS
ok      github.com/weaveworks/eksctl/integration        1757.643s

@errordeveloper changed the title from "WIP: Refactor the main API struct" to "Refactor the main API struct" on Oct 25, 2018

@errordeveloper referenced this pull request on Oct 25, 2018 (2 of 2 tasks complete)

// HasSuffiecentPublicSubnets validates if there is a suffiecent
// number of subnets available to create a cluster
func (c ClusterVPC) HasSuffiecentPublicSubnets() bool {

@Lazyshot (Collaborator) commented Oct 26, 2018

Nitpick here, but just noticed the typo: Sufficient

@errordeveloper (Member, Author) commented Oct 26, 2018

Thanks - it's one of the words I always forget how to spell!

Refactor the main API struct

This will accommodate:

- multiple nodegroups
- public and private subnets
- use taskmanager tasks for nodegroup(s) deletion
- cleanup nodegroup terminology

@errordeveloper force-pushed the refactor-cluster-config branch from dfccefb to 9c0d8a6 on Oct 26, 2018

@stefanprodan (Member) left a comment

LGTM. Great work, Ilya!

@richardcase (Collaborator) left a comment

This looks really good, a great change that will help a lot with future changes. The structural and naming changes are a real benefit. Great work @errordeveloper 🍾

-type task func(chan error) error
+type task struct {
+	call func(chan error, interface{}) error
+	data interface{}

@richardcase (Collaborator) commented Oct 26, 2018

Great addition, this is along the lines of something I wanted to add.
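
To make the shape of the new task type concrete, here is a small, self-contained sketch of how tasks carrying their own data payload might be driven; this is an illustration under assumed semantics (the returned error signals a failure to start, the channel signals completion of the asynchronous work), not eksctl's actual taskmanager:

```go
package main

import "fmt"

type task struct {
	call func(chan error, interface{}) error
	data interface{}
}

// runTasks starts every task, then waits for each started task to signal
// completion on its channel, collecting any errors along the way.
func runTasks(tasks []task) []error {
	var errs []error
	done := make([]chan error, len(tasks))
	for i, t := range tasks {
		done[i] = make(chan error, 1)
		if err := t.call(done[i], t.data); err != nil {
			errs = append(errs, err) // failed to start
			done[i] = nil
		}
	}
	for _, ch := range done {
		if ch == nil {
			continue
		}
		if err := <-ch; err != nil {
			errs = append(errs, err) // asynchronous work failed
		}
	}
	return errs
}

func main() {
	t := task{
		call: func(done chan error, data interface{}) error {
			go func() {
				fmt.Println("deleting nodegroup:", data) // pretend async work
				done <- nil                              // signal completion
			}()
			return nil
		},
		data: "ng-1",
	}
	fmt.Println("errors:", runTasks([]task{t}))
}
```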

@@ -32,13 +32,14 @@ var (
 )

 func createClusterCmd() *cobra.Command {
-	cfg := &api.ClusterConfig{}
+	cfg := api.NewClusterConfig()
+	ng := cfg.NewNodeGroup()

@richardcase (Collaborator) commented Oct 26, 2018

Perfect! 👍

@@ -89,13 +99,13 @@ func doDeleteCluster(cfg *api.ClusterConfig, name string) error {

 	if clusterErr {
 		if handleIfError(ctl.DeprecatedDeleteControlPlane(), "control plane") {
-			handleIfError(stackManager.DeprecatedDeleteStackControlPlane(waitDelete), "stack control plane")
+			handleIfError(stackManager.DeprecatedDeleteStackControlPlane(waitDelete), "stack control plane (deprecated)")

@richardcase (Collaborator) commented Oct 26, 2018

At what point can we delete the deprecated stuff? v0.2.0?

@errordeveloper (Member, Author) commented Oct 26, 2018

Yes, that seems like the best plan!

@@ -28,7 +29,7 @@ func scaleNodeGroupCmd() *cobra.Command {

 	fs.StringVarP(&cfg.ClusterName, "name", "n", "", "EKS cluster name")

-	fs.IntVarP(&cfg.Nodes, "nodes", "N", -1, "total number of nodes (scale to this number)")
+	fs.IntVarP(&ng.DesiredCapacity, "nodes", "N", -1, "total number of nodes (scale to this number)")

@richardcase (Collaborator) commented Oct 26, 2018

Really like the general renaming of struct fields. The consistency is a real help.

@errordeveloper (Member, Author) commented Oct 26, 2018

@stefanprodan @richardcase thanks for the reviews!

@errordeveloper merged commit 7339396 into master on Oct 26, 2018

2 checks passed:

  • GolangCI: No issues found!
  • ci/circleci: make-eksctl-image: Your tests passed on CircleCI!

@errordeveloper deleted the refactor-cluster-config branch on Oct 26, 2018

@errordeveloper referenced this pull request on Oct 26, 2018 (3 of 4 tasks complete)