
GKE self managed setup script should have an e2e test #760

Closed
KatrinaHoffert opened this issue May 24, 2019 · 6 comments
Labels
good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@KatrinaHoffert
Contributor

Note: should be viewed as dependent on #759.

The script has already been broken once, so there's certainly merit in a test to prevent that from happening again. However, it's difficult because the test will be slow and require many things:

  1. Script requires cluster ownership
  2. Ideally should have push access to some container registry
  3. Requires an existing cluster (so likely have to set one up or at least be dependent on another e2e test)
  4. Slow, because it needs to restart a cluster and, realistically, a test would also need to create an ingress, which takes a few minutes and uses GCP resources.

Things that could be done to make this task easier:

  1. Commands that we use to populate the gce.conf file could be split into their own functions and tested on their own; the larger test can then just use the CLI flags to override them.
  2. We could just check the output files we create and the commands we run: fake gcloud, kubectl, and make, and only test what they execute (see the sketch after this list). That would make it a unit test instead of an e2e test, but it would at least catch regressions and run much faster.
  3. We could use --dry-run with kubectl to at least confirm that our YAML is valid (since that's one thing that was broken before!).
  4. We could avoid creating any ingress resources at all. Just checking that the controller pod starts without errors would probably be sufficient when combined with other tests, though that's risky because the configuration flow is not the same as in those tests.
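
For what it's worth, here is a minimal sketch of what option 2 could look like in bash. The script invocation, the gce.conf location, and the assertions at the end are placeholders for illustration, not the script's real interface:

```bash
#!/usr/bin/env bash
# Hypothetical harness: stub out the external tools and assert on what the
# setup script tries to run, instead of touching real GCP resources.
set -euo pipefail

STUB_DIR="$(mktemp -d)"
CMD_LOG="${STUB_DIR}/commands.log"
: > "${CMD_LOG}"

# Write a stub for each external tool that records its invocation.
# A real harness would also need canned stdout for commands whose output the
# script consumes (e.g. `gcloud config get-value ...`).
for cmd in gcloud kubectl make; do
  cat > "${STUB_DIR}/${cmd}" <<EOF
#!/usr/bin/env bash
echo "${cmd} \$*" >> "${CMD_LOG}"
EOF
  chmod +x "${STUB_DIR}/${cmd}"
done

# Run the setup script with the stubs shadowing the real binaries.
# "./gke-self-managed.sh" is a placeholder path/invocation.
PATH="${STUB_DIR}:${PATH}" ./gke-self-managed.sh

# Assert on recorded commands and generated files, not on cluster state.
grep -q '^kubectl apply' "${CMD_LOG}"   # illustrative check
test -s gce.conf                        # illustrative check; real output path may differ
```

For option 3, kubectl could be left un-stubbed and the generated manifests run through something like `kubectl apply --dry-run -f <manifest>` (or `--dry-run=client` on newer kubectl), which would catch malformed YAML without touching the cluster.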
@KatrinaHoffert
Contributor Author

/good-first-issue
/kind feature
/cc KatrinaHoffert

@k8s-ci-robot
Contributor

@KatrinaHoffert:
This request has been marked as suitable for new contributors.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.

In response to this:

/good-first-issue
/kind feature
/cc KatrinaHoffert

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the kind/feature, good first issue, and help wanted labels on May 24, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Aug 22, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 21, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
