Extend e2e testing #129

Closed
mrIncompetent opened this issue Mar 12, 2018 · 7 comments

@mrIncompetent (Contributor)

We should add the following test cases:

  • Hetzner
    • Ubuntu + Docker 1.13
    • Ubuntu + Docker 17.03
    • Ubuntu + CRI-O 1.9
  • DigitalOcean
    • Ubuntu + Docker 1.13
    • Ubuntu + Docker 17.03
    • Ubuntu + CRI-O 1.9
    • CoreOS + Docker 1.13
    • CoreOS + Docker 17.03
  • AWS
    • Ubuntu + Docker 1.13
    • Ubuntu + Docker 17.03
    • Ubuntu + CRI-O 1.9
    • CoreOS + Docker 1.13
    • CoreOS + Docker 17.03
  • OpenStack (we need a sponsor here)
    • Ubuntu + Docker 1.13
    • Ubuntu + Docker 17.03
    • Ubuntu + CRI-O 1.9
    • CoreOS + Docker 1.13
    • CoreOS + Docker 17.03
@alvaroaleman (Contributor)

To keep the tests efficient and avoid repeating ourselves too much, I suggest putting this into the verify tool by:

  • Adding a config file that allows specifying a matrix just like the one above
  • Adding a --create-only flag that only creates the machine objects, so they are provisioned in parallel
  • Adding a --wait-for=<provider>-<distro>-<container-runtime> flag that blocks until the given machine has been created successfully and then deletes it, so CircleCI shows at a glance which scenario failed (see the sketch below)
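
A minimal sketch of how the proposed flags could be wired into the verify tool's CLI. The flag names come from the list above, but the scenarios.yaml file, the scenario struct, and everything else here are invented for illustration; the actual machine handling is left out.

```go
// Sketch only: hypothetical CLI surface for the suggested flags.
package main

import (
	"flag"
	"fmt"
	"log"
	"strings"
)

// scenario is one cell of the provider/distro/runtime matrix above (hypothetical struct).
type scenario struct {
	Provider, Distro, ContainerRuntime string
}

func main() {
	configPath := flag.String("config", "scenarios.yaml", "file describing the test matrix")
	createOnly := flag.Bool("create-only", false, "only create the machine objects so the clouds provision them in parallel")
	waitFor := flag.String("wait-for", "", "block until <provider>-<distro>-<container-runtime> is up, then delete it")
	flag.Parse()

	switch {
	case *createOnly:
		// Here the tool would read *configPath and create one machine object per scenario.
		fmt.Printf("would create all machines listed in %s\n", *configPath)
	case *waitFor != "":
		parts := strings.SplitN(*waitFor, "-", 3)
		if len(parts) != 3 {
			log.Fatalf("--wait-for must look like <provider>-<distro>-<container-runtime>, got %q", *waitFor)
		}
		s := scenario{Provider: parts[0], Distro: parts[1], ContainerRuntime: parts[2]}
		// Here the tool would poll until the machine's node joined, then delete the
		// machine again, so each CircleCI step maps to exactly one scenario.
		fmt.Printf("would wait for %+v and then delete it\n", s)
	default:
		flag.Usage()
	}
}
```

With something along these lines, each CircleCI job would run one --wait-for invocation per matrix cell, so a failing scenario is visible directly from the failing step.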

@mrIncompetent (Contributor, Author)

Why not implement proper concurrency inside the command?
That might be cleaner than juggling different command-line calls.

@alvaroaleman (Contributor)

Because then you can't have a dedicated step per scenario in CircleCI, which helps a lot when tests fail and you want to find out which one did.

@mrIncompetent (Contributor, Author)

We could also do the setup (Terraform & cluster creation) in one step and then run all tests in parallel.
Then we wouldn't have to modify the command at all.

p0lyn0mial self-assigned this Apr 3, 2018
@p0lyn0mial (Contributor)

@alvaroaleman @mrIncompetent Could we slightly change the approach to writing test cases?

I'd like to start writing test cases as standard Go tests.
But at the same time I'd like to utilise what we have so far, that is: cluster creation (kubeadm) and the verify tool. The whole process would consist of the following steps:

  1. create cluster (provision_master.sh)
  2. run machine-controller (provision_master.sh)
  3. run the e2e tests (using the standard go test command)

Individual test cases would hold only the data/parameters needed to feed the verify tool. We could also utilise go test's built-in concurrency to parallelise tests, and flags like -short to control how many tests we run on each build.

We would also need to change the verify tool to check ownerRef instead of counting the number of nodes.
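
A minimal sketch of what such a table-driven test could look like. The package name, field names, and the verifyScenario helper are invented for illustration; verifyScenario stands in for the existing verify logic, including the ownerRef check mentioned above.

```go
// Sketch only: table-driven e2e test; verifyScenario is a placeholder for the verify logic.
package e2e

import "testing"

// scenarios lists cells of the matrix from the issue description (shortened here).
var scenarios = []struct {
	name, provider, distro, containerRuntime string
	runInShortMode                           bool // still run this case when -short is given
}{
	{"aws-ubuntu-docker-17.03", "aws", "ubuntu", "docker-17.03", true},
	{"digitalocean-coreos-docker-1.13", "digitalocean", "coreos", "docker-1.13", false},
	// ...
}

// verifyScenario is a placeholder; the real implementation would create the machine
// via the verify tool's logic and check the ownerRef on the resulting node.
func verifyScenario(provider, distro, containerRuntime string) error {
	return nil
}

func TestMachineCreation(t *testing.T) {
	for _, tc := range scenarios {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel() // let go test run the scenarios concurrently
			if testing.Short() && !tc.runInShortMode {
				t.Skip("skipping long-running scenario in -short mode")
			}
			if err := verifyScenario(tc.provider, tc.distro, tc.containerRuntime); err != nil {
				t.Fatal(err)
			}
		})
	}
}
```

With this layout, go test -short runs only the scenarios marked for short mode, a full run exercises the whole matrix, and t.Parallel() provides the concurrency mentioned above.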

@mrIncompetent (Contributor, Author)

Good idea!

@alvaroaleman (Contributor)

Yes, sounds like a very good idea.
