Validate HA configs #11
Some thoughts on this one: we need a test system to run continuous tests against a cluster definition (e.g. our examples), especially HA tests:
Part of this sounds out of reach for Archon, so it's potentially a new tool. Maybe a script that makes use of Archon. Maybe your new test framework too.
Will https://github.com/kubernetes/test-infra be helpful?
Possibly. But due to the lack of documentation, we have limited knowledge of what it is and how it should be used. For a deployed cluster, the most useful tests are e2e tests, which can be run remotely and are perfect for this case. However, after looking through the conformance test code, I suspect they don't really cover the issues listed above.
Can we build something like Chaos Monkey to simulate failures while watching overall service health? We could leave it running for a while and gather some stats. I think all the failure cases can be covered by e2e tests; maybe we just need a way to detect service disruption.
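The "detect service disruption" step could be as simple as a poller that records probe outcomes while a chaos tool injects failures. A minimal sketch follows; the `measure_disruption` name, the returned dict shape, and the injectable `probe`/`clock`/`sleep` hooks are all assumptions for illustration, not part of any existing tool:

```python
import time

def measure_disruption(probe, duration_s=60.0, interval_s=1.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Repeatedly call probe() (True means the service answered) and
    summarize how often it failed over the measurement window.

    Hypothetical sketch: run this against a service health endpoint
    while nodes are being killed, then inspect the failure count and
    availability ratio afterwards. probe/clock/sleep are injectable
    so the loop can be exercised without a live cluster.
    """
    ok = failed = 0
    deadline = clock() + duration_s
    while clock() < deadline:
        if probe():
            ok += 1
        else:
            failed += 1
        sleep(interval_s)
    total = ok + failed
    return {"probes": total, "failed": failed,
            "availability": ok / total if total else 1.0}
```

In practice `probe` might be an HTTP GET against the service's health endpoint (e.g. `lambda: requests.get(url).ok`, assuming the `requests` library); a sustained run of failed probes during a node kill would indicate real service disruption rather than transient noise.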
Yes, but that's only one of the steps. If we want to automate the whole thing, we will need to automate the setup of the cluster in a real environment (Aliyun or AWS) with Archon, as well as the deployment of all these test-related tools. Jenkins is an obvious option. Maybe we can start by automating the setup of a staging 3-node cluster on Aliyun and running e2e tests on it.
How about we set up a cluster with Archon running in it? Each test case could define its own cluster in YAML files. In the Jenkinsfile, it would set up a new cluster with kubectl, launch the test in the newly created cluster, and collect the result. After the test ends, it could tear down the cluster using kubectl delete.
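The per-test-case lifecycle described above (create with kubectl, run tests, tear down with kubectl delete) could be sketched like this. The `run_ha_testcase` helper and its injectable `run` parameter are assumptions for illustration; only the kubectl apply/delete flow comes from the discussion:

```python
import subprocess

def run_ha_testcase(cluster_yaml, run=subprocess.run):
    """Hypothetical sketch of one test case's lifecycle: create the
    cluster from its YAML definition, run the e2e tests, then tear
    the cluster down again even if the tests fail.

    The Archon controller (assumed to be running in the management
    cluster that kubectl points at) would provision real machines
    from the applied definition.
    """
    run(["kubectl", "apply", "-f", cluster_yaml], check=True)
    try:
        pass  # ... wait for node readiness, then run e2e tests here ...
    finally:
        # Deleting the same objects asks Archon to tear the cluster down.
        run(["kubectl", "delete", "-f", cluster_yaml], check=True)
```

In a Jenkinsfile the apply, test, and delete steps would likely be separate stages, with teardown in a post block so it runs even when the test stage fails.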
Why not just use --local? |
Because then we wouldn't have to set up credentials locally?
I think it's acceptable to put a credential with limited privileges in the CI system for testing purposes. However, if the intermediate cluster you mentioned is also part of what's being tested, it somewhat makes sense.
There should be no service disruption when: