
Real-World Tests as Integration Tests #80

Open
elm- opened this issue Oct 16, 2017 · 3 comments
Comments

@elm-
Contributor

elm- commented Oct 16, 2017

I've seen there are some tests already. I was wondering what their goal is at the moment? The reason I ask is to find out whether it makes sense to contribute the following:

I'm currently testing this for a production rollout for us. Key parts are rolling updates / upgrades of Kafka, node failures due to Kubernetes upgrades, etc., and verifying that everything recovers without an issue. I already have an internal test setup where services communicate via Kafka queues and compute checksums at the end to make sure everything worked. In between, I play chaos monkey to see whether any of the issues I inflict on the Kafka nodes have an impact.

I was thinking of building some automated integration tests that exercise different scenarios like this against the Kubernetes cluster. It's actually dead simple to simulate these kinds of things. Has anyone already considered this, or does anyone have thoughts on it?
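A minimal sketch of the checksum idea, using an in-process queue as a stand-in for a Kafka topic (in the real test this would be a producer/consumer pair against the cluster, with the chaos phase happening in between); the `digest` helper and message format are illustrative, not part of any existing setup:

```python
import hashlib
import queue
import threading

# Stand-in for a Kafka topic; in the real setup this would be a
# producer/consumer pair pointed at the cluster under test.
topic = queue.Queue()

NUM_MESSAGES = 1000
SENTINEL = None

def digest(messages):
    """Order-independent checksum over a set of messages."""
    h = hashlib.sha256()
    for m in sorted(messages):
        h.update(m.encode())
    return h.hexdigest()

def producer(sent):
    for i in range(NUM_MESSAGES):
        msg = f"payload-{i}"
        topic.put(msg)
        sent.append(msg)
    topic.put(SENTINEL)

def consumer(received):
    while True:
        msg = topic.get()
        if msg is SENTINEL:
            break
        received.append(msg)

sent, received = [], []
t1 = threading.Thread(target=producer, args=(sent,))
t2 = threading.Thread(target=consumer, args=(received,))
t1.start(); t2.start()
t1.join(); t2.join()

# After the chaos phase (broker restarts, pod rescheduling, ...) the
# checksums must match, i.e. no message was lost or duplicated.
print(digest(sent) == digest(received))  # → True
```

The order-independent digest matters because rebalances and retries can reorder delivery without that being a failure; loss or duplication, by contrast, changes the checksum.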

@solsson
Contributor

solsson commented Oct 16, 2017

Sounds very interesting. Increased test coverage is priority 1 for this repo, and in particular the form of resilience tests that you describe.

Tests based on the concept in #51 are essentially a way to document how to "smoke test" a resource. It works well for exploring things like the REST addon. The scope should be sanity checks and documentation. The benefit, but also the limitation, is that they don't require a dedicated image and don't depend on anyone's local environment.

That limitation is quite clear when testing for #78 using #79. Thus I'm working on a Java API based test for the same thing, currently in https://github.com/Yolean/kafka-test-failover.

All sorts of tests are welcome. How easy they are for others to adopt probably depends on:

  • Do they have any local dependencies? Or is it just kubectl apply -f?
  • What assumptions do they make about the Kafka setup? Are they specific to some hosting provider?
  • Do they require new docker images?
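For the "just kubectl apply -f" case, a test could be packaged as a Kubernetes Job that succeeds or fails on completion. This is only a hypothetical sketch (the namespace, image tag, and ZooKeeper address are assumptions about the setup, not taken from this repo):

```yaml
# Hypothetical smoke-test Job: completes successfully if the CLI
# command against the cluster succeeds, otherwise the Job fails.
apiVersion: batch/v1
kind: Job
metadata:
  name: kafka-smoke-test
  namespace: test-kafka
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: smoke
        image: solsson/kafka:latest  # assumption: an image with the Kafka CLI tools
        command:
        - ./bin/kafka-topics.sh
        - --zookeeper
        - zookeeper.kafka:2181  # assumption: in-cluster ZooKeeper service
        - --list
```

Checking the Job's completion status (`kubectl get jobs`) then serves as the pass/fail signal, with no local dependencies beyond kubectl.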

For new images, are those builds automated at Docker Hub, like https://hub.docker.com/r/solsson/kafka/? Automated builds are easy to fork, and to build in minikube using eval $(minikube docker-env).

How would you want to package your tests? Separate repositories, bridged to the Kafka setup using kubectl, would be an interesting approach for complex setups.

@solsson
Contributor

solsson commented Oct 19, 2017

I've tried to see if Kafka's own tests could be used here. They are nicely dockerized, but they require control over brokers. I see no obvious way to identify tests that are applicable with arbitrary bootstrap.servers. There's also https://cwiki.apache.org/confluence/display/KAFKA/System+Test+Improvements.

@solsson
Contributor

solsson commented Nov 9, 2017

Found https://github.com/linkedin/kafka-monitor now. I'll give it a try.

The status of https://github.com/Yolean/kafka-test-failover is that I can't get enough value out of it to motivate further development at this stage. I would like to test how clients are affected when broker pods are moved between nodes, for example during cluster upgrades. I think this scenario is quite unusual given the manual, broker-per-server approach of traditional Kafka ops (my interpretation of http://shop.oreilly.com/product/0636920044123.do).
