KIC 2.0 Integration Test Happy Path for v1.Ingress #1102
This has been replaced by an upstream project: https://github.com/kong/kubernetes-testing-framework
Previous Storer implementations have used client-go and its cache, but this implementation relies instead on the sigs.k8s.io/controller-runtime/client package as this has some benefits, including native type handling and smaller code footprint.
Codecov Report
@@            Coverage Diff             @@
##             next    #1102      +/-   ##
==========================================
+ Coverage   56.34%   57.13%   +0.79%
==========================================
  Files          35       34       -1
  Lines        3333     3278      -55
==========================================
- Hits         1878     1873       -5
+ Misses       1316     1266      -50
  Partials      139      139
Continue to review full report at Codecov.
Eventually all this should be supported by a container image, but for now it's just a simple bash script.
Co-Authored-By: Shane Utt <shaneutt@linux.com>
@rainest I added your test here: ac86ece The reason it was failing is that the proxy on the cluster has HTTPS disabled (explicitly), so the packets could never make it to the proxy server itself. To keep scope low, how would you feel about us implementing another test, and then adding HTTPS support for tests as a follow-up?
Co-authored-by: Michał Flendrich <m.flendrich@gmail.com>
NOTE: CI won't pass for now due to the lack of an HTTPS listen in the container image's proxy; waiting on @rainest for some comment feedback and we'll get that taken care of later. Feel free to continue reviewing even though CI is red 🖖
Alright, following more local testing, I'm confident in the fix portion of my comment on the redirect test, but am quite confused as to why everything else appears to have broken for me after the other changes while working fine in CI. The Kong deployment now frequently fails to spawn at all. The KIND container indicates that the job failed, though the logs there don't provide a whole lot of interesting info:
Shelling in and starting it manually succeeds, though the local controller instance apparently doesn't talk to it. Not sure how to see more info about that since it no longer logs to a temp file. Maybe that wasn't working before? Since I hadn't encountered issues previously, I hadn't tried to check. Simply reverting to bdde23e doesn't appear to fix that, however, so something else appears to have gone haywire on my machine.

Edit: after further toying around with the broken stuff, I found a (bad) path to making things work. If I shell into the KIND container more or less immediately and start Kong, it comes up quickly enough that the controller does manage to start talking to it in time, and both tests pass (with the fix).

Based on some unscientific log watching, systemd apparently tries to install Kong before the control plane is even online, which naturally fails, and it gives up after retrying too many times, so Kong never starts without manual intervention. What should have ensured that it waits until K8S is running, or at least kept retrying until it succeeded? Not sure why that was working fine for me the other day, since it still seems like that should have been entirely up to the KIND image, which didn't change.
Co-authored-by: Travis Raines <raines.travis@gmail.com>
Sounds like we have some race conditions and potentially fragile points in our provisioning logic that need to be sorted out 🤔 I haven't run into anything like this on my end, but the current systemd unit approach to pre-deploying the proxy is a hack until that can be baked into the image proper, so I think that's naturally where changes need to occur. I'll dig into this a little further today and do some testing to see if I can make the Kong deployment mechanism more resilient.
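One common way to make such a pre-deploy unit resilient is to gate it on the API server actually answering, rather than letting it fail fast and exhaust its restart budget. A hypothetical sketch of such a unit follows; the unit description, kubectl path, and manifest path are all made up for illustration and are not the repo's actual unit:

```ini
[Unit]
Description=Pre-deploy the Kong proxy into the test cluster (hypothetical)
# Order after the kubelet so the control plane has a chance to come up first.
After=kubelet.service
Wants=kubelet.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Block until the API server responds instead of failing fast; systemd's
# start timeout still bounds how long this can wait.
ExecStartPre=/bin/sh -c 'until kubectl version >/dev/null 2>&1; do sleep 2; done'
ExecStart=/usr/bin/kubectl apply -f /kong/all-in-one.yaml

[Install]
WantedBy=multi-user.target
```

The `Type=oneshot` plus `RemainAfterExit=yes` pairing keeps the unit "active" after the apply succeeds, so dependent units don't re-trigger it.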
After some investigation @rainest I have reason to believe our problems may relate to issues with image pulling that we've experienced elsewhere recently: in its current state the images are pulled on every test deploy (something that was intended to be taken care of in a later iteration). I'll continue digging in; it may be that we need to go ahead and do the intended follow-up now to improve this.
Conditional LGTM, assuming that #1114 happens before 2.0 GA
Co-authored-by: Michał Flendrich <m.flendrich@gmail.com>
kubernetes-sigs/kind#2143 is somewhat relevant to improving our kind images for integration tests.
This PR adds the happy path integration tests and includes some newly completed Kubernetes Testing Framework (KTF) functionality from upstream to provide a very basic Golang integration test for v1.Ingress. This resolves #1044.

- happy path integration test for v1.Ingress established
- .github/workflows and actions created to exercise the tests