bug: Passive healthcheck can't be disabled #3304
Comments
btw, in https://getkong.org/docs/0.12.x/health-checks-circuit-breakers/ the target enable URL
@mengskysama Thanks for the report. Pushed Kong/docs.konghq.com#627 for now - we will look at your PR soon!
@mengskysama I left a comment in your PR! @thibaultcha thank you for the doc fix!
@hishamhm Thanks for your reply! Passive health checks can be disabled on a single instance, but not in a cluster. Here is code that can help reproduce the problem. Modify it to match your environment.
Delete all upstreams and APIs before running.
output
Please let me know if you can reproduce this problem.
@mengskysama Thank you for the test case! I will try to reproduce it.
In the upstream event handler, `create_balancer` was being called with the object received via the event, which contains "id" and "name" only, and not the entire entity table containing the rest of the upstream fields. This caused it to create a healthchecker with an empty configuration (ignoring the user's configuration), which then fell back to the lua-resty-healthcheck defaults. This fix obtains the proper entity object from the id and passes it to `create_balancer`. A regression test is included, which spawns two Kong instances and reproduces the error scenario described by @mengskysama. Fixes #3304. From #3319.
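The bug and the fix described in the commit message above can be illustrated with a minimal sketch. This is not Kong's actual Lua code; the names (`UPSTREAMS`, `create_balancer`, the event handlers) are hypothetical stand-ins for the real datastore lookup and balancer creation:

```python
# Hypothetical sketch of the bug: the cluster event payload carries only
# "id" and "name", so building the balancer directly from it loses the
# user's healthcheck configuration and falls back to library defaults.

UPSTREAMS = {  # stand-in for the upstreams datastore
    "u1": {"id": "u1", "name": "svc",
           "healthchecks": {"passive": {"enabled": False}}},
}

def create_balancer(upstream):
    # any missing fields fall back to the healthcheck library defaults
    hc = upstream.get("healthchecks") or {"passive": {"enabled": True}}
    return {"name": upstream["name"], "healthchecks": hc}

def on_upstream_event_buggy(event):
    # BUG: passes the sparse event payload straight through
    return create_balancer(event)

def on_upstream_event_fixed(event):
    # FIX: look up the full entity by id before creating the balancer
    return create_balancer(UPSTREAMS[event["id"]])

event = {"id": "u1", "name": "svc"}  # what the other node receives
print(on_upstream_event_buggy(event)["healthchecks"]["passive"]["enabled"])  # True (default)
print(on_upstream_event_fixed(event)["healthchecks"]["passive"]["enabled"])  # False (user config)
```

This is why the symptom only appeared in a cluster: on the node that handled the Admin API call, the full entity was available locally, while other nodes only ever saw the sparse event payload.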
@mengskysama I reproduced your test case and submitted a PR with the fix! Thank you once again! If possible, please download the PR branch (
@hishamhm Great! I double-checked
Awesome, thank you!
The regression test for issue #3304 was flaky because it launched two Kong nodes and waited for the second one to be ready by reading the logs. This is not a reliable way of determining whether a node is immediately ready to proxy a configured route. Reversing the order of proxy calls in the test made it fail more consistently, which helped debug the issue. This changes the check to verify whether the router has been rebuilt, using a dummy route to trigger the router rebuild before the proper test starts. (Thanks @thibaultcha for the idea!) The changes are also backported to `spec-old-api/`. From #3454.
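The readiness pattern described in that commit message can be sketched as follows. This is a hypothetical illustration, not the actual test helper: `FakeNode` simulates a Kong node whose router becomes ready after a few polls, and `wait_until_routing` polls a dummy route instead of tailing logs:

```python
# Hypothetical sketch: poll a dummy route until the node's router has
# been rebuilt, rather than inferring readiness from log output.

import time

class FakeNode:
    """Stand-in for a Kong node whose router rebuilds after a few polls."""
    def __init__(self, rebuilds_after=3):
        self._polls_left = rebuilds_after

    def proxy(self, path):
        if self._polls_left > 0:
            self._polls_left -= 1
            return 404   # router not rebuilt yet: route is unknown
        return 200       # dummy route now resolves: router is ready

def wait_until_routing(node, path="/warmup-dummy", timeout=5.0, interval=0.01):
    # Keep hitting the dummy route until it resolves or we time out.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if node.proxy(path) == 200:
            return True
        time.sleep(interval)
    return False

node = FakeNode()
assert wait_until_routing(node)  # only now start the real test traffic
```

The key design point is that the readiness signal is the same mechanism the test depends on (route proxying), so there is no gap between "looks ready" and "is ready".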
Summary
Upstream passive healthcheck can't be disabled.
Steps To Reproduce
Additional Details & Logs
Kong version (0.12.3)