[RHPAM-263] fix for Liveness probe for Business Central #40
Conversation
https://issues.jboss.org/browse/RHPAM-263: replace the probe script with an HTTP request against Business Central's home page.
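The described change can be sketched as a standard Kubernetes `httpGet` liveness probe. This is an illustrative fragment, not the actual PR diff; the port (`8080`) and path (`/`) are assumed values for Business Central's home page:

```yaml
# Hypothetical sketch: replace an exec probe script with an
# HTTP GET against Business Central's home page.
# Port 8080 and path "/" are assumptions, not taken from the PR.
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 180
```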
@calvinzhuca , can you please also apply similar change to the new templates/rhpam70-prod.yaml template that was recently added? Thanks!
@calvinzhuca When you're done, can you please squash your commits into one? Thanks again!
Signed-off-by: calvin zhu <calvinzhu@hotmail.com>
@calvinzhuca @errantepiphany Just a question: isn't a 180 second initial delay too much? In my experience the Workbench usually starts in around 60-70 seconds. Wouldn't it make more sense to set the delay to 120 seconds, so the Workbench pod starts faster while still keeping some time buffer?
@sutaakar I would be okay with that if you want to submit a PR which adjusts the time. @calvinzhuca , what do you think?
I think 180 seconds is a reasonable time; I discussed this value with Babak. Leaving the time a little longer also benefits the client if anything goes wrong: they have enough time to check the log before the pod is restarted automatically.
@calvinzhuca Ok. In that case, wouldn't it make sense to create a custom probe to check readiness and liveness, as mentioned in the JIRA? The current implementation forces the user to wait even when the Workbench has actually started.
@calvinzhuca @errantepiphany @sutaakar I actually had concerns about this and negotiated Calvin down from 5 mins to 180 seconds :-). Now that I'm looking further into this and hear Karel's concern, I agree. We don't want to kill the pod if it hasn't started in 120 seconds, but we also shouldn't have to wait an extra 1+ minute to use it, even though it's ready. Instead of a high initialDelaySeconds, we can have higher values for failureThreshold and timeoutSeconds to avoid unnecessary pod restarts, without preventing access to pods that are ready to use.
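The trade-off suggested above could be sketched as follows: keep `initialDelaySeconds` low so probing starts early, and raise `failureThreshold` and `timeoutSeconds` so a slow-starting pod is still tolerated. All values here are illustrative assumptions, not figures from the PR:

```yaml
# Illustrative values only: a lower initial delay combined with a
# higher failure threshold tolerates slow startups without waiting
# the full delay on pods that come up quickly.
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 60   # start probing earlier
  timeoutSeconds: 5         # allow slow probe responses
  periodSeconds: 15
  failureThreshold: 8       # 60s + 8 * 15s = 180s before a restart
```

With these numbers the pod still gets up to 180 seconds in total before being restarted, but a pod that starts in 60-70 seconds is not penalized by a long fixed delay.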
Signed-off-by: calvin zhu <calvinzhu@hotmail.com>
Thanks for submitting your Pull Request!
Please make sure your PR meets the following requirements:
- The Pull Request title is properly formatted: [CLOUD-XYA] Subject
- Your contribution follows the guidelines in CONTRIBUTING.md
- Your commits are signed off (Signed-off-by: Your Name <yourname@example.com>) - use `git commit -s`