Disable refresh and add some warning to documentation #363
base: main
Conversation
I think this is an OK workaround for now.
Short term: maybe an admin endpoint to force a catalog refresh would be nice (so one could port-forward to the pod and trigger a refresh)?
Long term: we would have to think about a solution to keep the replicas in sync. Maybe you have some ideas.
It could be doable, maybe by restricting traffic to localhost only and providing a convenience shell script within the image. But I'm not sure that's worth it, or any more usable than simply deleting the pod.
I think that will be partly related to #359.
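To make the admin-endpoint idea above concrete, here is a minimal sketch using the JDK's built-in `com.sun.net.httpserver` (the real service would more likely expose this as a Spring endpoint). The class name, `refreshCatalogs()`, the path, and port 8081 are all hypothetical stand-ins; binding to the loopback address is what keeps the endpoint reachable only via `kubectl port-forward`:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class AdminRefreshEndpoint {
    // Hypothetical hook standing in for the real catalog refresh logic.
    static void refreshCatalogs() {
        System.out.println("catalog refresh triggered");
    }

    public static void main(String[] args) throws Exception {
        // Bind to loopback only, so the endpoint is unreachable from outside
        // the pod; an operator would `kubectl port-forward` to reach it.
        HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 8081), 0);
        server.createContext("/admin/refresh", exchange -> {
            refreshCatalogs();
            byte[] body = "refresh triggered\n".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

Since the socket never leaves localhost, this avoids the need to protect the endpoint with authentication, which was the concern about exposing a forced refresh in the first place.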
One possible way to deal with multiple pods with less pain could be a cron task (see https://medium.com/@bectorhimanshu/how-to-create-a-scheduled-task-using-a-cron-job-in-spring-boot-a1987e679d60) that schedules a refresh of the catalogs at the same time on all pods. If a pod restarts and the catalogs were updated between two cron executions, there is no magic fix unless you deal with some kind of creation timestamp in the Helm metadata, which is probably overkill given the risks.
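In Spring Boot the cron idea would be a `@Scheduled(cron = ...)` method; to keep this sketch self-contained it uses plain `java.util.concurrent` scheduling instead. `refreshCatalogs()` is a hypothetical stand-in, and aligning the first run to the next period boundary is what makes all pods fire at the same wall-clock time regardless of when each one started:

```java
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CatalogRefreshScheduler {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    final AtomicInteger refreshCount = new AtomicInteger();

    // Hypothetical stand-in for the real catalog refresh.
    void refreshCatalogs() {
        refreshCount.incrementAndGet();
        System.out.println("refreshing catalogs at " + Instant.now());
    }

    void start(long periodSeconds) {
        // Align the first run to the next period boundary so every pod,
        // sharing the same clock, fires at the same instant (the role a
        // shared cron expression plays in Spring's @Scheduled).
        long now = Instant.now().getEpochSecond();
        long initialDelay = periodSeconds - (now % periodSeconds);
        scheduler.scheduleAtFixedRate(this::refreshCatalogs,
                initialDelay, periodSeconds, TimeUnit.SECONDS);
    }

    void stop() {
        scheduler.shutdownNow();
    }
}
```

As noted above, this keeps the replicas roughly in sync but cannot recover a pod that restarted between two executions; that pod simply serves the stale catalog until the next shared tick.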
Another way, a little more tricky but one that would completely answer the need, is to add a distributed cache solution in Spring Boot. One option could be Infinispan: https://infinispan.org/docs/stable/titles/spring_boot/starter.html. The embedded mode needs no installation of Redis or a persistent caching solution; it only needs Spring Boot configuration, and the pods talk to each other to manage the same cache. Since the Helm chart catalog is, in my view, not a heavy cache load, it could probably be a solution. In that case the logic is different: you configure a timeout on the cache, and the first pod that queries the cache after the timeout is reached refreshes it.
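The timeout-on-the-cache behaviour described above can be sketched as follows. In a real deployment Infinispan's embedded mode would supply the replicated storage across pods; this standalone class (all names hypothetical) only shows the expire-and-first-caller-reloads logic with a pluggable loader:

```java
import java.time.Duration;
import java.util.function.Supplier;

// Sketch of the cache-timeout logic: an entry lives for a fixed TTL, and
// the first caller after expiry pays the cost of reloading. A volatile
// field stands in for what would be a replicated Infinispan cache.
public class TimeoutCache<T> {
    private final Supplier<T> loader;   // e.g. fetches the chart repo index
    private final long ttlMillis;
    private volatile T value;
    private volatile long loadedAt = 0; // epoch millis of the last load

    public TimeoutCache(Supplier<T> loader, Duration ttl) {
        this.loader = loader;
        this.ttlMillis = ttl.toMillis();
    }

    public T get() {
        long now = System.currentTimeMillis();
        if (now - loadedAt >= ttlMillis) {
            synchronized (this) {        // only one caller reloads
                if (now - loadedAt >= ttlMillis) {
                    value = loader.get();
                    loadedAt = System.currentTimeMillis();
                }
            }
        }
        return value;
    }
}
```

The upside over the cron approach is that a freshly restarted pod is never staler than one TTL, since its first cache miss triggers a reload.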
Fixes #360