Rebuild the cache #14
Conversation
There is a use case where none of the cached servers may be available. This may be due to a general outage, or because the servers have been decommissioned and replaced. When none of the cached servers are reachable, the server cache gets rebuilt and we try to establish a connection to servers consistent with the new data.
There are still functions that shadow the smt import. Would be ideal to clean up those names.
pyflakes only reported the one function where I attempted to change the name, and it took me 2 tries ;) But agreed
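The shadowing problem discussed above looks roughly like this. Since the actual smt import isn't shown in the thread, `json` stands in for it, and the function names are made up for illustration:

```python
import json  # stand-in for the project's smt import

# Problem: a function named after the module shadows the import, so
# the module itself is no longer reachable after this definition.
# pyflakes flags this as a redefinition of the unused import.
def json(data):
    return data

# Fix: give the function a distinct, descriptive name so the
# imported module stays usable.
def normalize_data(data):
    return dict(data)
```

Renaming the function rather than the import keeps call sites elsewhere in the module unchanged.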
The code that populates the cache was previously in Can
@ikapelyukhin as long as we do not have the "force new registration" feature for when we detect that we are in a "brand new" data situation, I don't see a good opportunity to refactor at this point. The new __populate_srv_cache() function only considers the new update server data, but the previously cached servers may still be accessible and may in fact hold the registration. So when #15 gets addressed and we detect that the new servers don't hold the registration credentials and we need to re-register, then we can drop the old servers from the cache and refactor the executable to depend on __populate_srv_cache() only. Will merge this.