Currently, if the locally cached metadata is outdated, the entire database is downloaded from the backend.
We could make this a lot faster by storing the MD5 of each metadata object as object metadata, and then downloading only those objects whose MD5 differs from that of the locally cached block. For this, we would probably want to implement a highly parallel `get_metadata(prefix)` method for backend classes that uses multiple connections (multiplexed through async IO) to retrieve metadata for multiple objects.

The question is just: is it worth the complexity? How often do people actually have a partially outdated local cache?
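One possible shape for such a method, sketched in Python with `asyncio`. Everything here is hypothetical: the backend class, its `fetch_metadata` call, and the block keys are illustrative stand-ins, and a real implementation would reuse pooled HTTP connections rather than a mock.

```python
import asyncio
import hashlib

class MockBackend:
    """Stand-in for a remote storage backend; real I/O replaced by a sleep."""

    def __init__(self, objects):
        self._objects = objects  # key -> raw object bytes

    async def fetch_metadata(self, key):
        await asyncio.sleep(0.01)  # simulated network round-trip
        return {'md5': hashlib.md5(self._objects[key]).hexdigest()}

async def get_metadata(backend, keys, max_connections=10):
    """Retrieve metadata for *keys* with at most *max_connections* in flight."""
    sem = asyncio.Semaphore(max_connections)

    async def fetch(key):
        async with sem:
            return key, await backend.fetch_metadata(key)

    # All requests are issued concurrently; the semaphore caps parallelism.
    results = await asyncio.gather(*(fetch(k) for k in keys))
    return dict(results)

def stale_keys(remote_meta, local_md5s):
    """Keys whose remote MD5 differs from the locally cached digest."""
    return [k for k, meta in remote_meta.items()
            if local_md5s.get(k) != meta['md5']]

if __name__ == '__main__':
    backend = MockBackend({'blk_1': b'aaa', 'blk_2': b'bbb'})
    meta = asyncio.run(get_metadata(backend, ['blk_1', 'blk_2']))
    # Locally, blk_2 holds stale content, so only it needs re-downloading.
    local = {'blk_1': hashlib.md5(b'aaa').hexdigest(),
             'blk_2': hashlib.md5(b'old').hexdigest()}
    print(stale_keys(meta, local))  # → ['blk_2']
```

With this split, a partially outdated cache costs one cheap metadata sweep plus downloads for only the changed blocks, instead of re-fetching the whole database.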