
Reactive disk usage accounting can lead to breaching cache size restriction for brief periods #463

Closed
shubhamtagra opened this issue Oct 1, 2020 · 6 comments · Fixed by #464

@shubhamtagra
Contributor

Today, disk space usage by a file (and hence by the full cache) is calculated only when the client calls the setAllCached API to inform BookKeeper that it has cached new data. In this API handler, we update the disk usage of the file.

Consider a scenario where we are already at 100% utilization of the space configured for Rubix. Now a new file is read, say 100MB in one inputStream by the client. RemoteRRC or FileDownloadRRC would cache that 100MB first and only then inform BookKeeper about the new data via setAllCached. BookKeeper will update the cache entry of this file with the new weight=100MB and then realize that evictions are needed to accommodate this data. It will do the necessary evictions, but for some time the disk usage by Rubix will exceed the configured percentage.

We can fix this by updating the weight of the file in the getCacheStatus API itself. When this API returns a LOCAL blockStatus, we know the client will eventually bring those blocks in, so we can use this information to pro-actively update the weight of the file.

Now there might be a mis-behaving client that calls getCacheStatus but never downloads the data, e.g. a task cancellation arrives before the download can start. In that case we would be over-estimating the disk usage. One way to fix this is to have a scheduled thread update file weights periodically based on actual usage on disk. This ensures that real-time tracking is conservative and never crosses the configured threshold, while the scheduled service corrects the estimation errors.
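Roughly, the idea would look like the sketch below. This is only illustrative; the class and method names here are placeholders, not the real BookKeeper code, and the eviction and path-mapping logic is elided.

```java
import java.io.File;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only: names do not match the real BookKeeper classes.
public class ProactiveCacheAccounting
{
  private final Map<String, Long> fileWeights = new ConcurrentHashMap<>();
  private final AtomicLong totalWeight = new AtomicLong();
  private final long maxCacheBytes;

  public ProactiveCacheAccounting(long maxCacheBytes)
  {
    this.maxCacheBytes = maxCacheBytes;
    // Periodically reconcile accounted weights against what is actually on disk,
    // so a client that called getCacheStatus but never downloaded does not
    // leave the accounting inflated forever.
    ScheduledExecutorService reconciler = Executors.newSingleThreadScheduledExecutor();
    reconciler.scheduleAtFixedRate(this::reconcileWithDisk, 5, 5, TimeUnit.MINUTES);
  }

  // Called from the getCacheStatus handler: charge the bytes the client is
  // about to bring in *before* it downloads, and evict first if needed.
  public void accountPendingDownload(String remotePath, long bytesToCache)
  {
    long newTotal = totalWeight.addAndGet(bytesToCache);
    fileWeights.merge(remotePath, bytesToCache, Long::sum);
    if (newTotal > maxCacheBytes) {
      evict(newTotal - maxCacheBytes);  // evict before the data arrives
    }
  }

  private void reconcileWithDisk()
  {
    for (Map.Entry<String, Long> entry : fileWeights.entrySet()) {
      long onDisk = new File(localPathFor(entry.getKey())).length(); // 0 if the file never materialized
      long accounted = entry.getValue();
      if (onDisk < accounted) {
        // Over-estimated: the client never downloaded, or downloaded less than announced.
        totalWeight.addAndGet(onDisk - accounted);
        fileWeights.put(entry.getKey(), onDisk);
      }
    }
  }

  private void evict(long bytesNeeded) { /* LRU eviction elided */ }

  private String localPathFor(String remotePath) { /* path mapping elided */ return remotePath; }
}
```

The key ordering in this sketch is that eviction happens before the client starts writing, so the write itself can never push usage past the configured limit.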

@shubhamtagra
Contributor Author

Another example, which might be more common: due to task cancellations coming in, CachingInputStream might not update BookKeeper even though it downloaded the data. This could be fixed by ensuring updateCacheAndStats is scheduled regardless of whether interrupts come in to CachingInputStream, but the proposed approach of moving the file-weight update into the getCacheStatus API would solve this case too.
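For reference, that alternative fix would look something like this minimal sketch; the method names are placeholders and do not match the real CachingInputStream code.

```java
// Illustrative sketch, not the real CachingInputStream: the point is that the
// metadata update runs in a finally block, so a task cancellation that
// interrupts the caller cannot skip it once the data is already on disk.
public class InterruptSafeWarmup
{
  public void readAndCache(long position, int length) throws Exception
  {
    boolean downloaded = false;
    try {
      downloadToLocalCache(position, length);  // may observe an interrupt from task cancellation
      downloaded = true;
    }
    finally {
      if (downloaded) {
        updateCacheAndStats(position, length); // always account data that landed on disk
      }
    }
  }

  private void downloadToLocalCache(long position, int length) throws Exception { /* elided */ }

  private void updateCacheAndStats(long position, int length) { /* elided */ }
}
```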

@shubhamtagra added the q4 label Oct 1, 2020
@JamesRTaylor
Contributor

Would one potential solution be to reject further loads into the cache above a given threshold (e.g. 90% of available disk space)?

@shubhamtagra
Contributor Author

Would one potential solution be to reject further loads into the cache above a given threshold (e.g. 90% of available disk space)?

Even if we add that new threshold, the same delayed-accounting condition can occur and cause the same problem.

@JamesRTaylor
Contributor

It seems like, to be completely accurate, you'd need to evict at the same time you add to the cache.

@shubhamtagra
Contributor Author

Right @JamesRTaylor. The proposal tries to do that by performing evictions before the data is added, because "evict at the same time you add to the cache" cannot be guaranteed given the asynchronous nature of adding to the cache and updating the metadata.

@shubhamtagra
Contributor Author

shubhamtagra commented Oct 6, 2020

There is another option which can get us improved accounting (not fully accurate, though) at a very low development cost:

All downloading of data in async warmup happens in FileDownloadRequestChain. In that process:

  1. Limit the size of each ReadRequest in FileDownloadRequestChain to 100MB. The slight penalty of opening additional connections for the new positional reads (due to the additional ReadRequests formed) would be small enough to ignore.
  2. Update the metadata with the download of each ReadRequest itself, instead of the existing behavior of downloading all the data in the chain and then updating the metadata (sketched at the end of this comment).

What this gets us is that, without any intrusive changes, we are ensured that the cache will never go more than ~1GB beyond the configured threshold (10 download threads * 100MB of delayed accounting per thread).

This approach will work only for the async warmup case, because in sync warmup the metadata update goes over Thrift, which would be too costly given the number of requests we would get. Given that async warmup is the default and we want to deprecate sync warmup over time, this should be an acceptable solution.
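A rough sketch of the chunked-download idea follows. The names are placeholders, not the real FileDownloadRequestChain API: the point is capping each read request at 100MB and accounting for the bytes as soon as each chunk lands on disk, rather than after the whole chain finishes.

```java
// Illustrative sketch only; not the actual FileDownloadRequestChain code.
public class ChunkedDownloadChain
{
  private static final long MAX_READ_REQUEST_BYTES = 100L * 1024 * 1024; // 100MB cap per ReadRequest

  public long download(String remotePath, long offset, long totalBytes)
  {
    long downloaded = 0;
    for (long chunkStart = offset; chunkStart < offset + totalBytes; chunkStart += MAX_READ_REQUEST_BYTES) {
      long chunkLength = Math.min(MAX_READ_REQUEST_BYTES, offset + totalBytes - chunkStart);
      long read = downloadChunk(remotePath, chunkStart, chunkLength);
      // Update BookKeeper metadata per chunk: accounting now lags actual disk
      // usage by at most one chunk (100MB) per download thread.
      updateMetadata(remotePath, chunkStart, read);
      downloaded += read;
    }
    return downloaded;
  }

  private long downloadChunk(String remotePath, long start, long length) { /* positional read elided */ return length; }

  private void updateMetadata(String remotePath, long start, long bytes) { /* BookKeeper update elided */ }
}
```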
