Improving cache management #82
Yes, initially it was actually kept in cache, and you identified one of the reasons why it was changed:
That being said, nothing prevents us from making this optional. I don't know if it would be possible to always send a deep copy of the objects to the stages as input. It would probably still be faster than reading the files, but I'm not sure there is a generic way to do that. Feel free to send a PR; we can merge it then. I'm about to test the current PR with some of the existing pipelines.
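For reference, the deep-copy idea could look like the minimal Python sketch below. The stage API here (`stage.deps`, `stage.name`, `stage.run`) is purely illustrative, not the project's actual interface:

```python
import copy

def run_stage_isolated(stage, cache):
    # Hypothetical sketch: hand each stage deep copies of its cached inputs
    # so it can never mutate the shared cache. `stage.deps`, `stage.name`
    # and `stage.run` are illustrative names, not the project's actual API.
    inputs = {dep: copy.deepcopy(cache[dep]) for dep in stage.deps}
    cache[stage.name] = stage.run(inputs)
    return cache[stage.name]
```

Note that `copy.deepcopy` is not fully generic either: objects holding open file handles, locks, or other uncopyable state will raise, which is probably exactly the "generic way" concern above.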
This is why I suggest deleting the unused cache entries before each execution. As I work with tens of gigabytes of data, I/O times and memory use are critical for me!
Yes, that's fine!
Currently, if a working directory is given, all cached results are dropped from memory after each stage execution (they only remain on disk as cache files).
We could speed up the whole pipeline by avoiding re-loading results that were used or produced by the preceding stage.
This is how it would work:
- We would use a single cache map across all stage executions. During each stage execution, this cache is filled with the dependency results and, at the end, with the stage's own result.
- This is already what happens when no working directory is given, so technically we could reuse the same cache.
- Then, at the beginning of each stage execution, we delete the cache entries that will not be used by any subsequent stage (see the sketch below).
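Here is a minimal Python sketch of that behavior. Everything is an assumption for illustration: `stages` is taken to be in topological order, each stage exposes `name`, `deps`, and `run(inputs)`, and on-disk results are pickled under a hypothetical working directory:

```python
import pickle
from pathlib import Path

WORKDIR = Path("work")  # hypothetical working directory

def load_from_disk(name):
    # Hypothetical helper: assumes each result is pickled as <name>.pkl.
    with open(WORKDIR / f"{name}.pkl", "rb") as f:
        return pickle.load(f)

def run_pipeline(stages):
    # `stages` is assumed to be in topological order; each stage is assumed
    # to expose `name`, `deps` (names of its dependencies) and `run(inputs)`.
    cache = {}
    for i, stage in enumerate(stages):
        # Drop cache entries that neither this stage nor any later one needs.
        still_needed = {dep for s in stages[i:] for dep in s.deps}
        for key in list(cache):
            if key not in still_needed:
                del cache[key]  # freed from memory; the file stays on disk
        # Reuse in-memory results when available, read from disk otherwise.
        inputs = {dep: cache[dep] if dep in cache else load_from_disk(dep)
                  for dep in stage.deps}
        cache.update(inputs)
        cache[stage.name] = stage.run(inputs)
    return cache
```

The pruning step is what bounds memory use: at any point the cache only holds results that some remaining stage still depends on.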
This reduces the execution time of the whole pipeline significantly, depending on the cache file sizes. The gain comes from the execution order given by the topological sort: a stage's result is likely to be consumed by the stage that runs right after it.
There is one major drawback: the cache is mutable, so any modification of a retrieved cache entry is propagated to the following stages that use the same entry (which does not happen currently, and which we generally do not want). I do not know how to prevent that, because efficiently detecting a modification of an object, or making it immutable, depends on the object's type. A heavy-handed way to do it is to compare the serialization of each input before and after execution (see the sketch below), but then we may lose most of the speedup, depending on the cost of serialization.
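A sketch of that heavy safeguard, again with a hypothetical stage API and pickle as the serializer:

```python
import pickle

def run_stage_guarded(stage, cache):
    # Hypothetical sketch of the "compare serializations" safeguard.
    # Note: pickle bytes are only an approximate identity check; equal
    # objects can pickle differently, and serializing large inputs twice
    # may eat most of the speedup, as noted above.
    inputs = {dep: cache[dep] for dep in stage.deps}
    before = {dep: pickle.dumps(value) for dep, value in inputs.items()}
    result = stage.run(inputs)
    for dep, value in inputs.items():
        if pickle.dumps(value) != before[dep]:
            raise RuntimeError(
                f"stage {stage.name!r} mutated its cached input {dep!r}"
            )
    cache[stage.name] = result
    return result
```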
Let me know if you see any solution apart from warning the developer or if you note any other drawbacks.
Maybe we could make my suggestion an optional behavior?
I already implemented it in my fork (diffs). If you agree with my suggestion, should I wait for #81 to be merged before opening a new PR? Or would you prefer to implement it yourself?