Memory strategies at the whole workflow level #22

Closed
wlandau opened this issue Apr 11, 2020 · 3 comments

Comments


wlandau commented Apr 11, 2020

Related: #19. We should think about when objects should be kept in memory and when they should be released. The advantage of keeping them longer is that we avoid repeated access to storage, but it could also make memory usage blow up.


wlandau commented Apr 11, 2020

I guess this could be as simple as the decision whether or not to use a centralized cache object. The target's own cache object should remain unaffected: we should always clear it after the target builds and stores its value, regardless of the memory strategy. So we won't get to this issue until it is time to implement the centralized components. There is nothing we need to implement in the target class.
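
For illustration, here is a minimal sketch of that split. None of these names (`build_target()`, `store_dir`, `keep_in_memory`) come from the package; they are hypothetical stand-ins. The target-level cache is always cleared after the value is stored, and the memory strategy only decides whether the value also stays in a centralized cache for downstream targets.

```r
# Minimal sketch of the idea above, not the package's actual implementation.
build_target <- function(target, centralized_cache, store_dir, keep_in_memory = TRUE) {
  value <- target$command()  # run the target's command
  target$cache$value <- value  # target-level cache holds the fresh value
  saveRDS(value, file.path(store_dir, paste0(target$name, ".rds")))  # persist to storage
  target$cache$value <- NULL  # always clear the target's own cache
  if (keep_in_memory) {
    # The memory strategy only controls whether the value also lives in the
    # centralized cache so downstream targets can reuse it without reading storage.
    assign(target$name, value, envir = centralized_cache)
  }
  invisible(target)
}

# Example usage with a shared environment as the centralized cache:
# cache <- new.env()
# build_target(list(name = "x", command = function() runif(10), cache = list()),
#              centralized_cache = cache, store_dir = tempdir())
```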


wlandau commented Apr 19, 2020

New idea: just use the "lookahead" strategy from drake. We can make it efficient by using a second priority queue for downstream targets. For strategies lighter on memory, just use dynamic files.
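
For concreteness, here is a rough sketch of the unloading side of that strategy, using a simple per-target counter of unbuilt downstream targets instead of the second priority queue mentioned above. All names here (`release_after_build`, `edges`, `remaining`) are hypothetical and not part of targets or drake.

```r
# Rough sketch: unload a dependency once no unbuilt downstream target needs it.
# edges: data frame with columns `from` (dependency) and `to` (downstream target).
release_after_build <- function(built, edges, cache, remaining) {
  deps <- edges$from[edges$to == built]
  for (dep in deps) {
    remaining[[dep]] <- remaining[[dep]] - 1L
    if (remaining[[dep]] <= 0L) {
      rm(list = dep, envir = cache)  # nothing left to consume it, so free the memory
    }
  }
  remaining
}

# Example: y and z both depend on x.
edges <- data.frame(from = c("x", "x"), to = c("y", "z"), stringsAsFactors = FALSE)
cache <- new.env()
assign("x", runif(1e6), envir = cache)
remaining <- list(x = 2L)
remaining <- release_after_build("y", edges, cache, remaining)  # x still needed by z
remaining <- release_after_build("z", edges, cache, remaining)  # x unloaded here
exists("x", envir = cache)  # FALSE
```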


wlandau commented Apr 19, 2020

Now implemented.

wlandau closed this as completed Apr 19, 2020