
Cached input/outputs from initialization are used for fastest model #48

Open
jwarner308 (Collaborator) opened this issue Aug 19, 2019 · 0 comments
We need to restructure the cache implementation so that the stored initialization input/outputs are used for the most expensive, high-fidelity model on the highest level. Currently, the lowest-level model is evaluated first in the simulation loop, so it generally consumes all of the cached input/outputs before MLMC reaches the higher levels.
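One possible restructuring is to key the cached pairs by model level, so the cheap lowest-level model cannot drain entries intended for the expensive highest-level model. The sketch below is only illustrative — `InitializationCache` and `evaluate` are hypothetical names, not this project's actual API:

```python
import numpy as np


class InitializationCache:
    """Hypothetical sketch: store initialization input/output pairs
    keyed by model level, so each level can only reuse its own entries."""

    def __init__(self):
        self._store = {}  # level -> list of (input, output) pairs

    def add(self, level, inputs, outputs):
        """Record input/output pairs produced during initialization."""
        self._store.setdefault(level, []).extend(zip(inputs, outputs))

    def lookup(self, level, x):
        """Return a cached output for input x at the given level, or None."""
        for cached_x, cached_y in self._store.get(level, []):
            if np.array_equal(cached_x, x):
                return cached_y
        return None


def evaluate(model, level, x, cache):
    """Evaluate `model` at `x`, reusing a cached result when available."""
    y = cache.lookup(level, x)
    if y is None:
        y = model(x)
    return y
```

Because lookups are scoped to a level, the simulation loop can still evaluate the lowest-level model first without it consuming entries cached for the high-fidelity model on the highest level.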
