avoid repeated downloading of large datafiles #2
In osmose-model/osmose-web-api#167, behavior was described that caused web page crashes due to out-of-memory errors.
After debugging and code inspection, I noticed that large data files were downloaded repeatedly, causing more than 1 GB of data to be loaded into memory when many functional groups / species were selected (e.g., the default settings for the Gulf of Mexico). After reproducing the issue locally, I applied a fix that avoids the repeated downloading and loading of these large data files. With the fix in place, I was no longer able to reproduce the issue documented in osmose-model/osmose-web-api#167, suggesting that the root cause of the web page crash has been resolved.
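The patch itself is not shown in this description. As an illustration of the general pattern, here is a minimal TypeScript sketch of download memoization: each URL is fetched at most once and all callers share the result. The names `dataFileCache` and `fetchDataFileOnce` are hypothetical and are not taken from the actual fix.

```typescript
// Sketch only: cache in-flight and completed downloads per URL so that
// selecting many functional groups / species does not re-download the
// same large data file for each selection.
const dataFileCache = new Map<string, Promise<string>>();

function fetchDataFileOnce(url: string): Promise<string> {
  const cached = dataFileCache.get(url);
  if (cached !== undefined) {
    // Reuse the earlier download (even if it is still in flight).
    return cached;
  }
  const download = fetch(url)
    .then((response) => {
      if (!response.ok) {
        throw new Error(`download failed (${response.status}): ${url}`);
      }
      return response.text();
    })
    .catch((err) => {
      // Evict failed downloads so a later call can retry.
      dataFileCache.delete(url);
      throw err;
    });
  dataFileCache.set(url, download);
  return download;
}
```

With this shape, memory use is bounded by one copy of each data file rather than one copy per selected group, which matches the behavior change described above.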
Please merge this pull request to apply the fix to the site at http://fin-casey.github.io/wizard.html# .
This took me about 2 hours to complete. Most of the work went into analyzing the code, setting up a local environment, and reproducing the issue before applying the fix.