Memory management is a little worrying #219
Comments
I think the bulk data, when we first developed this, was only 1 GB or so. Originally my concerns were with the time it took to process the file, and I didn't see many ways to optimize it because of the way the data is stored. Breaking the file up seems a reasonable interim plan - is it possible to search for changes in data year and break it up that way? Similarly, reading the file in chunks. In some future version where we use the EIA API, this problem likely goes away, and we can at least process JSON rather than plain text.
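The chunked-reading idea above could look something like the following sketch. The function name, chunk size, and line-delimited-JSON assumption are illustrative only; the actual bulk file format in the project may differ.

```python
import json

def iter_records(path, chunk_lines=10000):
    """Yield parsed records from a line-delimited JSON bulk file,
    one batch of lines at a time, so peak memory stays bounded by
    the batch size rather than the full file size."""
    batch = []
    with open(path) as f:
        for line in f:
            batch.append(json.loads(line))
            if len(batch) >= chunk_lines:
                yield batch
                batch = []
    if batch:
        yield batch
```

Each yielded batch can be processed and discarded before the next is read, which avoids holding the whole multi-gigabyte file as Python string objects at once.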
Please, please, please, let there be an API for that!
Now that I'm on to testing ELCI_3, I'm hitting more seg faults (and one bus fault), and it's giving me flashbacks to my early coding career when I used to do too much with passing variables globally. There are a lot of hints of that going on here: modules imported within the scope of a method, globals initialized in one place and used elsewhere, globals referenced inside methods, and globals being sliced and modified. All of that is a good recipe for unmanaged memory.
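To illustrate the anti-pattern described above, here is a minimal, hypothetical sketch (the names are invented, not from the codebase) contrasting mutable module-level globals with explicit parameter passing:

```python
# Anti-pattern: module-level global that functions silently depend on.
# The giant list stays referenced forever, so it can never be freed.
BULK_ROWS = []

def load(rows):
    global BULK_ROWS
    BULK_ROWS = rows  # hidden side effect for every later caller

def process_global():
    return [r.upper() for r in BULK_ROWS]

# Safer: pass the data explicitly. Once the caller drops its
# reference, the garbage collector can reclaim the memory.
def process(rows):
    return [r.upper() for r in rows]
```

Explicit parameters also make it obvious when a large object is still alive, which helps when chasing memory spikes like the >11 GB one reported here.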
Added new checks for bulk data vintage to trigger a new download with the latest data.
The latest runs of ELCI_1 trigger ba_io_trading_model in eia_io_trading.py. There is a bulk data file, and during the call to ba_exchange_to_df, the memory demand spikes to >11 GB. I've hit Python segmentation faults during this, which were solved by restarting my computer and re-running. Seems worthy of a cautionary tale for users. I see a few instances of memory management where the massive lists of strings are deleted after processing.
In response, I started to parse out subroutines from the really long method. Not sure what else can be done given the sheer size of the bulk text file (>3 GB) and that it's stored primarily as Python string objects. I might look into optimizing the data types when the text file is processed. An alternative may be to break the monster file into smaller files, process them individually, then put the results back together.
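The split/process/recombine alternative could be sketched like this. The part-file naming and line-based splitting are assumptions for illustration; the real bulk file may need to be split on record or data-year boundaries instead.

```python
import os

def split_file(path, out_dir, lines_per_part=100000):
    """Write the input file as numbered part files and return their
    paths. Each part can then be processed independently, keeping
    only one part's worth of data in memory at a time."""
    os.makedirs(out_dir, exist_ok=True)
    parts, buf, idx = [], [], 0

    def flush():
        nonlocal buf, idx
        part = os.path.join(out_dir, f"part_{idx:04d}.txt")
        with open(part, "w") as out:
            out.writelines(buf)
        parts.append(part)
        buf, idx = [], idx + 1

    with open(path) as f:
        for line in f:
            buf.append(line)
            if len(buf) >= lines_per_part:
                flush()
    if buf:
        flush()
    return parts
```

After processing, the per-part results (e.g. DataFrames) can be concatenated, which trades one large peak allocation for several smaller ones.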