Displacement Memory Growth & Purge Option + Disk Caching of Displacements #178
Hi again,

I've been running a very large number of load combos in a distributed system, with an early-stopping-on-failure routine. I'm having trouble predicting how much memory to allocate per task, because memory keeps growing as I continue to analyze further load combos.

I'm currently addressing this by running a purge function after I've processed each combo, along the lines of the sketch below.
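A simplified sketch of the purge routine, assuming the data structures discussed in this thread: the model-level `_D` dictionary of global displacement vectors and the per-node `DX`/`DY`/`DZ`/`RX`/`RY`/`RZ` dictionaries, all keyed by load combination name (exact attribute names vary between PyNite versions):

```python
def purge_combo_results(model, combo_name):
    """Free the displacements stored for a finished load combination."""
    # Drop the combo's global displacement vector from the model-level
    # results dictionary.
    model._D.pop(combo_name, None)

    # Drop the combo's entry from each node's per-DOF result dictionaries
    # so the results derived from them are released too. ('Nodes' here;
    # newer PyNite releases use a lowercase 'nodes' dictionary.)
    for node in model.Nodes.values():
        for dof in ('DX', 'DY', 'DZ', 'RX', 'RY', 'RZ'):
            getattr(node, dof).pop(combo_name, None)
```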
This is working well and keeping my memory consumption per task consistent (and allowing me to fully utilize my AWS resources!).

A reorganization of some of the data structures to have one authoritative dictionary for displacements, `_D`, would be useful. The node dictionaries `DX, DY...RX` could then be replaced with `weakref.WeakValueDictionary` objects, so that removing an entry from `_D` would automatically drop the reference from the nodes as well, without looping. A toy demo of the mechanism is below.
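One thing to watch: plain Python floats can't be weakly referenced, so the node dictionaries would need to hold a shared, weak-referenceable object (e.g. the combo's displacement vector) rather than bare scalars. A self-contained demo of the mechanism (the `DisplacementVector` class is a hypothetical stand-in):

```python
import weakref

class DisplacementVector:
    """Hypothetical stand-in for one combo's global displacement vector."""
    def __init__(self, values):
        self.values = values

# One authoritative store of results, keyed by load combo name.
_D = {'1.2D+1.6L': DisplacementVector([0.0] * 12)}

# A node-level view that holds only weak references to the same objects.
node_results = weakref.WeakValueDictionary()
node_results['1.2D+1.6L'] = _D['1.2D+1.6L']

print(len(node_results))  # 1 -- alive while _D holds a strong reference

# Purging the combo from the authoritative dictionary is enough; the
# weak entry disappears on its own, with no per-node loop required.
del _D['1.2D+1.6L']
print(len(node_results))  # 0
```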
Of course, this raises the question of what happens if you want to reactivate a purged combo. I believe there are some interesting options in https://github.com/grantjenks/python-diskcache, although that invokes the issue of how to uniquely identify a (structure, combo) pair.
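A sketch of what that disk cache could look like; `model_id` is a placeholder for whatever ends up identifying the structure (e.g. a hash of the model definition), which is the open question here:

```python
import diskcache

# Directory-backed cache that persists across Python sessions.
cache = diskcache.Cache('./pynite_results')

def store_displacements(model_id, combo_name, displacements):
    # diskcache pickles keys and values, so a (structure, combo)
    # tuple works directly as the key.
    cache[(model_id, combo_name)] = displacements

def load_displacements(model_id, combo_name):
    # Returns None if this structure/combo pair was never cached.
    return cache.get((model_id, combo_name))
```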
Let me know if this is something that you'd support, and I can write a PR for it!

---

You've identified one of the hardest parts of finite element analysis: the sheer amount of data and memory it consumes. I've had large models in commercial software run out of disk space while running calculations. For every node there are 12 displacements to store, multiplied by the number of load combinations. What your code does works, but purging the displacements for unused load combinations is basically purging all the results for those combinations too, since the results are derived "on the fly" from the displacements.

You could also use load combination tags to run only selected load combinations. That has already been implemented in a recent change; see "Tags" here: https://pynite.readthedocs.io/en/latest/load_combo.html (example below).
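A sketch of the tags workflow, based on the documentation linked above (argument names may differ slightly between versions):

```python
# Tag combos when defining them, then analyze only a subset.
model.add_load_combo('1.4D', {'D': 1.4}, combo_tags=['strength'])
model.add_load_combo('D+L', {'D': 1.0, 'L': 1.0}, combo_tags=['service'])

# Only the combos carrying one of the requested tags are solved, so
# unselected combos never generate displacements in the first place.
model.analyze(combo_tags=['strength'])
```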
I'll admit I've never worked with weak references. I'm not a programmer by trade; I'm a structural engineer. My code is probably a little weak, but the math/science behind it is pretty solid. I skimmed the Python docs on weak references, and it's an interesting concept for efficient memory management. I'm not sure how I'd go about implementing it in Pynite.
---

Sounds like this is absolutely a pain point for many an FEA tool! PyNite should have a solution for this, and maybe also a way to speed up analysis by loading results from a previous run; that would prevent rework if, say, you accidentally shut down your Python session. The more I think about it, the more it would make sense to have some kind of disk-backed save/load mechanism for results.

As far as weak references go, I didn't really get them until I learned about Python garbage collection, where an object is kept alive only as long as references to it exist, as explained in this example: https://eli.thegreenplace.net/2009/06/12/safely-using-destructors-in-python/

As far as implementation, I think it would be relatively straightforward: the node dictionaries would be replaced with `weakref.WeakValueDictionary` objects. I'll find some time to write a demo of this and see if there are any gotchas.

BTW, glad to see those load combo tags made it into the repo! I'll merge my fork onto the latest release before submitting a PR, so it'll be on the same branch. The reorganization looks wonderful with the analysis code segmented out!