Make default of Number of grouped files 16 #1649
Conversation
Won't that kill the possibility of parallel rendering of viz data? Maybe more to the point, what is the problem you have with it?
A good default is probably one file per node you run on, but that is difficult to put into this parameter. I agree that "0" rarely makes sense.
If there is one file per timestep instead of, say, a few hundred, it is easier to see what has been run, and there are fewer files to transfer from the cluster. For a new user it might also be less confusing to have one file per timestep. It does kill the possibility of parallel rendering of viz data, but I never use that, and anyone who wants it can always change the default. I don't feel strongly about this change; it's just a suggestion.
Actually, I would prefer to keep it at 0. It is true that creating hundreds of files per timestep is annoying, but the alternative seems worse to me (namely, creating one file in a big cluster run that you are then unable to read, because no available machine has enough memory). I agree this case rarely happens, but when it does, a lot of computing time will be wasted. One could argue that users running these large models should know what they are doing, but in reality I am sure a PhD student will run into this issue. @jaustermann did you have problems with the large number of files? I once ran into our cluster's file limit, but mostly because I had many models lying around.
We could set it to something like 16, which will still allow you to do parallel viz and avoid producing hundreds of files per timestep.
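For context, this is the kind of input-file fragment being discussed. The section and parameter names below follow ASPECT's .prm conventions as I understand them, so treat this as a sketch and double-check against the manual:

```
subsection Postprocess
  subsection Visualization
    # 0 = one output file per MPI process (the current default);
    # N > 0 = group output into at most N files per timestep
    set Number of grouped files = 16
  end
end
```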
16 seems too small to me. If you have a big model, you'd want there to be 64 or 128 files.
I often use a value of 16 files; it seems like a reasonable compromise to me. If you need more, you likely have a model exceeding a billion DoFs, in which case you really should know what you are doing. What happens if there are fewer processors in the run than the number of grouped files? Will it simply write the smaller number of files?
That seems like reasonable behavior -- in other words, if it doesn't automatically happen that we write only up to as many files as there are processors, then that is what ought to happen.
We do |
I think this would be a good solution as well, I will test it to be sure.
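The clamping behavior agreed on above can be sketched as follows. The function name and semantics (0 meaning "one file per process", as described earlier in the thread) are illustrative, not ASPECT's actual implementation:

```python
def effective_file_count(n_grouped_files: int, n_processes: int) -> int:
    """Number of output files actually written per timestep.

    A setting of 0 means no grouping: one file per MPI process.
    Otherwise the requested group count is capped at the number of
    processes, since each process contributes to at most one file.
    """
    if n_grouped_files == 0:
        return n_processes
    return min(n_grouped_files, n_processes)

# 16 grouped files requested, but only 8 processes -> 8 files
assert effective_file_count(16, 8) == 8
# 16 grouped files requested on 512 processes -> 16 files
assert effective_file_count(16, 512) == 16
# default of 0 on 64 processes -> one file per process
assert effective_file_count(0, 64) == 64
```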
Did a couple of tests on different numbers of cores and it behaves as expected.
@jaustermann -- please add a changelog entry for this!
done!
I would prefer the default to be 1 ... what do others think?