
BP5 strategies for large I/O #3679

Closed
liangwang0734 opened this issue Jun 29, 2023 · 0 comments

@liangwang0734

Hi ADIOS team,

I'd appreciate some suggestions on properly configuring BP5 for our application, especially to reduce memory consumption:

  • We use adios2 for file I/O only, so no streaming, in-situ interaction, or code coupling is needed.
  • It's an MPI program running on 20k cores or more.
  • Every N (say, 5000) steps we write about ten 1-D arrays, each with ~10^11 elements or more, to a file.
  • Every M (say, 1000) steps we write a 3-D array of shape (20, 1e5, 1e5) or larger to another file.
  • Right now, for all IOs we set the substream number to 0, hoping to get one data file per node (see the sketch below).
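For reference, here is a stripped-down sketch of how we set this up; the variable names, sizes, and decomposition are illustrative rather than our actual code, and the aggregator parameter name reflects our reading of the BP5 docs:

```cpp
#include <adios2.h>
#include <mpi.h>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    adios2::ADIOS adios(MPI_COMM_WORLD);
    adios2::IO io = adios.DeclareIO("FieldOutput");
    io.SetEngine("BP5");
    // Current setting: 0, which we hope gives one data file per compute node.
    io.SetParameter("NumAggregators", "0");

    // One of the large 1-D arrays: ~1e11 elements globally, block-decomposed.
    const std::size_t globalN = 100000000000ULL;
    const std::size_t localN  = globalN / nprocs;
    const std::size_t offset  = rank * localN;
    std::vector<double> data(localN, 0.0);

    auto var = io.DefineVariable<double>("field1d", {globalN}, {offset}, {localN});

    adios2::Engine writer = io.Open("field1d.bp", adios2::Mode::Write);
    writer.BeginStep();
    writer.Put(var, data.data());
    writer.EndStep();
    writer.Close();

    MPI_Finalize();
    return 0;
}
```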

We are happy with the I/O speed so far, but in some extreme cases the first (1-D array) write fails with out-of-memory errors.

Could you advise on strategies for choosing the number of substreams, aggregators, buffer size, shared-memory size, etc., based on the machine memory and core count per node?
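For concreteness, these are the knobs we think are relevant; the parameter names come from our reading of the BP5 documentation (please correct us if any are wrong), and the values are placeholders, not settings we currently use:

```cpp
// Placeholder values only; choosing these is exactly what we would like guidance on.
io.SetParameters({
    {"NumAggregators",  "0"},            // writing processes (0 = engine default)
    {"NumSubFiles",     "0"},            // data files (defaults to NumAggregators)
    {"AggregationType", "TwoLevelShm"},  // or EveryoneWrites / EveryoneWritesSerial
    {"MaxShmSize",      "268435456"},    // shared-memory segment limit, bytes
    {"BufferVType",     "chunk"},        // chunked buffering instead of one malloc
    {"BufferChunkSize", "134217728"}     // 128 MiB chunks
});
```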

I understand my description might still be vague; I can add more details if that helps with your advice.

Thank you again for making this great piece of software.
