
Parallel Particle Init #68

Closed
ax3l opened this issue Feb 26, 2022 · 3 comments · Fixed by #73
Comments

ax3l commented Feb 26, 2022

We need to improve the particle initialization: allow a few initial boxes, initialize in parallel, and redistribute.

// Redistribute(); // TODO

Currently, we will be limited to initializing as many particles as a single GPU can handle (even if we run on multiple GPUs).

ax3l commented Feb 26, 2022

Initially pinging @atmyers and @RemiLehe to discuss this in the coming meeting.

@ax3l ax3l added this to the First Release Version milestone Mar 30, 2022
atmyers commented Apr 4, 2022

Getting back to this. Currently, we can't use the ParticleReduce functions in AMReX prior to calling Redistribute. The reason is that we always add the particles to grid 0, tile 0 of level 0 of the particle container, whether or not that's a valid destination for a given process. However, it wouldn't be hard to modify the reduction functions in AMReX to simply loop over all tiles, whether they are "valid" or not. I think this is maybe the cleanest way to do this, since then we can just call AddNParticles, then the existing "compute min and max particle positions" function in ImpactX, then call Redistribute. What do you think @ax3l ?

ax3l commented Apr 8, 2022

Discussed on Slack and implemented in AMReX via AMReX-Codes/amrex#2695 🎉
