Add DistributedMemlet node and scheduling function #120
base: master
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #120      +/-   ##
==========================================
+ Coverage   69.70%   70.87%   +1.17%
==========================================
  Files          65       70       +5
  Lines        7232     7621     +389
==========================================
+ Hits         5041     5401     +360
- Misses       2191     2220      +29
Minor comments only :) I'm a bit worried about size_exact being used a lot, but it's fine for now.
commworld = MPI.COMM_WORLD
rank = commworld.Get_rank()
size = commworld.Get_size()
if size < utils.prod(sizes):
Did you know that the pytest dist plugin supports giving a number of ranks as a marker?
Yes, but here I'd rather fail than skip. It also depends on the schedule sizes.
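For context, a minimal sketch of the marker-based approach mentioned above, assuming a pytest-mpi-style plugin; the marker name and its min_size argument are assumptions, not something confirmed in this thread:

import pytest
from mpi4py import MPI

# Hypothetical marker-based variant of the check in the diff above: a plugin
# such as pytest-mpi can skip a test when it is run with fewer MPI ranks than
# the test declares (marker name and argument are assumptions).
@pytest.mark.mpi(min_size=4)
def test_distributed_schedule():
    assert MPI.COMM_WORLD.Get_size() >= 4

The check under review takes the opposite stance: when size < utils.prod(sizes), the test fails rather than being skipped, so an under-provisioned CI run surfaces as an error instead of a silent skip.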
for is_input, result in zip([True, False], results):

    # gather internal memlets by the out array they write to
    internal_memlets: Dict[
Maybe look in scope_subgraph
What do you mean? In case there is a global write in the subgraph? Is that allowed?
This change adds the DistributedMemlet library node and the scheduling function for distributed computation. This allows you to distribute the work in the top-level map of the SDFG by specifying block sizes. The lowering function will analyze the SDFG and try to find MPI nodes that implement the required communication. Pull Request: #120
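To make the description concrete, here is a hypothetical usage sketch. The scheduling entry point distributed_schedule and its block_sizes argument are illustrative assumptions, not the PR's verified API; the dace program syntax itself is standard DaCe:

import dace

N = dace.symbol('N')

@dace.program
def axpy(a: dace.float64, x: dace.float64[N], y: dace.float64[N]):
    y[:] = a * x + y

sdfg = axpy.to_sdfg()

# Hypothetical call into this PR's scheduling function (name and signature
# assumed): split the top-level map into blocks of 64 iterations per rank.
# Lowering then tries to replace the resulting DistributedMemlet nodes with
# MPI nodes that implement the required communication.
distributed_schedule(sdfg, block_sizes=[64])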