[WIP] support for shared memory provider of libfabric in SST + simple support for manual data progress via threading #3964
Conversation
I can take a look at enabling a manual progress thread, or possibly using EVPath tools to tie progress to FDs if supported by CXI, but realistically I have one week before I disappear for two weeks, and given the other things on my plate the odds of this happening before January are unfortunately small. WRT the future work notes: yes, supporting different data planes between different ranks is probably impractical given how SST is architected. It would have to be a single data plane that supported both transport mechanisms, which is still a lot of work, but that fits the way data planes integrate into SST. I've also long had in mind an extension to the data access mechanisms that might reduce the copy overheads for RDMA and shared-memory data access, but it involves several changes from the BP5Deserializer, through the engine, and down to the data plane, so it has remained on the to-do list for a long time. But it's something to re-examine at some point.
No problem, this is not urgent.
Thank you for the info. My main motivation in posting these ideas was to get a rough estimate of how viable they are to implement. It does not surprise me very much that lots of work would be required.
It seems that the current thread-based implementation runs into a libfabric bug, fixed by ofiwg/libfabric#9644. The bug means that calls to …
This reverts commit 553f605.
Ref. the meeting with the Maestro team: The fabric should not be bombarded with too many requests at once. Batch size currently hardcoded as 10.
Todo: Rather than doing this, initialize endpoints on demand only.
Background: For some of our intended SST workflows, we exchange data only within nodes. Using the system networks is an unnecessary detour in that case. Since libfabric implements shared memory, this is a somewhat low-hanging fruit to enable truly in-memory staging workflows with SST.
Necessary changes:
The `FI_MR_BASIC` registration mode prints an error, but interestingly it still works anyway. This PR replaces `FI_MR_BASIC` with the equivalent `FI_MR_VIRT_ADDR | FI_MR_ALLOCATED | FI_MR_PROV_KEY | FI_MR_LOCAL`.
The manual data progress has turned out to be somewhat annoying. My idea was to spawn a thread that regularly pokes the provider, but this approach does not work well with anything less than a busy loop. Every call to `fi_read()` by the reader requires an accompanying call to `fi_cq_read()` by the writer: `fi_read()` will fail with `EAGAIN` until the writer has acknowledged the load request. It seems that (at least with my current approach) this requires a ping-pong sort of protocol. I tried decreasing latencies by processing `fi_read()` as well as `fi_cq_read()` in batches, and it made no difference; the provider only processes one load request at a time. As a consequence, the current implementation has extreme latencies, since it puts the progress thread to sleep before poking the provider again. @eisenhauer mentioned in a video call that the control plane of SST implements a potential alternative approach based on file-descriptor polling.
Further benefit of implementing manual progress:
One of the most common issues with a badly configured installation of the libfabric data plane is hangs. Having support in SST for manual progress might fix this in some instances.
I have observed that this PR also "unlocks" the tcp provider, which previously did not work.
Future potential / ideas:
Both of the following points probably require an immense amount of work. These are just some ideas that I wanted to put out here on how SST might be used to implement zero-overhead staging workflows:
- I imagine that this is probably not an easy change to implement, since the control plane would need to deal with two data planes at once.
- Support `void Engine::Get<T>(Variable<T>, T**) const;` for data from the same node (currently used by the Inline Engine).
TODO: