[Question/Documentation] Is it possible to run an experiment with trials distributed across GPUs #66
Labels: enhancement (New feature or request), fixready (Fix has landed on master), question (Further information is requested)
The tool looks great! I'm just wondering how you would go about running an experiment with trials distributed across GPUs (on a single machine). I am looking at the Service API / Developer API pages but cannot see how a client/server or queue structure would work (I have not dug through the code yet).
I'm after something like Ray to do optimisation.
I think distributed experiments are a pretty important feature, so I'm assuming it has to be there somewhere; a tutorial would be great. I'm interested in a single-host / multi-GPU environment, but I'm sure multi-host would also be of value to people.
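For context, here is a minimal sketch of the kind of structure I mean: an ask/tell optimisation loop whose trials are farmed out to one process per GPU on a single machine. Everything here is an assumption for illustration — the `suggest`/`evaluate` functions are hypothetical placeholders (in a real setup the suggest/report steps would be the optimiser's ask/tell calls, and `evaluate` would be actual training code), and `N_GPUS` is just an example value:

```python
# Sketch only: distribute trials of a hypothetical ask/tell optimisation
# loop across the GPUs of one machine using a process pool.
import os
import random
from concurrent.futures import ProcessPoolExecutor

N_GPUS = 4    # assumption: four GPUs on this host
N_TRIALS = 8


def suggest(trial_index):
    # Placeholder for the optimiser's "ask" step (random search here).
    rng = random.Random(trial_index)
    return {"lr": rng.uniform(1e-4, 1e-1)}


def evaluate(job):
    trial_index, params, gpu_id = job
    # Pin this worker's trial to a single GPU; must happen before any
    # GPU framework is imported in the child process.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    # Hypothetical objective: replace with real training/evaluation.
    loss = (params["lr"] - 0.01) ** 2
    return trial_index, loss


def run():
    results = {}
    with ProcessPoolExecutor(max_workers=N_GPUS) as pool:
        # Round-robin trials onto GPUs 0..N_GPUS-1.
        jobs = [(i, suggest(i), i % N_GPUS) for i in range(N_TRIALS)]
        for trial_index, loss in pool.map(evaluate, jobs):
            # Placeholder for the "tell" step (report loss back).
            results[trial_index] = loss
    return results


if __name__ == "__main__":
    print(run())
```

A queue-based variant (workers pulling trials as they finish, rather than a fixed round-robin) would be the more realistic shape for uneven trial runtimes, but the pinning-via-`CUDA_VISIBLE_DEVICES` idea is the same.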