[Caspar] GPU device selection for solver #461
Conversation
Feature requested in colmap/colmap#4018
aaron-skydio left a comment
Cool, the general approach here makes sense to me: the solver object gets an associated device because it doesn't make sense to move it across devices between calls, since its allocated buffers live on a particular device; and the free functions infer the right device.
Curious if Matias has any other feedback, but otherwise this looks good to me.
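(For illustration only: a minimal C++/CUDA sketch of the pattern described above, with the solver bound to one device at construction so that its buffers and later launches stay there. The class, member, and constant names are hypothetical, not the actual Caspar code.)

```cpp
// Hypothetical sketch, not the actual Caspar solver. The device is chosen once,
// at construction, and every later call re-activates it before touching the
// buffers that were allocated there. (Error checking omitted for brevity.)
#include <cuda_runtime.h>
#include <cstddef>

class DeviceBoundSolver {
 public:
  explicit DeviceBoundSolver(int device_id = 0) : device_id_(device_id) {
    cudaSetDevice(device_id_);               // allocations below land on this device
    cudaMalloc(&workspace_, kWorkspaceBytes);
  }

  void Solve() {
    cudaSetDevice(device_id_);               // re-bind before launching kernels
    // ... launch kernels that read/write workspace_ ...
  }

  ~DeviceBoundSolver() {
    cudaSetDevice(device_id_);
    cudaFree(workspace_);
  }

 private:
  static constexpr std::size_t kWorkspaceBytes = 1 << 20;  // placeholder size
  int device_id_;
  void* workspace_ = nullptr;
};
```

Re-activating the stored device on every call is what makes it safe for the caller's current device to change between calls.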
|
Should we wait to merge this until someone with a multi-GPU rig can test it? I currently don't have a rig with more than one GPU.
|
Hmm, maybe worth asking someone on the COLMAP thread if they can do that? It seemed like someone over there might already have a multi-GPU rig sitting around. I don't think I really have time to test this on a multi-GPU rig right now, unfortunately.
|
I'll make a new PR after colmap/colmap#4018 is merged so this can be tested.
|
Testing this in colmap/colmap#4379.
|
Multi-GPU support worked well, according to the COLMAP contributors. However, that doesn't exercise the Python side. Do we want to exclude the Python-side changes from this PR, or just gamble on them working as expected?
|
I'm just going to go ahead and merge |
Adds device_id throughout the Caspar stack so callers can pin all GPU operations to a specific device. The default is device 0.
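A rough sketch of the intended caller-facing behaviour, reusing the hypothetical DeviceBoundSolver from the sketch earlier in the thread (the real Caspar signatures may differ):

```cpp
// Illustrative usage only -- names and signatures are not the real Caspar API.
DeviceBoundSolver default_solver;                      // no device_id given: device 0
DeviceBoundSolver second_gpu_solver(/*device_id=*/1);  // buffers and kernels pinned to device 1
```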
Changes in Solver (solver.h/cc.jinja, solver_pybinding.h.jinja, lib.pyi.jinja):
Was advised to ping @matias-christensen-skydio for CUDA best practices. Would appreciate any feedback, especially on the device inference via the CUDA array interface for the pybindings.
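To make that question concrete, here is one way the inference could work, sketched under the assumption that the binding pulls the raw pointer out of the array's `__cuda_array_interface__["data"][0]` and asks the CUDA runtime which device owns it; the helper name is illustrative, not the actual pybinding code:

```cpp
// Hypothetical helper, not the actual Caspar pybinding code: map a device pointer
// (e.g. taken from __cuda_array_interface__["data"][0]) to the GPU that owns it.
#include <cuda_runtime.h>
#include <stdexcept>

inline int DeviceOfPointer(const void* ptr) {
  cudaPointerAttributes attrs{};
  if (cudaPointerGetAttributes(&attrs, ptr) != cudaSuccess) {
    throw std::runtime_error("pointer is unknown to the CUDA runtime");
  }
  if (attrs.type != cudaMemoryTypeDevice && attrs.type != cudaMemoryTypeManaged) {
    throw std::runtime_error("expected device or managed memory");
  }
  return attrs.device;  // ordinal of the device that owns the allocation
}
```

A solver (or free function) could then check that the inferred device matches its own device_id and raise a clear error on mismatch.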
Blocked by #458, fixes #460.