I want to add pose optimization on top of this algorithm, but I found that GaussianRasterization does not pass gradients to the viewpoint_camera. How can I do this?
Since the rasterizer is a custom CUDA extension, you would have to derive the gradients w.r.t. the camera, compute them in the CUDA code, and return them. Depending on your familiarity with CUDA, I am afraid this can be a challenging task. If you want to attempt it, preprocess and computeCov2D are the methods to look at; they are where you would accumulate the camera gradients in the backward pass.
Alternatively, you could perform the camera-dependent CUDA steps (computing the 2D covariance matrices and projecting the means) in PyTorch instead. That would make things quite a bit slower, but you would not need to derive the camera gradients yourself.
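To illustrate the PyTorch route, here is a minimal sketch (not the library's actual API) of the two camera-dependent steps: projecting 3D means into pixel space and forming the 2D covariance via the EWA approximation Sigma' = J W Sigma W^T J^T, the same computation computeCov2D performs. The names `project_means`, `cov2d`, and `viewmat` are hypothetical; optimizing a raw 4x4 view matrix is also just for illustration (a real implementation would parameterize the pose, e.g. on SE(3)). Because everything runs through autograd, gradients reach the camera for free:

```python
import torch

def project_means(means3d, viewmat, fx, fy, cx, cy):
    """Project Nx3 world-space means to 2D pixel coordinates.

    viewmat is a hypothetical 4x4 world-to-camera matrix; keeping this
    step in PyTorch lets autograd produce camera gradients.
    """
    ones = torch.ones(means3d.shape[0], 1)
    homog = torch.cat([means3d, ones], dim=1)   # N x 4 homogeneous points
    cam = (viewmat @ homog.T).T                 # N x 4, camera space
    x, y, z = cam[:, 0], cam[:, 1], cam[:, 2]
    u = fx * x / z + cx                         # perspective divide
    v = fy * y / z + cy
    return torch.stack([u, v], dim=1), cam[:, :3]

def cov2d(cov3d, cam_xyz, viewmat, fx, fy):
    """EWA splatting: Sigma' = J W Sigma W^T J^T (as in computeCov2D)."""
    x, y, z = cam_xyz[:, 0], cam_xyz[:, 1], cam_xyz[:, 2]
    zero = torch.zeros_like(z)
    # Per-Gaussian Jacobian of the perspective projection (N x 2 x 3)
    J = torch.stack([
        torch.stack([fx / z, zero, -fx * x / z**2], dim=1),
        torch.stack([zero, fy / z, -fy * y / z**2], dim=1),
    ], dim=1)
    W = viewmat[:3, :3]                         # rotation part of the view
    T = J @ W                                   # N x 2 x 3
    return T @ cov3d @ T.transpose(1, 2)        # N x 2 x 2

# Usage: gradients now flow back to the camera parameters.
viewmat = torch.eye(4, requires_grad=True)      # toy pose parameterization
means3d = torch.tensor([[0.0, 0.0, 5.0], [1.0, -1.0, 4.0]])
cov3d = torch.eye(3).expand(2, 3, 3)
means2d, cam_xyz = project_means(means3d, viewmat, 500.0, 500.0, 320.0, 240.0)
loss = means2d.sum() + cov2d(cov3d, cam_xyz, viewmat, 500.0, 500.0).sum()
loss.backward()
print(viewmat.grad is not None)                 # camera gradient exists
```

The results of these two functions would then be fed into the rasterizer in place of its internal preprocessing, which is where the slowdown mentioned above comes from.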
I'm afraid neither option is trivial; both require a good amount of programming and math skills.