
View poses rendering #19

Closed
filipzag opened this issue Oct 18, 2019 · 2 comments

@filipzag

Hi,
can you please explain the process of generating and storing the view positions used to make the video? I would like to modify it and make my own paths.

Thank you

@rodrygojose (Collaborator)

Hi @bolemebrige
The functions that generate camera positions can be found here:
https://github.com/Fyusion/LLFF/blob/master/llff/math/pose_math.py#L68
You could modify those if you want to generate your own paths.
Best

@bmild (Collaborator) commented Oct 18, 2019

Sure. The basic script we provide for generating video render paths is imgs2renderpath.py. It is a wrapper around the function generate_render_path; the wrapper handles loading the input view poses, creating the output path, and saving it to a text file ready for our CUDA renderer or the script mpis2video.py. So I would recommend writing a new function to take the place of generate_render_path if you want a custom path.
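As a rough sketch of where a custom function would slot in, here is a hypothetical stand-in for generate_render_path. The name my_render_path, the dummy identity poses, and the hwf values below are all illustrative assumptions, not code from the repo:

```python
import numpy as np

def my_render_path(poses, hwf, n_frames=120):
    """Hypothetical stand-in for generate_render_path: returns render poses
    of shape (n_frames, 3, 5); here it simply cycles through the input poses."""
    render_poses = [np.concatenate([poses[i % len(poses)], hwf], 1)
                    for i in range(n_frames)]
    return np.stack(render_poses, 0)

# Dummy stand-ins for what imgs2renderpath.py loads from the scene directory:
# two identity camera-to-world matrices plus an hwf column (height, width, focal).
poses = np.stack([np.eye(3, 4)] * 2, 0)    # (2, 3, 4)
hwf = np.array([[756.], [1008.], [815.]])  # (3, 1), placeholder values

print(my_render_path(poses, hwf).shape)    # (120, 3, 5)
```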

First, a note about our pose matrix convention: our TensorFlow graph expects poses with rotation matrices in the form [down, right, backwards], but this line in imgs2renderpath.py changes them to the more traditional [right, up, backwards], since that is what the rendering code expects.
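As a concrete sketch of that convention change, assuming the poses are stored as an (N, 3, 5) array whose first three columns are the rotation: new right = old right and new up = -old down, so the swap is a column permutation with one sign flip. The array layout here is an assumption; the actual line in imgs2renderpath.py may index differently:

```python
import numpy as np

def fix_pose_convention(poses):
    """Convert rotation columns from [down, right, backwards] to
    [right, up, backwards]; translation and hwf columns are left alone."""
    return np.concatenate([poses[:, :, 1:2],     # right becomes the x column
                           -poses[:, :, 0:1],    # up = -down becomes the y column
                           poses[:, :, 2:]], 2)  # backwards, translation, hwf unchanged

# Quick check on one pose with labeled columns:
down, right, backwards = np.array([0., -1., 0.]), np.array([1., 0., 0.]), np.array([0., 0., 1.])
pose = np.stack([down, right, backwards, np.zeros(3), np.ones(3)], 1)[None]  # (1, 3, 5)
print(fix_pose_convention(pose)[0, :, :3])  # columns are now right, up, backwards
```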

To explain some of the functions in llff.math.pose_math:

  • poses_avg(...) takes a list of poses and returns a central average pose that can be used as an "origin" for centering the whole pose path
  • viewmatrix(...) creates a new pose matrix from a z-axis (camera axis) direction, an up vector, and a camera position (see the sketch after this list)
  • render_path_axis(...) generates a linear camera motion along axis ax, with the camera oriented to look at a point a distance focal along its z-axis
  • render_path_spiral(...) generates an elliptical motion in the XY plane combined with a sinusoidal motion along the Z axis
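For reference, a viewmatrix-style construction is essentially a look-at: build an orthonormal frame from the camera z-axis and an approximate up vector. Here is a minimal sketch of that generic construction (not necessarily line-for-line the repo's viewmatrix):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def viewmatrix_sketch(z, up, pos):
    """Build a 3x4 camera-to-world pose from a camera z-axis (pointing backwards,
    away from the scene), an approximate up vector, and a camera position."""
    vec2 = normalize(z)                      # camera z-axis
    vec0 = normalize(np.cross(up, vec2))     # camera x-axis (right)
    vec1 = normalize(np.cross(vec2, vec0))   # camera y-axis (true up)
    return np.stack([vec0, vec1, vec2, pos], 1)
```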

Generally, it is easiest to compute a series of look directions (the z/camera axis), up vectors, and camera origins, then use viewmatrix to turn these into poses and save them out. Note that you need to keep the hwf vector that encodes height, width, and focal length, and concatenate it onto the end of each pose matrix, like np.concatenate([viewmatrix(z, up, c), hwf], 1), to get a 3x5 matrix for each new view.
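Putting that recipe together, here is a hedged sketch of a custom circular path. The function circle_path, the radius, and the hwf values are illustrative assumptions, and it reuses normalize and viewmatrix_sketch from the sketch above:

```python
import numpy as np
# Assumes normalize() and viewmatrix_sketch() from the previous sketch are in scope.

def circle_path(center, up, radius, hwf, n_frames=120):
    """Generate (n_frames, 3, 5) render poses on a circle around `center`,
    with every camera looking at `center`."""
    u = normalize(np.cross(up, np.array([1., 0., 0.])))  # assumes up is not parallel to x
    v = normalize(np.cross(up, u))
    render_poses = []
    for theta in np.linspace(0., 2. * np.pi, n_frames, endpoint=False):
        c = center + radius * (np.cos(theta) * u + np.sin(theta) * v)  # camera origin
        z = normalize(c - center)  # z-axis points backwards, away from the look-at point
        render_poses.append(np.concatenate([viewmatrix_sketch(z, up, c), hwf], 1))
    return np.stack(render_poses, 0)

# Example: a radius-2 orbit around the origin with z-up cameras.
hwf = np.array([[756.], [1008.], [815.]])  # placeholder height, width, focal column
print(circle_path(np.zeros(3), np.array([0., 0., 1.]), 2., hwf).shape)  # (120, 3, 5)
```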
