
Request for documentation on how to use multiple textures & multiple meshes per scene #15

Closed
aluo-x opened this issue Jan 28, 2020 · 8 comments
Labels: question (Further information is requested)

aluo-x commented Jan 28, 2020

🚀 Feature

Would appreciate more documentation or functionality to allow rendering multiple meshes in a single scene, each with its own texture or vertex colors. In neural mesh renderer this can be achieved by setting a tensor of size (1, F, T, T, T, 3), where F is the number of faces and T is the "texture size", while the Nvidia Kaolin repository offers the vertex texture mode for DIB-R.

Requesting an example of how this could be accomplished using this renderer.

Motivation

Complex scenes require multiple objects, each often with its own texture. This functionality is present in other differentiable renderers.

Pitch

Possible pseudo-code:

mesh_list = [Mesh1(faces1, verts1, tex1), Mesh2(faces2, verts2, tex2), ...]

joined = Mesh()
for m in mesh_list:
    joined = joined.mesh_join(m)

img = Renderer(joined)
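The joining step in the pseudocode above can be sketched concretely. This is a hypothetical helper (`join_verts_faces` is not a PyTorch3D API): the key detail is that each mesh's face indices must be offset by the running vertex count so they remain valid in the merged vertex tensor.

```python
import torch

def join_verts_faces(verts_list, faces_list):
    # Concatenate vertex tensors; offset each faces tensor by the
    # number of vertices accumulated so far so indices stay valid.
    verts_out, faces_out, offset = [], [], 0
    for verts, faces in zip(verts_list, faces_list):
        verts_out.append(verts)
        faces_out.append(faces + offset)
        offset += verts.shape[0]
    return torch.cat(verts_out), torch.cat(faces_out)
```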
@aluo-x aluo-x changed the title Requesting for documentation on how to use multiple textures & multiple meshes per scene Request for documentation on how to use multiple textures & multiple meshes per scene Jan 28, 2020
nikhilaravi (Contributor) commented Jan 28, 2020

@aluo-x If I understand your question correctly, you want to concatenate multiple meshes into one mesh and then render one image.

This is definitely possible with PyTorch3d. You would just need to format the input data correctly yourself as internally we assume that for a batch of meshes, each mesh is rendered onto a separate image (the same assumption made in NMR and Kaolin).

For example, take two ico spheres where one has a blue texture and the other has a red texture. We can initialize the textures as an RGB color per vertex. We can then create one mesh which contains both the meshes and render them to a single image.

import torch
from pytorch3d.utils import ico_sphere
from pytorch3d.structures import Meshes, Textures

# Initialize two ico spheres of different sizes
mesh1 = ico_sphere(1)  # (42 verts, 80 faces)
mesh2 = ico_sphere(2)  # (162 verts, 320 faces)
verts1, faces1 = mesh1.get_mesh_verts_faces(0)
verts2, faces2 = mesh2.get_mesh_verts_faces(0)

# Initialize the textures as an RGB color per vertex
tex1 = torch.ones_like(verts1)
tex2 = torch.ones_like(verts2)
tex1[:, 1:] *= 0.0  # red
tex2[:, :2] *= 0.0  # blue

# Create one mesh which contains two spheres of different sizes.
# To do this we can concatenate verts1 and verts2,
# but we need to offset the face indices of faces2 so they index
# into the correct positions in the combined verts tensor.

# Make the red sphere smaller and offset both spheres so they do not overlap
verts1 *= 0.25
verts1[:, 0] += 0.8
verts2[:, 0] -= 0.5
verts = torch.cat([verts1, verts2])  # (204, 3)

# Offset faces2 by the number of vertices in mesh1
faces2 = faces2 + verts1.shape[0]
faces = torch.cat([faces1, faces2])  # (400, 3)

tex = torch.cat([tex1, tex2])[None]  # (1, 204, 3)
textures = Textures(verts_rgb=tex)

mesh = Meshes(verts=[verts], faces=[faces], textures=textures)

# Initialize a renderer separately and then render the mesh
image = renderer(mesh)  # (1, H, W, 4)

The output as seen below is a single RGBA image which contains both meshes.

[Image: rendered output showing the small red sphere and the larger blue sphere in a single frame]

Let me know if that answered your question. Bear in mind that the texturing API is still experimental and we are working on improvements and more functionality.


NOTE

To learn how to initialize a renderer please refer to one of the tutorials e.g. camera position optimization


aluo-x (Author) commented Jan 28, 2020

Wow, thanks for the extremely fast response and the great tool!
The example is very clear and exactly what I was looking for. Somehow I missed the verts_rgb option while browsing the texture structure code. The documentation of the world/camera/image space conventions is also a piece missing from many other renderers.

One minor suggestion: this method (like the vertex color mode in DIB-R) is limited to some degree by the mesh resolution. It would still be a nice improvement to allow barycentric-like interpolation of textures per face, so that instead of [F, 3] one could specify [F, T, T, T, 3] for the texture.

This solves my problem so I will close the issue for now.

@aluo-x aluo-x closed this as completed Jan 28, 2020
nikhilaravi (Contributor) commented Jan 28, 2020

@aluo-x Great! :) Specifying a texture atlas with a T*T*3 texture map per face is a feature we are planning to add. Stay tuned!
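To make the per-face atlas idea concrete, here is a hedged sketch of how a (F, R, R, 3) atlas could be sampled with barycentric coordinates. The function name `sample_atlas` and the exact mapping of (w1, w2) onto the R×R grid are assumptions for illustration (loosely following the SoftRas convention), not the PyTorch3D implementation.

```python
import torch

def sample_atlas(atlas, face_idx, bary):
    # atlas: (F, R, R, 3) per-face texture grids
    # face_idx: (...) face index per sample
    # bary: (..., 3) barycentric weights (w0, w1, w2) summing to 1
    R = atlas.shape[1]
    w0, w1, w2 = bary.unbind(-1)
    # Assumption: map two barycentric weights onto the R x R grid.
    u = (w1 * (R - 1)).round().long().clamp(0, R - 1)
    v = (w2 * (R - 1)).round().long().clamp(0, R - 1)
    return atlas[face_idx, u, v]  # (..., 3) sampled colors
```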

@nikhilaravi nikhilaravi self-assigned this Mar 20, 2020
@nikhilaravi nikhilaravi added the question Further information is requested label Mar 20, 2020
rahuldey91 commented Mar 31, 2020

I am facing problems rendering a mesh whose Textures is initialized with verts_rgb. I used the same code as above, and my renderer is:

MeshRenderer(
  (rasterizer): MeshRasterizer()
  (shader): TexturedSoftPhongShader()
)

When I try to render using these

tex = torch.cat([tex1, tex2])[None]  # (1, 204, 3)
textures = Textures(verts_rgb=tex)

mesh = Meshes(verts=[verts], faces=[faces], textures=textures)

# Initialize a renderer separately and then render the mesh
image = renderer(mesh)   # (1, H, W, 4)

I get the following error:

-> image = renderer(mesh)   # (1, H, W, 4)
(Pdb) c
Traceback (most recent call last):
  File "main.py", line 93, in <module>
    main()
  File "main.py", line 68, in main
    loss_train = trainer.train(epoch, loaders)
  File "~/3DMM/pytorchnet_3d/train.py", line 369, in train
    image = renderer(mesh)   # (1, H, W, 4)
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/renderer/mesh/renderer.py", line 69, in forward
    images = self.shader(fragments, meshes_world, **kwargs)
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/renderer/mesh/shader.py", line 269, in forward
    texels = interpolate_texture_map(fragments, meshes)
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/renderer/mesh/texturing.py", line 43, in interpolate_texture_map
    faces_uvs = meshes.textures.faces_uvs_packed()
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/structures/textures.py", line 147, in faces_uvs_packed
    return list_to_packed(self.faces_uvs_list())[0]
  File "~/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/structures/utils.py", line 116, in list_to_packed
    N = len(x)
TypeError: object of type 'NoneType' has no len()

However, the rendering works only when the Texture is initialized with verts_uvs, faces_uvs and texture_maps.

nikhilaravi (Contributor) commented
TexturedSoftPhongShader supports only textures specified as texture maps and vertex uv coordinates. Please use SoftPhongShader instead.

nikhilaravi (Contributor) commented
@aluo-x support for textures as a per-face atlas of shape (F, R, R, 3) has now been added (based on the SoftRas implementation). Here is a complete example of how to load the textures as an atlas, create a mesh and render it: https://github.com/facebookresearch/pytorch3d/blob/master/tests/test_render_meshes.py#L468.

rainsoulsrx commented

> (quoting nikhilaravi's two-sphere example from above in full)
When the two objects overlap, how can I render them in the correct order?

bottler (Contributor) commented Jan 19, 2022

@rainsoulsrx This is an old closed issue. There is now a function join_meshes_as_scene which makes joining meshes into a single mesh easier. The "order" in which objects appear is determined mainly by their z-distances and the blending function, and slightly by the settings of the rasterizer. I don't know what you mean by "correct", but if you are not getting what you expect then you might want to open a new issue.
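The z-distance ordering bottler describes can be illustrated with a toy example: under hard blending, each pixel keeps the color of the fragment with the smallest z (closest to the camera); soft blending would instead weight fragments by depth. The tensors below are made-up data, not PyTorch3D internals.

```python
import torch

# Two pixels, each with two candidate fragments (red and blue)
# at different depths. The nearest fragment wins per pixel.
z = torch.tensor([[0.5, 0.2],   # pixel 0: red behind, blue in front
                  [0.3, 0.9]])  # pixel 1: red in front, blue behind
colors = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]],
                       [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]])

nearest = z.argmin(dim=1)               # index of the closest fragment
out = colors[torch.arange(2), nearest]  # pixel 0 -> blue, pixel 1 -> red
```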
