How to render separate meshes as silhouettes with unique values #756

Closed
jbohnslav opened this issue Jul 12, 2021 · 7 comments


jbohnslav commented Jul 12, 2021

❓ Questions on how to use PyTorch3D

Thanks for your great work.

I have a scene with two different objects. I have a semantic segmentation model which can give me estimates of which pixels belong to these different objects. I would like to optimize the parameters of these two meshes such that the silhouettes match the segmentation outputs. To do this, I would like to use the correct shader such that object 1's silhouette has a different value than object 2's silhouette (e.g. all 1s for object 1, and 2s for object 2). Then, my loss function can be something like loss = smooth_l1(segmentation_outputs, rendered_image). I don't know which shader to use or how to do this.

Options as I see them (please correct me if I'm wrong):

  • Use a SoftSilhouetteShader after join_meshes_as_batch, so that the separate objects end up in different elements of the batch dimension (see the sketch just after this list). However, I cannot do this, as I am already using the batch dimension for multiple camera views.
  • Use something like the PhongShader and take the argmax over the color channels of the output image to get per-pixel object IDs. However, I'm not sure this will optimize well: SoftSilhouetteShader blends the faces contributing to each pixel so that the objective is smoother, rather than producing hard targets.
  • Combine the alpha channel of the output with the RGB values of the texture; perhaps this would help make the objective function smoother.
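For reference, a minimal sketch of what the first option could look like if the batch dimension were free. It assumes `mesh1`, `mesh2`, `device`, and `silhouette_renderer` as set up in the code further down; it is a sketch, not tested code.

from pytorch3d.structures import join_meshes_as_batch

# Each object becomes its own batch element, so the silhouettes come out
# separated along the batch dimension (the single camera broadcasts across
# the mesh batch).
batched = join_meshes_as_batch([mesh1, mesh2])
batched_silhouettes = silhouette_renderer(batched.to(device))  # (2, H, W, 4)
per_object_silhouettes = batched_silhouettes[..., 3]           # (2, H, W)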

As an example base, inspired by this very helpful issue, let's use big_ball and little_ball as the two objects. I've updated the code to version 0.4.0 to render these two balls, and I've plotted the outputs of some of the options noted above.

I am not an expert in computer graphics. I'm hoping that your expertise will help me choose which option is the best one, or if all of these ideas are wrong and there is a better way.

initialize spheres

# imports 
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.axes_grid1 import make_axes_locatable
from pytorch3d.utils import ico_sphere
from pytorch3d.structures import Meshes, join_meshes_as_scene
from pytorch3d.renderer import (
    BlendParams,
    look_at_view_transform,
    FoVPerspectiveCameras, 
    PointLights, 
    RasterizationSettings, 
    MeshRenderer, 
    MeshRasterizer,  
    SoftPhongShader,
    TexturesVertex, 
    SoftSilhouetteShader
)
import torch

# Initialize two ico spheres of different sizes
mesh1 = ico_sphere(3)  # (642 verts, 1280 faces)
mesh2 = ico_sphere(4)  # (2562 verts, 5120 faces)
verts1, faces1 = mesh1.get_mesh_verts_faces(0)
verts2, faces2 = mesh2.get_mesh_verts_faces(0)

# Initialize the textures as an RGB color per vertex
tex1 = torch.zeros_like(verts1) 
tex2 = torch.zeros_like(verts2)
tex1[:, 1] = 1.0 # green
tex2[:, 2] = 1.0 # blue

# Make the green sphere smaller and offset both spheres so they are not overlapping
verts1 *= 0.25  
verts1[:, 0] += 0.8
verts2[:, 0] -= 0.5

tex1 = TexturesVertex(verts_features=[tex1])
tex2 = TexturesVertex(verts_features=[tex2])

mesh1 = Meshes(verts=[verts1], faces=[faces1], textures=tex1)
mesh2 = Meshes(verts=[verts2], faces=[faces2], textures=tex2)

# join meshes in the same batch element
mesh = join_meshes_as_scene([mesh1, mesh2])

set up rendering

device = torch.device('cuda:0')
# make the spheres overlap from the perspective of the camera
R, T = look_at_view_transform(2.7, 0, 60) 
cameras = FoVPerspectiveCameras(device=device, R=R, T=T)

# Define blend params first because the blur radius below depends on sigma
blend_params = BlendParams(sigma=1e-4, gamma=1e-4)

raster_settings = RasterizationSettings(
    image_size=256, 
    blur_radius=np.log(1. / 1e-4 - 1.) * blend_params.sigma, 
    faces_per_pixel=100, 
    bin_size=None, 
    max_faces_per_bin=None
)

# Make an arbitrary light source
lights = PointLights(device=device, location=[[0.0, 0.0, -3.0]])

# Create a Phong renderer by composing a rasterizer and a shader. With TexturesVertex,
# the soft Phong shader interpolates the per-vertex colors across each face and 
# applies the Phong lighting model
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras, 
        raster_settings=raster_settings
    ),
    shader=SoftPhongShader(
        device=device, 
        cameras=cameras,
        lights=lights
    )
)

silhouette_renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras, 
        raster_settings=raster_settings
    ),
    shader=SoftSilhouetteShader(blend_params=blend_params)
)

render

# phong shader
image = renderer(mesh.to(device)) 
# softsilhouetteshader
im_silhouette = silhouette_renderer(mesh.to(device))
# argmax over the channel dimension, to figure out which pixels were green and which were blue
argmax = torch.argmax(image[..., :3], dim=-1)
# weight the per-pixel argmax by the alpha channel
blended = argmax.float() * image[..., 3]

visualize

# for visualization
display_images = {'phong': image.detach().cpu().numpy()[0, ..., :3], 
                  'softsilhouette': im_silhouette.detach().cpu().numpy()[0, ..., 3], 
                  'phong_argmax': argmax.detach().cpu().numpy()[0], 
                  'phong_alpha_argmax': blended.detach().cpu().numpy()[0]}

fig, axes = plt.subplots(2,2, figsize=(12,12))
axes = axes.flatten()

for i, (title, im) in enumerate(display_images.items()):
    ax = axes[i]
    imh = ax.imshow(im)
    # hate that this much work is required to put colorbars next to axes
    divider = make_axes_locatable(ax)
    cax = divider.append_axes("right", size="5%", pad=0.05)
    fig.colorbar(imh, cax=cax)
    
    ax.set_xticks([])
    ax.set_yticks([])
    ax.set_title(title)
plt.tight_layout()
fig.suptitle('Options for rendering meshes with textures==object ID', size=16)
fig.subplots_adjust(top=0.9)
plt.show()

outputs of final cell

[Figure: the four rendered panels (phong, softsilhouette, phong_argmax, phong_alpha_argmax), each with a colorbar.]

Edits:

  • fixed blend params + faces per pixel for silhouette shading
@jbohnslav
Author

Some further thoughts: it's a bit of a hack to use the color channels of the Phong output as a target, since it allows at most 3 objects. Furthermore, the Phong shader requires lighting in the scene. I do not plan to actually model lighting, reflectance, or anything else; right now they are just a hack to get a color-based mesh ID into the outputs.
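One lighting-free alternative, sketched under the assumption that rendering each mesh in its own forward pass is acceptable (it reuses `silhouette_renderer`, `mesh1`, `mesh2`, and `device` from the snippet above): weight each object's soft silhouette by its ID and sum. This avoids both the 3-object limit and the need for lights, though overlapping or occluding objects would need extra handling.

# One SoftSilhouetteShader pass per object; the alpha channel is the soft silhouette.
silhouettes = []
for m in (mesh1, mesh2):
    sil = silhouette_renderer(m.to(device))   # (1, H, W, 4)
    silhouettes.append(sil[0, ..., 3])        # (H, W) soft silhouette

# Compose an ID image: background stays 0, object k gets value k + 1.
# Note: where objects overlap, the weighted silhouettes simply add up,
# so occlusions are not resolved in this sketch.
id_image = torch.zeros_like(silhouettes[0])
for k, s in enumerate(silhouettes):
    id_image = id_image + (k + 1) * s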

@nikhilaravi nikhilaravi self-assigned this Jul 13, 2021
@nikhilaravi nikhilaravi added the how to How to use PyTorch3D in my project label Jul 13, 2021
@github-actions

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions bot added the Stale label Aug 12, 2021
@jbohnslav
Author

Not stale! Still an open question. I'll try to tackle it myself in the coming days.

@github-actions github-actions bot removed the Stale label Aug 13, 2021
@github-actions

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions bot added the Stale label Sep 13, 2021
@github-actions

This issue was closed because it has been stalled for 5 days with no activity.

@Tandon-A

@jbohnslav

Hi Jim,

Were you able to figure this out?

@zhifanzhu

For people looking for a way to achieve this, maybe take a look at my answer in #1528 (comment).
