Need Transparent Texture Mapping #10
@nkolot I am trying to implement this and would like to make a pull request. Could you share some ideas/pointers on how to do it?
I actually looked into it, and it is significantly harder to implement than it looks. The basic problem is that when you render transparent triangles, the order in which they appear matters (the z-buffer you mentioned). I can't see a straightforward way to implement this with the current data structures. It wouldn't be hard to do on the CPU, but on the GPU it could be a nightmare: for each pixel you would have to store every triangle that might contribute to the rendered image, and the current data structures have no room for that.

The other issue I was thinking about is how you would define gradients for transparent textures. Take, for example, a triangle that is entirely transparent. In that scenario the renderer would simply ignore it and optimize whatever is "behind" it in the scene. What are your thoughts on that?
I feel the same way: the current data structure is hard to change, and I don't think it is a good idea to touch the low-level CUDA kernel files either.

You're right about the gradient problem. A workaround is to make a rule that there is no perfectly transparent glass in the world. In other words, we require the user to set the alpha value strictly greater than 0, no matter how small; that way, part of the gradient flows into the transparent layer in proportion to alpha. If a piece of glass is perfectly transparent, we may as well remove it from the scene, since it would not change the image anyway.

I have a vague idea of implementing it at a high level (for example, in RasterizeFunction). The renderer would generate a batch of images as a tensor [z, 3, W, H] rather than a single [1, 3, W, H]. The different image layers (like a Z-buffer) would then be stitched together into one image, with the alpha value serving as a mask: output = alpha * current_layer_foreground + (1 - alpha) * all_previous_layers_background. There are still a lot of details; let me think for a while to make the idea concrete, since it is a tricky situation. Maybe we should do #6 first.
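The stitching rule above is standard back-to-front "over" compositing across the layer dimension. A minimal NumPy sketch, assuming layers are ordered nearest-first (the function name and layer layout are my own, not part of the renderer):

```python
import numpy as np

def composite_layers(colors, alphas):
    """Back-to-front 'over' compositing of rendered layers.

    colors: float array of shape [z, 3, H, W], nearest layer first
    alphas: float array of shape [z, 1, H, W], per-layer coverage in (0, 1]
    Returns a single [3, H, W] image.
    """
    out = np.zeros_like(colors[0])  # start from a black background
    # iterate from the farthest layer to the nearest one
    for rgb, a in zip(colors[::-1], alphas[::-1]):
        # output = alpha * current_layer + (1 - alpha) * layers behind it
        out = a * rgb + (1.0 - a) * out
    return out
```

Because alpha is required to be strictly positive, every layer keeps a nonzero multiplicative path in this expression, so gradients reach even nearly transparent surfaces.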
I am closing this issue for now because it seems very difficult to implement. |
Agree |
In 3D modeling, we sometimes use a PNG image as the texture to get transparency effects. A PNG image has an alpha channel, and it is the easiest way to handle glass materials and animal hair. For example, here is the result of a horse model rendered in OpenGL.
Look at the horse's tail.
(I will also post the PyTorch rendering result here after #9 is merged. The PyTorch version cannot handle PNG textures for now.)
PNG textures are a cheap but widely used trick to add realism to a render, so supporting them would be very useful. It may be a very challenging enhancement, though.
It could be implemented as a "mask" value, but it is a complicated problem that involves concepts like the Z-buffer.
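The reason the Z-buffer comes into play: with transparency, a single pixel can receive contributions from several triangles, and the blend is order-dependent, so the fragments must be depth-sorted before blending. A toy per-pixel sketch in pure Python (the function and tuple layout are hypothetical, just to illustrate the painter's-algorithm idea):

```python
def blend_pixel(fragments):
    """Blend the fragments covering one pixel, back-to-front.

    fragments: list of (depth, rgb, alpha) tuples, where rgb is a
    3-tuple and a larger depth means farther from the camera.
    """
    out = (0.0, 0.0, 0.0)  # background color
    # painter's algorithm: paint the farthest fragment first
    for _, rgb, alpha in sorted(fragments, key=lambda f: f[0], reverse=True):
        out = tuple(alpha * c + (1.0 - alpha) * o for c, o in zip(rgb, out))
    return out
```

Storing such a variable-length fragment list per pixel is exactly the part that is hard to fit into the renderer's existing GPU data structures, which keep only one nearest triangle per pixel.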