
feature-request: Render to surface texture to generate object surface textures #3

PhilAndrew opened this issue Feb 27, 2017 · 2 comments


PhilAndrew commented Feb 27, 2017

If you set the camera up as a 2D grid of rays hovering above one side of a surface, you can generate the surface's texture data and run Monte Carlo accumulation on it, building up the texture over repeated passes.
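
A minimal sketch of that setup (the names here are illustrative, not taken from the PlayCanvas project): one jittered ray per texel, fired straight down at the surface, with a running average that converges over repeated passes:

```typescript
type Vec3 = [number, number, number];
type Ray = { origin: Vec3; dir: Vec3 };

// Stand-in for a real path tracer: returns the RGB radiance carried by one ray.
function traceRadiance(ray: Ray): Vec3 {
  return [0.5, 0.5, 0.5]; // placeholder constant
}

// Bake one Monte Carlo pass into an accumulation buffer (one RGB triple per
// texel). Averaging many passes converges toward the true surface texture.
function bakePass(accum: Float32Array, width: number, height: number, pass: number): void {
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Jitter the ray inside its texel so repeated passes supersample it.
      const u = (x + Math.random()) / width;
      const v = (y + Math.random()) / height;
      // One ray per texel in a 2D grid above the surface, pointing down at it.
      const ray: Ray = { origin: [u, 1.0, v], dir: [0, -1, 0] };
      const c = traceRadiance(ray);
      const i = (y * width + x) * 3;
      // Incremental mean: accum holds the average of all passes so far.
      accum[i + 0] += (c[0] - accum[i + 0]) / (pass + 1);
      accum[i + 1] += (c[1] - accum[i + 1]) / (pass + 1);
      accum[i + 2] += (c[2] - accum[i + 2]) / (pass + 1);
    }
  }
}
```

In a real renderer the per-texel loop would run on the GPU, but the accumulation logic is the same.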

For a scene that is static while the player moves around in it, the surface texture can be generated accurately once, and after that it doesn't need to be recomputed in real time.

Here is an example I produced that does path tracing and renders textures using PlayCanvas. My suggestion would be to let this scale out: save the surface textures back to a server for caching so they don't need to be computed again, and let different client browsers compute different surfaces.
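
A sketch of that caching round-trip (the endpoint and key scheme are made up for illustration): ask the server for an already-baked texture first, and on a miss, bake locally and upload the result:

```typescript
// Fetch a previously baked texture for this surface, or bake and upload it.
// The URL and the surfaceId key are illustrative placeholders.
async function getOrBakeTexture(
  surfaceId: string,
  bake: () => Promise<Blob>,
): Promise<Blob> {
  const url = `https://example.com/baked-textures/${surfaceId}`;

  // Another client may already have baked this surface.
  const cached = await fetch(url);
  if (cached.ok) return cached.blob();

  // Cache miss: bake locally, then share the result with other clients.
  const texture = await bake();
  await fetch(url, { method: "PUT", body: texture });
  return texture;
}
```

Content-addressing the key (for example, a hash of the geometry plus the lighting setup) would let clients share work without any coordination.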

Here is that project on PlayCanvas: play it at https://playcanvas.com/editor/scene/492862/launch or edit it at https://playcanvas.com/project/453830/overview/surface-shader. You need to spin the mouse wheel to zoom in and rotate the view to see inside.

[screenshot]

@erichlof (Owner) commented

Hi @PhilAndrew,
I tried the link you posted, but it makes my browser lose its WebGL context and the screen goes black. I'm on a 2014 laptop, which might be part of the problem, although it's the machine I develop everything in this GitHub repo on. Could you post a lighter example?

But in any case, thanks for the suggestion. Just to make sure I understand correctly: let's say I wanted a rock texture. Would it be like placing the camera directly over a gray-scale height map, looking down, kind of like a satellite view of a mountain range on Google Maps, and then tracing rays directly downwards? I can envision getting a good first intersection (or 't' distance along the primary ray to the surface), but what about secondary ray bounces to capture global illumination / indirect lighting? I can't envision the algorithm/math to shoot sideways along a height-field texture in search of a second intersection point, once we're sitting on the first intersection point.
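
Thinking out loud, maybe ray-marching the height field would cover the sideways case: step along the bounce ray and report a hit once it dips below the sampled height. A rough sketch of what I mean (height() here is a made-up stand-in for sampling the height map, not anything from the linked project):

```typescript
// Made-up stand-in for sampling the gray-scale height map at (u, v).
function height(u: number, v: number): number {
  return 0.25 + 0.25 * Math.sin(u * 40.0) * Math.cos(v * 40.0); // placeholder bumps
}

type Vec3 = [number, number, number];

// March from `origin` along `dir` (which need not be vertical) and return the
// first point where the ray dips below the height field, or null on a miss.
// Starting at t = step rather than t = 0 avoids re-hitting the point we're
// already sitting on after the first bounce.
function marchHeightField(
  origin: Vec3,
  dir: Vec3,
  maxDist = 2.0,
  step = 0.002,
): Vec3 | null {
  for (let t = step; t <= maxDist; t += step) {
    const x = origin[0] + dir[0] * t;
    const y = origin[1] + dir[1] * t;
    const z = origin[2] + dir[2] * t;
    // The ray left the [0,1] footprint of the height-map tile: a miss.
    if (x < 0 || x > 1 || z < 0 || z > 1) return null;
    if (y <= height(x, z)) return [x, y, z]; // first crossing below the surface
  }
  return null;
}
```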

Maybe it's in the code you posted but I just couldn't see it. The pic looks great though, thanks for sharing!
-Erich

@PhilAndrew (Author) commented

First, here are some more pics. You can see that from the outside the texture mapping is wrong and a bit crazy, but inside the texture mapping is correct. The object can be rotated, and on my computer it only takes about 5 seconds to settle down to a nice texture; granted, I'm on a relatively high-end graphics card in a relatively new PC.

[screenshot]

[screenshot]
