Access all scene triangles in fragment shader via Buffer Object? #1531

Closed
keeffEoghan opened this issue Mar 16, 2012 · 23 comments

@keeffEoghan

Hey.

I'm trying to implement a real-time ray tracer in THREE, running most of the computation in a fragment shader.
Having access to the scene geometry (just triangles, for simplicity, plus material information) and lights (a custom subclass, in this case) from within the shader is a core part of this - each pixel needs to check ray intersections against the surrounding geometry and lights to achieve the effect.

What I want to find out is whether there's an existing overall representation of the scene being constructed somewhere within THREE that I can just send down to the shader, or if I need to somehow process the scene myself.

What I'm looking for, ideally, is one or more WebGL buffer objects (or uniform buffer objects) which I could just reference from within the shader.
Alternatively, where is the best place I could grab a hold of everything in the scene to construct one myself?

Thanks for any help you can offer.

@alteredq
Contributor

We do have buffers for each object, but I'm not sure you can do ray tracing this way - shaders only see individual vertices. Every example I've seen so far does ray tracing only on procedurally defined primitives, baked directly into the shaders.

Check this thread for previous discussion #509

@keeffEoghan
Author

By seeing individual vertices, do you mean the vertex shader seeing only its current vertex?
Yes, same with the demos I've seen so far (Evan Wallace's and others) - simple primitives in parametric form in the fragment shader.
What I aim to do is send down similarly simple data (triangles/faces) - just a lot of it, taken from actual geometry. This paper (part 3.1.2) illustrates it pretty well - it uses textures to send the data to the shader, but uniform buffer objects should do the job better.
Can those buffers be read by the shader as they are? If I can pass the data to the shader through uniforms, and the shader can traverse it and do the simple intersection tests, then why couldn't it be achieved - the principle's the same, isn't it?

@alteredq
Contributor

By seeing individual vertices, do you mean the vertex shader seeing only its current vertex?

Indeed.

Can those buffers be read by the shader as they are? If I can pass data to shaders through uniforms, and the shader can traverse and do the simple intersection tests against it, then why couldn't it be achieved - the principle's the same, isn't it?

The only ways to get data to shaders in WebGL are uniforms, attributes and textures (i.e. there are no uniform buffer objects or other such OpenGL features).

Attributes are per-vertex; uniforms and textures are per-object.

With uniforms, you only get something over 200 vec4s (in ANGLE; with OpenGL you get more, but not by that much). So the only feasible option seems to be textures.
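For reference, a minimal sketch (not from this thread) of how to check those limits on whatever machine you're on - renderer here is assumed to be a THREE.WebGLRenderer:

// Query the real limits of the current WebGL context.
var gl = renderer.getContext();
console.log('MAX_FRAGMENT_UNIFORM_VECTORS:', gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS));
console.log('MAX_VERTEX_UNIFORM_VECTORS:', gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS));
console.log('MAX_TEXTURE_SIZE:', gl.getParameter(gl.MAX_TEXTURE_SIZE));
// On ANGLE the fragment figure typically comes back a bit over 200 vec4s,
// while a single texture can be thousands of texels on a side - hence
// textures being the only realistic home for scene-sized data.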

@keeffEoghan
Author

Ah, thanks, that lays it out pretty well for me - I didn't know WebGL doesn't have UBOs.
THREE.DataTexture looks like the way to go, with data from THREE.Geometry.
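Something like this is what I have in mind for the packing side - just a sketch, assuming float textures (the OES_texture_float extension) are available; verticesToTexture, mesh and material are placeholder names:

// Pack one vertex per RGB texel into a float DataTexture.
function verticesToTexture(geometry) {
  var vertices = geometry.vertices;
  var width = vertices.length; // one texel per vertex (no padding here)
  var data = new Float32Array(width * 3);

  for (var i = 0; i < vertices.length; i++) {
    data[i * 3]     = vertices[i].x;
    data[i * 3 + 1] = vertices[i].y;
    data[i * 3 + 2] = vertices[i].z;
  }

  var texture = new THREE.DataTexture(
    data, width, 1,
    THREE.RGBFormat, THREE.FloatType,
    undefined,
    THREE.ClampToEdgeWrapping, THREE.ClampToEdgeWrapping,
    THREE.NearestFilter, THREE.NearestFilter // no interpolation between texels
  );
  texture.needsUpdate = true;
  return texture;
}

// Usage - the exact shape of a texture uniform depends on the three.js revision:
material.uniforms.triangles = { type: 't', value: verticesToTexture(mesh.geometry) };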

Cheers for the clear advice, and if you have any other pointers or thoughts I'd be glad to hear them. Thanks.

@n4m3l3ss-b0t

@WestLangley Thanks for pointing me here from #1572.

So, if I understand correctly, I'll need to pass the geometry to the shader as a DataTexture uniform. Then I can have all objects share the same ShaderMaterial and do all the ray-tracing calculations in the shader. That means I'll need one uniform for each object in the scene, right? Also, how do I extract the data from THREE.Geometry into the THREE.DataTexture?

Thanks in advance.

keeffEoghan reopened this Mar 25, 2012
@keeffEoghan
Author

Hey, reopened the issue for you.

I was thinking of doing the same before - having each object use a "raytracing shader" - but every approach I've seen uses a quad facing the camera as a virtual screen, with the ray tracing taking place in its shader (see Evan Wallace's app, for example). I'm now leaning that way: passing information from a reference scene to the screen shader in DataTextures so it can draw the scene.

The reference information needed is:

  • The primitives their geometry is made of - triangles, for simplicity - against which the ray intersections are done
  • Their model-view matrices - for transforming the vertices, since these are only stored relative to the object's position on the CPU; the transformation is done in the shader, and the intersections need to be done against their global positions
  • Colour information - textures
  • Whatever else you need - I'm using their bounding spheres as part of an approach to accelerating the intersection tests

These DataTextures would go in a sort of hierarchy (see the section of the paper referenced above), and the screen shader would then have access to all the global information it needs about the reference scene.

This approach may need some changes to the renderer - the reference scene must be updated, but not actually rendered (only the screen needs to be rendered) - and I'm not sure if it's a good way to do it or not; but as for your two questions:

  • I would pass all geometry down in one texture, all matrices in another, etc., in a logical hierarchy, as opposed to one uniform per object - it's more flexible
  • The DataTexture takes an array of data - which would come from Geometry.vertices - but bear in mind that these are untransformed as they are: you need to apply each object's model-view matrix to its vertices in the shader to get their global positions (see the sketch after this list)
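A rough sketch of how those first two textures might be filled - the layout (RGBA texels, object index in the fourth channel) is just one possible scheme, not anything THREE gives you, buildSceneTextures is a placeholder name, and floatTexture() stands for a helper like the packing sketch earlier in this thread, only with 4 floats per texel:

// Gather every face of every mesh into one array, and every world matrix
// into another, ready to be uploaded as two float DataTextures.
// Assumes triangulated geometry (Face3 only).
function buildSceneTextures(meshes) {
  var triangleData = []; // x, y, z, objectIndex per vertex
  var matrixData = [];   // 16 floats per object = 4 RGBA texels

  for (var o = 0; o < meshes.length; o++) {
    var geometry = meshes[o].geometry;
    var vertices = geometry.vertices;
    var faces = geometry.faces;

    for (var f = 0; f < faces.length; f++) {
      var corners = [faces[f].a, faces[f].b, faces[f].c];
      for (var v = 0; v < 3; v++) {
        var vertex = vertices[corners[v]];
        // Object-space position plus the owning object's index, so the
        // shader can look up the right matrix for the transform.
        triangleData.push(vertex.x, vertex.y, vertex.z, o);
      }
    }

    meshes[o].updateMatrixWorld();
    // How you read the 16 floats out of a Matrix4 depends on the revision
    // (elements here; older builds have flattenToArray()). Column-major.
    var e = meshes[o].matrixWorld.elements;
    for (var m = 0; m < 16; m++) matrixData.push(e[m]);
  }

  return {
    triangles: floatTexture(triangleData, 4),
    matrices: floatTexture(matrixData, 4)
  };
}

I've used matrixWorld (world space) here; using the model-view matrix instead just means doing the intersections in camera space.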

@n4m3l3ss-b0t

Thanks a lot for the re-open.

Good stuff here, I think I understand your idea.
The aim of my project is to write a method that constantly updates the scene, which will be dynamic (I plan on having a JS loader using the Blender add-on), with ray tracing. I've tried to implement a few examples in JavaScript using intersectObjects() from THREE.Ray, but it turned out to be really slow, so I've turned to shaders - since they run on the GPU, it should be much faster.

So, for the reference information, I'll load the geometry from file, then pass it as textures to the shader (like you said, it seems like a good idea). But how do I get the matrices for each object I load?

As for the changes to the renderer, will they really be needed? Can't we render the image once the shader finishes, or is the shader asynchronous from the renderer?

I will post the structure of the program I'm thinking of tomorrow.

@keeffEoghan
Author

No problem - out of curiosity, what kind of FPS were you getting in the javascript version, for how complex a scene?

I've been rethinking my idea, and it'd be nice to avoid editing WebGLRenderer if possible - I think a ray tracing shader for each object would probably work, and there'd even be a small performance improvement in that none of the initial "eye" rays would miss the object. I have no idea whether there's a good reason no-one else I've seen does it that way, but it's worth a shot - most of the code would be transferable.
Anyone know if it would work? That is: having a ray tracing shader which every object uses, as opposed to one which only a single "screen" quad uses to render everything else (both versions would be sent the global scene data they need).

The matrices are handled by THREE, in Object3D, and updated according to the values of Object3D.rotation, scale and position (see Object3D.updateMatrixWorld).
WebGLRenderer updates these, and passes them as the modelViewMatrix uniform to shaders that need them (including any ShaderMaterials, see WebGLRenderer.loadUniformsMatrices). So, the matrices are already loaded, you'd just need to do some extra work to pass the matrices for each object in the scene in a similar way - probably in another texture, as with the triangles.
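On the shader side, getting a matrix back out of such a texture could look like this - matrices, matrixTextureWidth and objectMatrix are assumed names matching the packing sketch above (four RGBA texels per object, column-major, as Matrix4 stores it):

// GLSL embedded in a JS string, as you would for a ShaderMaterial.
var matrixFetchGLSL = [
  'uniform sampler2D matrices;',
  'uniform float matrixTextureWidth;',
  '',
  '// Rebuild a mat4 from four consecutive RGBA texels of a 1-row texture.',
  'mat4 objectMatrix(float objectIndex) {',
  '  float base = objectIndex * 4.0;',
  '  vec4 c0 = texture2D(matrices, vec2((base + 0.5) / matrixTextureWidth, 0.5));',
  '  vec4 c1 = texture2D(matrices, vec2((base + 1.5) / matrixTextureWidth, 0.5));',
  '  vec4 c2 = texture2D(matrices, vec2((base + 2.5) / matrixTextureWidth, 0.5));',
  '  vec4 c3 = texture2D(matrices, vec2((base + 3.5) / matrixTextureWidth, 0.5));',
  '  return mat4(c0, c1, c2, c3); // columns, matching the column-major packing',
  '}'
].join('\n');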

The texture-passing idea would take some work - you're basically using the textures as memory, with references from one to the other, all in a hierarchical structure which you need to devise in a way that makes sense (again, I found the paper above helpful).

This issue seems to be getting somewhat off the original topic, maybe it should be moved to a new one, I dunno. But good luck with your thing.

@n4m3l3ss-b0t

I didn't count the FPS, only the total time of loading the page. On a 500x500 frame, shooting a ray through each pixel and getting the intersections takes 400s on average. The problem isn't really the objects' complexity; in this case it was the number of rays that needed to be shot - since each ray is different, JavaScript takes a long time to process them all.

I think the problem with using a ray-tracing shader per object is that you can't transfer information from one shader to another - after shading one object with the intersections you got, how would you shade it again when a refraction from some other object hits it again?

From what I've learned, we can do it like this on the fragment shader:

  1. A uniform with the pile of all the scene triangles, packed in a texture
  2. Shoot rays through the camera, get the intersection with a triangle, get its color, calculate refraction based on the normal and type of material, and take that color to the next intersection. Keep going until all rays are done (a sketch of this loop follows the list).
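A sketch of step 2's intersection loop as GLSL (again embedded in a JS string for a ShaderMaterial). The triangles texture and the triangleCount / triangleTextureWidth uniforms are assumed to follow the packing idea above; MAX_TRIANGLES is a compile-time constant you'd prepend to the shader source (GLSL ES needs constant loop bounds). This only finds the nearest hit - shading and refraction would build on top of it:

var traceGLSL = [
  'uniform sampler2D triangles;',
  'uniform float triangleCount;',
  'uniform float triangleTextureWidth;',
  '',
  '// One vertex per RGBA texel; only the position (xyz) is read here.',
  'vec3 fetchVertex(float i) {',
  '  return texture2D(triangles, vec2((i + 0.5) / triangleTextureWidth, 0.5)).xyz;',
  '}',
  '',
  '// Moeller-Trumbore ray/triangle test; returns the hit distance or -1.0.',
  'float intersectTriangle(vec3 origin, vec3 dir, vec3 a, vec3 b, vec3 c) {',
  '  vec3 e1 = b - a, e2 = c - a;',
  '  vec3 p = cross(dir, e2);',
  '  float det = dot(e1, p);',
  '  if (abs(det) < 1e-6) return -1.0;',
  '  float inv = 1.0 / det;',
  '  vec3 t = origin - a;',
  '  float u = dot(t, p) * inv;',
  '  if (u < 0.0 || u > 1.0) return -1.0;',
  '  vec3 q = cross(t, e1);',
  '  float v = dot(dir, q) * inv;',
  '  if (v < 0.0 || u + v > 1.0) return -1.0;',
  '  return dot(e2, q) * inv;',
  '}',
  '',
  '// Nearest hit along one ray, looping over every triangle in the texture.',
  'float nearestHit(vec3 origin, vec3 dir) {',
  '  float nearest = 1e20;',
  '  for (int i = 0; i < MAX_TRIANGLES; i++) {',
  '    if (float(i) >= triangleCount) break;',
  '    vec3 a = fetchVertex(float(i) * 3.0);',
  '    vec3 b = fetchVertex(float(i) * 3.0 + 1.0);',
  '    vec3 c = fetchVertex(float(i) * 3.0 + 2.0);',
  '    float t = intersectTriangle(origin, dir, a, b, c);',
  '    if (t > 0.0 && t < nearest) nearest = t;',
  '  }',
  '  return nearest;',
  '}'
].join('\n');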

But yeah.. I can see a lot of problems here. Setting all those colors wouldn't be an easy task (at least, not for me).

It would be much easier to use THREE.Ray.intersectObjects(), get the color, and just set the color in the fragment shader. But again, JavaScript takes far too long.

What I'm trying to do is something like this:
http://madebyevan.com/webgl-path-tracing/
But I have still to understand that code structure.

@n4m3l3ss-b0t

By the way, here is the code that takes about 400s (at a 500x500 render size) just to cast the rays and count the intersections:

var projector = new THREE.Projector();
var cont = 0; // how many primary rays hit anything

for (var i = 0; i < renderWidth; i++) {
  for (var j = 0; j < renderHeight; j++) {
    // unprojectVector expects normalized device coordinates (-1..1)
    var direction = new THREE.Vector3(
      (i / renderWidth) * 2 - 1,
      -(j / renderHeight) * 2 + 1,
      0.5
    );
    projector.unprojectVector(direction, camera);

    var ray = new THREE.Ray(camera.position, direction.subSelf(camera.position).normalize());
    var intercepts = ray.intersectObjects(scene.__objects);

    if (intercepts.length > 0) {
      cont++;
    }
  }
}

console.log(cont);

@keeffEoghan
Author

I think the problem of using a ray-tracing shader is that you can't transfer information from one shader to another, meaning that, after shading one object with the intersections you got, how would you shade it again when some other object refraction hits him again?

I'm not exactly sure what you mean here. The fragment shader accumulates the colour for each pixel based on ray-object intersections, starting with the ray from the "eye" to its position - the pixels don't keep getting "painted" by other shaders or anything. That one fragment program accumulates the final colour for each pixel covered by that object; you wouldn't do it again.

@n4m3l3ss-b0t

Looks like I'm getting it all wrong then. I thought the fragment shader was executed once for each pixel of the object which has the shaderMaterial?

@keeffEoghan
Author

Yes, that's right.

fragment program accumulates the final colour for each pixel shown in that object

What I'm saying is that you wouldn't need to "shade it again", as you were asking. Shading would happen once per visible object pixel, in the fragment shader.
And you don't need to "transfer information from one shader to another" - the shader has all the information it needs to set colours for each pixel, if passed the global information (scene geometry, transform matrices, colour information, etc.).

@n4m3l3ss-b0t

I was thinking I would need to shade it again because of the situation where a ray hits an object and then the refraction hits another one that has already been shaded.

Sorry if I'm missing the point, getting a little confused here.

@keeffEoghan
Author

Ah, I see. You're saying that the first object should accumulate the shaded colour of the second?
You actually just recalculate the colour for the second one, and accumulate that - using the same diffuse, ambient, specular, shadow, refraction, etc. calculations as normal. This calculation, in turn, sends out other rays, which intersect other objects, and so on until the maximum number of bounces is reached.

The way you were describing would almost be a breadth-first method - calculate the first bounce for everything, then the second bounce (using colour information from the first), and so on. I've never heard of ray tracing being done that way; I don't know if it would even be possible.
But, in shaders, you would do it depth-first - calculate all the bounces for the first one, then all for the second, and so on - always recalculating the colours at each intersection.
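In shader terms the depth-first loop might look something like this - GLSL ES has no recursion, so the bounces become a fixed-length loop carrying a running colour and attenuation. surfaceColourAt() and nextRay() are placeholders for whatever shading and reflection/refraction logic you end up with, and nearestHit() is the intersection loop sketched earlier:

var bounceGLSL = [
  'const int MAX_BOUNCES = 3;',
  '',
  'vec3 trace(vec3 origin, vec3 dir) {',
  '  vec3 colour = vec3(0.0);',
  '  vec3 attenuation = vec3(1.0);',
  '  for (int bounce = 0; bounce < MAX_BOUNCES; bounce++) {',
  '    float t = nearestHit(origin, dir);',
  '    if (t >= 1e20) break;                         // ray escaped the scene',
  '    vec3 hit = origin + dir * t;',
  '    colour += attenuation * surfaceColourAt(hit); // local shading at the hit',
  '    attenuation *= 0.5;                           // placeholder for material reflectance',
  '    dir = nextRay(hit, dir);                      // reflected/refracted direction',
  '    origin = hit + dir * 1e-4;                    // nudge off the surface to avoid self-hits',
  '  }',
  '  return colour;',
  '}'
].join('\n');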

@n4m3l3ss-b0t

Actually it's the exact opposite: the second object should accumulate the color of the first, since the ray comes from it.
But I see your whole point now. It may be the only solution we have here.

EDIT: Check this out: http://www.zynaps.com/site/experiments/raytracer.html

@keeffEoghan
Author

Sorry, yeah, that's what I meant. But it does seem to be the way it's done everywhere. It has some limitations and difficulties of its own - any effect you want achieved has to be available to the global ray-tracing method (or, at least, I have no idea how you would put a custom local shader into the mix somewhere and have it accurately reflected/refracted in other objects).

@keeffEoghan
Author

That's pretty cool - yours? It's like POVRay.

@n4m3l3ss-b0t

No, that's not mine. It's just an example in JavaScript, using web workers to run the rays in parallel. Could be an option too.
Ok I'm ready to start coding these ideas for now. Can we keep this issue open or shall I start a new one if I have more problems? Thanks a ton for the help.
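On the web-worker option, a rough sketch of splitting the 500x500 render into row bands, one band per worker - trace-worker.js is a hypothetical file that would importScripts three.js and run the same intersection loop over its rows, posting each finished row back:

var workerCount = 4;
var rowsPerWorker = Math.ceil(renderHeight / workerCount);

for (var w = 0; w < workerCount; w++) {
  var worker = new Worker('trace-worker.js');
  worker.onmessage = function (e) {
    // e.data would be something like { row: <index>, colors: <Float32Array> };
    // write the row into a canvas ImageData (or a texture) here.
  };
  worker.postMessage({
    startRow: w * rowsPerWorker,
    endRow: Math.min((w + 1) * rowsPerWorker, renderHeight),
    width: renderWidth
  });
}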

@n4m3l3ss-b0t

Also since we are working on the same thing, please feel free to contact me by email if you have more ideas. I'll do the same.

@keeffEoghan
Author

I think there's parallelism built into the GPU side, though I'm not sure exactly to what degree - each fragment is independent of all the other fragments, though, so maybe it's at that kind of level.
I'd say start a new one if there's more to discuss, just reference this one at the top - the heading of this one is misleading at this stage.
No problem, I'll be in touch if I need to figure something out, and feel free yourself. Good luck!

@n4m3l3ss-b0t

Ok then, you can close this one now. I see that you don't have an email on your profile - or am I looking in the wrong place?
Anyway, good luck for you too, and I hope someone can also find this helpful.

@keeffEoghan
Author

Put the email up there now, see you around.
