ImmediateRenderObject: Clarify implementation. #19341
Conversation
Awesome to see the docs get updated for this! I still don't understand at all how to use this class, though. When should I consider using it? Is it only for per-frame generated geometry? Is it always faster than the built-in Mesh? If so, why would I not always use it? Is the vertex information re-uploaded every frame? An inline example or some pseudocode would really help here, too. Just a few things that come to mind for future improvements!
I think the marching cubes example nicely demonstrates its purpose. It's a low-level interface for generating geometry data per frame.
Yes, assuming you are updating the geometry data per frame. Let me clarify this point in the docs.
I think the docs should now answer this question: Because many built-in features like view frustum culling are not supported.
Yes.
This is very useful, thank you, Mugen87. I have considered using this for a number of GPGPU experiments, but I've struggled to understand the actual workflow for using it, so I ended up giving up on the idea. What happens to light uniforms, do they still need to be calculated and sent to the shader? If so, is there a workflow that avoids it?
Material-related stuff is unrelated to `ImmediateRenderObject`.
I think after the update it's a bit clearer -- it would only be faster for frequently updated / dynamically generated meshes. I shouldn't use it otherwise.
I've looked through that example before and did see that it uses `ImmediateRenderObject`. I do feel it would be helpful to have a code snippet demonstrating how to set the object up, like almost every other docs page has, but I'm not sure what a simple function body might look like to demonstrate its use:

```js
var material = new THREE.MeshStandardMaterial( { color: 0xffff00 } );

var renderObject = new THREE.ImmediateRenderObject( material );
renderObject.render = function ( renderCallback ) {

	// ... create / update hasPositions, positionArray, count, etc.

	renderCallback( this );

};

scene.add( renderObject );
```
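To make the snippet above a bit more concrete, here is a minimal sketch of what the per-frame geometry fill might look like. It is hedged on assumptions: the `fillTriangle` helper and the `Triangle` subclass are hypothetical names invented for illustration, and the `hasPositions` / `positionArray` / `count` properties follow the pattern visible in the marching cubes example rather than a documented contract.

```javascript
// Hypothetical helper: writes one triangle's vertex positions into the
// typed array each frame, the way a render() implementation might do
// before handing control back to the renderer's callback.
function fillTriangle( positionArray ) {

	const verts = [
		 0,  1, 0, // top
		-1, -1, 0, // bottom left
		 1, -1, 0  // bottom right
	];

	for ( let i = 0; i < verts.length; i ++ ) positionArray[ i ] = verts[ i ];

	// Number of vertices the renderer should draw.
	return verts.length / 3;

}

// Assumed integration (not verified against a specific three.js release):
//
// class Triangle extends THREE.ImmediateRenderObject {
//
// 	constructor( material ) {
//
// 		super( material );
// 		this.hasPositions = true;
// 		this.positionArray = new Float32Array( 9 );
// 		this.count = 0;
//
// 	}
//
// 	render( renderCallback ) {
//
// 		// Rebuilt every frame, so the data is re-uploaded every frame.
// 		this.count = fillTriangle( this.positionArray );
// 		renderCallback( this );
//
// 	}
//
// }

// Standalone check of the fill logic:
const arr = new Float32Array( 9 );
const count = fillTriangle( arr );
console.log( count, arr[ 0 ], arr[ 1 ] ); // 3 0 1
```

The point of the design is that nothing is cached between frames: `render()` regenerates the arrays on every call, which is why this path only pays off for geometry that genuinely changes each frame.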
How might you use this for GPGPU work? It seems like this is specifically for data being generated in JavaScript?
Let's enhance the doc page with a subsequent PR. TBH, I wanna get this merged quickly so some progress is saved.
Truth be told, I didn't know exactly how it worked. But I was hopeful it would avoid plenty of unnecessary stuff that is, more or less, required in the regular workflow. Namely, I don't really need frustum culling, wireframe, morph targets, normals, backgrounds, instancing, or dynamic draw range. Since the CPU bottleneck is very apparent for GPGPU workflows in three.js, I thought that it could help. However, I do use vertex data for texture look-up coordinates, which are used in the vertex shader for fetching vertex positions, velocities, etc. from textures. Although I don't usually change this dynamically too often.
Thanks! |
This PR slightly clarifies the implementation of `ImmediateRenderObject` by making explicit the properties of `ImmediateRenderObject` which are expected by `WebGLRenderer`.