Added raycasting support to THREE.SkinnedMesh. #8953
Conversation
you mean a skin can have both morph targets and bones??
Do you mind adding an example too?
@mrdoob Sure, I'd be happy to make an example. I could use the skinned cylinder from the docs, sway it back and forth, and have it change color on click for both BufferGeometry and Geometry. Should I add a new example HTML file and commit it, or would you like it in another format?

@makc Yes, a skin can have both morph targets and bones; morph targets are a property of THREE.Mesh. In the GPU shader, the morph targets are applied first (say, a person's hand closing), and then those vertices are deformed according to the bone structure. If the bones were applied first, the relative morph offsets would not be applied in the correct directions.
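A minimal CPU-side sketch of that ordering for a single vertex (the function and parameter names are purely illustrative, not three.js API, and the bind matrices are omitted for brevity):

```js
// Illustrative only: morph targets displace the rest-pose position first,
// then the morphed result is blended through the bone matrices (skinning).
const morphed = new THREE.Vector3();
const skinned = new THREE.Vector3();
const temp = new THREE.Vector3();

function transformVertex( basePosition, morphOffsets, influences, boneMatrices, boneWeights ) {

	// 1. Morph targets: rest position plus the weighted per-target offsets.
	morphed.copy( basePosition );

	for ( let i = 0; i < morphOffsets.length; i ++ ) {

		morphed.addScaledVector( morphOffsets[ i ], influences[ i ] );

	}

	// 2. Skinning: blend the *morphed* position through up to four bones.
	// Doing this in the opposite order would move the morph offsets in the
	// wrong directions, as described above.
	skinned.set( 0, 0, 0 );

	for ( let j = 0; j < boneWeights.length; j ++ ) {

		temp.copy( morphed ).applyMatrix4( boneMatrices[ j ] );
		skinned.addScaledVector( temp, boneWeights[ j ] );

	}

	return skinned;

}
```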
…try and Geometry. Fixed vertex referencing error when not using a material with morphTargets. Renamed attributes.skinWeights to attributes.skinWeight.
I added the example and the calculations seem to be working well. One big gotcha: you have to set geometry.boundingSphere generously enough, otherwise collisions outside the rest geometry's bounding sphere won't be detected. There are several ways to improve performance. We could cache vertices by index so that they aren't recalculated multiple times for different faces. We could also calculate them all up front and use that to create the bounding box; this would add a lot of computation, so maybe we could add a flag to enable/disable it. Or we could just leave it as it is. Demo link: http://ci.threejs.org/api/pullrequests/8953/examples/webgl_raycast_skinning.html
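For anyone hitting that bounding-sphere gotcha, the manual workaround looks something like the sketch below; the enlargement factor is an assumption and has to be tuned per asset.

```js
// Enlarge the rest-pose bounding sphere so the early bounds test does not
// reject rays that would hit the animated (deformed) mesh.
geometry.computeBoundingSphere();
geometry.boundingSphere.radius *= 2; // safety factor, tuned per asset
```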
/bump
/bump again. /ping @Mugen87: Do we want to support this? And do we want to support raycasting against Geometry, or only BufferGeometry?
```js
	skinIndices.copy( this.geometry.skinIndices[ index ] );
	skinWeights.copy( this.geometry.skinWeights[ index ] );

}
```
SkinnedMesh no longer supports THREE.Geometry, so this can be removed.
```js
THREE.SkinnedMesh.prototype.boneTransform = ( function () {

	var clone = new THREE.Vector3(), result = new THREE.Vector3(), skinIndices = new THREE.Vector4(), skinWeights = new THREE.Vector4();
```
This clone variable is shadowed (re-declared) in the function below, and isn't getting reused. Ideally, I think I would overwrite the vertex parameter passed by the user rather than overwriting a single vector on subsequent calls.
TBH, I have concerns about this, see #6911 (comment). I think there are enough users out there that will use it. In many games, you normally don't raycast against the animated (maybe complex) geometry which is intended for rendering. You have some sort of bounding volume hierarchy, or maybe just a single tight bounding volume, which makes raycasting more efficient. Sometimes it's also possible to use totally different approaches like GPU picking. I think it's okay to add it, but it should be well documented.
Aside from the lack of a bounding volume in this PR, I don't see why raycasting against a SkinnedMesh should be dramatically slower. Still O(n) in the number of faces, yes? I actually think the bounding volumes of the geometry should be used here, as they are in THREE.Mesh. If the user wants to expand those volumes (say, 20x) they can do so manually; three.js should probably not be responsible for knowing how much of a buffer to put on the bounding volumes. We'd need to document that. More importantly, we should just profile this, maybe on a simple model. Another strategy is to put bounding volumes on the bones, and raycast against those. But I think this would require some user input.
Definitely 👍. I have no performance numbers, but applying the vertex transformation related to skinning each frame can be very expensive. The code from this fiddle might be a good starting point: https://jsfiddle.net/fnjkeg9x/1/ It computes the AABB for an animated skinned mesh.
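The per-frame computation described there amounts to roughly the following, assuming a hypothetical boneTransform( index, target ) helper (close to what this PR adds) that writes the skinned, local-space position of vertex index into target:

```js
const _vertex = new THREE.Vector3();

// Recompute a local-space AABB from the currently skinned vertices.
// O(vertexCount) per call, which is why doing this every frame is costly.
function computeSkinnedAABB( skinnedMesh, box ) {

	const position = skinnedMesh.geometry.attributes.position;

	box.makeEmpty();

	for ( let i = 0; i < position.count; i ++ ) {

		skinnedMesh.boneTransform( i, _vertex ); // skinned vertex, local space
		box.expandByPoint( _vertex );

	}

	return box;

}
```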
I was playing around with this and, instead of expanding the AABB for each vertex, I just expanded it for every 128th vertex. It's significantly faster and I saw no difference in the accuracy of the bounding box. You might be able to add a sampling parameter that allows skipping indices if speed is desired over accuracy.
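That subsampling idea is a small variation of the sketch above; the stride is a hypothetical parameter trading accuracy for speed, not an existing three.js option.

```js
// Same as computeSkinnedAABB above, but only every `stride`-th vertex
// expands the box.
function computeSkinnedAABBSampled( skinnedMesh, box, stride = 128 ) {

	const position = skinnedMesh.geometry.attributes.position;
	const vertex = new THREE.Vector3();

	box.makeEmpty();

	for ( let i = 0; i < position.count; i += stride ) {

		skinnedMesh.boneTransform( i, vertex );
		box.expandByPoint( vertex );

	}

	return box;

}
```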
YES PLEASE!!!
I think putting bounding volumes on the bones would be very useful too, as long as it doesn't affect other features.
In my opinion this should be supported, especially with morph target raycasting now being available (#15985). While raycasting a SkinnedMesh may be slow in some complex cases, it's hard for me to see that the current behavior (returning a visually incoherent intersection point) would be preferred. In simple cases, transforming every face before raycasting is a fine solution, and I think it's understood that as a user's application outgrows the simple solutions, custom, application-specific solutions will need to be used.
Using a lower-LoD skinned mesh for raycasting when the rendered mesh is sufficiently complex is another solution. Regarding putting bounding volumes on bones, this would need to be tunable per model / bone to fit the skinned mesh volume, right? I'm not sure what the right solution is to the bounding volume problem, though. This remains a problem for frustum culling SkinnedMeshes, right? Is there a solution that can be used to solve both issues? Or should it be up to the application to manually specify bounds that encompass the extents an animated mesh is expected to reach?
From my point of view, this is the main challenge, not the actual raycasting logic. Computing the current vertex displacement is easy to implement. However, if wrong bounding volumes are used, raycasting will fail no matter how efficient its implementation is. The AABB is an important part of the raycasting process. Computing the max AABB for a geometry with morph targets was easy, since all the data you need for this is available in the geometry itself. The problem is that with skinning the displacement depends on the bones and the animation clips, not just on data stored in the geometry. You essentially need logic similar to this fiddle that I've written some time ago for a forum post: https://jsfiddle.net/v02kLhsm/2/ The red box represents the current AABB, the green one the better AABB which encloses the maximum extension of the geometry (similar to morph targets). As you can see, even the computation of this AABB is a pain, since you have to sample the clip over the entire duration and expand the AABB step by step. Unfortunately, the result is just an approximation; the real AABB could be bigger. Sampling is used because it is much easier than processing keyframe by keyframe, since position and quaternion tracks might have different time values and keyframe counts (which means you would have to interpolate missing entries. Ugh!). Ideally, such a maximum AABB would be computed once and stored with the skinned mesh.
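The clip-sampling approach described there, sketched under the assumption that the clip is driven by a standard AnimationMixer and that a per-pose routine like the computeSkinnedAABB sketch from earlier is available:

```js
// Approximate the maximum AABB of a skinned mesh over one clip by stepping
// through its duration and expanding the box at each sampled pose.
function computeMaxAABB( root, skinnedMesh, mixer, clip, steps = 30 ) {

	const action = mixer.clipAction( clip );
	action.play();

	const maxBox = new THREE.Box3();
	const poseBox = new THREE.Box3();

	for ( let i = 0; i <= steps; i ++ ) {

		mixer.setTime( ( i / steps ) * clip.duration );
		root.updateMatrixWorld( true ); // bring the bones' matrixWorld up to date

		computeSkinnedAABB( skinnedMesh, poseBox ); // per-pose sketch from earlier
		maxBox.union( poseBox );

	}

	action.stop();

	return maxBox; // still an approximation: the real AABB could be bigger

}
```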
@Mugen87 if I understand correctly, you're proposing that the maximum AABB be precomputed by sampling the mesh's animation clips and then stored with the skinned mesh? That may well be the right way to go — it has the advantage of being precompute-friendly — but I'll point out one limitation: it doesn't support procedural animation, IK, or ragdoll physics. I'd suggested above that bounding volumes on the bones would require user input, but I'm not sure that's true. Suppose we compute, for each bone, a fixed AABB containing the rest-pose positions of the vertices that bone influences, transform those boxes by the bones' current world matrices when raycasting, and derive the object-level bounding volume from their union.
I believe that should be fairly efficient, although raycasting a SkinnedMesh with 200+ bones would still require testing 200 AABBs. That can also be solved, but maybe let's consider that later...
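A rough sketch of the first part of that idea, building one rest-pose box per bone from the standard skinIndex / skinWeight attributes (the function name is illustrative, and the bind matrix is ignored for brevity):

```js
// Build, for each bone, an AABB around the rest-pose positions of all
// vertices that bone influences with a non-zero weight.
function computeBoneRestBoxes( skinnedMesh ) {

	const geometry = skinnedMesh.geometry;
	const position = geometry.attributes.position;
	const skinIndex = geometry.attributes.skinIndex;
	const skinWeight = geometry.attributes.skinWeight;

	const boxes = skinnedMesh.skeleton.bones.map( () => new THREE.Box3() );

	const vertex = new THREE.Vector3();
	const indices = new THREE.Vector4();
	const weights = new THREE.Vector4();

	for ( let i = 0; i < position.count; i ++ ) {

		vertex.fromBufferAttribute( position, i );
		indices.fromBufferAttribute( skinIndex, i );
		weights.fromBufferAttribute( skinWeight, i );

		for ( let j = 0; j < 4; j ++ ) {

			if ( weights.getComponent( j ) === 0 ) continue;

			boxes[ indices.getComponent( j ) ].expandByPoint( vertex );

		}

	}

	return boxes;

}
```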
Yes, it's the equivalent of Unity's SkinnedMeshRenderer.localBounds: https://docs.unity3d.com/ScriptReference/SkinnedMeshRenderer-localBounds.html Having this AABB will solve the view frustum culling issue and raycasting for most use cases. Yes, the approach has limitations, as you pointed out. But I think I wouldn't go down the path of AABBs per bone now. Even if you have these, you would still need a single maximum AABB. So we can start with this one and refine the approach later.
I might suggest starting with an AABB derived from the default pose alone (whereas the current implementation ignores this pose), so the user is not required to have a list of clips to get an accurate AABB. That would fix cases where an FBX file (or a file converted from FBX to something else) has a skeleton with a 100x transform and the AABB is 100x too small. Expanding that bounding box further with animation clips could be a future option, and/or the bone volume approach.
Oh, you're right, I hadn't understood that -- I really like that idea.
I think there are cases all over the library where extreme scenarios lead to poor performance. I think it's understood (or should be) that three.js provides correct behavior at reasonable performance in common use cases and that more advanced cases will require custom solutions. That's at least how I've thought about things so I'm not sure if I see the 200+ bone case as a problem.
While it might not be a tight containment, an AABB could be computed from the bone AABBs more quickly, which means it could be recomputed more cheaply every frame, like in your example (or from a set of sampled animation frames). Implementation aside, why is a single maximum AABB needed? Where would it go? It doesn't seem like it belongs on the geometry.
and why do bone boxes when you could just do some octree? it feels like there should be no improvement.
@makc I'm not sure what you're suggesting, sorry. Even with an octree or other spatial data structure, it would be unreasonable to update the positions of each triangle in the octree on every frame, and colliders on bones would be more efficient.
@donmccurdy so you mean fixed boxes around bones then? and just ignore it if a triangle leaves the box volume?
Fixed boxes, yes, but defined to contain the base position of every vertex the bone can influence. It's not guaranteed that vertices will stay within that box, but aside from some pretty contrived examples it should be quite accurate. And an object-level AABB derived from the bone volumes should be guaranteed to contain all vertices. If a user wants to override the autogenerated bone volumes, that's an option. See also "bone colliders" in Unity... those volumes could be used for physics, not just raycasting.
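At raycast or culling time, those fixed boxes would then be carried along with the bones. A sketch of that step, reusing the computeBoneRestBoxes output from the earlier sketch (bind matrices again omitted for brevity):

```js
const _boneMatrix = new THREE.Matrix4();
const _boneBox = new THREE.Box3();

// Derive an object-level AABB by transforming each bone's rest-pose box
// with that bone's current skinning matrix and taking the union.
function computeObjectAABB( skinnedMesh, restBoxes, target ) {

	const skeleton = skinnedMesh.skeleton;

	target.makeEmpty();

	for ( let i = 0; i < skeleton.bones.length; i ++ ) {

		if ( restBoxes[ i ].isEmpty() ) continue; // bone influences no vertices

		// matrixWorld * boneInverse is the same matrix the skinning shader uses.
		_boneMatrix.multiplyMatrices( skeleton.bones[ i ].matrixWorld, skeleton.boneInverses[ i ] );

		_boneBox.copy( restBoxes[ i ] ).applyMatrix4( _boneMatrix );
		target.union( _boneBox );

	}

	return target;

}
```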
Hi, just checking back about this issue and whether it's still in progress... I'm wondering what the current status is. Thanks for your help!
@tsuchan my guess is no one is doing anything :D well, maybe @donmccurdy will work on his bone colliders idea when he has nothing better to do
Not any time soon for me, I have more draft PRs open than I'm keeping up with. 😅 I do support applying the bone transforms to vertices when raycasting, even if the bounding box calculation (for now) remains exactly as it is, disregarding bone transforms. This PR satisfies that, and seems like a good incremental step toward the more advanced solutions. I'll defer to @Mugen87 on how the code should be structured.
@donmccurdy I've tested your PR and it works fine for me. It is big progress! I can use this in my project. It seems it just needs small changes to make it work with r113? Looking forward to seeing it in the next release.
```js
	}

	var clone = vertex.clone().applyMatrix4( this.bindMatrix );
	result.set( 0, 0, 0 );
```
We shouldn't clone() in this method; it is likely to be called often. I would suggest modifying the input vertex instead, and perhaps changing the signature: transformVertex( index : number, target : THREE.Vector3 ) : this
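Caller-side, the proposed target-parameter pattern would look something like the sketch below (the method name follows the suggestion above; its body is not shown). One Vector3 is allocated once and reused, instead of a clone() per call.

```js
const target = new THREE.Vector3();
const position = skinnedMesh.geometry.attributes.position;

for ( let i = 0; i < position.count; i ++ ) {

	skinnedMesh.transformVertex( i, target ); // writes the skinned position into `target`

	// ... use `target`, e.g. for ray/triangle tests or Box3.expandByPoint()

}
```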
Oh, it's @cpollard1001's PR and not mine. :) I think it needs a few fixes, and perhaps a different method name, but it could land in r115 or r116?
Had a go at addressing the feedback on this PR in #19178. I'll leave the bone collider idea for someone else. 😇
Closing in favor of #19178.
Raycasting currently only supports morph target deformations, not bone transformations. The bone transformations need to be applied after the morph changes, so I created a function that takes a morphed vertex and its index (used to look up the skin parameters) and returns the bone-transformed vertex. The code was adapted from @makc at https://github.com/makc/three.js.fork/tree/skin-raycast. This version allows the morphs to be applied first and doesn't require a creator to be called.
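For reference, the bone-transform computation such a function performs looks roughly like the following sketch. It is close in spirit to the PR's approach, but the function name, exact signature, and temporary variables here are assumptions.

```js
const _skinIndex = new THREE.Vector4();
const _skinWeight = new THREE.Vector4();
const _boneMatrix = new THREE.Matrix4();
const _temp = new THREE.Vector3();

// Given a vertex position that already has morph targets applied, return
// its bone-transformed (skinned) position in the mesh's local space.
// Note: `vertex` is modified in place by the bind-matrix transform.
function applySkinning( skinnedMesh, index, vertex, target ) {

	const geometry = skinnedMesh.geometry;
	const skeleton = skinnedMesh.skeleton;

	_skinIndex.fromBufferAttribute( geometry.attributes.skinIndex, index );
	_skinWeight.fromBufferAttribute( geometry.attributes.skinWeight, index );

	vertex.applyMatrix4( skinnedMesh.bindMatrix ); // into bind space
	target.set( 0, 0, 0 );

	for ( let i = 0; i < 4; i ++ ) {

		const weight = _skinWeight.getComponent( i );
		if ( weight === 0 ) continue;

		const boneIndex = _skinIndex.getComponent( i );
		_boneMatrix.multiplyMatrices( skeleton.bones[ boneIndex ].matrixWorld, skeleton.boneInverses[ boneIndex ] );

		target.addScaledVector( _temp.copy( vertex ).applyMatrix4( _boneMatrix ), weight );

	}

	return target.applyMatrix4( skinnedMesh.bindMatrixInverse ); // back to local space

}
```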