
Morph Target Animation #339

Open
xen2 opened this issue Jan 23, 2019 · 2 comments

@xen2
Member

commented Jan 23, 2019

Is your feature request related to a problem? Please describe.
Artists might prefer to work with a morph target animation workflow rather than bone animation (especially for facial animation).

Describe the solution you'd like
Support for Morph Target animation.

Describe alternatives you've considered
Bone animation is an alternative, but it might not be enough in some cases (especially facial animation).

Additional context
https://en.wikipedia.org/wiki/Morph_target_animation

@jeske


commented Jun 6, 2019

I barely know Xenko code, so adding this feels a little over my head at the moment... but I'm investigating it and I have some thoughts and questions:

Is Xenko skinning always on the GPU? I couldn't find any CPU skinning code.

Three methods of implementing Morph Target (Blend Shapes) are:

Method 1. GPU Vertex Shaders (recomputed every frame for every active morph target)
Method 2. CPU VB/IB preparation (recomputed when they change, on the CPU)
Method 3. GPU Compute Shaders (recomputed when they change, on the GPU)

Speaking only of the Engine/Rendering (not the Studio/Asset management part)... I think implementations look something like this:


Method 1. GPU Vertex Shaders (recomputed every frame for every active morph target)

(a) store the morph target data, and attach it to a mesh, much like Skinning does in xenko/sources/engine/Xenko.Rendering/Rendering/Mesh.cs

(b) upload the necessary morph target data to the GPU, into buffers that are accessible during rendering (where and how?)

(c) write a MorphTargetRenderingFeature.cs, which allocates and uploads an array of morph target blend weights (and possibly morph target vertex offset/index), much like xenko/sources/engine/Xenko.Rendering/Rendering/SkinningRenderingFeature.cs

(d) Write a MorphTarget.xksl shader, hooked in before Skinning, with a PreTransformPosition() implementation that iterates morph targets, loads each morph target coordinate, uses the morph-weight to calculate and accumulate the blended offset, then applies the final offset to the coordinate

One downside of this approach is that it repeats the morph target calculations every single frame, which would be a waste if there are lots of morph targets that are non-zero but seldom change (as is common in avatar facial configuration).
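For reference, the core of step (d) is just a weighted accumulation of per-target position deltas on top of the base position. A minimal sketch of that math (Python used purely for illustration of what the shader would compute per vertex; nothing here is Xenko API):

```python
# Illustration of the per-vertex blend a morph target shader would perform.
# base: the bind-pose vertex position; deltas[i]: (target_i position - base);
# weights[i]: the blend weight for morph target i.

def blend_vertex(base, deltas, weights):
    """Weighted sum of morph target offsets, applied to the base position."""
    x, y, z = base
    for (dx, dy, dz), w in zip(deltas, weights):
        x += w * dx
        y += w * dy
        z += w * dz
    return (x, y, z)

# Example: base vertex at the origin, two targets, half weight on each.
print(blend_vertex((0.0, 0.0, 0.0),
                   [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)],
                   [0.5, 0.5]))   # (0.5, 1.0, 0.0)
```

Storing deltas (rather than full target positions) keeps a weight of zero a no-op, which is why sparse, seldom-changing targets still cost work every frame in this method.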


Method 2. CPU VB/IB preparation (recomputed when they change, on the CPU)

This involves a pre-pass to modify VB/IB data, and then update or re-upload the GPU version. This would be similar to CPU skinning, but I don't see any code for this. Does Xenko have CPU Skinning code that would serve as an example?
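Whether or not a CPU-skinning example exists, the shape of this method is roughly: keep the base vertex data immutable, cache a blended copy, and only recompute (and re-upload) it when a weight changes. A hedged Python sketch of that caching logic (class and method names are made up for illustration, not Xenko types):

```python
class MorphedVertexBuffer:
    """Caches a CPU-blended copy of the positions; recomputes lazily."""
    def __init__(self, base_positions, target_deltas):
        self.base = base_positions           # list of (x, y, z)
        self.deltas = target_deltas          # deltas[t][v] = (dx, dy, dz)
        self.weights = [0.0] * len(target_deltas)
        self._blended = list(base_positions)
        self._dirty = False

    def set_weight(self, target, weight):
        if self.weights[target] != weight:
            self.weights[target] = weight
            self._dirty = True               # defer recompute until needed

    def positions(self):
        if self._dirty:                      # recompute only on change
            self._blended = [
                tuple(b[k] + sum(w * self.deltas[t][v][k]
                                 for t, w in enumerate(self.weights))
                      for k in range(3))
                for v, b in enumerate(self.base)]
            self._dirty = False
            # ...the real engine would re-upload this buffer to the GPU here
        return self._blended
```

The upside over Method 1 is that frames with unchanged weights cost nothing; the downside is the CPU blend plus a buffer re-upload whenever weights do change.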


Method 3. GPU Compute Shaders (recomputed when they change, on the GPU)

Performing the morph target calculations in a compute shader (only when morph weights change) is the most efficient method, but it requires coordination with the drawing code.

For example, one way to do this is to feed the raw mesh VB/IB buffers into a compute shader, and have it produce morphed output (either as new VB/IB buffers, or as StructuredBuffers), and then those output buffers need to be fed into the draw-calls instead of the raw mesh VB/IB buffers...

There are several things in there I don't know how to do in Xenko. (a) The Xenko ComputeShader test/example only uses StructuredBuffers. The graphics APIs support handing VB/IB buffers to compute shaders as raw/typed buffers (not structured buffers), but I don't know whether this is punched all the way through the Xenko shading and API abstraction. (b) I don't know whether there is a clear pipeline mechanism to control which VB/IB buffers get fed into the draw calls.
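To make the required coordination concrete, the frame loop might be gated like this (a Python sketch with a fake GPU object standing in for the real API; every name here is hypothetical, not Xenko):

```python
# Method 3 sketch: dispatch the morph compute pass only when weights changed,
# then feed the morphed vertex buffer (not the raw one) into the draw call.

class MorphState:
    def __init__(self):
        self.weights_changed = False
        self.any_active = False

def render_frame(gpu, mesh, morph):
    if morph.weights_changed:
        # Compute pre-pass: blend raw VB + deltas into the morphed VB on GPU.
        gpu.dispatch_compute("MorphBlendCS",
                             inputs=[mesh["raw_vb"], mesh["deltas"]],
                             output=mesh["morphed_vb"])
        morph.weights_changed = False
    # Substitute the morphed buffer for the raw one in the draw call.
    vb = mesh["morphed_vb"] if morph.any_active else mesh["raw_vb"]
    gpu.draw(vb, mesh["ib"])

class FakeGpu:
    """Records calls so the gating logic can be exercised without a GPU."""
    def __init__(self):
        self.dispatches = 0
        self.last_vb = None
    def dispatch_compute(self, shader, inputs, output):
        self.dispatches += 1
    def draw(self, vb, ib):
        self.last_vb = vb

gpu, morph = FakeGpu(), MorphState()
mesh = {"raw_vb": "RAW", "morphed_vb": "MORPHED", "deltas": "D", "ib": "IB"}
render_frame(gpu, mesh, morph)            # nothing active: raw VB, no dispatch
morph.weights_changed = morph.any_active = True
render_frame(gpu, mesh, morph)            # weights changed: one dispatch
render_frame(gpu, mesh, morph)            # unchanged: reuses morphed VB
print(gpu.dispatches, gpu.last_vb)        # 1 MORPHED
```

The sketch only shows the control flow; the open questions above (raw/typed buffer access from compute, and swapping which VB the draw call consumes) are exactly the two hooks this loop assumes exist.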


It seems easiest to start with Method 1.

Of course, making this work also requires (a) extending the asset handling and Editor Studio to support morph targets, (b) supporting animations of morph targets, and (c) providing a means for code to control the blend weight of each target.

Hopefully that information is useful / helpful in some way.

@didzey


commented Jun 16, 2019

https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_pref01.html
This should help with the implementation of this feature.
