
Skin weights must be normalized. #1352

Merged
merged 2 commits on Jul 23, 2018

Conversation

@donmccurdy
Contributor

Fixes #1213.

Some concerns:

  1. If we do decide >4 influences are allowed via JOINTS_1...N, clients that only support 4 influences (this is nearly every engine today) will not be able to assume that JOINTS_0 is normalized. So, practically, the implementation must still be robust against weights that do not sum to 1 (see the sketch below).
  2. If you have a simple rig (say, 4 bones total) you might get better compression by putting the joints in the same order for all vertices. This change would disallow that.

I'm not too worried about (2), but I'd say (1) is worth discussing.
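
Here is a minimal sketch of the kind of robustness meant in (1): a hypothetical engine that reads only JOINTS_0/WEIGHTS_0 and re-normalizes that first set before skinning. The function below is purely illustrative (whether this fallback actually looks acceptable is questioned later in this thread).

def renormalize_first_set(weights_0):
    # Fallback for an engine limited to 4 influences: keep only WEIGHTS_0
    # and rescale it so its four weights sum to 1 again.
    total = sum(weights_0)
    if total <= 0.0:
        # Degenerate vertex: no usable influence in the first set.
        return [0.0, 0.0, 0.0, 0.0]
    return [w / total for w in weights_0]

# Example: the sets beyond WEIGHTS_0 carried 0.2 of the total influence,
# so WEIGHTS_0 alone sums to 0.8 and must be rescaled.
print(renormalize_first_set([0.4, 0.2, 0.1, 0.1]))  # roughly [0.5, 0.25, 0.125, 0.125]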

@lexaknyazev
Member

If you have a simple rig (say, 4 bones total) you might get better compression by putting the joints in the same order for all vertices. This change would disallow that.

We might make this change a bit more subtle by saying that sorting is required only when there is more than one set of weights (since it makes no difference with just 4 influences).

the implementation must still be robust against weights that do not sum to 1.

I expect that an asset that uses 8 influences would still look incorrect in an engine supporting only 4. Keeping the heaviest joints in the first set is probably the best possible fallback in such a situation. I'm not sure that re-normalizing them in-engine would improve the result. We definitely need some samples to check these statements.

/cc @UX3D-nopper

@McNopper
Contributor

If 8 influences are required, then there is a reason for it. Even a "small" weight, e.g. for a rotation, can have a major impact on the final pose.
If an asset has 8 influences and your engine cannot handle it, then just report something. It is the same problem as with morph targets and other data that can be "counted up". Sooner or later, any engine (or device) hits its limits.
What I want to say is: we should not complicate things just so that 8 influences somehow work in an engine that supports only 4.

@lexaknyazev
Member

What I want to say is: we should not complicate things just so that 8 influences somehow work in an engine that supports only 4.

Fair point. We still need some language around normalization, including the case of more than one weight set. It would also be useful to add new sample models showing that.

@lexaknyazev
Member

@donmccurdy @McNopper How can we move this PR forward?
Since the "descending order" requirement makes sense only when there is more than one weight set, we need to elaborate a bit more on:

  • Index uniqueness across all sets (for non-zero weights).
  • Recommended treatment of assets exceeding an engine's caps: fail, or use only the heaviest weights. In the former case, there's no need to require descending order.

@McNopper
Contributor

McNopper commented Jun 5, 2018

As mentioned, if more weights are used and the engine cannot handle them, loading should fail and/or the main pose (without skinning) should be displayed.

@donmccurdy
Contributor Author

As mentioned, if more weights are used and the engine cannot handle them, loading should fail and/or the main pose (without skinning) should be displayed.

I'm happy with this. In that case the descending-order requirement is unnecessary, but we should still require uniqueness of joint indices with non-zero weights across all sets, yes?

@lexaknyazev
Member

Non-zero non-unique weights make little sense, but I don't think that engines need to care about that (unless they unpack values manually).

In practice, I'd expect non-unique non-zero weights to be caused by exporters' bugs only.

@donmccurdy
Contributor Author

Non-zero non-unique weights make little sense, but I don't think that engines need to care about that (unless they unpack values manually). In practice, I'd expect non-unique non-zero weights to be caused by exporters' bugs only.

Do you mean that we shouldn't bother to include spec language about it? If so, and we're also saying descending/ascending order doesn't matter, and users should expect assets with >4 weights to simply fail in most engines, then I think all that's left for this PR to say would be:

The joint weights for each vertex must be >= 0 and normalized to have a linear sum of one.

@lexaknyazev
Member

Do you mean that we shouldn't bother to include spec language about it?

My point is that it could be implementation-dependent. Say that we've got

joints:  0   1   2   2
weights: 0.1 0.3 0.2 0.4
total sum == 1.0

If an engine does something like this:

matrix = w0 * m[j0] + w1 * m[j1] + w2 * m[j2] + w3 * m[j3]

then non-unique indices don't matter because joint 2 will use a weight of 0.6 anyway.

Alternatively, an engine could do something like this (for whatever reason):

weights_cache = {};
for each (influence, weight) per vertex
    weights_cache[influence] = weight

In such a case, the final sum would not include both weights for joint 2.

So, if non-unique indices are allowed, we must mention that in the spec.
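
To make the contrast concrete, here is a small Python sketch of the two hypothetical strategies above, using scalar stand-ins for the joint matrices (the code is invented purely to illustrate the difference in behavior):

joints  = [0, 1, 2, 2]
weights = [0.1, 0.3, 0.2, 0.4]      # sums to 1.0, but joint 2 appears twice
m = {0: 10.0, 1: 20.0, 2: 30.0}     # scalar placeholders for joint matrices

# Strategy A: blend term by term; joint 2 effectively gets 0.2 + 0.4 = 0.6.
blended_a = sum(w * m[j] for j, w in zip(joints, weights))

# Strategy B: build a per-joint cache first; the later entry for joint 2
# overwrites the earlier one, so 0.2 of the influence is silently dropped.
cache = {}
for j, w in zip(joints, weights):
    cache[j] = w
blended_b = sum(w * m[j] for j, w in cache.items())

print(blended_a)  # 25.0  (0.1*10 + 0.3*20 + 0.6*30)
print(blended_b)  # 19.0  (0.1*10 + 0.3*20 + 0.4*30)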

users should expect assets with >4 weights to simply fail in most engines

That's also worth mentioning.

@donmccurdy
Contributor Author

donmccurdy commented Jul 2, 2018

If we "expect non-unique non-zero weights to be caused by exporters' bugs only" and reasonable engine implementations might fail here, then I think we should prohibit multiple non-zero weights for the same joint. Is the existing language in this PR sufficient?

No joint may have more than one non-zero weight for a given vertex.

If so, the updated language for this PR could be:

The joint weights for each vertex must be >= 0, sorted in descending order, and normalized to have a linear sum of one. No joint may have more than one non-zero weight for a given vertex.
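
For illustration only, here is a validator-style sketch in Python of the normalization and uniqueness parts of that wording (the sorting clause is dropped before merging, see below); the tolerance value is an arbitrary placeholder, since the text above does not define a numeric threshold:

def check_vertex_weights(joints, weights, tolerance=1e-6):
    # Proposed rule: weights are >= 0, sum to ~1, and no joint index
    # carries more than one non-zero weight for this vertex.
    # `tolerance` is a placeholder; the proposed text defines none.
    if any(w < 0.0 for w in weights):
        return False
    if abs(sum(weights) - 1.0) > tolerance:
        return False
    seen = set()
    for j, w in zip(joints, weights):
        if w != 0.0:
            if j in seen:
                return False    # the same joint influences the vertex twice
            seen.add(j)
    return True

print(check_vertex_weights([0, 1, 2, 3], [0.1, 0.3, 0.2, 0.4]))  # True
print(check_vertex_weights([0, 1, 2, 2], [0.1, 0.3, 0.2, 0.4]))  # False: joint 2 used twice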

@scurest

scurest commented Jul 16, 2018

What is the "linear" in "linear sum" supposed to indicate?

How actionable is the requirement that the sum be one? Consider this situation: you are writing a program which emits glTFs. Your input model has weights given as 4-vectors of three-digit decimal numbers. You are required to output 4-vectors of normalized UBYTEs. The most obvious way to do this is to multiply each weight by 255 and round to the nearest integer. This is fast and optimal in the sense that no output weight could, by being changed, come any closer to its true value.

But it does not give weights that sum to one because of round-off error. Example:

weight    weight × 255    rounded
0.343     87.465          87
0.338     86.190          86
0.198     50.490          50
0.121     30.855          31
======    ============    =======
1.000     255.000         254

This strategy is therefore apparently forbidden by the new text, despite being IMO reasonable (and probably very common).

@lexaknyazev
Member

lexaknyazev commented Jul 16, 2018

What is the "linear" in "linear sum" supposed to indicate?

Linear means that the values should be summed as-is (as opposed to, e.g., a quadratic sum for normal vectors).

Your input model has weights given as 4-vectors of three-digit decimal numbers.

That's interesting because only nine three-digit decimals can be exactly represented in IEEE single-precision binary form (0.000, 0.125, 0.250, 0.375, 0.500, 0.625, 0.750, 0.875, 1.000). So, just for context, where do those three-digit decimals come from?

This strategy is therefore apparently forbidden by the new text...

As with other numerical thresholds, "equality to 1.0" will have some tolerance. Probably, we should allow 1/255 for ubyte storage.

@donmccurdy
Contributor Author

Yes, within a reasonable tolerance is fine. This rule is meant to avoid situations where a vertex's weights sum to something like 0, 0.5, or 99, in which case results may vary between engines.

@scurest

scurest commented Jul 16, 2018

For my own curiosity, what is a quadratic sum for normal vectors? :)

That's interesting because only nine three-digit decimals can be exactly represented in IEEE single-precision binary form

I chose 3-digit decimals just because it's easy to see the arithmetic by hand; they're not essential. The same effect occurs if you take the nearest doubles to those weights and do float arithmetic:

~ $ python
>>> sum(round(255*x) for x in [0.343, 0.338, 0.198, 0.121])
254

As with other numerical thresholds, "equality to 1.0" will have some tolerance. Probably, we should allow 1/255 for ubyte storage.

The error in this strategy is <= (1/255) (1/2) (number of (non-zero) weights), so you'd need a tolerance of 2/255 for UBYTE storage for one set of weights (thinking of #1381) to allow this.
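
As a quick empirical sanity check of that bound (not a proof, and not part of the argument above), the following Python sketch quantizes random normalized 4-weight vectors with round(255 * w) and confirms the dequantized sum never drifts from 1 by more than 2/255:

import random

def quantize(weights):
    # The strategy described above: scale to 0..255 and round.
    return [round(255 * w) for w in weights]

worst = 0.0
for _ in range(100000):
    raw = [random.random() for _ in range(4)]
    total = sum(raw)
    normalized = [w / total for w in raw]           # float weights summing to ~1
    dequantized_sum = sum(quantize(normalized)) / 255.0
    worst = max(worst, abs(dequantized_sum - 1.0))

print(worst, worst <= 2 / 255)                      # True: within the 2/255 bound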

This is what I mean by "actionable". Sometimes the spec says "you must do this" and it really means "this is the ideal you should keep in mind". For example, it says matrices must be decomposable to TRS, but since it doesn't specify which of the numerous possible functions is_decomposable : Float^16 -> Bool determines this, it can only be interpreted as "do your best". But with "the sum of weights must be 1", at least in the specific case of normalized UBYTE weights, it appears to be an actionable requirement, i.e. you can actually add the weights and determine whether it is satisfied, and in this case it would not be satisfied.

@lexaknyazev
Member

For my own curiosity, what is a quadratic sum for normal vectors? :)

Just unit length: sqrt(x * x + y * y + z * z).

This is what I mean by "actionable". ...

That's a known stalled issue. We still need to specify those kinds of details.

@donmccurdy
Contributor Author

Merging as discussed, without the requirement that weights be sorted. The spec does not currently discuss tolerance for numerical requirements, but this may be added in the future.
