Matrix4::invert breaks because of low precision in approx_eq #210
Admittedly, this can be worked around by building a separate ortho projection with very close near and far planes. This keeps the viewing volume small, resulting in a larger determinant. This works just fine for unprojecting a point in screen space. But it does mean we need two different versions of the projection matrix: one for projecting, and one for inverting. So the workaround is less than ideal.
One way to work around it is to only change your camera position/zoom as long as the projection matrix is invertible. Takes a bit of extra logic in your code, but hey - you've got to support near-zero determinant somehow.
Well, even perfectly reasonable zoom levels yield determinants small enough to trigger the failure, given that the precision of `approx_eq` is only 10^-5.
That seems like a very small viewing volume for most 3D applications.
Just to clarify, the problem is only relevant with orthographic projections. Instead of expanding the projection to cover the world units, you can just scale the world down. Another way to go would be to have a specialized
Why would that be true? The viewing volume for a perspective camera could also be large. That would result in a tiny determinant, correct?
Scaling the world down is exactly what the camera's model-view-projection matrix does. It scales from a large volume, e.g. 1000 x 1000 x 100, to clip space, which is 2 x 2 x 2. And that scaling is itself the problem, because it's what makes the determinant so small. I think the only way to solve this without workarounds is to increase the precision of the degeneracy check. Would it be a problem to do that?
No, because a perspective projection fits the view not by scaling but by projecting. The determinant is going to be decent.
Scaling something is not exactly a problem; inverting a large scaling IS a problem. So, if your scale ends up in the Model part of the M-V-P, then it doesn't participate in the inverse, hence everything is good.
It's just a half-measure; it doesn't actually "solve" the problem.
But to transform from screen space to world space, you need to multiply by inv(projection x view x model). So that big scale factor is necessarily built into the matrix that goes from screen to world. To put it another way: if you only use the inverse projection matrix to transform from screen to world, you'll get the wrong world coordinates, because you will have neglected to account for the model-view transform.
@jarrett You don't need to invert the full projection matrix. Basically, you can unproject a point manually (since orthographic projection is rather simple), then multiply by the computed inverse of the remaining model-view transform. If you do that, you could contribute the unprojection method back into cgmath.
Yes, that is exactly right. But here's the problem: one of the two inversions can still fail for the same reason. Anyhow, is there some reason why the precision of the floating-point comparison can't be increased? Do you know why 10^-5 was chosen as the epsilon?
No. What I'm saying is: instead of computing the full inverse, unproject the point manually. Let's say your orthographic projection is:
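The manual unprojection being suggested might look like the following sketch. It assumes an OpenGL-style ortho(left, right, bottom, top, near, far) convention (the one cgmath's `ortho` follows); the `Ortho` struct and method names here are illustrative, not cgmath's API.

```rust
// Closed-form orthographic projection and unprojection, one axis at a time.
// No 4x4 inverse, and therefore no determinant check at all.
struct Ortho { l: f64, r: f64, b: f64, t: f64, n: f64, f: f64 }

impl Ortho {
    // Forward map: eye space -> NDC. Each axis is an independent affine scale,
    // matching the OpenGL ortho matrix (note the z sign flip).
    fn project(&self, x: f64, y: f64, z: f64) -> (f64, f64, f64) {
        (
            (2.0 * x - (self.r + self.l)) / (self.r - self.l),
            (2.0 * y - (self.t + self.b)) / (self.t - self.b),
            (-2.0 * z - (self.f + self.n)) / (self.f - self.n),
        )
    }

    // Inverse map: NDC -> eye space, solved per axis by basic algebra.
    fn unproject(&self, x: f64, y: f64, z: f64) -> (f64, f64, f64) {
        (
            (x * (self.r - self.l) + self.r + self.l) / 2.0,
            (y * (self.t - self.b) + self.t + self.b) / 2.0,
            -(z * (self.f - self.n) + self.f + self.n) / 2.0,
        )
    }
}
```

Because each axis is handled separately, a huge viewing volume never produces a near-zero determinant anywhere in the computation.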
I bet it was an arbitrary decision. But whatever you want to change it to would be no less arbitrary, and that's why I don't like it as a solution.
That is a good point. So instead, how about this: an `unsafe` invert method that skips the determinant check, leaving responsibility for singular matrices with the caller. If you're OK with that idea, I'll implement it.
@jarrett sounds good!
Here's a first attempt; I'm interested in hearing your thoughts.
I just pushed a new revision with the latest from bjs/cgmath-rs merged in: https://github.com/jarrett/cgmath-rs Any interest in getting these features into the crate?
I'll look at this after work - sorry for the delay. I definitely think there should be a way to sidestep the calculations, if the programmer thinks it is OK. I'm also looking at improving approx_eq to handle small floating-point values, and moving it to a separate crate. These are the articles I'm looking at:
Any updates on this? |
I just noticed I don't need to use unsafe in my "hacked" mat4 invert (stolen from @jarrett, I think):
Not sure what they changed, but that compiles on the latest Rust nightly.
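A hedged sketch of what a "hacked" invert along these lines might look like: Gauss-Jordan elimination with partial pivoting over plain `[[f64; 4]; 4]` arrays, which only rejects an exactly zero pivot rather than comparing the determinant against a 1e-5 epsilon. This is an illustration of the idea, not the commenter's original code or cgmath's implementation.

```rust
type Mat4 = [[f64; 4]; 4];

// Gauss-Jordan elimination with partial pivoting. Returns None only when a
// pivot column is exactly zero (truly singular), so tiny-but-well-conditioned
// matrices invert fine.
fn invert(m: &Mat4) -> Option<Mat4> {
    let mut a = *m;
    let mut inv: Mat4 = [[0.0; 4]; 4];
    for i in 0..4 {
        inv[i][i] = 1.0;
    }

    for col in 0..4 {
        // Pick the remaining row with the largest-magnitude entry for stability.
        let pivot_row = (col..4).max_by(|&x, &y| {
            a[x][col].abs().partial_cmp(&a[y][col].abs()).unwrap()
        })?;
        if a[pivot_row][col] == 0.0 {
            return None; // exactly singular
        }
        a.swap(col, pivot_row);
        inv.swap(col, pivot_row);

        // Normalize the pivot row, then eliminate the column everywhere else.
        let p = a[col][col];
        for j in 0..4 {
            a[col][j] /= p;
            inv[col][j] /= p;
        }
        for row in 0..4 {
            if row != col {
                let f = a[row][col];
                for j in 0..4 {
                    a[row][j] -= f * a[col][j];
                    inv[row][j] -= f * inv[col][j];
                }
            }
        }
    }
    Some(inv)
}
```

For example, a uniform scale by 1e-2 has determinant 1e-8, which the epsilon check rejects, yet this routine returns the exact inverse (a uniform scale by 100).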
I just got bitten by this behavior. I actually suggest not trying to do an approximate comparison for the determinant, and instead simply checking for exact equality with zero. Why? Simply put, in imprecise arithmetic, the determinant is a completely unreliable way to check whether a matrix is invertible or not. Consider the following example: let the matrix be a uniform scaling by a small factor. Its determinant shrinks with the fourth power of the scale, even though the matrix itself is perfectly well-conditioned and trivially invertible. Having matrices like this is also not at all uncommon. I just encountered this issue when dealing with inertia tensors and objects of widely varying sizes.
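A quick numeric illustration of this point, with assumed numbers (the helper names below are not from cgmath):

```rust
// A uniform scaling matrix diag(s, s, s, s) has determinant s^4, yet its
// inverse, diag(1/s, 1/s, 1/s, 1/s), is exact and its condition number is 1.
// A fixed epsilon test on the determinant rejects it anyway.

const APPROX_EQ_EPSILON: f64 = 1e-5; // the tolerance under discussion

fn det_uniform_scale(s: f64) -> f64 {
    s.powi(4) // determinant of diag(s, s, s, s)
}

fn looks_singular(det: f64) -> bool {
    det.abs() < APPROX_EQ_EPSILON // the kind of check being criticized
}
```

With s = 1e-2 the determinant is 1e-8, which fails the epsilon test even though the matrix is about as well-conditioned as a matrix can possibly be.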
@Andlon we only have so much precision. One way to improve it is to choose the scale where most numbers would be around 1e0. It's not reasonable to deal with 1e-6 values and expect things to work as well as around 1e0. At the end of the day, cgmath allows you to use f64.
@kvark: I agree that it's not reasonable for things to work as well near 1e-6 as near 1e0, but I think it's reasonable to expect that things should work near 1e-6, and in fact, with all due respect, I consider this a bug. In terms of scale, your suggestion assumes that there is a single uniform scale where most numbers are in fact near 1e0, which may not be the case. If you really want to determine whether the inversion is successful, performing an LU(P) decomposition can give you a much better indication of whether the matrix is non-singular to working precision, and a more accurate inverse to boot. However, if you want to use an explicit inverse, I suggest not using the determinant as an indicator, because it will very easily break when it should not. With the current API of cgmath, there is also no way for the user to override this behavior. I'd be happy to contribute some time to improve the situation and send a PR, if we can agree on desired behavior! EDIT: Oh, and I ran into the issues when using
@Andlon good points!
I think we should actually introduce LU decompositions into cgmath as a separate entity, because you should actually never invert a matrix. Rather, you're better off solving the system A x = b directly. If it turns out that inversion by LU decomposition is as fast as explicit inverses, we could just swap the current implementation. The potential gains in accuracy are truly enormous. I'd be up for working on this, but at the moment I'm still slightly undecided between cgmath and nalgebra going forward. I've used cgmath so far, and have generally been happy, but I miss the equivalent of nalgebra's
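A sketch of the suggestion above: solve A x = b directly via LU decomposition with partial pivoting, never forming an explicit inverse. Plain arrays stand in for cgmath types; nothing here is cgmath's API.

```rust
// Solve a * x = b by LU factorization with partial pivoting (PA = LU),
// then back substitution. Fails only on an exactly zero pivot.
fn lu_solve(a: &[[f64; 4]; 4], b: &[f64; 4]) -> Option<[f64; 4]> {
    let mut lu = *a;
    let mut x = *b;

    // Forward elimination with row pivoting, applied to b as we go.
    for col in 0..4 {
        let pivot = (col..4).max_by(|&i, &j| {
            lu[i][col].abs().partial_cmp(&lu[j][col].abs()).unwrap()
        })?;
        if lu[pivot][col] == 0.0 {
            return None; // singular
        }
        lu.swap(col, pivot);
        x.swap(col, pivot);
        for row in col + 1..4 {
            let f = lu[row][col] / lu[col][col];
            for j in col..4 {
                lu[row][j] -= f * lu[col][j];
            }
            x[row] -= f * x[col];
        }
    }

    // Back substitution on the upper-triangular system.
    for row in (0..4).rev() {
        for j in row + 1..4 {
            x[row] -= lu[row][j] * x[j];
        }
        x[row] /= lu[row][row];
    }
    Some(x)
}
```

Transforming a point is then one solve rather than an invert-and-multiply, which is both cheaper and, with pivoting, typically more accurate.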
I'm not familiar with LU decompositions, so if you all believe it'll solve the problem, I'll defer to you. Whatever you do, I'd just recommend documenting it in such a way that a typical programmer can understand. I think most of us know what it means to invert a matrix and when to do it. As Andlon said, many references on computer graphics teach us to use inversions. But I've never read one that mentions LU decomposition. Here's one possible approach to documentation: Add a heading in the readme called "Inversion." Under it, explain that you recommend LU decompositions as a fast alternative and present some realistic sample code. E.g. an example of unprojecting mouse coordinates to a ray in world space. |
@jarrett: excellent point!
LU decompositions would be great. I would also note that it might make sense to take another look at our current inversion code to ensure it is up to scratch. It's something I'm not too well versed in.
Maybe open an issue about |
For anyone having this issue I have a |
When inverting a matrix, we check that the determinant is non-zero using `approx_eq`. If the determinant is approximately zero, the inversion fails. But `approx_eq` only goes to a precision of 10^-5. That's problematic, because projection matrices often have very small determinants. This is especially true when the camera is zoomed out very far. In that case, we're transforming a large volume of world space to the clip space volume, which means that the determinant tends to be very small. (This is because the determinant is the ratio of the two volumes.)
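A back-of-the-envelope check of the volume-ratio claim (the helper below is an assumed illustration, not cgmath API): an orthographic projection scales each axis independently onto the [-1, 1] clip range, so the magnitude of its determinant is the product of the per-axis scale factors, i.e. the ratio of clip volume to view volume.

```rust
// Magnitude of the determinant of an orthographic projection that maps a
// width x height x depth view volume onto the 2 x 2 x 2 clip cube.
// (The actual OpenGL-style matrix flips z, so the signed determinant is
// negative; only the magnitude matters for the epsilon check.)
fn ortho_determinant_magnitude(width: f64, height: f64, depth: f64) -> f64 {
    (2.0 / width) * (2.0 / height) * (2.0 / depth)
}
```

For the 1000 x 1000 x 100 view volume mentioned earlier in the thread, this gives (2/1000) * (2/1000) * (2/100) = 8e-8, far below a 10^-5 epsilon.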