Noise (wrong intersection result) in curve test #17
It looks like the cause was a combination of incorrect SIMD operations and missing Newton-Raphson refinement for some div, sqrt, rsqrt, etc.
Hi Syoyo, on Android arm64 many features became broken between July 2019 (left) and January 2020 (right). I've attached all image results for manual inspection. I also agree with your reasoning that the cause lies in the NEON SIMD code. I recall that I was able to fix these issues in the past by introducing two Newton-Raphson refinement iterations.
@maikschulze Thanks for the comparison! Please use the neon-fix branch. Also, it would be nice if you could contribute regression tests (i.e., build test scenes and run them in a batch manner) to reproduce the comparison images.
Thanks for pointing me to the neon-fix branch, @syoyo. I've run my tests with this branch and compared it to the state of master in July 2019:

Identical: DynamicScene, GridGeometry, InstancedGeometry, LazyGeometry, PointGeometry, PointPrecision, UserGeometry

Different: CurveGeometry, DisplacementGeometry, HairGeometry, Interpolation, IntersectionFilter, MotionBlur, SubdivisionGeometry, TriangleGeometry

I've taken a look at the images and their differences in BeyondCompare. I consider all but one perceptually identical; I'm not able to state which one is "better". Only one result shows a structural difference to me: Interpolation_003.png. Here, the shadow test seems slightly less precise in neon-fix (right) than in the older state of master (left):
In terms of tests, I would gladly help out, or possibly contribute my setup after major cleanups. We should settle on the architecture of the tests first. I can briefly describe what I did.

I've created a second, private repository, "embreetest", which contains a lot of the original tutorial code, refactored to create well-controlled image series and take some additional measurements. In addition, I'm using a different math library (GLM) to minimize the influence of Embree code changes on my test cases. This works very well because Embree's API is so stable. The downside is the "duplication" of the tutorial code: I tried to wrap the existing tutorial codebase, but gave up eventually, and consider it more important to have the tests in a different repository. I avoided changing the original tutorial code to avoid code conflicts and minimize regression risk. Having the tests in a separate repo also allows newly added tests to be executed retrospectively on older builds, which already came in handy when I was tracking down numerical imprecisions.

Naturally, this test repository depends on Embree. The other dependencies (such as GLM, lodepng) could be included. The tests are compiled into a dynamic library with a simple C++ interface for a result monitor. This could be changed to a C interface based on callbacks for easier composition. On top of that sits a command-line executable that implements a result monitor and writes text log files and PNG files.

So far, I've manually inspected the results. I'm unfamiliar with automatic setups on GitHub; however, automation would probably be best. This approach would also allow the integration of the test suite into GUI apps made by third parties. What's your take on this? What else should be considered?
Does this artifact appear in the original embree build (v3.8.0)? If so, it would be better to report this issue to the original embree git repo.
The artifact appears in the official 3.8.0 as well. I've modified the provided interpolation tutorial to match my camera setup. The silhouette patterns vary slightly. I've come to realize that this kind of pattern only happens for even image resolutions, not for odd ones. If the camera rays were cast through the center of each pixel, I would expect the opposite. However, I can't find any evidence in the code that a half-pixel offset to the center is applied.

This is the result of Embree 3.8.0 on x64 with "--isa avx2".

I don't think there's anything wrong with Embree. However, it might be worth verifying whether a half-pixel offset should be added throughout the tutorials to ensure there's no bias toward a pixel corner. Visually, this is best checked by rendering at very low resolutions.
Usually we use the following approach for testing our own renderer.
It would be better to manage test code and scenes in a different git repo, so I can set up CI automation for running the tests using GitHub Actions or Travis. GitHub Actions supports the aarch64 platform (through qemu emulation, which is slow to execute, though).
It happens at least as far back as v3.5.0.
The aarch64 result of the curve test contains some noise, probably due to a NEON/FP math issue.
(x86_64 v3.7.0)
(aarch64 v3.5.0)
(aarch64 master (v3.6.1 + iOS patch))