8315024: Vector API FP reduction tests should not test for exact equality #16024
gergo- wants to merge 2 commits into openjdk:master
Conversation
👋 Welcome back gbarany! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request.
@gergo- To determine the appropriate audience for reviewing this pull request, one or more labels corresponding to different subsystems will normally be applied automatically. However, no automatic labelling rule matches the changes in this pull request. In order to have an "RFR" email sent to the correct mailing list, you will need to add one or more applicable labels manually using the /label pull request command.
/label add hotspot-compiler
Webrevs
eme64 left a comment:
Generally looks good, thanks for looking into this.
I left a few comments below.
Another concern I have, which I ran into by writing tests for the auto-vectorizer:
Are we making sure the float/double reductions do not degenerate to either zero or infinity? Because if they do degenerate, then we have only a very weak test.
I'm especially worried about all the values that depend on i, and then get multiplied. Don't the multiplications hit the maximal float value very quickly?
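To make that concern concrete, here is a small standalone sketch (my own illustration, not code from the PR): a running float product of factors that grow with i overflows to Infinity within a few dozen elements.

```java
// Standalone illustration (not part of the PR's tests): multiplying
// i-dependent float values quickly exceeds Float.MAX_VALUE (~3.4e38).
public class ProductOverflowDemo {
    // Count how many factors 1 * 2 * ... * n a float product survives
    // before it degenerates to Infinity.
    static int elementsUntilOverflow() {
        float product = 1.0f;
        int i = 1;
        while (!Float.isInfinite(product)) {
            product *= i++;
        }
        return i - 1;   // the factor that pushed the product to Infinity
    }

    public static void main(String[] args) {
        System.out.println("float product 1*2*...*n overflows at n = "
                + elementsUntilOverflow());
    }
}
```

Since 34! ≈ 2.95e38 is still below Float.MAX_VALUE but 35! is not, the product degenerates after only 35 factors, which is why i-dependent multiplicative inputs make for weak tests.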
It seems a bit large. Have you tried to make it smaller? Or what is your justification for this value?
With the current version of the test, a value as small as 0.000001 seems to be fine; one more zero is too small. This will probably have to be adjusted in the future for larger tests.
I'm just worried that 1% is a lot for the addition tests. Basically we might be dropping a whole element, and would not notice it.
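That worry can be quantified with a quick sketch (my illustration, using a hypothetical withinTolerance helper): for a sum of N similar elements, dropping one changes the result by roughly 1/N, so a 1% tolerance hides a lost element once N exceeds about 100.

```java
// Illustration (not the PR's code): a 1% relative tolerance fails to
// detect a missing element in a 200-element sum.
public class ToleranceDemo {
    // Hypothetical helper mirroring the delta used in the PR's asserts.
    static boolean withinTolerance(float expected, float actual, float relativeError) {
        return Math.abs(expected - actual) <= Math.abs(expected * relativeError);
    }

    public static void main(String[] args) {
        float full = 200.0f;        // sum of 200 ones
        float missingOne = 199.0f;  // same sum with one element dropped
        // Relative error is 0.5%, comfortably inside a 1% tolerance:
        System.out.println(withinTolerance(full, missingOne, 0.01f)); // prints true
    }
}
```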
            Assert.assertEquals(r[i], f.apply(a, i), Math.abs(r[i] * relativeError), "at index #" + i);
        }
    }
Optional: reduce code duplication by having the pre-existing method call assertReductionArraysEquals with relativeError = 0.
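The suggested de-duplication could look roughly like this (a self-contained sketch with simplified signatures; the PR's actual helpers take additional parameters):

```java
import java.util.function.BiFunction;

// Sketch of the suggested de-duplication: the exact-equality overload
// delegates to the tolerant one with relativeError = 0.
public class ReductionAsserts {
    static void assertReductionArraysEquals(float[] r, float[] a,
            BiFunction<float[], Integer, Float> f, float relativeError) {
        for (int i = 0; i < r.length; i++) {
            float expected = f.apply(a, i);
            float delta = Math.abs(r[i] * relativeError);
            if (Math.abs(r[i] - expected) > delta) {
                throw new AssertionError("at index #" + i);
            }
        }
    }

    // Pre-existing exact comparison, now a one-liner.
    static void assertReductionArraysEquals(float[] r, float[] a,
            BiFunction<float[], Integer, Float> f) {
        assertReductionArraysEquals(r, a, f, 0.0f);
    }
}
```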
withToString("double[i / 10.0 + 0.1]", (int s) -> {
    return fill(s * BUFFER_REPS,
            i -> (double)(i / (double) 10.0 + 0.1));
}),
Did this generate rounding issues for addition?
Another option would be random values that make sure to fill the whole mantissa with random information.
Of course the tricky part is to keep them within reasonable bounds so that on multiplication they do not degenerate to zero or infinity.
Also: it would be nice to have some cases with extreme values (infinity, NaN, etc).
Yes, this generator generates a rounding issue for addition:
test FloatMaxVectorTests.ADDReduceFloatMaxVectorTests(float[i / 10.0 + 0.1]): failure
java.lang.AssertionError: at index #16 expected [39.2] but found [39.199997]
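The mismatch comes from float addition not being associative: a vectorized reduction accumulates per-lane partial sums and combines them at the end, which rounds differently than a strict left-to-right sum. A standalone sketch of the effect (my illustration, not the test itself):

```java
// Illustration (not the PR's test): sequential vs. 4-lane summation of
// the generator's values a[i] = i / 10.0f + 0.1f can round differently,
// although both results stay within a tiny relative error of each other.
public class SumOrderDemo {
    static float[] data(int n) {
        float[] a = new float[n];
        for (int i = 0; i < n; i++) {
            a[i] = i / 10.0f + 0.1f;
        }
        return a;
    }

    static float sequentialSum(float[] a) {
        float sum = 0;
        for (float v : a) sum += v;
        return sum;
    }

    static float lanewiseSum(float[] a, int lanes) {
        float[] partial = new float[lanes];   // per-lane accumulators
        for (int i = 0; i < a.length; i++) {
            partial[i % lanes] += a[i];
        }
        float sum = 0;
        for (float p : partial) sum += p;     // combine lanes at the end
        return sum;
    }

    public static void main(String[] args) {
        float[] a = data(400);
        System.out.println(sequentialSum(a));
        System.out.println(lanewiseSum(a, 4));
    }
}
```

The two results may differ in the last bits, which is exactly the discrepancy the relative-error comparison is meant to absorb.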
I updated this generator to 0.01 + (i / (i + 1)), plus a variant that replaces some of the elements of this sequence with values from the cornerCaseValues generator.
This should address all of your concerns. Almost all values in this sequence are very close to 1, so they can be added and multiplied without overflow, up to about 2000 elements for floats. The mantissas have bits all over the place. The exponents are very limited, but a wider range of exponents is exercised by the other tests.
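A standalone sketch of the generator's behavior (simplified from the described 0.01 + (i / (i + 1)) sequence; the corner-case variant is omitted): the values converge toward roughly 1.01, so both the running sum and the running product stay finite and non-zero for thousands of float elements.

```java
// Sketch of the described generator (simplified; not the PR's exact code):
// a[i] = 0.01f + i/(i+1) stays close to 1 for large i, so sums and
// products over a few thousand elements neither overflow nor collapse to 0.
public class NearOneGenerator {
    static float[] fill(int n) {
        float[] a = new float[n];
        for (int i = 0; i < n; i++) {
            a[i] = 0.01f + (float) i / (i + 1);
        }
        return a;
    }

    public static void main(String[] args) {
        float sum = 0, product = 1;
        for (float v : fill(2000)) {
            sum += v;
            product *= v;
        }
        System.out.println("sum = " + sum + ", product = " + product);
    }
}
```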
You're right. We're currently getting "lucky" here because we reduce individual vectors of 4, 8, or 16 elements inside a loop. So even if the array contains a zero, or the product of the entire array is infinite, individual blocks inside it will have finite, non-zero products. This will change when https://bugs.openjdk.org/browse/JDK-8309647 is addressed. I'll try to put together a generator that also works nicely for multiplications over a whole array.
@eme64 would you have time to take another look at the changes I have made to this PR?
@gergo- I just looked at it again. It looks better. Still, I have a concern: I wonder if it would not be better to generate some data randomly, and throw in the special cases with a very low probability. Maybe a 50% chance that any show up at all, and then randomly pick one or more special-case values. That way you can test the different special cases separately, and their positions could also be random.

BTW: is there any wiki about the template file format, and how to "compile" it to Java? I might want to use it in the future myself :)
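The randomized scheme being suggested could be sketched like this (my interpretation; names like SPECIAL and the [0.5, 1.5) range of "normal" values are assumptions, not the PR's code):

```java
import java.util.Random;

// Sketch of the reviewer's suggestion: mostly "normal" random data, with
// special-case values injected at random positions, and only with some
// probability so that many generated arrays contain none at all.
public class RandomCornerCaseGenerator {
    static final float[] SPECIAL = {
        Float.POSITIVE_INFINITY, Float.NEGATIVE_INFINITY, Float.NaN,
        0.0f, -0.0f, Float.MIN_VALUE, Float.MAX_VALUE
    };

    static float[] fill(int n, long seed) {
        Random rnd = new Random(seed);
        float[] a = new float[n];
        for (int i = 0; i < n; i++) {
            a[i] = 0.5f + rnd.nextFloat();   // benign values in [0.5, 1.5)
        }
        if (rnd.nextBoolean()) {             // ~50% chance any specials appear
            int count = 1 + rnd.nextInt(3);  // one to three special values
            for (int k = 0; k < count; k++) {
                a[rnd.nextInt(n)] = SPECIAL[rnd.nextInt(SPECIAL.length)];
            }
        }
        return a;
    }
}
```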
Thanks.
Currently there are reduction tests that reduce not across the whole input array but over individual vector-sized blocks. The largest vector length is 16 elements (32-bit float × 16 = 512-bit max vector size). Therefore at most every second block will contain one corner-case value, and all the other blocks will only contain normal values. No block mixes different corner-case values.
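Block-wise reduction as described can be sketched as follows (my illustration of the structure, not the generated test code): a corner-case value only poisons the result of the block it lands in.

```java
import java.util.Arrays;

// Illustration (not the PR's code): reducing over vector-sized blocks
// confines a corner-case value to a single block's result.
public class BlockReduceDemo {
    static float[] reduceBlocks(float[] a, int blockSize) {
        float[] r = new float[a.length / blockSize];
        for (int b = 0; b < r.length; b++) {
            float acc = 0;
            for (int i = 0; i < blockSize; i++) {
                acc += a[b * blockSize + i];
            }
            r[b] = acc;
        }
        return r;
    }

    public static void main(String[] args) {
        float[] a = new float[64];
        Arrays.fill(a, 1.0f);
        a[5] = Float.POSITIVE_INFINITY;          // corner case in block 0
        float[] r = reduceBlocks(a, 16);         // 16-lane blocks
        System.out.println(Arrays.toString(r));  // only r[0] is Infinity
    }
}
```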
I would say that this could be tackled more naturally as part of https://bugs.openjdk.org/browse/JDK-8309647 which concerns moving reductions out of loops and would require revisiting these tests anyway.
I'm not aware of any docs; I learned to do this by doing.
@gergo- This change now passes all automated pre-integration checks. ℹ️ This project also has non-automated pre-integration requirements; please see the file CONTRIBUTING.md for details. After integration, the commit message for the final commit can be adjusted as needed using pull request commands such as /summary, /contributor and /issue. At the time when this comment was updated there had been 309 new commits pushed to the master branch.
As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details. As you do not have Committer status in this project, an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@eme64, @TobiHartmann), but any other Committer may sponsor as well. ➡️ To flag this PR as ready for integration with the above commit message, type /integrate in a new comment.
Thanks for the review @eme64! /integrate
@gergo- You should only integrate once you have 2 reviewers (unless your changes are trivial, and this is not exactly trivial).
/sponsor |
Going to push as commit e6f23a9.
Your commit was automatically rebased without conflicts.
@TobiHartmann @gergo- Pushed as commit e6f23a9. 💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.
Certain floating point reduction operations in the Vector API are allowed to introduce rounding errors. Adjust the corresponding tests to allow a small relative error when comparing the operation's result to the expected value. Also, add a new generator double[i / 10.0 + 0.1] to test floating point operations with somewhat more interesting input data.

Progress
Issue
Reviewers
Reviewing
Using git
Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/16024/head:pull/16024
$ git checkout pull/16024
Update a local copy of the PR:
$ git checkout pull/16024
$ git pull https://git.openjdk.org/jdk.git pull/16024/head
Using Skara CLI tools
Checkout this PR locally:
$ git pr checkout 16024
View PR using the GUI difftool:
$ git pr show -t 16024
Using diff file
Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/16024.diff