
8318364: Add an FFM-based implementation of harfbuzz OpenType layout #15476


Closed
wants to merge 23 commits into from

Conversation

prrace
Contributor

@prrace prrace commented Aug 29, 2023


Progress

  • Change must be properly reviewed (1 review required, with at least 1 Reviewer)
  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue

Issue

  • JDK-8318364: Add an FFM-based implementation of harfbuzz OpenType layout (Enhancement - P3)

Reviewers

Reviewing

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/15476/head:pull/15476
$ git checkout pull/15476

Update a local copy of the PR:
$ git checkout pull/15476
$ git pull https://git.openjdk.org/jdk.git pull/15476/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 15476

View PR using the GUI difftool:
$ git pr show -t 15476

Using diff file

Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/15476.diff

Webrev

Link to Webrev Comment

@bridgekeeper

bridgekeeper bot commented Aug 29, 2023

👋 Welcome back prr! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.

@openjdk

openjdk bot commented Aug 29, 2023

@prrace The following labels will be automatically applied to this pull request:

  • client
  • core-libs

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing lists. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added client client-libs-dev@openjdk.org core-libs core-libs-dev@openjdk.org labels Aug 29, 2023
@mrserb
Member

mrserb commented Aug 30, 2023

@prrace did you check how this change affects the performance, especially startup? I have experimented with Panama for littlecms: https://bugs.openjdk.org/browse/JDK-8313344 and found that the biggest issue is a cold start, 8 ms vs 100ms. An example of the report: https://jmh.morethan.io/?gists=4df0f27789cc4b0ca91fc5b2d677fe39,900b547e073cc1567971f46bfea151db

VarIntLayout.withName("var2")
).withName("hb_glyph_info_t");

private static VarHandle getVarHandle(MemoryLayout layout, String name) {
Contributor

This method could take a SequenceLayout instead of a MemoryLayout as it only works for SequenceLayouts.
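
For illustration, a minimal sketch of the narrowed signature, assuming (as in the PR's layouts) that the sequence elements are structs and the handle selects a named member:

private static VarHandle getVarHandle(SequenceLayout layout, String name) {
    // Any element of the sequence, then the named struct member inside it.
    return layout.varHandle(
            PathElement.sequenceElement(),
            PathElement.groupElement(name));
}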

private static MethodHandle jdk_hb_shape_handle;

private static FunctionDescriptor get_nominal_glyph_fd;
private static MethodHandle get_nominal_glyph_mh;
Contributor

Declaring all these final would improve performance. For example

private static final MethodHandle GET_NOMINAL_GLYPH_MH;
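
As a hedged sketch of that suggestion (the symbol name and descriptor below are placeholders, not the PR's actual ones), the handles can be resolved once in a static initializer so every field becomes final:

private static final MethodHandle JDK_HB_SHAPE_HANDLE;

static {
    Linker linker = Linker.nativeLinker();
    SymbolLookup lookup = SymbolLookup.loaderLookup();
    JDK_HB_SHAPE_HANDLE = linker.downcallHandle(
            lookup.find("jdk_hb_shape").orElseThrow(),              // hypothetical symbol name
            FunctionDescriptor.of(JAVA_INT, ADDRESS, JAVA_FLOAT));  // placeholder signature
}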

private static MethodHandle dispose_face_handle;
private static MethodHandle jdk_hb_shape_handle;

private static FunctionDescriptor get_nominal_glyph_fd;
Contributor

All the *_fd variables could be converted into local variables.

* int8_t i8[4];
* };
*/
private static final UnionLayout VarIntLayout = MemoryLayout.unionLayout(
Contributor

I was a bit confused by the naming. Suggest VarIntLayout -> VAR_INT_LAYOUT
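
A sketch of the renamed constant (member list abbreviated; only the shape matters here):

private static final UnionLayout VAR_INT_LAYOUT = MemoryLayout.unionLayout(
        JAVA_INT.withName("i32"),
        MemoryLayout.sequenceLayout(4, JAVA_BYTE).withName("i8")
);

The use site at the top of this excerpt would then read VAR_INT_LAYOUT.withName("var2").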


public class HBShaper {

/*
Contributor

Nice with the original C struct as a comment.

return 0;
}
byte[] data = font2D.getTableBytes(tag);
if (data == null) {
Contributor

No setting data_ptr_out to NULL here?

* so it will be freed by the caller using native free - when it is
* done with it.
*/
MemorySegment data_ptr = data_ptr_out.reinterpret(ADDRESS.byteSize());
Contributor

Suggest using .asSlice() here as it is an unrestricted and safer method.

this.font2D = font;
}

private synchronized MemorySegment getFace() {
Contributor

We are already synchronized via the faceMap.


Font2D font2D = scopedFont2D.get();
int glyphID = font2D.charToGlyph(unicode);
MemorySegment glyphIDPtr = glyph.reinterpret(4);
Contributor

As a general comment, it is better to use slicing rather than reinterpret.
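
For illustration (a sketch, not the PR's code): slicing stays inside the bounds the segment already carries and needs no restricted operations, while reinterpret asserts a new size and is mainly needed for zero-length pointers received from native code.

// 'arena' and the 4-element buffer are hypothetical; the point is the bounded slice.
MemorySegment buffer = arena.allocateArray(JAVA_INT, 4);   // 16 bytes, bounds known
MemorySegment firstInt = buffer.asSlice(0, 4);             // safe, unrestricted
firstInt.set(JAVA_INT, 0, glyphID);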

private static final ScopedValue<Font2D> scopedFont2D = ScopedValue.newInstance();
private static final ScopedValue<FontStrike> scopedFontStrike = ScopedValue.newInstance();
private static final ScopedValue<GVData> scopedGVData = ScopedValue.newInstance();
private static final ScopedValue<Point2D.Float> scopedStartPt = ScopedValue.newInstance();
Contributor

Using only one ScopedValue and storing a record (Font2D font2D, FontStrike fontStrike, GVData gvData, Point2D.Float point2d) {} of the various objects will provide much better performance.
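
A sketch of that shape (record and field names are illustrative): one carrier record bound through a single ScopedValue.

// One ScopedValue carrying an immutable record of all per-shape state,
// instead of four separate ScopedValue bindings.
record ShapingState(Font2D font2D, FontStrike strike, GVData gvData, Point2D.Float startPt) {}

private static final ScopedValue<ShapingState> SHAPING_STATE = ScopedValue.newInstance();

// At the call site:
//   ScopedValue.where(SHAPING_STATE, new ShapingState(font2D, strike, gvData, startPt))
//              .run(() -> shape(...));
// and in each upcall: SHAPING_STATE.get().font2D(), SHAPING_STATE.get().gvData(), ...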

* shaping can locate the correct instances of these to query or update.
* The alternative of creating bound method handles is far too slow.
*/
ScopedValue.where(scopedFont2D, font2D)
Contributor

@minborg minborg Aug 30, 2023

I think a static ConcurrentHashMap<Long, Record> would provide better performance. We could clean up the key when the value is used. Use Thread.threadId() as the key.
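
A rough sketch of that alternative (names illustrative): the entry is installed before the downcall, looked up by the upcalls, and removed once the call completes.

private static final ConcurrentHashMap<Long, ShapingState> SHAPING_STATE = new ConcurrentHashMap<>();

static void shapeWithState(ShapingState state, Runnable shapeCall) {
    long tid = Thread.currentThread().threadId();
    SHAPING_STATE.put(tid, state);
    try {
        shapeCall.run();              // upcalls read SHAPING_STATE.get(tid)
    } finally {
        SHAPING_STATE.remove(tid);    // clean up the key once the value is no longer needed
    }
}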

@prrace
Contributor Author

prrace commented Sep 1, 2023

@prrace did you check how this change affects the performance, especially startup? I have experimented with Panama for littlecms: https://bugs.openjdk.org/browse/JDK-8313344 and found that the biggest issue is a cold start, 8 ms vs 100ms. An example of the report: https://jmh.morethan.io/?gists=4df0f27789cc4b0ca91fc5b2d677fe39,900b547e073cc1567971f46bfea151db

Hmm. I didn't notice this comment until today. No emails for drafts?
Probably I should not have posted this PR even as a draft if it is going to get attention, as it isn't really ready for that.

But yes, I had already measured startup + warmup and noticed that's an issue.
It may be that it matters less for OpenType layout than for other things that are always on the critical path, but it is definitely a concern.

@mrserb
Member

mrserb commented Sep 1, 2023

Probably I should not have posted this PR even as draft if it is going to get attention as it isn't really ready for that.

No! That is a really interesting proposal and discussion!

BTW this PR is not in the draft state.

@bridgekeeper

bridgekeeper bot commented Sep 29, 2023

@prrace This pull request has been inactive for more than 4 weeks and will be automatically closed if another 4 weeks passes without any activity. To avoid this, simply add a new comment to the pull request. Feel free to ask for assistance if you need help with progressing this pull request towards integration!

float startY,
int flags,
int slot,
hb_font_get_nominal_glyph_func_t nominal_fn,
Contributor

It shouldn't be necessary to pass all the functions here. Note that the upcalls are now effectively static (due to the use of scoped values). It would be more efficient to create the array of functions once and for all in Java code (either directly, by creating a memory segment and storing all the function pointers in there, or indirectly, by calling the _hb_jdk_get_font_funcs native function). Either way, we don't need to create a function array each time we call the shape function (as all the functions in the array are going to be the same after all). If you do that, you can replace all the _fn parameters in here with a single function pointer array parameter.
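
Roughly, the idea is to build the function-pointer table a single time and pass the shape downcall one pointer to it. A sketch under assumed names (LINKER, the *_MH/*_FD constants and the two-entry table are placeholders, not the PR's actual code):

private static final MemorySegment FONT_FUNCS;   // built once; Arena.global() keeps the stubs alive

static {
    Arena global = Arena.global();
    MemorySegment nominalGlyphStub = LINKER.upcallStub(
            GET_NOMINAL_GLYPH_MH, GET_NOMINAL_GLYPH_FD, global);
    MemorySegment glyphAdvanceStub = LINKER.upcallStub(
            GET_GLYPH_H_ADVANCE_MH, GET_GLYPH_H_ADVANCE_FD, global);

    FONT_FUNCS = global.allocateArray(ADDRESS, 2);
    FONT_FUNCS.setAtIndex(ADDRESS, 0, nominalGlyphStub);
    FONT_FUNCS.setAtIndex(ADDRESS, 1, glyphAdvanceStub);
}
// The shape downcall would then take a single ADDRESS argument pointing at FONT_FUNCS
// instead of one argument per function.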

float startX = (float)startPt.getX();
float startY = (float)startPt.getY();

MemorySegment matrix = arena.allocateArray(JAVA_FLOAT, mat.length);
Contributor

@mcimadamore mcimadamore Oct 2, 2023

There should be an overload of allocateArray which takes a Java array directly and then copies it off-heap after allocation. In Java 22 this method is called allocateFrom and is much more optimized (as it avoids zeroing of memory). But, even in 21, the call to copy seems redundant - you can just use the correct overload of SegmentAllocator::allocateArray
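
For reference, a hedged sketch of the two variants, with mat being the float[] from the surrounding code:

// Java 21: allocate and copy in one call, no separate MemorySegment.copy needed.
MemorySegment matrix = arena.allocateArray(JAVA_FLOAT, mat);

// Java 22 and later: same idea, renamed, and it avoids zeroing the memory first.
// MemorySegment matrix = arena.allocateFrom(JAVA_FLOAT, mat);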


for (int i=0; i<glyphCount; i++) {
int storei = i + initialCount;
int cluster = (int)clusterHandle.get(glyphInfoArr, i) - offset;
Contributor

All the var handle calls in this loop are not exact - e.g. they use an int offset instead of a long one. Because of this, the memory access cannot be fully optimized. Adding a cast to long on all offset coordinates yields a significant performance boost.

To avoid issues like these, it is recommended to set up the var handle using the VarHandle::withInvokeExactBehavior method, which will cause an exception to be thrown in case there's a type mismatch (similar to MethodHandle::invokeExact).
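
An illustrative sketch of both points (the handle and layout names are assumed from the excerpt above):

// Fail fast on coordinate type mismatches, as MethodHandle::invokeExact does.
private static final VarHandle clusterHandle =
        getVarHandle(GlyphInfoLayout, "cluster").withInvokeExactBehavior();

// In the access loop, pass long coordinates so every access is exact:
int cluster = (int) clusterHandle.get(glyphInfoArr, (long) i) - offset;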

).withName("hb_glyph_info_t");

private static VarHandle getVarHandle(StructLayout struct, String name) {
VarHandle h = struct.varHandle(PathElement.groupElement(name));
Contributor

Note that, strictly speaking, these combiners are not required. If you just call MemoryLayout::varHandle, you get back a var handle that takes a MemorySegment and a long offset. So, you can, if you want, adapt the var handle so that the offset parameter becomes something else. But you could also just leave the var handle as is. Then, in the loop that is doing the access, you can do this:

for (int i = 0 ; i < limit ; i++) {
   x_offsetHandle.get(segment, PositionLayout.scale(0, i));
   y_offsetHandle.get(segment, PositionLayout.scale(0, i));
   ...
}

That is, use the free offset coordinate to your advantage, to pass the struct base offset (obtained by scaling the enclosing struct layout's size by the value of the loop induction variable).

(That said, I understand that working with logical indices is a common operation, and that this is made a bit harder by the recent API changes. We should consider, as @JornVernee mentioned, adding back a more general MemoryLayout::arrayElementVarHandle which will give you the var handle you need - with coordinates MemorySegment and long - a logical index, not an offset).

Contributor

A PR which adds MemoryLayout::arrayElementVarHandle can be found here:
#16272

With this, you can call the new method in order to create the var handle. The returned var handle will accept two long coordinates - the first is a base offset (as usual), the second is a logical index (what you need). The PR also adds plenty of narrative text describing how access to variable-length arrays should be performed using layouts (and also shows cases where the offset parameter is used in a non-trivial fashion).
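
A sketch of what the access could look like with that API (method name as in the linked PR; the layout, handle and loop variable names are assumed):

// Coordinates: the segment, a long base offset, then the long logical index.
VarHandle xOffsetHandle = PositionLayout.arrayElementVarHandle(
        PathElement.groupElement("x_offset"));

for (int i = 0; i < glyphCount; i++) {
    int xOff = (int) xOffsetHandle.get(glyphPosArr, 0L, (long) i);
    // ...
}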

@@ -148,6 +148,7 @@
// module declaration be annotated with jdk.internal.javac.ParticipatesInPreview
exports jdk.internal.javac to
java.compiler,
java.desktop, // for ScopedValue
Member

The indentation looks odd?

@mrserb
Member

mrserb commented Nov 6, 2023

Since we plan to import it into jdk22, do you have some performance data to share? Any positive or negative effects of this migration?

@prrace
Contributor Author

prrace commented Nov 6, 2023

Since we plan to import it into jdk22, do you have some performance data to share? any positive or negative effects of this migration?

There's three phases - (1) startup, (2) warmup and (3) warmed up performance.

JNI has minimal startup / warmup cost, getting to warmed up performance right away.
So if your app starts up and makes just one call to layout, JNI wins easily.
But if it keeps going, then FFM comes out ahead, even counting that startup / warmup cost.

There's a cost the first time some code in the JDK initialises core FFM.
If that code happens to be this layout code, it'll see that overhead.
That was somewhere around 75ms on my Mac.
On top of that there's the cost of creating the specific method handles and var handles.
I have 11 of these, and the total there is about 35-40ms.

So we have somewhere around a fixed 125ms startup cost for the FFM case - as measured on my Mac,
but only 35-40ms of that is attributable to the specific needs of layout.

And there is some potential for that code to get faster some day.
Also, if techniques such as AppCDS or, some day, Leyden condensers are used, then
there is also potential to eliminate much of the warmup cost.

The FFM path then needs to be warmed.

Once warmed up, FFM is always as fast or faster than JNI. 20% faster is typical as
measured by a small test that just calls layout in a loop. It was tried with varying lengths of string.
For just a single char, FFM was only a little faster, but gets better for longer strings.
Once we start to use layout, we use it a lot, so you reach many thousands of calls very quickly.
Just resizing your UI window causes that. It doesn't take long for FFM to become an overall win.
That includes amortizing the cost of the startup / warmup time.
As well as a microbenchmark, I looked at what it does in an app consisting of a Swing JTextArea displaying
a decent amount of Hindi using an OpenType Indic font on Mac.
That takes just over 16,000 (!) calls to layout to get to fully displayed.
Then if you just resize back and forth, within just a few seconds FFM catches up and overtakes JNI.
I'll show numbers below - this measures all the FFM + layout costs but nothing else in the app.
It bears out what I said about startup.
"layoutCnt" is the number of calls to the method to do layout on a single run of text.
The numbers look like a lot of calls to layout and you might think that took hours
but this really is just about 20-30 secs of manual resizing to get to one million calls.

JNI

layoutCnt=1 total=3ms <<< JNI very fast to start up
layoutCnt=2 total=3ms
layoutCnt=3 total=3ms
layoutCnt=4 total=4ms
layoutCnt=5 total=4ms
layoutCnt=1000 total=31ms
layoutCnt=2000 total=40ms << 9-10ms per thousand calls (40-31)
layoutCnt=3000 total=51ms
layoutCnt=4000 total=61ms
layoutCnt=5000 total=69ms
layoutCnt=6000 total=77ms
layoutCnt=7000 total=90ms
layoutCnt=8000 total=100ms
layoutCnt=9000 total=113ms
layoutCnt=10000 total=122ms
layoutCnt=11000 total=134ms
layoutCnt=12000 total=150ms
layoutCnt=13000 total=157ms
layoutCnt=14000 total=169ms
layoutCnt=15000 total=181ms
layoutCnt=16000 total=193ms <<< app fully displayed
...
layoutCnt=250000 total=2450ms <<< rough point at which they are equal
...
layoutCnt=1000000 total=9115ms <<< after 1 million calls JNI is clearly behind
layoutCnt=1001000 total=9124ms << STILL 9-10ms per thousand calls (9124-9115)

FFM

layoutCnt=1 total=186ms << // FFM slow to start up, includes 75ms core FFM, 35-40 varhandles + no JIT yet
layoutCnt=2 total=188ms
layoutCnt=3 total=189ms
layoutCnt=4 total=195ms
layoutCnt=5 total=195ms
layoutCnt=1000 total=269ms
layoutCnt=2000 total=284ms << 15 ms per thousand calls (284-269)
layoutCnt=3000 total=301ms
layoutCnt=4000 total=317ms
layoutCnt=5000 total=333ms
layoutCnt=6000 total=348ms
layoutCnt=7000 total=365ms
layoutCnt=8000 total=376ms
layoutCnt=9000 total=388ms
layoutCnt=10000 total=397ms
layoutCnt=11000 total=407ms
layoutCnt=12000 total=419ms
layoutCnt=13000 total=425ms
layoutCnt=14000 total=435ms
layoutCnt=15000 total=444ms
layoutCnt=16000 total=453ms <<< app fully displayed
...
layoutCnt=250000 total=2426ms <<< rough point at which they are equal
...
layoutCnt=1000000 total=8489ms <<< after 1 million calls FFM is clearly ahead
layoutCnt=1001000 total=8496ms << now about 7 ms per thousand calls (8496-8489)

@mrserb
Member

mrserb commented Nov 6, 2023

So we have somewhere around a fixed 125ms startup cost for the FFM case - as measured on my Mac,
but only 35-40ms of that is attributable to the specific needs of layout.

That looks unfortunate. I guess if we start to use FFM in other places we could easily spend a 1 second budget on startup =(

layoutCnt=16000 total=193ms <<< app fully displayed
vs
layoutCnt=16000 total=453ms <<< app fully displayed

It looks strange that 16000 calls are not enough to warm up, and the difference is so large.

@prrace
Contributor Author

prrace commented Nov 7, 2023

So we have somewhere around a fixed 125ms startup cost for the FFM case - as measured on my Mac,
but only 35-40ms of that is attributable to the specific needs of layout.

That looks unfortunate. I guess if we will start to use ffm in other places we can easily spend of 1 second budget on startup=(

Yes, this case is sufficiently uncommon, that it is OK, and is a decent way to help us track improvements to FFM.
But it would be another matter to have to do it for however many of our core software loops and AWT window
manager interaction calls we need to get running for a minimal app.

layoutCnt=16000 total=193ms <<< app fully displayed
vs
layoutCnt=16000 total=453ms <<< app fully displayed

It looks strange that 16000 calls are not enough to warmup, and the difference is so large.

I am not a C2 expert, (not even an amateur), I just assume that it takes a lot of calls to be fully optimized.

@mrserb
Member

mrserb commented Nov 16, 2023

So we have somewhere around a fixed 125ms startup cost for the FFM case - as measured on my Mac,
but only 35-40ms of that is attributable to the specific needs of layout.

That looks unfortunate. I guess if we will start to use ffm in other places we can easily spend of 1 second budget on startup=(

Yes, this case is sufficiently uncommon, that it is OK, and is a decent way to help us track improvements to FFM. But it would be another matter to have to do it for however many of our core software loops and AWT window manager interaction calls we need to get running for a minimal app.

layoutCnt=16000 total=193ms <<< app fully displayed
vs
layoutCnt=16000 total=453ms <<< app fully displayed

It looks strange that 16000 calls are not enough to warmup, and the difference is so large.

I am not a C2 expert, (not even an amateur), I just assume that it takes a lot of calls to be fully optimized.

@JornVernee this looks suspicious and seems unrelated to the cold startup issues we discussed before.

@JornVernee
Member

So we have somewhere around a fixed 125ms startup cost for the FFM case - as measured on my Mac,
but only 35-40ms of that is attributable to the specific needs of layout.

That looks unfortunate. I guess if we will start to use ffm in other places we can easily spend of 1 second budget on startup=(

Yes, this case is sufficiently uncommon, that it is OK, and is a decent way to help us track improvements to FFM. But it would be another matter to have to do it for however many of our core software loops and AWT window manager interaction calls we need to get running for a minimal app.

layoutCnt=16000 total=193ms <<< app fully displayed
vs
layoutCnt=16000 total=453ms <<< app fully displayed

It looks strange that 16000 calls are not enough to warmup, and the difference is so large.

I am not a C2 expert, (not even an amateur), I just assume that it takes a lot of calls to be fully optimized.

@JornVernee this looks suspicious and seems unrelated to the cold startup issues we discussed before.

I suspect the benchmark might be measuring the java.lang.foreign code needing to be loaded as part of the benchmark. While for JNI, the initialization of all the JNI machinery is included in the startup of the application. Was the running time of the entire application/process measured? Or only from the start of the main method?

Secondly, we have not spent a lot of time optimizing the startup performance of FFM yet. There are things we could do such as pre-generating classes during jlink-time, similar to what we do for java.lang.invoke/lambda implementation classes.

@prrace
Contributor Author

prrace commented Nov 16, 2023

layoutCnt=16000 total=193ms <<< app fully displayed
vs
layoutCnt=16000 total=453ms <<< app fully displayed

It looks strange that 16000 calls are not enough to warmup, and the difference is so large.

I am not a C2 expert, (not even an amateur), I just assume that it takes a lot of calls to be fully optimized.

@JornVernee this looks suspicious and seems unrelated to the cold startup issues we discussed before.

I suspect the benchmark might be measuring the java.lang.foreign code needing to be loaded as part of the benchmark. While for JNI, the initialization of all the JNI machinery is included in the startup of the application. Was the running time of the entire application/process measured? Or only from the start of the main method?

Yes, that's correct, it includes all the startup costs in that number.
So as @jayathirthrao observed, the comment "16000 calls are not enough to warmup" may be slightly off the mark since at this time, each 1,000 FFM calls is already roughly as fast as each 1,000 JNI calls.
So we ARE warmed up by then, but I have no idea what would be a normal expectation.
Looking at the numbers above it is roughly around 12,000 that we reach parity for the speed of each incremental call.

@JornVernee
Member

layoutCnt=16000 total=193ms <<< app fully displayed
vs
layoutCnt=16000 total=453ms <<< app fully displayed

It looks strange that 16000 calls are not enough to warmup, and the difference is so large.

I am not a C2 expert, (not even an amateur), I just assume that it takes a lot of calls to be fully optimized.

@JornVernee this looks suspicious and seems unrelated to the cold startup issues we discussed before.

I suspect the benchmark might be measuring the java.lang.foreign code needing to be loaded as part of the benchmark. While for JNI, the initialization of all the JNI machinery is included in the startup of the application. Was the running time of the entire application/process measured? Or only from the start of the main method?

Yes, that's correct, it includes all the startup costs in that number. So as @jayathirthrao observed, the comment "16000 calls are not enough to warmup" may be slightly off the mark since at this time, each 1,000 FFM calls is already roughly as fast as each 1,000 JNI calls So we ARE warmed up by then, but I have no idea what would be a normal expectation. Looking at the numbers above it is roughly around 12,000 that we reach parity for the speed of each incremental call.

C2/fully optimized compilation kicks in after 10,000 calls, and is asynchronous by default (i.e. the rest of the application keeps running). So, 12,000 sounds relatively normal to me.

@openjdk

openjdk bot commented Nov 17, 2023

@prrace This change now passes all automated pre-integration checks.

ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details.

After integration, the commit message for the final commit will be:

8318364: Add an FFM-based implementation of harfbuzz OpenType layout

Reviewed-by: jdv, psadhukhan

You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed.

At the time when this comment was updated there had been 37 new commits pushed to the master branch:

  • fcb4df2: 8320192: SHAKE256 does not work correctly if n >= 137
  • 2b4e991: 8320208: Update Public Suffix List to b5bf572
  • 6b96bb6: 8319777: Zero: Support 8-byte cmpxchg
  • 020c900: 8320052: Zero: Use __atomic built-ins for atomic RMW operations
  • 30d8953: 8275889: Search dialog has redundant scrollbars
  • cee54de: 8319988: Wrong heading for inherited nested classes
  • 32098ce: 8320348: test/jdk/java/io/File/GetAbsolutePath.windowsDriveRelative fails if working directory is not on drive C
  • a2c0fa6: 8320372: test/jdk/sun/security/x509/DNSName/LeadingPeriod.java validity check failed
  • 3aefd1c: 8320234: Merge doclint.Env.AccessKind with tool.AccessKind
  • d6d7bdc: 8319817: Charset constructor should make defensive copy of aliases
  • ... and 27 more: https://git.openjdk.org/jdk/compare/9727f4bdddc071e6f59806087339f345405ab004...master

As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details.

➡️ To integrate this PR with the above commit message to the master branch, type /integrate in a new comment.

@openjdk openjdk bot added the ready Pull request is ready to be integrated label Nov 17, 2023

hb_buffer_destroy (buffer);
hb_font_destroy(hbfont);
if (features != NULL) free(features);
Contributor

I guess the coding style warrants braces and putting the next statement on a separate line...

Contributor Author

fixed


public class LayoutCompatTest {

static String jni = "jni.txt";
Contributor

The test seems to be failing without the fix, with an exception in jtreg:
java.io.FileNotFoundException: jni.txt (The system cannot find the file specified)

It also fails in standalone mode. I was expecting it to fail with RuntimeException "files differ byte offset".

Contributor Author

I'm not sure why it matters what this test does in a JDK without the fix, although logically, since the new system property isn't known, both cases would end up using JNI, and I'd expect the test to pass. I am not sure why you say it should fail.

And I can't reproduce your first problem; I ran this test in jtreg on an unmodified JDK 22 and it passed, as I expected, for the reason given above.

@prrace
Contributor Author

prrace commented Nov 21, 2023

/integrate

@openjdk

openjdk bot commented Nov 21, 2023

Going to push as commit f69e665.
Since your change was applied there have been 54 commits pushed to the master branch:

  • 1c0bd81: 8319124: Update XML Security for Java to 3.0.3
  • 61d81d6: 8317742: ISO Standard Date Format implementation consistency on DateTimeFormatter and String.format
  • c4aba87: 8320272: Make method_entry_barrier address shared
  • 9311749: 8320526: Use title case in building.md
  • 9598ff8: 8315969: compiler/rangechecks/TestRangeCheckHoistingScaledIV.java: make flagless
  • 53eb6f1: 8187591: -Werror turns incubator module warning to an error
  • 570dffb: 8310807: java/nio/channels/DatagramChannel/Connect.java timed out
  • 21a59b9: 8282726: java/net/vthread/BlockingSocketOps.java timeout/hang intermittently on Windows
  • 9232070: 8318480: Obsolete UseCounterDecay and remove CounterDecayMinIntervalLength
  • e055fae: 8264425: Update building.md on non-English locales on Windows
  • ... and 44 more: https://git.openjdk.org/jdk/compare/9727f4bdddc071e6f59806087339f345405ab004...master

Your commit was automatically rebased without conflicts.

@openjdk openjdk bot added the integrated Pull request has been integrated label Nov 21, 2023
@openjdk openjdk bot closed this Nov 21, 2023
@openjdk openjdk bot removed ready Pull request is ready to be integrated rfr Pull request is ready for review labels Nov 21, 2023
@openjdk

openjdk bot commented Nov 21, 2023

@prrace Pushed as commit f69e665.

💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.
