
JDK-8322943: runtime/CompressedOops/CompressedClassPointers.java fails on AIX #17708

Closed
wants to merge 2 commits

Conversation

@JoKern65 (Contributor) commented Feb 5, 2024

Even after recent fixes like
https://bugs.openjdk.org/browse/JDK-8305765,
the test runtime/CompressedOops/CompressedClassPointers.java still fails on AIX.

The error results from the fact that on AIX the shmat() allocation granularity is 256 MB instead of the standard page size (4 KB or 64 KB).

As a solution we introduce a new method, os::vm_shm_allocation_granularity(), which on all platforms except AIX returns the same value as os::vm_allocation_granularity(); on AIX it returns 256 MB in the appropriate cases.

This new getter replaces os::vm_allocation_granularity() at all affected call sites.


Progress

  • Change must be properly reviewed (1 review required, with at least 1 Reviewer)
  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue

Issue

  • JDK-8322943: runtime/CompressedOops/CompressedClassPointers.java fails on AIX (Bug - P4)

Reviewing

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/17708/head:pull/17708
$ git checkout pull/17708

Update a local copy of the PR:
$ git checkout pull/17708
$ git pull https://git.openjdk.org/jdk.git pull/17708/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 17708

View PR using the GUI difftool:
$ git pr show -t 17708

Using diff file

Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/17708.diff

Webrev

Link to Webrev Comment

@bridgekeeper (bot) commented Feb 5, 2024

👋 Welcome back jkern! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.

@openjdk openjdk bot added the rfr Pull request is ready for review label Feb 5, 2024
@openjdk (bot) commented Feb 5, 2024

@JoKern65 The following label will be automatically applied to this pull request:

  • hotspot-runtime

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added the hotspot-runtime hotspot-runtime-dev@openjdk.org label Feb 5, 2024
@mlbridge (bot) commented Feb 5, 2024

Webrevs

@tstuefe (Member) commented Feb 14, 2024

At a first glance, this seems like a really big hammer for a really small and exotic nail.

Can you explain what the new value "shm_allocation_granularity" is supposed to describe? And how it differs from vm_allocation_granularity?

@GoeLin (Member) commented Feb 15, 2024

Hi,
I think this should not be named shm_allocation_granularity, or at least the function used in os.cpp should not have that name.
(A function shm_allocation_granularity is useful to deliver the base platform-dependent value.)
Most platforms do not use shmat-based memory allocation here. It should probably be pd_attempt_reserve_memory_allocation_granularity(). You could even pass the size here, so that the check against Use64KPagesThreshold can be used in it on AIX, making it more precise.

Thanks for fixing the randomization issue.
If tests require unscaled oops, they should either set Use64KPagesThreshold very high on AIX or disable RandomizeClassSpaceLocation. Background: on AIX, randomization fails because, with the shmat alignment, fewer than 16 attach points are identified.

@tstuefe (Member) commented Feb 15, 2024

I think this should not be named shm_allocation_granularity,

I think there is confusion on several issues. Let's wait for Joachim's explanation.

@JoKern65 (Contributor, Author) commented

While almost all platforms allow attaching shared memory (via mmap() or shmat()) at _SC_PAGE_SIZE boundaries (4K or 64K), AIX only allows shmat() attachments at 256 MiB boundaries. If shmat() on AIX is called with a wish address that is not on a 256 MiB boundary, it fails.
vm_allocation_granularity() is initialized on all platforms with the page size (alias os::vm_page_size()), which is 4K or 64K.
vm_allocation_granularity() is used in several places to derive the rules for computing the wish address of an allocation. On AIX this gets us into trouble, because the result often misses the 256 MiB boundary.
So I identified (hopefully) all places where vm_allocation_granularity() is used to ultimately compute the wish address for a shared memory attach.
The goal is to replace all those occurrences of vm_allocation_granularity() with a new function, vm_shm_allocation_granularity(), with the following properties:
On all platforms except AIX, vm_shm_allocation_granularity() returns the same value as vm_allocation_granularity(), so the change is a NOP for those platforms.
On AIX, vm_shm_allocation_granularity() returns 256 MiB when shmat() will be called, and os::vm_page_size() when mmap() will be called.

@tstuefe (Member) commented Feb 15, 2024


Okay, thank you for confirming your intent. This is as I suspected from your initial change.

First off, please rename the issue to make it obvious that this affects not just AIX. I suggest something like "Introduce os::vm_shm_allocation_granularity" or somesuch.

But what you attempt to do already exists: this is exactly what vm_allocation_granularity() does: returning the alignment requirement for attach addresses when mapping memory using the os::reserve_... APIs.

And AIX (in shmget mode) is not the only platform with this restriction. We have that on Windows too, where we can only attach at multiples of what MS calls "Allocation Granularity" (I think 64KB). Note that the name is a misnomer; it should more precisely be called "Virtual Memory Address Attach Granularity", since it does not affect the size of the allocation, only the attach point alignment.

The problem in hotspot is that allocation granularity is often misunderstood to be a granularity that affects the reservation size as well. So people use it (even I, accidentally) to align up allocation sizes when it really only affects allocation address alignments.

E.g. you can happily allocate a SystemV shm segment of one page (4K), but you will only be able to attach it to SHMLBA aligned addresses. Same on Windows.

For a much more detailed explanation, please see https://bugs.openjdk.org/browse/JDK-8253683 - this mis-use of allocation_granularity is a long-standing bug in hotspot.


Bottom line: I am against introducing yet another system value just because the one that is supposed to do that job is misused. I would much rather see the misuse of allocation granularity cleaned up.

@tstuefe (Member) commented Feb 15, 2024

BTW, just a zoomed-back reminder:

all of this complexity is only because we don't use mmap on AIX for os::reserve_memory.

We don't use it, because mmap cannot be used with 64K pages. Or can it? We wrote the code originally 20 (?) years ago when the only way to get 64K pages was via shmget. Maybe that changed. It would be good if someone could check if this is still the case. Because using shmget instead of mmap causes a long tail of follow-up problems. mmap is so much easier.

Just a thought. Maybe we could throw away all the shmget handling nowadays, who knows.

Cheers, Thomas

@GoeLin (Member) commented Feb 15, 2024

Hi Thomas,

But what you attempt to do already exists: this is exactly what vm_allocation_granularity() does:

Well, that's not really true. As I understand, on AIX, there are both alignments (4k, 256M) at the same time, depending on whether shmat or mmap is used. And both are used.
Please see

if (bytes >= Use64KPagesThreshold) {

@tstuefe (Member) commented Feb 15, 2024


Yes, I know.

Please read https://bugs.openjdk.org/browse/JDK-8253683 for details. I described the issue and the confusion surrounding allocation granularity. I also touched on using different granularities for different memory regions. Please also read the comment from today.

We have two ways to deal with this:

We can either implement a function that returns granularity as a function of memory region. That would work with mixed alignment requirements. However, that is really costly, invasive, complex, and really not needed because:

  • on Linux, we don't use shmget anymore. We removed that; we only allocate huge pages with mmap now, so it's mmap across the board.
  • on AIX, Use64KPagesThreshold has always been 0 by default, since the inception of 64K paged memory management in hotspot. So by default we were always running either with mmap or with shmat underlying os::reserve_memory; we never mixed those APIs. And I find it very unlikely that customers ever used that switch. In fact, I would suggest making that switch const, or removing it completely. That would also simplify some coding.
  • on all other platforms, we don't have that problem.

Therefore, I am for a simpler solution: a single, platform-dependent value that never changes. And exactly that is what we already have: os::vm_allocation_granularity(). There is no need for a new value that means the same thing.

@JoKern65 (Contributor, Author) commented

It would be good if someone could check if this is still the case. Because using shmget instead of mmap causes a long tail of follow-up problems. mmap is so much easier.

I wrote a little test program:

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/vminfo.h>  /* AIX-specific: vmgetinfo(), struct vm_page_info */

    int main(void) {
      int page_sz = sysconf(_SC_PAGE_SIZE);
      printf("page size for mmap: %d.\n", page_sz);  /* Output: 4096 */

      void* addr_mmap = mmap(NULL, 100000000, PROT_READ | PROT_WRITE,
                             MAP_ANONYMOUS | MAP_SHARED, -1, 0);
      struct vm_page_info pi;
      pi.addr = (uint64_t)addr_mmap;
      if (vmgetinfo(&pi, VM_PAGE_INFO, sizeof(pi)) == 0) {
        printf("real page size for mmap: %llu.\n", pi.pagesize);  /* Output: 4096 */
      } else {
        printf("no pagesize available.\n");
      }
      munmap(addr_mmap, 100000000);
      return 0;
    }

and ran this little program with envvar
LDR_CNTRL=DATAPSIZE=64K@TEXTPSIZE=64K@STACKPSIZE=64K@SHMPSIZE=64K

Is this sufficient to prove that mmap is still only using 4K?

@tstuefe (Member) commented Feb 16, 2024

Is this sufficient to prove that mmap is still only using 4K?

No clue. Maybe there is a different way. Some fancy madvise option? Ask IBM?

Astonishing that they have not yet provided large page support for mmap after 20 years.

@JoKern65 (Contributor, Author) commented

No clue. Maybe there is a different way. Some fancy madvise option? Ask IBM?

I asked IBM: 64K pages for mmap are available starting with AIX 7.3, and I verified this. Unfortunately, our current minimum release is AIX 7.2.5.7.

@tstuefe (Member) commented Feb 23, 2024


I asked IBM. 64K Pages for mmap are available with AIX 7.3. I proved this. Unfortunately our current minimum release is AIX 7.2.5.7

Does that current minimum release need 64K pages? How many customers is that?

Remember, you can still use the JVM with 4K pages (-XX:-Use64KPages). It's not as if the VM is unusable then.

@JoKern65 (Contributor, Author) commented


Does that current minimum release need 64K pages? How many customers are that?

Remember, you still can use the JVM with 4K pages (-XX:-Use64KPages). Its not that the VM is unusable then.

Currently we have zero customers using SapMachine 21, but a new product, which customers will be required to install on all their app servers, is coming soon, and it is based on SapMachine 21. It has already been announced to SAP customers using AIX app servers that the minimum OS release for this product will be AIX 7.2.5.
I do not want to burden them with a performance regression from falling back to 4K pages.

So I would like to proceed as sketched above: go through the code, check the correct usage of os::vm_allocation_granularity(), and replace it with os::vm_page_size() wherever the usage does not concern the granularity of the allocation address.

@tstuefe: Should I close this PR and open a new one for the new approach, or is it better to revert my previous changes and start again, keeping this PR?

@tstuefe (Member) commented Feb 26, 2024

@JoKern65 I would look for a simpler solution as a start. Fixing up usages of os::vm_allocation_granularity() will be a slog (Kudos to you though for being willing to take it on). Can we just fix the test in places for AIX?

Note that I have never been a fan of this particular jtreg test anyway. It conflates two different things:

  • (A) the ability of the operating system and the JVM reservation code to allocate in low-address regions
  • (B) whether or not we then make the right decisions wrt narrow Klass decoding setup

(A) is very dependent on the Operating System and somewhat random (ASLR).

We also have now CompressedCPUSpecificClassSpaceReservation (which tests A) and CompressedClassPointersEncodingScheme (which tests B, but is not fleshed out yet).

@GoeLin (Member) commented Feb 26, 2024

I don't think a local test fix makes sense. After all, it is a real issue that os::attempt_reserve_memory_between() uses 4K alignment when we try to allocate 256M shmat memory.
We could do a temporary #ifdef AIX solution in that function.

@tstuefe (Member) commented Feb 27, 2024

I don't think a local test fix makes sense. After all it is a real issue that os::attempt_reserve_memory_between() is using 4K alignment when we try to allocate 256M shmat memory. We could do a temporary #ifdef AIX solution in that function.

That is a good point. And a good compromise.

@JoKern65 can you try this:

diff --git a/src/hotspot/share/runtime/os.cpp b/src/hotspot/share/runtime/os.cpp
index 5d6c1fa69ca..34a708e1cdc 100644
--- a/src/hotspot/share/runtime/os.cpp
+++ b/src/hotspot/share/runtime/os.cpp
@@ -1892,7 +1892,15 @@ char* os::attempt_reserve_memory_between(char* min, char* max, size_t bytes, siz
   char* const absolute_max = (char*)(NOT_LP64(G * 3) LP64_ONLY(G * 128 * 1024));
   char* const absolute_min = (char*) os::vm_min_address();
 
-  const size_t alignment_adjusted = MAX2(alignment, os::vm_allocation_granularity());
+  const size_t system_allocation_granularity =
+#ifdef AIX
+  // AIX is the only platform that uses System V shm for reserving virtual memory. As long as we
+  // have not fixed os::vm_allocation_granularity(), hard-code allocation granularity of SHMLBA here.
+      SHMLBA;
+#else
+      os::vm_allocation_granularity();
+#endif
+  const size_t alignment_adjusted = MAX2(alignment, system_allocation_granularity);

@JoKern65 (Contributor, Author) commented


Yes, I will try that, but one question: is the following code snippet an example of the incorrect use of vm_allocation_granularity, or did I understand something wrong?

ReservedSpace::ReservedSpace(char* base, size_t size, size_t alignment, size_t page_size,
                             bool special, bool executable) : _fd_for_heap(-1) {
  assert((size % os::vm_allocation_granularity()) == 0,
         "size not allocation aligned");
  initialize_members(base, size, alignment, page_size, special, executable);
}

@tstuefe (Member) commented Feb 27, 2024

Yes, I will try that, but one question: is the following code snippet an example of the incorrect use of vm_allocation_granularity, or did I understand something wrong?

ReservedSpace::ReservedSpace(char* base, size_t size, size_t alignment, size_t page_size,
                             bool special, bool executable) : _fd_for_heap(-1) {
  assert((size % os::vm_allocation_granularity()) == 0,
         "size not allocation aligned");
  initialize_members(base, size, alignment, page_size, special, executable);
}

I think so, yes.

@JoKern65 (Contributor, Author) commented Mar 4, 2024

Because the proposal of introducing a new method os::vm_shm_allocation_granularity() in the shared HotSpot code was rejected, and the alternative solution of encapsulating the difference in #ifdef AIX brackets works instead, I am closing this PR and have opened a successor using the #ifdef AIX approach (PR 18105).

@JoKern65 JoKern65 closed this Mar 4, 2024