
JDK-8270308: Arena::Amalloc may return misaligned address on 32-bit #4835


Conversation

@tstuefe tstuefe (Member) commented Jul 20, 2021

Hi,

may I please have reviews for this fix? It addresses an issue with arena allocation alignment which can only happen on 32-bit.

The underlying problem is that even though Arenas offer ways to allocate with different alignments (Amalloc and AmallocWords), allocation alignment is not guaranteed. This sequence does not work on 32-bit:

Arena ar;
p1 = ar.AmallocWords(BytesPerWord); // returns a 32-bit aligned address
p2 = ar.Amalloc(BytesPerLong);      // supposed to return a 64-bit aligned address, but does not

This patch is the bare minimum needed to fix the specific problem. I proposed a larger patch before, which redid arena alignment handling, but it was found too complex: #4784.

This fix is limited to Amalloc() and aligns _hwm to 64 bits before allocation. But since chunk boundaries are not guaranteed to be 64-bit aligned either, additional care must be taken not to overflow _max. Since this adds instructions to a hot allocation path, I restricted this code to 32-bit - it is only needed there.

Remaining issues:

  • The Amalloc... variants align the allocation size in an attempt to ensure allocation alignment. This is neither needed nor sufficient; we could just remove that code. I left it untouched to keep the patch minimal. I also left the // ... for 32 bits this should align _hwm as well. comment in Amalloc(), though I think it is wrong.
  • The chunk dimensions are not guaranteed to be 64-bit aligned:
    1. Chunk bottom depends on the chunk start. We currently align the header size, but if the chunk starts at an unaligned address, that is not sufficient. It's not a real issue as long as chunks are C-heap allocated, since malloc alignment is at least 64-bit on our 32-bit platforms. More of a beauty spot, since this is an implicit assumption which we don't really check.
    2. Chunk top, and hence Arena::_max, is not guaranteed to be 64-bit aligned either. It depends on the input chunk length, which is not aligned even for the standard chunk sizes used in ChunkPool. And users can hand in any size they want. Fixing this would require more widespread changes to the ChunkPool logic, so I left it as it is.
    3. Similarly, we cannot just align Arena::_max, because it is set in many places and we would need to cover all of them; it ties in with Arena rollback as well.

Because of (2) and (3), I had to add the overflow check to Amalloc() - any other way of solving this would result in more widespread changes.


Tests:

  • I tested the provided gtest on both 64-bit and 32-bit platforms (with and without the fix; without it, the test shows the expected problem)
  • GHA
  • Tests are scheduled at SAP.

Progress

  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue
  • Change must be properly reviewed

Issue

  • JDK-8270308: Arena::Amalloc may return misaligned address on 32-bit

Reviewing

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.java.net/jdk pull/4835/head:pull/4835
$ git checkout pull/4835

Update a local copy of the PR:
$ git checkout pull/4835
$ git pull https://git.openjdk.java.net/jdk pull/4835/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 4835

View PR using the GUI difftool:
$ git pr show -t 4835

Using diff file

Download this PR as a diff file:
https://git.openjdk.java.net/jdk/pull/4835.diff

@bridgekeeper (bot) commented Jul 20, 2021

👋 Welcome back stuefe! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.

@openjdk openjdk bot added the rfr Pull request is ready for review label Jul 20, 2021
@openjdk (bot) commented Jul 20, 2021

@tstuefe The following label will be automatically applied to this pull request:

  • hotspot

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added the hotspot hotspot-dev@openjdk.org label Jul 20, 2021
@mlbridge (bot) commented Jul 20, 2021

Webrevs

@coleenp coleenp (Contributor) left a comment

I think this looks good and minimal. I'll comment in the other PR. Maybe I pushed back too hard, but these changes are somewhat difficult to review. It was an impressive amount of work, and maybe we are wasting a lot of space in ResourceAreas, since many are used to print Symbol->as_C_string(). But even so, they're mostly short-lived.
Thank you for doing this change.

#ifndef LP64 // Since this is a hot path, do this only if really needed.
_hwm = ARENA_ALIGN(_hwm);
_hwm = MIN2(_hwm, _max); // _max is not guaranteed to be 64 bit aligned.
#endif // !LP64
Contributor

This is the change I was thinking of with the conditional for LP64 as well so that 64 bit doesn't pay for the alignment test.

Contributor

Oh yes - it makes sense that this would not have worked, since UseMallocOnly assumes that the arena is filled with pointers to the malloc'd memory.

@tstuefe tstuefe (Member Author) commented Jul 21, 2021

I think this looks good and minimal. I'll comment in the other PR. Maybe I pushed back too hard, but these changes are somewhat difficult to review. It was an impressive amount of work, and maybe we are wasting a lot of space in ResourceAreas, since many are used to print Symbol->as_C_string(). But even so, they're mostly short-lived.
Thank you for doing this change.

I think it's better this way. This fix is easy to backport, and maybe we can do a larger cleanup in later patches. Sorry for running ahead with this one - Arenas have been a cause of a lot of frustration in our group, especially before OpenJDK (when we could not give any changes back).

Realistically, are you even interested in cleanups in this area? Review time is limited, and there are many areas needing attention.

@tstuefe tstuefe (Member Author) commented Jul 21, 2021

We now crash on 32-bit with +UseMallocOnly. I'm investigating. This code is fragile :(

@tstuefe tstuefe (Member Author) commented Jul 21, 2021

Fixed a bug on 32-bit with +UseMallocOnly. The alignment must never happen in +UseMallocOnly mode, since there the arena is used to keep a packed pointer array and gaps are not helpful.

Also added a cautionary comment about mixing Amalloc.. calls on arenas that pack tightly.

Should we ever clean this up, there are some possible improvements:

  • For one, if we really want to keep +UseMallocOnly, the implementation could be simplified by separating it from the arena and just holding a GrowableArray of the malloc'd pointers instead. The problem with keeping these pointers in the arena is that it makes iterating them really awkward, since you need to know the internal chunk structure; iterating a GrowableArray is straightforward. Also, +UseMallocOnly can be used to rule out bugs in arena handling, but for that it's not helpful to still use the arena.
  • We could have an optional "alignment" as a const property of the arena and, for arenas that pack their data tightly, assert that they don't mix allocations with differing alignments.


@coleenp coleenp (Contributor) commented Jul 21, 2021

I don't know if I can speak for everybody, but I think cleanups in this area would be welcome if they're incremental and we can plan time to review them. This hasn't been an area of pain for us (that I know of), but if it has been for SAP, maybe cleanups like allowing packing in arenas have a higher priority.
Having a GrowableArray for UseMallocOnly might make sense. I was somewhat surprised by this implementation last week when I first looked at it closely. Or maybe UseMallocOnly could be removed in favor of memory guards in arenas in debug mode, since UseMallocOnly is sparingly tested. That would be my preference, anyway.
Thanks for taking this on and your thoughts on this.

@tstuefe tstuefe (Member Author) commented Jul 21, 2021

I don't know if I can speak for everybody, but I think cleanups in this area would be welcome if they're incremental and we can plan time to review them. This hasn't been an area of pain for us (that I know of), but if it has been for SAP, maybe cleanups like allowing packing in arenas have a higher priority.

I'll take a look (with medium priority) at whether there are any memory savings possible, or whether this is "just" cleanup. If it's the former, more effort may be worthwhile.

Having a GrowableArray for UseMallocOnly might make sense. I was somewhat surprised by this implementation last week when I first looked at it closely. Or maybe UseMallocOnly could be removed in favor of memory guards in arenas in debug mode, since UseMallocOnly is sparingly tested. That would be my preference, anyway.
Thanks for taking this on and your thoughts on this.

Thanks for the review,

Thomas

// like HandleAreas. Those areas should never mix Amalloc.. calls with differing alignment.
#ifndef LP64 // Since this is a hot path, and on 64-bit Amalloc and AmallocWords are identical, restrict this alignment to 32-bit.
_hwm = ARENA_ALIGN(_hwm);
_hwm = MIN2(_hwm, _max); // _max is not guaranteed to be 64 bit aligned.


This may or may not work, depending on what the behavior for x == 0 is supposed to be. The current behavior would just return _hwm without advancing it. But if _max is not 64bit aligned and _hwm == _max then these two steps will end up with _hwm == _max again, and a not-64bit aligned result will ensue. I think better is, for 32bit platforms, allocate an extra word so _max can be guaranteed to be 64bit aligned.


Or guarantee the chunk size is 64bit aligned and std::max_align_t is also (at least) 64bit aligned (which it almost certainly is), so that the result of malloc will always be (at least) 64bit aligned.

Member Author

I strongly agree that _max and the chunk boundaries should be 64-bit aligned. But as I wrote, that would cause more widespread changes; I would just duplicate the work I did in my first proposal, #4784, which already exists and does everything you requested. Maybe take a look at that? If you like it, I propose we push this minimal patch through and later use my first PR as the base for a better cleanup.

The current behavior of Amalloc(0) is non-standard for malloc: it returns a non-unique, non-NULL address (the next allocation returns the same address). When I worked on the first proposal I tried to fix this too. I first tried returning NULL, which does not work - callers assume that means OOM. Then I tried returning a unique pointer, but I did not like the memory waste. The true fix would be to disallow size == 0 and fix callers not to allocate 0 bytes. But I believe memory allocated with size == 0 is never used, since it would be overwritten by the next allocation.

I'll modify the patch to turn size == 0 into size = 1, for 32-bit only. Let's hope that does not blow up arena sizes too much.

About std::max_align_t - can we use that? I have hard-coded the malloc alignment in a number of places.

Contributor

Can you just assert that size > 0 for Amalloc?

Member Author

Unfortunately not, since zero-sized allocation actually happens. I would have to hunt down the callers that do this. That may be worthwhile, but it would be a larger change.


The latest version doesn't seem any better than the previous. It could still return a misaligned value in the same circumstance, just more slowly because of the additional branch.

Under what circumstances can _hwm not be 64bit aligned? What needs to be done to prevent it? It looks to me that if Arena::grow were to align its size argument, then it all falls out, assuming std::max_align_t is also appropriately aligned. Add asserts in Chunk allocation. Add a static assert that std::max_align_t is appropriately aligned and leave it to the person porting to the very weird platform to figure out what to do if it fails. And I think that's sufficient.

(While use of alignof and std::max_align_t are not currently discussed in the HotSpot Style Guide, I would have no objection to their use when the alternative is something more complicated and less robust. And feel free to propose corresponding post facto changes to style guide.)

@tstuefe tstuefe (Member Author) commented Jul 24, 2021

@coleenp, @kimbarrett is this current version okay for you?

@kimbarrett kimbarrett left a comment

I think this still isn't right.

@tstuefe tstuefe (Member Author) commented Jul 24, 2021

I think this still isn't right.
...
The latest version doesn't seem any better than the previous. It could still return a misaligned value in the same circumstance, just more slowly because of the additional branch.

How? I must be blind.

  • If size > 0: Amalloc aligns _hwm to 64-bit and returns a 64-bit aligned address -> ok
  • if size == 0: Amalloc does not change the arena, but returns a potentially unaligned pointer - but that is fine too since that pointer cannot be used for storing anything. Certainly nothing needing 64-bit alignment.

What am I overlooking here?

Under what circumstances can _hwm not be 64bit aligned? What needs to be done to prevent it?

_hwm is misaligned by the caller mixing Amalloc and AmallocWords on 32-bit.

AmallocWords advances _hwm by word size. After that, _hwm is no longer 64-bit aligned. A subsequent Amalloc() call needs to return a 64-bit aligned pointer, so it needs to align _hwm up. A subsequent AmallocWords() call, however, would be fine with _hwm as it is - so the alignment should really be done in Amalloc().

We cannot just align the allocation size to 64-bit unconditionally, since on an Arena that uses repeated calls to AmallocWords() (e.g., HandleArea) this would create allocation padding and waste memory, or even crash if the arena cannot handle padding.

It looks to me that if Arena::grow were to align its size argument, then it all falls out, assuming std::max_align_t is also appropriately aligned.

No, since this only ensures the chunk boundaries are properly aligned. The first call to AmallocWords - on a 32-bit platform - will misalign _hwm for potential follow-up calls to Amalloc().

In short, when mixing allocation calls with different alignment requirements, the calls with the stronger alignment guarantee must do the aligning. Alternatively, you ensure that a given Arena only ever uses one alignment - I proposed that alternative in my first patch: make alignment a property of the Arena and only allow allocations with that alignment.

@kimbarrett
Sorry about the typo. I meant to ask: how can _max not be 64-bit aligned? That is what the comment claims can happen, and that is where things can go wrong. If grow() were to align the allocation size, then I don't think that can happen, and all would be good.

@tstuefe tstuefe (Member Author) commented Jul 24, 2021

Sorry about the typo. I meant to ask: how can _max not be 64-bit aligned? That is what the comment claims can happen, and that is where things can go wrong. If grow() were to align the allocation size, then I don't think that can happen, and all would be good.

Oh, okay. I see what you mean. Arena::_max is currently not 64-bit aligned:

Arena::_max depends on the chunk length (the payload length). The chunk length is either handed in as the init_size parameter to Arena::Arena(MEMFLAGS memflag, size_t init_size), which can be anything - but we could align it without problems. We already align it to word size; we could align to 64-bit instead.

Or the chunk length is one of the default chunk lengths - Chunk::size, tinysize etc. Those are not aligned correctly, because the slack offset is not aligned correctly: it's 20. If we make it 24, it's 64-bit aligned, and all default chunk lengths are 64-bit aligned.

I think both occurrences can be aligned to 64-bit; then we could remove the overflow check. I have to check. It would certainly be cleaner.

@tstuefe tstuefe (Member Author) commented Jul 24, 2021

In this new version, the chunk payload area - and therefore, Arena::_max - is aligned to 64-bit. In order to do this we need to make sure the chunk length given when creating a chunk is 64-bit aligned. There are three places to fix:

  • the constants for the cached chunk sizes
  • if a caller specifies a custom chunk size as initial size when creating an arena
  • when the chunk grows
In all three cases we now align to 64-bit. This works seamlessly.

Tested this new version on x64 and x86 Ubuntu; it works. I also tested it with my new gtests (see #4898); no problems found.

@kimbarrett kimbarrett left a comment

This looks much better. But I think you missed a place.

If AmallocWords is called with a non-64bit-aligned value (it could be only 32bit aligned on a 32bit platform), and it calls grow(), grow will call the chunk allocator with that length, which fails the precondition because it's not BytesPerLong aligned. I think grow() needs to call ARENA_ALIGN on the length on 32bit platforms.

That also suggests an additional test.

I like the new comment for Chunk::operator new.

@tstuefe tstuefe (Member Author) commented Jul 25, 2021

This looks much better. But I think you missed a place.

If AmallocWords is called with a non-64bit-aligned value (it could be only 32bit aligned on a 32bit platform), and it calls grow(), grow will call the chunk allocator with that length, which fails the precondition because it's not BytesPerLong aligned. I think grow() needs to call ARENA_ALIGN on the length on 32bit platforms.

You are right. I even spelled it out in my last comment, but then did not fix it.

I added a test, saw that it fires on 32-bit, then fixed Arena::grow(). I also limited all new tests to 32-bit to save some cycles on 64-bit.

One remaining worry I have: when mixing Amalloc and AmallocWords we now align correctly, which is fine. But if the Arena cannot handle gaps - e.g. HandleArea - difficult-to-analyze crashes can happen on 32-bit. The root cause would be an author mixing Amalloc and AmallocWords on an Arena that should stick to one alignment only.

My first patch filled alignment gaps in debug builds with a pattern, to trip up the analyzer. What do you think - should I do that here too?

@kimbarrett kimbarrett left a comment

Looks good.

@tstuefe tstuefe (Member Author) commented Jul 25, 2021

Thanks, @kimbarrett !

@coleenp are you ok with the latest version too?

@coleenp coleenp (Contributor) left a comment

These are small comments but maybe important.

@@ -52,11 +52,12 @@ class Chunk: CHeapObj<mtChunk> {
enum {
// default sizes; make them slightly smaller than 2**k to guard against
// buddy-system style malloc implementations
// Note: please keep these constants 64-bit aligned.
Contributor

Can you add a static_assert(is_aligned(slack, ARENA_ALIGNMENT))?

Member Author

Sure!

// | Chunk |a | Payload |
// | |p | |
// +-----------+--+--------------------------------------------+
// A B C D
Contributor

Hooray for ascii art!

@@ -0,0 +1,66 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
Contributor

Did you want an SAP copyright?

Member Author

I could add one. Probably should, thanks for noticing.


#ifndef LP64
// These tests below are about alignment issues when mixing Amalloc and AmallocWords.
// Since on 64-bit these APIs offer the same alignment, they only matter for 32-bit.
Contributor

Thank you so much for testing this on 32 bit platforms.

Member Author

Sure. I run Ubuntu 20.04; with Debian multi-arch it's beautifully easy to have 32-bit and 64-bit compilers side by side.

void* p1 = ar.AmallocWords(BytesPerWord);
void* p2 = ar.Amalloc(BytesPerLong);
ASSERT_TRUE(is_aligned(p1, BytesPerWord));
ASSERT_TRUE(is_aligned(p2, BytesPerLong));
Contributor

Should BytesPerLong in this test be ARENA_AMALLOC_ALIGNMENT?

Member Author

Yes, I can do that.

@@ -188,6 +205,7 @@ void* Chunk::operator new (size_t requested_size, AllocFailType alloc_failmode,
if (p == NULL && alloc_failmode == AllocFailStrategy::EXIT_OOM) {
vm_exit_out_of_memory(bytes, OOM_MALLOC_ERROR, "Chunk::new");
}
assert(is_aligned(p, BytesPerLong), "Chunk start address not malloc aligned?");
Contributor

Should this be ARENA_AMALLOC_ALIGNMENT, in case we have to change it back to 128 bits for some reason?

Member Author

Yes, I can do that too. I should try setting it to 128 bits and test it.

With 128 bits we may run into alignment problems on 32-bit platforms, since we rely on ARENA_AMALLOC_ALIGNMENT being <= malloc alignment, and on 32-bit I believe malloc alignment is just 64 bits. Not a showstopper, but it may require a bit more care.

Contributor

I was wondering if it would make an interesting test mode.

@coleenp coleenp (Contributor) commented Jul 26, 2021

My first patch filled alignment gaps in debug with a pattern, to trip off the analyzer. What do you think, should I do this here too?
Maybe. Aren't arenas freed with a zap pattern, so you'd find bugs that way? Anyway, it might be better as a separate RFE and a new round of testing.
Thanks for default aligning the chunks though because this is what one would expect without looking at the code more closely.

@tstuefe tstuefe (Member Author) commented Jul 26, 2021

My first patch filled alignment gaps in debug with a pattern, to trip off the analyzer. What do you think, should I do this here too?
Maybe. Aren't arenas freed with a zap pattern, so you'd find bugs that way?

Yes, you are right; maybe that's sufficient. I thought the zap pattern would be overwritten by the first allocation wave (allocate a bunch, reset-to-mark, allocate again), but I see reset-to-mark - ultimately Chunk::chop() - also zaps, so this may be fine.

Anyway, it might be better as a separate RFE and new round of testing.

Yes. But the default pattern may be fine already.

Thanks for default aligning the chunks though because this is what one would expect without looking at the code more closely.

Sure!

@tstuefe tstuefe (Member Author) commented Jul 26, 2021

New version:

  • added static asserts to test that the default chunk sizes are now properly aligned (since Chunk::slack is used to compute those sizes, this also tests Chunk::slack)
  • changed a number of literal checks for 64-bit aligned-ness to ARENA_AMALLOC_ALIGNMENT
  • added SAP copyright

@coleenp coleenp (Contributor) left a comment

Looks great. Thank you!

@kimbarrett kimbarrett left a comment

Looks even better after dealing with Coleen's recent comments.

@tstuefe tstuefe (Member Author) commented Jul 27, 2021

Thanks Coleen & Kim!

/integrate

@openjdk (bot) commented Jul 27, 2021

@tstuefe This PR has not yet been marked as ready for integration.

@tstuefe tstuefe changed the title JDK-8270308: Amalloc aligns size but not return value (take 2) JDK-8270308: Arena::Amalloc() may return misaligned address on 32-bit Jul 27, 2021
@tstuefe tstuefe changed the title JDK-8270308: Arena::Amalloc() may return misaligned address on 32-bit JDK-8270308: Arena::Amalloc may return misaligned address on 32-bit Jul 27, 2021
@tstuefe tstuefe (Member Author) commented Jul 27, 2021

/integrate

@openjdk (bot) commented Jul 27, 2021

@tstuefe This change now passes all automated pre-integration checks.

ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details.

After integration, the commit message for the final commit will be:

8270308: Arena::Amalloc may return misaligned address on 32-bit

Reviewed-by: coleenp, kbarrett

You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed.

At the time when this comment was updated there had been 114 new commits pushed to the master branch:

  • fde1831: 8212961: [TESTBUG] vmTestbase/nsk/stress/jni/ native code cleanup
  • bb508e1: 8269753: Misplaced caret in PatternSyntaxException's detail message
  • c3d8e92: 8190753: (zipfs): Accessing a large entry (> 2^31 bytes) leads to a negative initial size for ByteArrayOutputStream
  • eb6da88: Merge
  • b76a838: 8269150: UnicodeReader not translating \u005c\u005d to \]
  • 7ddabbf: 8271175: runtime/jni/FindClassUtf8/FindClassUtf8.java doesn't have to be run in othervm
  • 3c27f91: 8271222: two runtime/Monitor tests don't check exit code
  • 049b2ad: 8015886: java/awt/Focus/DeiconifiedFrameLoosesFocus/DeiconifiedFrameLoosesFocus.java sometimes failed on ubuntu
  • fcc7d59: 8269342: CICrashAt=1 does not always catch first Java method
  • 8785737: 8269616: serviceability/dcmd/framework/VMVersionTest.java fails with Address already in use error
  • ... and 104 more: https://git.openjdk.java.net/jdk/compare/f8ec3b68f3e8f86eacf5c0de06c91827e88c7b30...master

As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details.

➡️ To integrate this PR with the above commit message to the master branch, type /integrate in a new comment.

@openjdk openjdk bot added the ready Pull request is ready to be integrated label Jul 27, 2021
@openjdk (bot) commented Jul 27, 2021

Going to push as commit 45d277f.
Since your change was applied there have been 114 commits pushed to the master branch (listed above).

Your commit was automatically rebased without conflicts.

@openjdk openjdk bot closed this Jul 27, 2021
@openjdk openjdk bot added integrated Pull request has been integrated and removed ready Pull request is ready to be integrated rfr Pull request is ready for review labels Jul 27, 2021
@openjdk (bot) commented Jul 27, 2021

@tstuefe Pushed as commit 45d277f.

💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.

@tstuefe tstuefe deleted the JDK-8270308-Amalloc-aligns-size-but-not-return-value-take-2 branch August 23, 2021 12:32
Labels
hotspot hotspot-dev@openjdk.org integrated Pull request has been integrated