create different lowerings for different use types #224
Conversation
address.replaceAtUsages(lowered);
GraphUtil.killWithUnusedFloatingInputs(address);
// replace original with lowered at this usage only
// n.b. lowered is added unique so repeat lowerings will elide
is there some other way to phrase this? I'm not sure what you mean by "repeat lowering with elide".
How about "n.b. since lowered was added to the graph with Graph.unique, all lowerings of an address node for the same datum type will common to the same node".
Sure, I'll rework the patch to use that comment.
Would it be possible to add a test to …
It actually says "repeat lowerings /will/ elide". It could equally well say "repeat lowerings will be elided". What it indicates is this: assume you lower an original node several times (because it is used several times), producing a new lowered address each time. Uses with different transfer datum types will produce lowered addresses that are not equivalent. However, uses with the same transfer datum type will produce what is, in essence, an identical lowered address. The call to unique when adding it to the graph should ensure that the first such lowering is reused and later equivalent copies are dropped, i.e. repeat lowerings are elided in favour of that first lowering. Personally, I thought the original comment was clear and succinct but, no doubt, that varies according to the standard of beauty of the beholding eye (or some such). If you can suggest an improvement you would be happy with that avoids the verbosity of the full explanation, I'd be equally happy to accept it.
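In other words (an illustrative sketch only, in which lowerForKind is a hypothetical helper standing in for the per-use lowering), Graph.unique hands back an existing structurally equal node instead of adding a duplicate:

AddressNode shortAddr1 = graph.unique(lowerForKind(address, JavaKind.Short)); // hypothetical helper
AddressNode shortAddr2 = graph.unique(lowerForKind(address, JavaKind.Short));
assert shortAddr1 == shortAddr2;   // the repeat lowering is elided
AddressNode byteAddr = graph.unique(lowerForKind(address, JavaKind.Byte));
assert byteAddr != shortAddr1;     // a different transfer kind gives a distinct lowering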
Hmm, I think it should have been caught by the existing unaligned test.
I see 23 failing unit tests on AArch64 with the latest build tree. The number is the same whether or not the patch in this PR is included. I'm really not sure why the ByteBuffer test doesn't fail without the patch.
Ah, I think I understand what is happening now. I believe the ByteBufferTest unit test passes for two reasons. Firstly, Unsafe.getIntUnaligned is not inlined into the test method that calls it. The problem I saw running netbeans was caused by the compilation of the code inlined from getIntUnaligned. Secondly, the buffer needs to be a DirectByteBuffer. In the netbeans case it was backed by memory mapped from a binary file.
Any chance you want to try and extend/modify the ByteBufferTest to address those 2 issues? White box tests are an excellent way to prevent regressions.
Yes, I can do that. However, I would prefer to do that in a follow-up patch. It would help to have this patch in now to make it easier to investigate the unit tests which appear to be failing in response to Aleksander's changes that are now in HEAD.
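For reference, such a test might look roughly like the outline below. This is only a sketch, assuming the two points noted above: the buffer must be a DirectByteBuffer backed by a memory-mapped file, and the reading method must actually get compiled so the inlined Unsafe.getIntUnaligned code is what runs. The file path, sizes and iteration count are made up.

import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Outline only: read an int at an odd offset from a direct, memory-mapped
// buffer, which is the pattern that misbehaved under the old lowering.
public class MappedOddOffsetRead {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("/tmp/odd-offset-test.bin", "rw")) {
            raf.setLength(64);
            ByteBuffer mapped = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 64);
            mapped.putInt(1, 0x11223344);          // int stored at an odd offset
            for (int i = 0; i < 100_000; i++) {    // encourage compilation of the reading code
                if (mapped.getInt(1) != 0x11223344) {
                    throw new AssertionError("bad unaligned read at iteration " + i);
                }
            }
        }
    }
}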
Ok, I'll merge it as is and you can submit a follow-up PR when you have time.
Ok, thanks very much
The issue:
The current AArch64 address lowering assumes that every use of an address employs the same type for the loaded datum. In rare cases this assumption is invalid because an address is shared by two loads with different types. In particular, this manifests when loading an int from a DirectByteBuffer at an odd address. The problem can be seen when running netbeans, where reads of mapped file data occasionally fail when an integer datum is located at an odd offset.
Diagnosis:
The generated code for Unsafe.getIntUnaligned results in a short read and a byte read from the same base address, both with offset 2. The RawAddress shared by both reads is currently lowered to a single AArch64 address with displacement=2 and scalefactor=2, the latter because the first usage is a read with type short (scalefactor = transfer unit size). The short read is therefore generated with the correct embedded offset 1 (which the ldrh scales back to byte offset 2), but the byte read reuses the same embedded offset 1, which the ldrb leaves unscaled, giving an incorrect byte offset of 1.
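To make the scaling concrete (illustrative arithmetic only, following the numbers above): AArch64 unsigned-immediate loads multiply the encoded immediate by the access size, so an immediate chosen to suit the 2-byte access cannot simultaneously suit the 1-byte access.

// Illustrative arithmetic only, mirroring the diagnosis above.
int displacement = 2;                        // byte offset shared by both reads
int scaleFromFirstUse = 2;                   // taken from the first (short) usage
int encodedImmediate = displacement / scaleFromFirstUse;   // 1, stored in the lowered address
int shortReadOffset = encodedImmediate * 2;  // ldrh scales by 2 -> base + 2 (correct)
int byteReadOffset  = encodedImmediate * 1;  // ldrb scales by 1 -> base + 1 (wrong)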
Fix:
The patch replaces an address at each usage with a lowering determined by the use type. So, in the above example each read gets its own AArch64 address, the first with scalefactor=2 and the second with scalefactor=1.
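A rough sketch of what per-usage lowering looks like (not the literal patch: transferKindOf and lowerForKind are hypothetical helpers standing in for the patch's logic, the rest follows the Graal node API seen in the hunk above):

// Sketch only: lower the shared address once per usage, keyed by that usage's
// transfer kind. Graph.unique commons structurally equal lowerings, so repeats
// for the same kind are elided, and the original address is killed once it has
// no remaining usages.
for (Node usage : address.usages().snapshot()) {
    JavaKind kind = transferKindOf(usage);                           // hypothetical helper
    AddressNode lowered = graph.unique(lowerForKind(address, kind)); // hypothetical helper
    usage.replaceFirstInput(address, lowered);
}
if (address.hasNoUsages()) {
    GraphUtil.killWithUnusedFloatingInputs(address);
}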
Testing:
The byte buffer read errors re-appeared consistently when running netbeans after I pulled in the AArch64 address lowering change from the master repo. I tracked the problem down to a bad offset in the relevant ldrb instruction in the code generated for Unsafe.getIntUnaligned.
After applying this patch netbeans no longer suffered read errors. A dump of the generated code for Unsafe.getIntUnaligned showed that the correct offset was being employed.