Fix evaluation order issues (continuation of #1620) #1623
Conversation
For binassign expressions, return the lhs lvalue directly and don't bother casting it to the expression's type - the base types apparently always match.
Nice work!
At the cost of some more bitcasts.
Using the lhs lvalue for shift-assignments seems to have to do with the precision in which the shift operation is executed. We come across code rewritten by the front-end like

```d
ushort bar = ...;
cast(ulong)cast(int)bar <<= 8;
```

The front-end seems to like to cast the lhs to specify the precision of the intermediate binop result, not just for shift-assignments. Anyway, the last commit with the enforced DValue type consistency was a trip down the rabbit hole (I thought this might be related to the shift-assign strangeness). But I'm sure it's worth it, possibly rendering casts in other places superfluous. I only fixed all assertions popping up when executing the full test suite - that took long enough. ;)
```diff
@@ -596,7 +596,7 @@ bool DtoLowerMagicIntrinsic(IRState *p, FuncDeclaration *fndecl, CallExp *e,
   Expression *exp1 = (*e->arguments)[0];
   LLValue *ptr = DtoRVal(exp1);
-  result = new DImValue(exp1->type, DtoVolatileLoad(ptr));
+  result = new DImValue(e->type, DtoVolatileLoad(ptr));
```
The previous D return type for the volatile-load intrinsic was `T*` (as the pointer arg to be loaded from) instead of `T`.
Alright, this should finally fix all binop/binassign eval/load order issues. Good to go as far as I'm concerned.
Ah wait, there are still a few bugs.
Just cast it to the full lhs type, except for shift-assignments (always perform the shift op in the lvalue precision). The front-end likes to cast the lhs to the type it wants for the intermediate binop. E.g., incrementing a ubyte lvalue by an integer leads to `lval += 3` being rewritten to `cast(int)lval += 3`. Now if we use the full lhs expression for the binop, the cast forces us to evaluate `cast(int)lval` to an rvalue before evaluating the rhs. So what we actually want is to load the lhs lvalue `lval` AFTER evaluating the rhs and cast it to an integer so that the binop is performed in that precision. The result is cast back to the lvalue type when assigning.
Okay, this should really be it now. I finally understood the shift-assign strangeness and fixed a related bug for all non-shift binassigns, see latest commit.
We only needed that findLvalueExp() thing to 1. skip over casts and 2. use a (bin)assign's lhs instead of the (bin)assign's result. Well, by making sure most (bin)assign expressions actually return the lhs lvalue, we don't have this problem anymore and only need to skip over casts to get to the nested lvalue. This gets rid of a hacky workaround and brings our AST traversal order back to normal.
The last commit is nice. :)
I was about to suggest you look into that yesterday evening – only to wake up today and see you had already done it. Nice!
```cpp
//////////////////////////////////////////////////////////////////////////////

namespace {
template <llvm::Instruction::BinaryOps binOp>
```
Does it really buy us anything to make this a template? It seems like inlining should be sufficient, even if this was performance critical.
Yeah, no need for a template here anymore.
@klickverbot: Thanks for the suggestions.
I've been trying to test this on Weka's codebase, but haven't been successful so far. Could be for unrelated reasons, though. Let's merge?
Well, I hope the Weka issues are unrelated, as I'm pretty confident this should work now. Should also definitely be in 1.1.
The Weka issue is solved; it was unrelated, apparently.
Definitely should be in 1.1!
Refactorings only for now.