arm: refactor test92 to avoid conflicting writes and add possibly conflicting reads #162
base: master
Conversation
The specification allows an exclusives reservation granule of up to 512 words to be reserved by `ldxr`, so writes close enough to the address that was loaded can invalidate the reservation and make `stxr` fail. Instead of writing back the values, do a second normal read to validate the first one and compare them internally. While at it, make the test logic slightly easier to read. Fixes: zherczeg#160
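To make the failure mode concrete, here is a minimal sketch (not part of the PR; the function and variable names are illustrative, and it assumes an AArch64 toolchain) of how a plain store between `ldxr` and `stxr` can clear the exclusive monitor when it lands inside the same reservation granule, which is what made the old test loop forever on strict implementations such as the M1:

```c
#include <stdint.h>

/* Attempts a single increment of *target. Returns 0 on success, nonzero
 * if the exclusive reservation was lost. The plain store to *neighbour
 * models the write to buf[1] in the old test: if neighbour lies inside
 * the same (up to 512-word) reservation granule as target, the
 * implementation is allowed to clear the monitor and stxr keeps failing. */
static int try_increment(uint64_t *target, uint64_t *neighbour)
{
    uint64_t old;
    uint32_t failed;

    __asm__ volatile(
        "ldxr %0, [%2]\n\t"       /* load-exclusive: sets the monitor        */
        "str  xzr, [%3]\n\t"      /* unrelated store nearby                  */
        "add  %0, %0, #1\n\t"
        "stxr %w1, %0, [%2]\n\t"  /* store-exclusive: fails if monitor lost  */
        : "=&r"(old), "=&r"(failed)
        : "r"(target), "r"(neighbour)
        : "memory");

    return (int)failed;
}
```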
```c
/* buf[1] */
sljit_emit_op1(compiler, SLJIT_MOV, SLJIT_MEM1(SLJIT_S0), sizeof(sljit_sw), SLJIT_R1, 0);
sljit_emit_op1(compiler, SLJIT_MOV, SLJIT_R0, 0, SLJIT_R1, 0);
jump = sljit_emit_cmp(compiler, SLJIT_NOT_EQUAL, SLJIT_R1, 0, SLJIT_MEM1(SLJIT_S0), 0);
```
I would keep the original logic of storing the result of loads.
Wouldn't that then require memory barriers to be added? Moving the store outside of the critical loop seems to work, but without a memory barrier it might get reordered and cause failures (although not as obvious ones as the infinite loop).
When refactoring I thought about using a second array for the stores, but with a possible granule of 512 words it would need ugly tricks to keep it safe anyway.
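For what it's worth, the kind of trick a second array would need looks roughly like this (a sketch only; the struct and constant names are made up, and `intptr_t` stands in for `sljit_sw`): since a reservation granule is a naturally aligned region of at most 512 words (2048 bytes), keeping the plain-store targets at least that far away guarantees they can never share a granule with the exclusively loaded words.

```c
#include <stdint.h>

/* Architectural maximum exclusives reservation granule on AArch64:
 * 512 words = 2048 bytes. Two addresses at least this far apart can
 * never fall inside the same naturally aligned granule. */
#define MAX_GRANULE_BYTES 2048

struct split_buffers {
    intptr_t atomic_buf[3];          /* targets of ldxr/stxr             */
    char     pad[MAX_GRANULE_BYTES]; /* keep plain stores out of range   */
    intptr_t result_buf[3];          /* ordinary stores of loaded values */
};
```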
On ARM the doc says in the "Load-Acquire, Store-Release" section:
- A Load-Acquire places no additional ordering constraints on any loads or stores appearing before the Load-Acquire.
- The Store-Release places no additional ordering constraints on any loads or stores appearing after the Store-Release instruction.
If barriers need to be added on some CPUs, the instructions should add them implicitly.
Note that (even before LSE and other future improvements) in AArch64 there are two pairs of instructions for its LL/SC. Our implementation uses the weak (and better performing) pair `ldxr`/`stxr`, but their semantics don't correspond to what is used on other processors, which seems to map[1] better to the strong (and slower) pair `ldaxr`/`stlxr`.
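For comparison, here is a sketch of what building on the ordered pair looks like (illustrative only, not the sljit code; it assumes an AArch64 toolchain): `ldaxr`/`stlxr` carry the acquire/release ordering themselves, so no separate barrier is needed around the loop.

```c
#include <stdint.h>

/* Atomic fetch-and-add built on the ordered exclusive pair. The acquire
 * and release semantics are implicit in ldaxr/stlxr, matching the doc
 * excerpt quoted above. */
static uint64_t fetch_add_acq_rel(uint64_t *target, uint64_t addend)
{
    uint64_t old, updated;
    uint32_t failed;

    do {
        __asm__ volatile(
            "ldaxr %0, [%3]\n\t"       /* load-acquire exclusive   */
            "add   %1, %0, %4\n\t"
            "stlxr %w2, %1, [%3]\n\t"  /* store-release exclusive  */
            : "=&r"(old), "=&r"(updated), "=&r"(failed)
            : "r"(target), "r"(addend)
            : "memory");
    } while (failed);                  /* retry on lost reservation */

    return old;
}
```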
Not that anyone should, but if someone were to implement a futex or mutex with sljit, it might work fine on x86/s390x and break on ARM (and probably on the other RISC CPUs if implemented using LL/SC).
I have to admit that without a clear understanding of how this API is meant to be used, I am not sure it is a problem, but it might be worth considering, maybe by adding a "weak" parameter that would be a NOOP on strongly ordered architectures and allow selecting the right pair on weakly ordered ones like ARM.
If this is to be modeled like C11/C++11 atomics, then maybe a full set of memory_order options would be needed instead.
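If it were modeled on C11, the user-visible distinction would look roughly like this (a sketch of the C11 side only, not a proposed sljit API; the function name is made up): the weak compare-exchange is allowed to fail spuriously on LL/SC machines, which is exactly the behavior x86 users never see.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Try to take a simple lock word: 0 = free, 1 = held. The weak
 * compare-exchange may fail spuriously on LL/SC architectures (ARM,
 * RISC-V, ...), so real callers retry in a loop; on x86/s390x it only
 * fails when the value actually differs. */
static bool try_lock(atomic_int *lock_word)
{
    int expected = 0;
    return atomic_compare_exchange_weak_explicit(
        lock_word, &expected, 1,
        memory_order_acq_rel, memory_order_acquire);
}
```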
Avoid the infinite loop with M1, while trying to keep the test logic as similar as possible (even though some tests didn't seem needed and there is more cleanup possible).
Some tests were left doing potentially unnecessary operations that were in the original, in an attempt to keep using the same registers for the atomic calls, but IMHO the register usage could do with some tightening and consistency.
Fixes: #160