Vm and atomic improvements #9139
Merged
Conversation
elad335 reviewed Oct 26, 2020
{
    res = &vm::reservation_lock(addr).first;
}
rsx_log.fatal("NV406E semaphore unexpected address. Please report to the developers. (offset=0x%x, addr=0x%x)", offset, addr);
👍
Nekotekina force-pushed the fixup branch 10 times, most recently from e70cda3 to 6bc5bd0 on October 27, 2020 02:49
Notification can be very heavy, especially when many threads need to be woken. A callback is set for cpu_thread in order to set the wait flag accordingly.
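The idea behind this commit can be sketched as follows (all names are hypothetical illustrations, not the PR's actual code): the waiter publishes a wait flag before sleeping, so the notifier can skip the expensive wake-up path entirely when nobody is waiting.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical sketch: waiters set a flag bit (via a cpu_thread callback
// in the PR) before sleeping, so the notifier can skip the heavy wake-up
// path when no thread is actually waiting.
struct light_notifier
{
    static constexpr std::uint32_t wait_flag = 1u << 31;

    std::atomic<std::uint32_t> state{0};
    std::uint32_t wakeups = 0; // counts how often the heavy path ran

    void begin_wait() { state.fetch_or(wait_flag, std::memory_order_acq_rel); }
    void end_wait()   { state.fetch_and(~wait_flag, std::memory_order_acq_rel); }

    void notify()
    {
        // Bump the event counter; only take the heavy path if a waiter exists.
        if (state.fetch_add(1, std::memory_order_acq_rel) & wait_flag)
        {
            ++wakeups; // stand-in for a futex wake / notify_all call
        }
    }
};
```

The point of the sketch is that the common no-waiter case costs one atomic add instead of a syscall-level wake.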
Allocate a "personal" range lock variable for each spu_thread. Switch from reservation_lock to range locks for all stores. Detect actual memory mirrors in the shareable cache setup logic.
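A per-thread range lock slot could look roughly like this (a minimal sketch with hypothetical names and encoding, not the actual rpcs3 layout): each thread owns one 64-bit atomic slot into which it publishes the address range of the store it is performing, and readers scan slots for conflicts instead of taking a global reservation lock.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical sketch: one slot per thread, encoding [addr:32 | size:32];
// zero means "no range locked".
struct range_lock_slot
{
    std::atomic<std::uint64_t> value{0};

    // Publish the range this thread is about to store to.
    void lock(std::uint32_t addr, std::uint32_t size)
    {
        value.store((std::uint64_t{addr} << 32) | size, std::memory_order_release);
    }

    void unlock()
    {
        value.store(0, std::memory_order_release);
    }

    // Used by readers (e.g. reservation reads) to detect a conflicting store.
    bool overlaps(std::uint32_t addr, std::uint32_t size) const
    {
        const std::uint64_t v = value.load(std::memory_order_acquire);
        if (!v)
            return false;
        const std::uint32_t lock_addr = static_cast<std::uint32_t>(v >> 32);
        const std::uint32_t lock_size = static_cast<std::uint32_t>(v);
        return addr < lock_addr + lock_size && lock_addr < addr + size;
    }
};
```

Because each thread writes only its own slot, stores never contend with each other on the lock variable itself; only conflict checks read all slots.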
Nekotekina force-pushed the fixup branch 5 times, most recently from 21c8323 to 8dead5b on October 28, 2020 00:28
Complementary change. Also refactored to make the waiting mask a non-template argument.
Remove vm::reservation_lock from it. Use lock bits to prevent memory clobbering in GETLLAR. Improve u128 for MSVC since it is used for bit-locking. Improve 128-bit atomics for the same reason. Improve vm::reservation_op and friends.
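One way the lock bits can prevent clobbering in a GETLLAR-style read is sketched below (the bit layout and names are assumptions for illustration, not the exact rpcs3 encoding): the low bits of the 64-bit reservation value act as a contention counter, the timestamp lives in the upper bits and advances in steps of 128, and readers refuse to sample data while any lock bit is set.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical layout: low 7 bits are lock/contention state, the
// timestamp proper occupies bit 7 and up (so it advances by 128).
constexpr std::uint64_t rsrv_lock_mask = 127;

// GETLLAR-style read: only sample the line when no writer holds it,
// otherwise the 128-byte copy could observe a half-written line.
bool try_read_consistent(std::atomic<std::uint64_t>& res, std::uint64_t& stamp_out)
{
    const std::uint64_t v = res.load(std::memory_order_acquire);
    if (v & rsrv_lock_mask)
        return false; // store in progress; retry later
    stamp_out = v;
    return true;
}

// Shared-locked store: +1 takes a shared slot, +127 on release makes the
// pair sum to exactly +128, clearing the lock bits and bumping the stamp.
void shared_store(std::atomic<std::uint64_t>& res)
{
    res.fetch_add(1, std::memory_order_acq_rel);
    // ... perform the guarded store here ...
    res.fetch_add(127, std::memory_order_release);
}
```

A nice property of the +1/+127 pairing is that any number of concurrent shared stores still nets a multiple of 128, so the lock bits always return to zero once everyone releases.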
Nekotekina force-pushed the fixup branch 8 times, most recently from 8170347 to ebd6000 on October 28, 2020 02:29
Reuse some internal locking mechanisms. Also fix a missing check in vm::range_lock.
Allow more operations in first-chance transactions. Allow abandonment of PUTLLC as in the original path. Make PUTLLUC unconditionally shared-locked. Give PUTLLC +1 priority (minor change).
This was referenced Oct 31, 2020
The atomic improvements aim to improve TSX performance by reducing waiting time on some heavy thread notifications.
The VM changes try to get rid of reservation_lock and reduce the reservation bits to a simple contention counter (currently they are split into a "unique lock" bit and a "shared lock" counter).
In the meantime, some improvements for non-TSX paths were also made, which significantly improve PUT performance (writes from SPU to main memory).
Please test for regressions.