Implement frame callbacks and a new memory pool #221
Conversation
The issue our CI is flagging is solved in PR #209, so it can be ignored for now.
This fixes #208 for me.
Thanks, but unfortunately this is not really obeying the protocol properly right now. I am fixing that.
Force-pushed from 8e46f3a to 96a6279
Force-pushed from 1c37f5f to b69a386
Force-pushed from 631404a to 471a63c
I believe this is now ready. I've tested it in every configuration that was previously exploding. I will most likely be merging this on Monday, to give some time for anyone who would like to test it. I can also use a slightly different setup on Monday, just to confirm that everything works on yet another machine. The branch's name is now woefully inadequate, since we didn't really go back to slot-pools, but oh well.
I tested this in niri with two outputs of different size (and different refresh rates, fwiw) and everything seemed to work just fine, cheers!
Force-pushed from 471a63c to 1d478c8
I tested this PR and everything seems to be working, no more glitches!
        if wallpaper.has_animation_id(self) {
            self.transition_done.store(true, Ordering::Release);
        }
    }
}

struct FrameCallbackHandler {
    cvar: Condvar,
    time: Mutex<Option<u32>>,
It's probably worth adding a comment that this time, which is received in the frame callback, doesn't really mean anything and in particular should not be used for any kind of frame timing. (For that purpose, you can use presentation-time plus some logic to predict when a frame will be presented.)
Ah, good point.
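A minimal sketch of what such a note could look like on the field in question (the struct and field names are taken from the diff context above; the exact wording of the comment is only a suggestion, not what the PR ended up using):

use std::sync::{Condvar, Mutex};

struct FrameCallbackHandler {
    cvar: Condvar,
    /// Timestamp received in the frame callback's `done` event.
    ///
    /// Note: this value has no particular meaning and must NOT be used for
    /// frame timing. For that, use the presentation-time protocol plus some
    /// logic to predict when the next frame will be presented.
    time: Mutex<Option<u32>>,
}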
Gave this a quick test on my setup, seems to work!
We were only using a single WlBuffer per output, busy-waiting until it was released by the compositor. The problem is that compositors are now beginning to never release the buffers, keeping them around for some functionality. This means we have to double-buffer our WlBuffers. Note that, in order to be future-proof, we would like something that automatically triple- or quadruple-buffers, as needed.

Now, if we are using two WlBuffers, then for our animations and transitions to work we need to copy the previous buffer's contents onto the new one at every surface commit. From my understanding (which could be completely wrong), sctk's current pools do not let us do that, because we can't access the contents of any currently active buffer without some shenanigans. Therefore, I've implemented a new memory pool that simply gives us the first free (non-active) buffer it can find, and allocates more memory automatically when it can't find any. This means we will automatically double- or triple-buffer (or use any other number of buffers), as needed.

This leads to a problem: if we don't wait for the compositor, we will often try to draw before it is ready. If that happens, the buffers will still be active, and we will have to allocate more. To prevent that, I've implemented proper frame callbacks. This has also had the benefit of making animations much smoother, so I suppose the previous behavior should have been considered a bug.

There was a problem with the MSRV regarding some structs' visibility, which is why everything has been annotated with pub(super) or pub(crate).
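For illustration, here is a rough sketch of the allocation strategy described above. The Buffer and BufferPool types and all method names are hypothetical placeholders, not swww's actual code; in the real pool the data would live in shared memory backing WlBuffers.

/// Hypothetical stand-in for a pool slot; in practice this would wrap a
/// WlBuffer backed by shared memory.
struct Buffer {
    /// True while the compositor holds the buffer (cleared on wl_buffer.release).
    active: bool,
    data: Vec<u8>,
}

struct BufferPool {
    buffers: Vec<Buffer>,
    buffer_len: usize,
}

impl BufferPool {
    /// Return the index of the first free (non-active) buffer, allocating a
    /// new one when the compositor is still holding all of them. This is what
    /// makes the pool double-, triple-, or quadruple-buffer automatically.
    fn get_free_buffer(&mut self) -> usize {
        if let Some(i) = self.buffers.iter().position(|b| !b.active) {
            return i;
        }
        self.buffers.push(Buffer {
            active: false,
            data: vec![0; self.buffer_len],
        });
        self.buffers.len() - 1
    }

    /// Copy the previous frame's contents into the newly acquired buffer, so
    /// animations and transitions can keep drawing on top of the last frame.
    fn copy_previous_into(&mut self, prev: usize, next: usize) {
        if prev != next {
            // Clone to sidestep aliasing; a real pool would copy within the
            // shared-memory mapping directly.
            let src = self.buffers[prev].data.clone();
            self.buffers[next].data.copy_from_slice(&src);
        }
    }
}

The important property is that the pool never blocks: when every buffer is still active, it grows instead of busy-waiting.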
Force-pushed from 1d478c8 to 9fcc9f9
Instead of using multi pools, we are double-buffering our WlBuffers so that we don't have to busy-loop forever.
Fixes #208 and #220.
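For context on the busy loop being replaced, here is a minimal sketch of how the Condvar/Mutex pair from the diff above can gate drawing on the compositor's frame callback; the method names are hypothetical, not necessarily the ones used in the PR.

use std::sync::{Condvar, Mutex};

struct FrameCallbackHandler {
    cvar: Condvar,
    time: Mutex<Option<u32>>,
}

impl FrameCallbackHandler {
    /// Called from the wl_callback `done` handler: record that the compositor
    /// is ready for the next frame and wake the drawing thread.
    fn frame_done(&self, callback_time: u32) {
        *self.time.lock().unwrap() = Some(callback_time);
        self.cvar.notify_all();
    }

    /// Block until the frame callback fires instead of busy-waiting for the
    /// compositor to release a buffer.
    fn wait_for_frame(&self) {
        let mut time = self.time.lock().unwrap();
        while time.is_none() {
            time = self.cvar.wait(time).unwrap();
        }
        // Consume the timestamp so the next draw waits for a fresh callback.
        *time = None;
    }
}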