
VSync / screen buffering (feature request) #179

Open
mbunkin opened this issue May 5, 2018 · 8 comments
mbunkin commented May 5, 2018

When, for example, the prince finds the sword, and the background flashes yellow, I can see screen tearing. It doesn't happen in the original DOS game when played on a PC. I checked that today as I still have a 286 PC AT. If it is possible to add vsync / framebuffering to SDL, I'd ask you to consider this as a future feature. Thank you!


Falcury commented May 5, 2018

Earlier versions ran with Vsync on by default (at least on my Windows machine). But for 1.18 I tweaked the timer code (to improve accuracy) and ended up having to disable it. It was difficult for me to work with Vsync on the main thread (especially for timing) because it can cause thread execution to halt for a long time while waiting for the next refresh. Hm, maybe if we want to allow Vsync again, we could run a second thread that does nothing but wait for screen refreshes, acquire a mutex to guarantee that the latest frame has at least been fully drawn, and present the screen.


mbunkin commented May 5, 2018 via email


Falcury commented May 11, 2018

Here is my attempt: Falcury@1fc3d8d

I added an option enable_vsync to SDLPoP.ini to allow turning it on or off.
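For example (the exact key/value syntax here is an assumption; check how the other options in SDLPoP.ini are written):

```ini
; Turn Vsync on or off (option name from the commit above;
; surrounding syntax assumed to match the rest of SDLPoP.ini)
enable_vsync = true
```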

It may still be unstable. Calling SDL's video APIs from another thread is strictly speaking not supported: https://wiki.libsdl.org/FAQDevelopment#Can_I_call_SDL_video_functions_from_multiple_threads.3F
(That at least explains why I was initially having crashes constantly...)

I tried to circumvent the limitation by only allowing the Vsync thread to run at specific times during the loop, when it is reasonably 'safe'. On my system, it doesn't crash anymore now.

Hm, maybe an alternative way to keep the timing of screen refreshes separate from the game logic would be to run all of the game logic in the second thread, instead of the other way around. I guess that would be harder though.


mbunkin commented Jul 21, 2018

@Falcury On my machine it doesn't work well. It creates a lot of ghost frames with flickering. Maybe it would be possible to avoid threading by using some event library like libuv? No idea how the original Prince did its drawing, but there were no threads in DOS.


Falcury commented Jul 22, 2018

@mbunkin Right, then using threads is off the table, I guess. Maybe all we would need to do is to take a fresh look at the timing code, to make sure that switching Vsync on does not cause timing to become inaccurate, testing with various monitor refresh rates.


mbunkin commented Jul 22, 2018

@Falcury I gave this a bit more thought and I think the original DOS environment did have "threads". The video card was independent from the CPU: just like a separate thread, it would take the video memory contents and send them to the display while the CPU was doing something else. If it were technically possible, I would suggest the following logic:

The main game thread generates the 320x200 screen buffer (drawing sprites). Once the draw operation is complete, if the special mutex is open, the frame buffer is copied to a second, "video memory" buffer by the main thread, and a condition variable is signalled. The second, "video card" thread wakes up on the condition variable, locks the special "video memory" mutex, and begins scaling and displaying the buffer contents via SDL. Once its work is done, it unlocks the "video memory" mutex and sleeps until signalled again. If the "video memory" mutex is locked, the copy operation returns without waiting for it to be unlocked, and the drawing cycle continues until the frame is ready for displaying again.

This way the threads would be isolated from each other, no incomplete buffer could be displayed, and it should not negatively affect the responsiveness of the controls as they are in the game now, as long as it runs on a machine with 2+ cores so the game and video threads can run on separate cores. I have no free time at this point, so I cannot test this theory in practice. It might not be applicable to the way the Prince engine works. I hope my idea of a "video card" thread will at least amuse the respectable developers :)
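The scheme above can be sketched roughly like this, using POSIX threads as a stand-in for SDL's mutex/condition-variable API (SDLPoP would presumably use SDL_CreateMutex, SDL_CondSignal, etc. instead; buffer contents and the "present" step are placeholders for the real sprite drawing and SDL scaling/rendering):

```c
#include <pthread.h>
#include <stdbool.h>
#include <string.h>

#define FRAME_PIXELS (320 * 200)

static unsigned char game_buffer[FRAME_PIXELS];   /* drawn by the game thread */
static unsigned char video_memory[FRAME_PIXELS];  /* read by the video thread */

static pthread_mutex_t video_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t frame_ready = PTHREAD_COND_INITIALIZER;
static bool new_frame = false;
static bool quit = false;
static int frames_presented = 0;

/* The "video card" thread: sleeps until signalled, then presents
 * video_memory (in the real game: scale the buffer and display via SDL). */
static void *video_thread(void *arg) {
    (void)arg;
    pthread_mutex_lock(&video_mutex);
    while (!quit) {
        while (!new_frame && !quit)
            pthread_cond_wait(&frame_ready, &video_mutex);
        if (new_frame) {
            frames_presented++;   /* placeholder for the actual present */
            new_frame = false;
        }
    }
    pthread_mutex_unlock(&video_mutex);
    return NULL;
}

/* Called by the game thread after drawing a frame. If the video thread
 * still holds the mutex, skip the copy and keep drawing (no stall), as
 * described above. Returns whether the frame was handed off. */
static bool try_present_frame(void) {
    if (pthread_mutex_trylock(&video_mutex) != 0)
        return false;             /* "video memory" busy; drop this frame */
    memcpy(video_memory, game_buffer, FRAME_PIXELS);
    new_frame = true;
    pthread_cond_signal(&frame_ready);
    pthread_mutex_unlock(&video_mutex);
    return true;
}
```

The trylock is the key design point: the game thread never blocks on the display, so a slow present can only drop frames, never stall the game logic.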


Falcury commented Jul 23, 2018

@mbunkin If I remember correctly, what you describe is very similar to what I tried to do ;)
My guess is that I failed to fully isolate the threads from one another, because SDL's rendering routines are not thread-safe. In that case, it would still be possible to have an entirely separate "video card" thread, but only if you initialized SDL, created the renderer and window, etc. all in that same thread. Essentially the "video card" thread would take the place of the main thread, and all of the game logic and pre-drawing of the frames would be offloaded to the second thread.

DOS did not have threads; every program had exclusive control over the system's resources. Multithreading is not a problem nowadays, even on a single-core system: the OS will periodically switch context to let both threads run. SDL also often uses threads internally, for example for audio callbacks and timers.

I tried to find out how other games solve this problem. I found this blog post interesting:
https://gafferongames.com/post/fix_your_timestep/
The method explained under the heading "Free the physics" could allow us to combine a fixed time step with Vsync on a single thread:

Instead of thinking that you have a certain amount of frame time you must simulate before rendering, flip your viewpoint upside down and think of it like this: the renderer produces time and the simulation consumes it in discrete dt sized steps.

This might be interesting to try as a better solution than relying on threads.
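To make the idea concrete, here is a minimal self-contained sketch of that accumulator pattern (the names, the struct, and the 60 Hz tick rate are placeholders, not SDLPoP's actual timing code): the renderer "produces" however much wall-clock time each Vsync'd frame took, and the simulation "consumes" it in fixed dt-sized steps.

```c
#define SIM_DT (1.0 / 60.0)   /* fixed simulation step: 60 ticks per second */

typedef struct {
    double accumulator;   /* wall-clock time not yet consumed by the sim */
    long ticks;           /* fixed simulation steps executed so far */
} GameClock;

/* Feed one rendered frame's duration (in seconds) into the simulation;
 * returns how many fixed steps were consumed for that frame. With Vsync
 * on, frame_time is whatever the monitor's refresh gives us, and the
 * game logic still advances at exactly 60 steps per second on average. */
static int consume_time(GameClock *gc, double frame_time) {
    int steps = 0;
    gc->accumulator += frame_time;
    while (gc->accumulator >= SIM_DT) {
        gc->accumulator -= SIM_DT;   /* run one fixed game-logic step here */
        gc->ticks++;
        steps++;
    }
    return steps;
}
```

On a 60 Hz monitor each frame consumes exactly one step; on a 144 Hz monitor most frames consume zero steps and every few frames consume one, so the simulation rate stays independent of the refresh rate, all on a single thread.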

I have no free time at this point, so I cannot test this theory in practice.

Nor me, unfortunately.


mbunkin commented Jul 23, 2018

@Falcury I see! I should really give it a go sometime. It's very interesting. One day I should have time to play around with SDL.

DOS did not have threads

Indeed, but the PC "did", as some controllers (like video or sound cards, modems, etc.) operated independently from the CPU and used interrupts, shared memory and DMA channels for control and data exchange. I just thought such a parallel-processing environment could be implemented / emulated with threads.
