
Explanation of the network synchronization #284

Closed
lolriven opened this issue Apr 30, 2024 · 6 comments


@lolriven

Thank you for this, it’s such a huge resource for the gamedev community!

According to your README, the players merely send inputs, and then replication or local simulation is done by each individual player. This is similar to how fighting games work. Are you performing rollback and clock synchronization, as well as fixed frame updates, to ensure synchronized tick rates? I love C but I have a difficult time understanding C++ :(

@geneotech
Member

geneotech commented Apr 30, 2024

Hello! From what I know, fighting games are p2p, which is where the whole complexity comes from, whereas Hypersomnia has a completely client-server architecture. I've been asked the same question by a fighting game enthusiast, and they made me realize I'm doing something like GGPO:

GGPO uses a netcode technique called "rollback". Rather than waiting for input to be received from other players before simulating the next frame, GGPO predicts the inputs they will send and simulates the next frame without delay using that assumption. When other players’ inputs arrive, if any input didn't match the prediction, GGPO rolls back the state of the game to the last correct state, then replays all players’ revised inputs back until the current frame. The hope is that the predictions will be correct most of the time, allowing smooth play with minimal sudden changes to the game state.

This is exactly how it works in Hypersomnia but it is the server that governs what inputs were applied to every simulation step, not the clients.

I don't synchronize clocks at all - it is a self-correcting system. I don't even use timestamps for messages. There is simply a bidirectional stream of completely reliable messages over UDP: the client sends its inputs every step @ 60Hz (even if there were none, it just says "empty input"). The client begins simulating its own world forward the moment it connects, and populates the "predicted inputs (steps)" vector until the server updates start to arrive.

The server, within every step update packet (@ 60Hz too), says how many steps' worth of client inputs have been accepted. The client pops that many steps from the front of the vector of predicted inputs, then re-creates the "predicted" (rendered) game world by re-applying the remaining predicted inputs on top of the "referential" (off-screen) server world. The referential world is deterministically simulated from the server commands and contains the always correct/certain server state; it is only ever simulated forward the moment that server updates arrive.

Thus the amount of "prediction" - i.e. how much the game nudges into the future when displaying the game world - is always proportional to the apparent network lag, and is 100% adaptive to fluctuations in latency.
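A minimal sketch of this resimulation step, in C++ with hypothetical names (`world`, `player_input` and `client_state` are illustrative stand-ins, not Hypersomnia's actual API):

```cpp
#include <deque>

struct player_input { /* key press states, mouse deltas, ... */ };

struct world {
    // Advances the simulation by one 60 Hz step. Deterministic:
    // the same inputs always produce the same resulting state.
    void advance(const player_input&) { /* ... */ }
};

struct client_state {
    world referential;                        // certain, server-driven state
    std::deque<player_input> predicted_steps; // inputs not yet confirmed by the server

    // Re-creates the rendered ("predicted") world by replaying the
    // unconfirmed local inputs on top of the referential world.
    world rebuild_predicted() const {
        world predicted = referential;
        for (const auto& in : predicted_steps)
            predicted.advance(in);
        return predicted;
    }
};
```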

@lolriven
Author

Hi! Thank you!
So from what I understand, the client sends inputs at a fixed rate to the server. The server will simulate the game forward on the assumption that the client's input did not change from their previously sent input. When a change does occur, the server will undo the game state and apply the correct input for that target frame, then resimulate forward again to the current state. And at a fixed rate it will send that client's predicted or corrected input, along with position, velocity and rotation?

@geneotech
Member

geneotech commented Apr 30, 2024

the client sends inputs at a fixed rate to the server.

Yes.

The server will simulate the game forward on the assumption that the client's input did not change from their previously sent input.

Yes, the server does this too, although my explanation focused on the client side; clients make this assumption as well.

When a change does occur, the server will undo the game state and apply the correct input for that target frame, then resimulate forward again to the current state.

No, it is the clients who do it. The server never rolls back any state; it mercilessly marches forward in time and accepts clients' inputs as they are received, applying them immediately to the soonest simulation step.

And at a fixed rate it will send that client's predicted or corrected input, along with position, velocity and rotation?

No, the server does not send any predicted input. It also does not send any position, velocity or rotation data, except when the connection is initialized (clients need starting data to deterministically simulate from). Later, it just broadcasts the "canonical" inputs of all players that it decided to apply to each simulation step.
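To make the message shapes concrete, here is a hedged sketch of the two kinds of server-to-client traffic this implies (all names are hypothetical, not the actual wire format):

```cpp
#include <vector>

struct world_snapshot { /* positions, velocities, rotations, ... */ };
struct player_input   { /* key events, mouse deltas, ... */ };

// Sent once, when the connection is initialized: the starting data
// the client needs to deterministically simulate from.
struct initial_state_message {
    world_snapshot snapshot;
};

// Broadcast with every canonical step (@ 60 Hz) thereafter: only the
// inputs the server decided to apply, never position/velocity/rotation.
struct step_update_message {
    std::vector<player_input> canonical_inputs; // one entry per player
};
```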

@lolriven
Author

lolriven commented Apr 30, 2024

I think I understand now! The server just simulates every client's input on the next simulation tick, so it doesn't really care when the client sent it. It then relays the clients' inputs back to the clients, where the clients can perform rollback and correct their own simulation of the game?

@geneotech
Member

geneotech commented Apr 30, 2024

The server behaves much like lock-step, where it waits for all clients' inputs to arrive, simulates the game forward, then relays the clients' inputs back to the clients, where the clients can then perform rollback and correct their own simulation of the game?

Correct, with one exception - the server never waits. It always simulates at a steady rate. It applies client inputs as they arrive from the network, so it is in everyone's best interest to send them as fast as possible, at as regular intervals as possible.

There is of course jitter - two steps' worth of client commands could arrive in a single call to recv() on the server. The server pushes any excess to a queue called the "jitter buffer", and extracts only a single step from this queue per canonical server simulation step. The client can set the maximum length of the jitter buffer from Settings (by default it is set to 3, so the buffer doesn't grow too large). If it exceeds the limit and e.g. 5 steps arrive from the client after a brief stop in connectivity, the server completely resets the queue by "merging" all queued inputs into one - e.g. it concatenates keystroke events and sums the mouse movement offsets.

On the other hand, if no steps arrived at all (the queue is empty) and the server is about to simulate the next canonical step, it will just assume an "empty input" for that client - as if the key press states had not changed and the mouse had not moved at all. This of course only happens during lag spikes, since the client always sends inputs at a steady rate, whether they're empty or not.
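A minimal sketch of that jitter buffer behaviour, assuming simplified input types (`key_event`, `player_input` and `jitter_buffer` are hypothetical, not the actual Hypersomnia code):

```cpp
#include <cstddef>
#include <deque>
#include <utility>
#include <vector>

struct key_event { /* press/release of a key */ };

struct player_input {
    std::vector<key_event> key_events; // keystrokes this step
    int mouse_dx = 0, mouse_dy = 0;    // mouse movement this step
};

struct jitter_buffer {
    std::deque<player_input> queue;
    std::size_t max_length = 3; // client-configurable in Settings

    void push(player_input in) {
        queue.push_back(std::move(in));

        if (queue.size() > max_length) {
            // Backlog after a connectivity hiccup: merge everything
            // into a single input - concatenate keystroke events and
            // sum the mouse movement offsets.
            player_input merged;
            for (auto& q : queue) {
                merged.key_events.insert(merged.key_events.end(),
                                         q.key_events.begin(),
                                         q.key_events.end());
                merged.mouse_dx += q.mouse_dx;
                merged.mouse_dy += q.mouse_dy;
            }
            queue.clear();
            queue.push_back(std::move(merged));
        }
    }

    // Called once per canonical server step (@ 60 Hz).
    player_input pop_one_step() {
        if (queue.empty()) {
            // Lag spike: assume an "empty input" - key states
            // unchanged, mouse not moved at all.
            return {};
        }
        player_input front = std::move(queue.front());
        queue.pop_front();
        return front;
    }
};
```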

Edit: after your edit, it's of course correct.

@geneotech
Member

geneotech commented Apr 30, 2024

Also, like I mentioned regarding clock synchronization: with each canonical step relayed from the server to the client, the server includes the number of client "input steps" that have been successfully "applied" - so it could be "5" after a merge of the jitter buffer, always "1" under normal conditions, and "0" for the duration of a lag spike. The client then pops that many steps from its local prediction queue. This is how the system self-corrects for lag fluctuations.
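Building on the `client_state` sketch from earlier (again with hypothetical names, not the actual protocol), the self-correction could look roughly like this:

```cpp
#include <cstddef>

// Assumes the client_state / player_input / world sketch from earlier.
struct server_step_update {
    // ...canonical inputs of all players for this step, plus:
    std::size_t num_accepted; // our input steps applied this step:
                              // e.g. 5 after a jitter-buffer merge,
                              // 1 under normal conditions,
                              // 0 during a lag spike
};

void on_server_step(client_state& client, const server_step_update& update) {
    // (advance client.referential with the canonical inputs here)

    // Pop the inputs the server has now consumed - they are part of
    // the certain state and no longer need to be predicted.
    for (std::size_t i = 0;
         i < update.num_accepted && !client.predicted_steps.empty(); ++i) {
        client.predicted_steps.pop_front();
    }

    // Rebuild the rendered world from whatever predictions remain.
    const world predicted = client.rebuild_predicted();
    (void)predicted; // hand off to the renderer
}
```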
