
Networking & Multiplayer #16

Closed
aeplay opened this issue Nov 24, 2016 · 9 comments

Comments

@aeplay
Member

aeplay commented Nov 24, 2016

No description provided.

@Herbstein
Collaborator

This lies in the future, but tokio has just released its 0.1 version.

@kingoflolz
Collaborator

Just putting down some thoughts on different multiplayer architectures

Distributed Actor System

Actors are distributed across different machines, and messages between them are simply passed over the internet.

Pros

  • Allows the distribution of computation between different players
  • Latency-insensitive as delays in message passing should not cause too many problems
  • Should be relatively easy to implement

Cons

  • Allows players to cheat
  • Requires high internet speeds (most likely above 1mbps)
  • Difficult to render things crossing chunks smoothly
  • Does not allow completely accurate rendering of the whole world (but we should be able to get very close)
  • Has not really been implemented before (to my knowledge) in other games (But that has never stopped Anselm!)
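
The message-passing idea above can be sketched as follows. This is just an illustration with made-up names (`CarUpdate`, `encode`/`decode`), not kay's actual API: a message bound for an actor on another machine gets serialized into a fixed byte layout that could then go into a network packet.

```rust
// Hypothetical wire format for one actor message crossing machines.
// Field layout: 4-byte car id + three 4-byte little-endian floats = 16 bytes.
use std::convert::TryInto;

#[derive(Debug, PartialEq)]
struct CarUpdate {
    car_id: u32,
    x: f32,
    y: f32,
    z: f32,
}

impl CarUpdate {
    // Encode into a fixed-size buffer suitable for a UDP payload.
    fn encode(&self) -> [u8; 16] {
        let mut buf = [0u8; 16];
        buf[0..4].copy_from_slice(&self.car_id.to_le_bytes());
        buf[4..8].copy_from_slice(&self.x.to_le_bytes());
        buf[8..12].copy_from_slice(&self.y.to_le_bytes());
        buf[12..16].copy_from_slice(&self.z.to_le_bytes());
        buf
    }

    // Decode on the receiving machine and deliver to the local actor.
    fn decode(buf: &[u8; 16]) -> CarUpdate {
        CarUpdate {
            car_id: u32::from_le_bytes(buf[0..4].try_into().unwrap()),
            x: f32::from_le_bytes(buf[4..8].try_into().unwrap()),
            y: f32::from_le_bytes(buf[8..12].try_into().unwrap()),
            z: f32::from_le_bytes(buf[12..16].try_into().unwrap()),
        }
    }
}

fn main() {
    let msg = CarUpdate { car_id: 42, x: 1.0, y: 2.0, z: 3.0 };
    let wire = msg.encode(); // these bytes would travel over the internet
    let back = CarUpdate::decode(&wire);
    assert_eq!(msg, back); // the remote actor sees the same message
}
```

A compact fixed layout like this is also what makes the bandwidth arithmetic later in this thread possible to estimate at all.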

Deterministic Delayed Lockstep

Each player keeps a copy of the game state and receives the "actions" the other players performed, which are then applied to that state. So only the inputs and the initial game state need to be sent between players.

Pros

  • Very small bandwidth requirement (~10kbps after initial sync)
  • No cheating is possible
  • Allows accurate rendering of the whole world
  • Allows easier reproduction of bugs

Cons

  • Requires whole game to be deterministic across all platforms (floating point can only be used for rendering, not simulation!)
  • Latency sensitive
  • Player's computers need to be more powerful the more players there are (everyone needs to compute the whole game world)

Examples

See Factorio; they used this method successfully in a simulation game with tens of thousands (maybe even millions) of objects.
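
The lockstep idea can be sketched in a few lines. Everything here (`GameState`, `Input`, `apply`) is hypothetical, and the state is integer-only to respect the determinism constraint mentioned above:

```rust
// Deterministic delayed lockstep sketch: every player applies the same
// per-tick input lists, in the same order, to identical initial state.
// Only `Input` values would ever cross the network.

enum Input {
    BuildRoad { from: i32, to: i32 },
    Nothing,
}

// Integer-only state so every machine computes bit-identical results
// (no floats in simulation, per the constraint above).
#[derive(Debug, PartialEq)]
struct GameState {
    roads: u32,
    tick: u64,
}

impl GameState {
    // Advance one tick, applying all players' inputs for that tick.
    fn apply(&mut self, inputs: &[Input]) {
        for input in inputs {
            if let Input::BuildRoad { .. } = input {
                self.roads += 1;
            }
        }
        self.tick += 1;
    }
}

fn main() {
    // Two players start from the same initial state...
    let mut player_a = GameState { roads: 0, tick: 0 };
    let mut player_b = GameState { roads: 0, tick: 0 };

    // ...and both apply the same inputs tick by tick.
    let ticks: Vec<Vec<Input>> = vec![
        vec![Input::BuildRoad { from: 0, to: 1 }, Input::Nothing],
        vec![Input::Nothing, Input::BuildRoad { from: 1, to: 2 }],
    ];

    for inputs in &ticks {
        player_a.apply(inputs);
        player_b.apply(inputs);
    }

    // Their simulations stay in sync without ever exchanging state.
    assert_eq!(player_a, player_b);
}
```

The delay in "delayed lockstep" comes from waiting until every player's inputs for a tick have arrived before simulating it, which is also why the scheme is latency sensitive.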

@Herbstein
Collaborator

I believe DAS to be the superior system, if we can get it to work correctly. Right now we only have a loose idea of how, and if, it would work. I think we should build a small test in the near future. That test should depend only on kay and the networking stack. We can then benchmark network usage and see if it is at all viable.

@kingoflolz
Collaborator

I think the problem with DAS is that it should be easy to make it work, but under the naive implementation (every actor which needs to be rendered receives a render message every frame and replies with the geometry it would like rendered) the bandwidth requirements would be absolutely massive (100k cars =~ 300mbps).
I think the core idea is great, but the question is how much we can optimize the network traffic through techniques like dead reckoning (which cannot be applied by Kay, only by the upper layers D: ).

@Herbstein
Collaborator

Herbstein commented Jan 13, 2017

I won't disagree with that, but we can make some assumptions about what resources the game has available, e.g. geometry could simply be a model ID. DAS would initially be a beast to optimize to usable levels, but the payoff is potentially pretty big.

@kingoflolz
Collaborator

kingoflolz commented Jan 13, 2017

Just to clarify, the 300mbps is not from sending the polygons over the network (which would be in the 10s of gbps); it is calculated as (16 bit model ID + 3*32 bit XYZ coords) * 60 fps * 50k cars.
The most obvious optimization is sharing the road network between all the nodes and sending (32 bit road ID + 32 bit position along road + 32 bit velocity along road) * 10 fps * 50k cars, which would be 48mbps.
A further optimization would be to only send the subset of cars that are visible. That makes things difficult to estimate, but a reasonable lower bound for the maximum visible cars is 1-2k, which means that if they are all on the other PC, it would take 1-2mbps of upload per other player. That is not unreasonably large, but it does rule some people out of playing multiplayer online (for example, Australia is mainly on ADSL2+, which has 768kbps max theoretical upload, and maybe 400kbps in the real world).
Going even further, client-side simulation of local traffic would require the traffic to be simulatable locally without knowing the global state, which would be very difficult if not impossible, and at that point you are down to building a deterministic traffic system anyways xD.
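
For what it's worth, the arithmetic above can be checked mechanically. This is just a sketch; protocol framing overhead is ignored:

```rust
// Back-of-envelope bandwidth check for the figures quoted above.
// mbps = bits per update * updates per second * object count / 1e6.
fn mbps(bits_per_update: u64, updates_per_sec: u64, count: u64) -> f64 {
    (bits_per_update * updates_per_sec * count) as f64 / 1_000_000.0
}

fn main() {
    // Naive: 16-bit model id + 3 x 32-bit coords, 60 fps, 50k cars.
    println!("naive:         {} mbps", mbps(16 + 3 * 32, 60, 50_000)); // 336

    // Road-relative: 3 x 32-bit values at 10 fps for 50k cars.
    println!("road-relative: {} mbps", mbps(3 * 32, 10, 50_000)); // 48

    // Visible subset only: ~2k cars at the same rate.
    println!("visible only:  {} mbps", mbps(3 * 32, 10, 2_000)); // 1.92
}
```

So the naive scheme comes out at 336mbps (roughly the ~300 quoted), and each optimization step lands on the numbers in the comment above.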

@Herbstein
Collaborator

You're right. I guess I didn't think of the mathematics of scale. DDL is probably the only realistic option for a scalable implementation, even with the drawbacks.

@kingoflolz
Collaborator

Hrm, with some more thought, it might be possible to make DAS work. We would need to be able to extrapolate from the car data for maybe a second, which would allow a 10x reduction in bandwidth.
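
The extrapolation might look like this. `CarOnRoad` and its fields are illustrative, assuming each update carries position and velocity along a road so the receiver can dead-reckon the frames in between:

```rust
// Dead-reckoning sketch: one network update per second can cover ~60
// rendered frames, since the receiver predicts positions in between.

struct CarOnRoad {
    position: f32, // metres along the road at the last received update
    velocity: f32, // metres per second at the last received update
}

impl CarOnRoad {
    // Predict the position `dt` seconds after the last update.
    fn extrapolate(&self, dt: f32) -> f32 {
        self.position + self.velocity * dt
    }
}

fn main() {
    let car = CarOnRoad { position: 100.0, velocity: 10.0 };
    // Render a full second of frames from a single update,
    // instead of receiving 60 (or even 10) updates in that time.
    for frame in 0..60 {
        let t = frame as f32 / 60.0;
        let predicted = car.extrapolate(t);
        let _ = predicted; // this would feed into rendering
    }
    assert_eq!(car.extrapolate(1.0), 110.0); // 10 m/s for 1 s
}
```

The catch, as noted later in the thread, is handling the moments where extrapolation and the real simulation disagree.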

@kingoflolz
Collaborator

With DAS, we would want to use UDP and build a reliable, ordered layer on top of it. Something like a 16-bit message ID, followed by as many bytes of messages as fit, up to a user-configurable max packet size (maybe we should have a custom checksum as well, as the UDP one is very weak and only 16 bits).
When the recipient receives a message, it will push an ack containing multiple message IDs (maybe 50 or so) and send it back to the sender. Each packet is stored on the sender until its ack is received. If there is an unacked message that has many acked messages in front of it, it can be resent every single frame until it is acked.
However, if we want to provide strong ordering guarantees, a single dropped packet means we need to stall for at least 2 x latency, for the lack of an ack to be acknowledged (haha) and the missing data to be resent. To make sure that kay wouldn't lag when there is suddenly 3x the input to process in one frame, we can buffer the inputs for (3-4) x latency, or we can do some sort of forward error correction to account for some percentage of packets being dropped.
I think this would be a pretty good system for "important" messages, and I think rendering messages can be simplified to send only packets with complete messages (no need for reassembly) and no retransmission (there would be no way you get stuff back within 16ms), but with acks so that the sender can prioritise sending updates to things that have not been updated recently, to prevent visual glitches if the extrapolation and simulation disagree over long times.
Just some random thoughts, I am by no means a network expert :)
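
The ack bookkeeping described above might be sketched like this (illustrative types only; real sockets, checksums, timers, and u16 ID wraparound handling are all omitted):

```rust
// Reliable-over-UDP sketch: the sender stores each payload until it is
// acked, and a message whose successors are already acked is flagged
// for resend (it was probably dropped).
use std::collections::BTreeMap;

struct Sender {
    next_id: u16,
    unacked: BTreeMap<u16, Vec<u8>>, // message id -> stored payload
    highest_acked: Option<u16>,      // ignores u16 wraparound for brevity
}

impl Sender {
    fn new() -> Sender {
        Sender { next_id: 0, unacked: BTreeMap::new(), highest_acked: None }
    }

    // "Send" a message: assign an id and keep the payload until acked.
    fn send(&mut self, payload: Vec<u8>) -> u16 {
        let id = self.next_id;
        self.next_id = self.next_id.wrapping_add(1);
        self.unacked.insert(id, payload);
        id
    }

    // Receive an ack carrying multiple message ids at once.
    fn on_ack(&mut self, acked: &[u16]) {
        for &id in acked {
            self.unacked.remove(&id);
            if self.highest_acked.map_or(true, |h| id > h) {
                self.highest_acked = Some(id);
            }
        }
    }

    // Unacked ids with acked messages in front of them: resend candidates.
    fn needs_resend(&self) -> Vec<u16> {
        match self.highest_acked {
            Some(h) => self.unacked.keys().copied().filter(|&id| id < h).collect(),
            None => Vec::new(),
        }
    }
}

fn main() {
    let mut s = Sender::new();
    let a = s.send(b"msg 0".to_vec()); // id 0
    let b = s.send(b"msg 1".to_vec()); // id 1
    let c = s.send(b"msg 2".to_vec()); // id 2
    // An ack arrives for 0 and 2 but not 1: 1 was probably dropped.
    s.on_ack(&[a, c]);
    assert_eq!(s.needs_resend(), vec![b]);
}
```

The rendering-message path described above would skip the `unacked` store and retransmission entirely, keeping only the ack feedback for prioritisation.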

@aeplay aeplay modified the milestone: Cleanup & Architecture Improvements Jan 25, 2017
@aeplay aeplay closed this as completed Aug 29, 2018