
[Client] LB across multiple endpoints? #3

Closed
markmandel opened this issue Jan 28, 2020 · 3 comments · Fixed by #56
Labels
area/user-experience · good first issue · help wanted · kind/feature

Comments

markmandel (Member) commented Jan 28, 2020

Should a Client proxy be able to send packets to multiple Server proxy endpoints, probably in some load-balanced way, such as round robin or random selection?

This provides another layer of redundancy in case a single Server proxy goes down, since it takes time to realise the failure and move to a new one. At least this way some traffic is still going through in the meantime, seemingly at only a slight cost in latency.

Maybe a configuration like:

```yaml
local:
  port: 7000 # the port to receive traffic on locally
client:
  connection_id: 1x7ijy6 # the connection string to attach to the traffic
  lb_policy: ROUND_ROBIN # load balance policy: Round Robin / Random / ???
  endpoints:
    - 127.0.0.1:7001 # the address to send traffic to
    - 127.0.0.1:7002
    - 127.0.0.1:7003
```
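As a rough illustration of what a `ROUND_ROBIN` policy over those endpoints could look like, here is a minimal sketch in Rust. The names are hypothetical and this is not Quilkin's actual implementation:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Minimal round-robin selector: each packet goes to the next endpoint
/// in turn, so no packet is ever duplicated across endpoints.
struct RoundRobin {
    endpoints: Vec<String>,
    next: AtomicUsize,
}

impl RoundRobin {
    fn new(endpoints: Vec<String>) -> Self {
        Self { endpoints, next: AtomicUsize::new(0) }
    }

    /// Pick the endpoint for the next outgoing packet.
    fn pick(&self) -> &str {
        let i = self.next.fetch_add(1, Ordering::Relaxed) % self.endpoints.len();
        &self.endpoints[i]
    }
}

fn main() {
    let lb = RoundRobin::new(vec![
        "127.0.0.1:7001".into(),
        "127.0.0.1:7002".into(),
        "127.0.0.1:7003".into(),
    ]);
    // Four packets rotate 7001, 7002, 7003, then wrap back to 7001.
    for _ in 0..4 {
        println!("{}", lb.pick());
    }
}
```

The atomic counter keeps the selector usable from concurrent send paths without a lock.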
markmandel added the kind/feature and area/user-experience labels on Jan 28, 2020
suom1 (Collaborator) commented Feb 4, 2020

I have been thinking about this for some time now. While I liked the idea of sending data to all of the proxies configured/assigned at the start, I think this could become a problem for players who do not have a good enough internet connection.

We have the luxury, in many cases, of internet connections around 50 Mbit/s or better, but that is not the case in many places around the world, where ADSL is still the most common type of internet connection.

I have not finished all the testing yet, but I will do some testing on bandwidth usage for different FPS games to build a better understanding of how many Mbit/s a typical FPS game uses.

A suggested solution would be for the client to send data only to the proxy closest to it, while keeping two (or more) sessions open to other proxies that act as fallbacks if the connection to the primary proxy dies. This would avoid sending 2-3 times the data and risking oversaturation of the player's internet connection.

Thoughts on this?
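suom1's primary-plus-fallback idea could be sketched like this. All names are hypothetical, and the sketch assumes some keep-alive mechanism has already marked which sessions are alive:

```rust
/// Endpoints ordered by preference (e.g. closest first), with health
/// state assumed to be maintained by a separate keep-alive mechanism.
struct Failover {
    endpoints: Vec<String>,
    alive: Vec<bool>,
}

impl Failover {
    /// Send only to the first endpoint whose session is still alive;
    /// the rest stay open as fallbacks but receive no game traffic.
    fn pick(&self) -> Option<&str> {
        for (endpoint, up) in self.endpoints.iter().zip(self.alive.iter()) {
            if *up {
                return Some(endpoint.as_str());
            }
        }
        None
    }
}

fn main() {
    let f = Failover {
        endpoints: vec!["primary:7001".into(), "fallback:7002".into()],
        alive: vec![false, true], // primary is down, fall back to the second
    };
    println!("{:?}", f.pick());
}
```

Unlike round robin, only one endpoint carries traffic at a time, so upstream bandwidth stays the same as with a single proxy.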

markmandel changed the title from "[Sender] LB across multiple endpoints?" to "[Client] LB across multiple endpoints?" on Feb 4, 2020
markmandel (Member, Author) commented

Updated the ticket with the new client/server nomenclature.

Thanks for your thoughts and research @suom1

> We have the luxury in many cases where internet connections are around 50Mbit/s or better, but this is not the case in many places around the world where ADSL is the most common type of internet connection.

Is the assumption here that the client proxy would send to every endpoint? My assumption was that it would be something like a round robin, so that a single packet would not get repeated, just sent to a different endpoint each time.

If I'm following what you said above, the concern is the extra bandwidth from duplicating packets? Is that correct? In that case, I'm not advocating duplicate packets (unless a user specifically configured/requested that for their particular use case).

> A suggestion on solution for this would be that the client only sends data to the proxy closest to the client, but keeps two (or more) sessions open to other proxies which then acts as fallback if connection to the primary proxy dies.

My assumption here is that people would set up redundant proxies in the same region, which would be essentially identical from a latency/network perspective (although that is really up to the user). In a GCP case, I would have several Server proxies in the same GCP zone, all pointing to Game Servers hosted in that same zone. Sending data to any of the Server proxies is therefore essentially the same for all intents and purposes.

Does that make sense?

That being said, down the line we could add weighted load balancing, ping-time detection, or something else for more advanced load balancing options like you described above.

What do you think?

suom1 (Collaborator) commented Feb 12, 2020

Sorry for the late reply!

> Is the assumption here that the client proxy would send to every endpoint? My assumption was that it would be something like a round robin, so that a single packet would not get repeated, just sent to a different endpoint each time.

It was based on previous discussions (in a meeting) where it was mentioned that we would send data to X endpoints in order to have redundancy, and I think that's where my thought process picked it up.

> If I'm following what you said above, the concern is the extra bandwidth of duplicating packets? Is that correct? In that case, I'm not advocating for duplicate packets (unless that was something a user specifically configured/requested for their specific use case)

That was exactly my concern!

> My assumption here is that people would set up redundant proxies in the same region that would be essentially the same from a latency / network connection perspective (although it is really up to the user). So in a GCP case, I would have several Server proxies in the same GCP zone, all pointing to Game Servers hosted in the same zone. So therefore sending data to any of the Server proxies is essentially the same for all intents and purposes.

That's absolutely how I would assume most implementations would look. My use case comes from the possibility of using these proxies to route traffic to any provider.

Another potential setup would be to deploy proxies in all datacenters (even those you don't have servers in) and then use Google's internal network to transport the traffic.

That's where you would want the client to check latency. This might be a very specific solution, but one that I think most creators of latency-sensitive games would like to have.

> Does that make sense?
> What do you think?

It does make sense!
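The latency-aware variant discussed here might boil down to preferring the proxy with the lowest measured round-trip time. A sketch with hypothetical names, leaving out how RTTs would actually be probed:

```rust
/// Given endpoints with measured round-trip times in milliseconds,
/// pick the one with the lowest RTT.
fn lowest_latency<'a>(rtts: &'a [(&'a str, u32)]) -> Option<&'a str> {
    rtts.iter().min_by_key(|(_, ms)| *ms).map(|(endpoint, _)| *endpoint)
}

fn main() {
    // Proxies deployed in several datacenters; the client probes each one.
    let measured = [("europe-west1:7001", 34), ("us-east1:7001", 112)];
    println!("{:?}", lowest_latency(&measured));
}
```

In practice the RTTs would need periodic re-measurement, since network conditions shift over a session.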

markmandel added the good first issue and help wanted labels on Apr 29, 2020
iffyio added a commit that referenced this issue Jun 3, 2020
This commit adds RoundRobin and Random load balancing
support to a proxy as well as config support for multiple
addresses and load balancer policy on the client proxy config.
The default behavior, if no policy is set or the proxy is a
server proxy, is to send packets to all endpoints.

This also introduces the `rand` crate as a dependency, used by
the random load balancing implementation.

Resolves #3
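The Random policy from this commit might look roughly like the following sketch. The real code uses the `rand` crate; a tiny xorshift64 generator stands in for it here so the example has no dependencies, and all names are hypothetical:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Random endpoint selection, with a small xorshift64 generator
/// standing in for the `rand` crate mentioned in the commit.
struct Random {
    endpoints: Vec<String>,
    state: u64,
}

impl Random {
    fn new(endpoints: Vec<String>) -> Self {
        // Seed from the clock; `| 1` keeps the xorshift state non-zero.
        let seed = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_nanos() as u64
            | 1;
        Self { endpoints, state: seed }
    }

    fn pick(&mut self) -> &str {
        // One xorshift64 step, then reduce into the endpoint range.
        self.state ^= self.state << 13;
        self.state ^= self.state >> 7;
        self.state ^= self.state << 17;
        let i = (self.state % self.endpoints.len() as u64) as usize;
        &self.endpoints[i]
    }
}

fn main() {
    let mut lb = Random::new(vec!["127.0.0.1:7001".into(), "127.0.0.1:7002".into()]);
    println!("{}", lb.pick());
}
```

Random selection avoids the shared counter that round robin needs, at the cost of slightly uneven short-term distribution.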
markmandel pushed a commit that referenced this issue Jun 5, 2020
* Add client proxy load balancing support