
[Netplay] New relay servers #14447

Open
ghost opened this issue Sep 28, 2022 · 11 comments

Comments

@ghost

ghost commented Sep 28, 2022

Description

I've brought this up before, and I'm bringing it up again now that relay server demand has increased since 1.10.0.

Both the lobby server and the western Europe relay server should be moved from Google Cloud to RetroArch/libretro's infrastructure in order to reduce our expenses. This would cut our Google Cloud costs by half or more; that money could then go into launching about 3 new relay servers, covering a greater area.

My location suggestions for those 3 servers are:

  1. West Coast, USA (covers North America's west coast).
  2. Northern Brazil (covers northern South America and Central America).
  3. Australia (covers Oceania).

Expected behavior

Wider coverage from our relay servers.

Actual behavior

Limited coverage from our relay servers.

@LibretroAdmin
Contributor

LibretroAdmin commented Sep 28, 2022

OK, so who is going to make this happen? I would need help from you directly through DM to make this possible. I cannot do this on my own.

@ghost
Author

ghost commented Sep 28, 2022

Whoever manages the internal infrastructure can just set up https://github.com/libretro/netplay-lobby-server-go and https://github.com/libretro/netplay-tunnel-server.
After that, it's just a matter of deleting the Google Cloud VMs and updating lobby.libretro.com to point to the new address. We could also start supporting IPv6 for the lobby server this way, since we don't have IPv6 on Google Cloud.

I can get rid of the old VMs and set up the new ones, but we must have those two running on our infrastructure first.
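
For illustration, the move would look roughly like this on the new machine (a rough sketch only; the paths and build steps here are assumptions, and each repo's README has the actual instructions):

```sh
# Rough sketch, not the actual deployment; see each repo's README.
git clone https://github.com/libretro/netplay-lobby-server-go
cd netplay-lobby-server-go
go build ./...   # the lobby server is Go and has to be compiled first

cd ..
git clone https://github.com/libretro/netplay-tunnel-server
# build and run per its README; this is what the relay servers run
```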

@LibretroAdmin
Contributor

There is no one who manages it.

That is why, if you are insistent on this being done, you will have to work directly with me to get it set up. There is no one else in charge. Otherwise, the current infrastructure has to remain as is.

Again, if you want to work with me on getting this set up, then by all means. But we'll have to communicate directly in real time to get it done, and I'll have to give you access to the server side to make it happen.

@ghost
Author

ghost commented Sep 28, 2022

Isn't @m4xw managing that?

@LibretroAdmin
Contributor

LibretroAdmin commented Sep 28, 2022

No, he has a regular day job. I sometimes ask him for help with maintenance-related issues, but he does not have a lot of free time.

So again, it would fall on you and me to do this. And since I don't know most of the infrastructure, it would probably fall 95% on you. If that is too complex or daunting, I suggest we stick with the current infrastructure instead. If you think you can do it, fair enough. But inheriting even more technical debt is a huge issue for the project: if people go away, we suddenly have a problem keeping things moving forward. People who take over parts of RetroArch's infrastructure have to stick around, or their leaving becomes an unacceptable level of risk.

@ghost
Author

ghost commented Sep 28, 2022

I don't know anything about how things are set up internally, but it should only be a matter of running those two projects. The configs and initialization scripts from Google Cloud can be used here as well.

As it is right now, we are wasting money that could go towards improving the project.

@LibretroAdmin
Contributor

Well, the point still remains: if we are to proceed with replacing the current system, I'd need you to do most of the work of replacing it and maintaining it; otherwise it becomes a point of failure and technical debt. There is no infrastructure team, so to speak; what we currently have works and can maintain itself.

@ghost
Author

ghost commented Sep 29, 2022

Maintenance remains the same; it's just moving software from one machine to another. I guess I can do the move if given access.

The outer layer of www.libretro.com is running through Cloudflare, so we might need to set that up as well for lobby.libretro.com.
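
Once the DNS is updated, a quick sanity check could look something like this (assuming standard tooling; the `server: cloudflare` response header is simply how a Cloudflare-proxied record typically shows up):

```sh
# Sketch: verify lobby.libretro.com after the move
dig +short lobby.libretro.com A      # should point at the new address
dig +short lobby.libretro.com AAAA   # should resolve once we add IPv6
curl -sI https://lobby.libretro.com | grep -i '^server:'   # "cloudflare" if proxied
```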

@gouchi
Member

gouchi commented Sep 29, 2022

If possible we should take this opportunity to document the process.

@ghost
Author

ghost commented Sep 29, 2022

> If possible we should take this opportunity to document the process.

Software-wise, it's just running those two projects; the lobby server has to be compiled with Go first, and both have simple configs that either contain descriptions inside or require no further explanation. That's all.

The problem lies in the different infrastructures, which have different ways of setting up new VMs.
The lobby server and relay servers run on Google Cloud, which requires creating VMs, groups, etc. through Google Cloud's console.
If the infrastructure changes, then any documentation of this process becomes invalid, and I am pretty sure we already have notes on how to work with the Google Cloud side.
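
For completeness, the console steps do have CLI equivalents that are easier to keep in notes; a placeholder example (the instance name, zone, and machine type here are made up, not our real setup):

```sh
# Placeholder values throughout; not our actual configuration.
gcloud compute instances create netplay-relay-example \
    --zone=us-west1-a \
    --machine-type=e2-small \
    --image-family=debian-12 \
    --image-project=debian-cloud
```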

@LibretroAdmin
Contributor

LibretroAdmin commented Oct 4, 2022

> Maintenance remains the same; it's just moving software from one machine to another. I guess I can do the move if given access.
>
> The outer layer of www.libretro.com is running through Cloudflare, so we might need to set that up as well for lobby.libretro.com.

I'd still be fine with doing this, for the record. All we'd need to do is agree on a communication method. I'd be fine with setting up a temporary account on IRC for this purpose if that's better.

Note that this is based on the understanding that there is no change in maintenance or points of failure.
