
feat: implement p2p based networking using iroh-net #404

Merged: 19 commits into fishfolk:main, May 17, 2024

Conversation

@dignifiedquire (Contributor) commented May 15, 2024

This is the start of running the networking connections over iroh, allowing direct connections between players.

General Architecture

  • The online matchmaking server no longer proxies connections; it only introduces players to each other. After that, iroh connections are used (either direct or relayed, end-to-end encrypted in all cases).
  • The LAN and online discovery services now only do discovery, i.e. finding other players. Once that has happened, all work is handed off to Socket, the p2p implementation for transferring game state between players.
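The handoff described above can be sketched roughly as follows. Apart from `Socket` (and the `NetworkSocket` trait it implements in bones), the names here are illustrative stand-ins, not the real bones or iroh-net API:

```rust
#[derive(Clone, Debug, PartialEq)]
struct NodeId(String);

/// A discovery service (LAN browser or online matchmaker) only
/// introduces players; it never carries game traffic.
trait Discovery {
    fn find_players(&self) -> Vec<NodeId>;
}

/// Stand-in online matchmaker: introduces players, nothing more.
struct Matchmaker;

impl Discovery for Matchmaker {
    fn find_players(&self) -> Vec<NodeId> {
        vec![NodeId("player-a".into()), NodeId("player-b".into())]
    }
}

/// After discovery, all game-state transfer happens peer-to-peer.
struct Socket {
    peers: Vec<NodeId>,
}

impl Socket {
    fn connect(peers: Vec<NodeId>) -> Self {
        // In the real code this opens iroh connections to each peer,
        // either direct or relayed, end-to-end encrypted either way.
        Socket { peers }
    }

    fn send_game_state(&self, _state: &[u8]) {
        // Fan the update out to every connected peer.
    }
}

fn main() {
    // The matchmaker introduces the players...
    let peers = Matchmaker.find_players();
    // ...then drops out: from here on, state flows only over p2p.
    let socket = Socket::connect(peers);
    socket.send_game_state(b"frame 0 input");
    println!("connected to {} peers", socket.peers.len());
}
```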

Testing it out

There is a branch of jumpy using this, which can be used to try it out:
https://github.com/dignifiedquire/jumpy/tree/feat-iroh-networking

Matchmaker Servers

  • Europe: dkv5qztdu75wgtkyukkmhemz25adco7jrplzockgqgzvzl3d3z4q
  • US: tkj2gohdqx2fjqm24s6l6m7ehecntlcz72kzliwjy5ezs6yfdo6q

Work to be done

  • browser support: this will likely have to be disabled in some form when running in wasm (for now)
  • cleanup
  • better error handling

@zicklag (Member) left a comment:

I just reviewed the code, and I'm very happy with it. 👍

I haven't tested it yet; I'll try to test it on LAN if I get the chance, but all the changes look straightforward. If you don't count the Cargo.lock file, you deleted more than twice as much code as you added!

My only minor thought is that before merging we should try to figure out which tokio features we need and remove the full feature.

Huge thanks for this! The stuff this can help enable in the bigger picture is really exciting!

@MaxCWhitehead (Collaborator) commented:

Yeah, this is great work; excited to play with it more. Once we get this into bones, we can deploy another matchmaker, switch jumpy over, and see how things go.

@MaxCWhitehead (Collaborator) commented May 16, 2024

I tried testing this with a docker image running on the same machine as the clients, and found there is high latency and spam of this:

2024-05-16T03:58:55.046961Z  INFO poll_send{me=juiy6aqwnxrhan5j}:get_send_addrs{node=g3vsalpcnyeyqk6m}: iroh_net::magicsock::node_map::node_state: new connection type typ=mixed

I am curious how publishing the server's IP works when trying to connect to a node ID from inside the LAN (or, in my case, on the same host). It seemed to connect, but did not seem to have a reliable / direct UDP connection. (It seems possible there is something we have to change to handle this better; I just wanted to note it, because while I would normally specify the localhost or LAN address to do this, I am no longer sure how to do so with a node ID.)

Anyhow, I haven't debugged this much, but I'm going to throw this up in a more legit configuration on a VPS in the same region as another non-iroh matchmaker to compare with.

EDIT:
I think I missed the fact that we are on p2p now, not just relaying through the matchmaker with iroh connections. Testing with a remote server and two local clients, I still get the same logs and not-great ping (it seems we are using the relay https://use1-1.relay.iroh.network, which I am 85ms away from).

But the logs show my two clients are sending to each other (our server is no longer a man in the middle). So the issue is not related to the location of the matchmaking server, but is impacted by the relay, since we are not able to negotiate direct connections on the same host. Will have to test with two clients on separate hosts in the same LAN next.

@MaxCWhitehead (Collaborator) commented May 16, 2024

In case it's helpful for anyone, I've got a tiny server running the matchmaker (as of commit e075fc6a, in Texas, USA) that can be used for testing.

node id:
37zobpkk3vyheqonfph55vh5p5w5wn2ipx6qixcffkfox4gdzpsq

@dignifiedquire (Contributor, author) commented:

> But the logs show my two clients are sending to each other (our server is not MITM anymore). So issue is not related to location of the matchmaking server, but is impacted by relay as not able to negotiate direct connections on same host. Will have to test with two clients on separate hosts in same lan next.

Yes, that is what I suspected the issue to be as well. In particular, the log line showing the mixed connection type means the system is not able to determine whether the direct connections are fully reliable.

Running locally on my machine (two games, same host), things worked out well; it might be that we are having trouble detecting the appropriate IP addresses due to Docker's network isolation.

@dignifiedquire (Contributor, author) commented:

> My only minor thought is that before merging we should try to figure out which tokio features we need and remove the full feature.

Yes, fully agreed. I use this to say "just give me everything" during development, and then remove the pieces I don't end up needing at the end.

@Arqu commented May 16, 2024

I'm just going to sling a couple of wild guesses; maybe they lead somewhere.

  1. Is Docker running the network in host or bridge mode for the image? Bridge might interfere here, as you basically get a Docker "subnet" which then reports bogus IP addresses internally. Technically you would need a local quasi-STUN service so that it reports the right IP address of the machine, not the container.
  2. Port matching might be at play; I don't know if it's possible to just expose all ports in bridge mode.

Ideally, the next thing to test is running in host mode, which binds to your local network interface/stack. That should hopefully resolve both and give us a better understanding of the issue.

```rust
    }
}

impl NetworkSocket for Socket {
```

@dignifiedquire (Contributor, author) commented on this:

Socket is the only implementor of this trait now, so it could be removed as well, but that's your call.

@MaxCWhitehead (Collaborator) commented May 16, 2024

I'll try adjusting the container's network mode on the local / remote deployments and see if that helps. I'll also test the matchmaker running locally without a container. (The problem doesn't have to block the merge; I'm just sorting it out.)

@dignifiedquire dignifiedquire changed the title [WIP] Implement p2p based networking using iroh-net Implement p2p based networking using iroh-net May 16, 2024
@dignifiedquire dignifiedquire marked this pull request as ready for review May 16, 2024 20:30
@MaxCWhitehead (Collaborator) commented May 16, 2024

Updating the remote matchmaker server container to run with host networking (`docker run -p 8943:8943/udp --network host matchmaker-iroh`) did not resolve the issue with two clients on the same host. It is up at 37zobpkk3vyheqonfph55vh5p5w5wn2ipx6qixcffkfox4gdzpsq in case anyone wants to test the configuration, but I'm fairly certain it is not a Docker issue.

I am also running another matchmaker server on a VPS in the same region natively (`cargo run`) instead of in a container, at 7v77r7qofyrz4ms4y6offrfji377l5qqatxjqa34z57tgcv7fpkq, and I get the same issue. So it is likely an issue with my local network configuration.

Will do some more tests and look into it further as I have time. (The next test is two clients on separate hosts under the same LAN; then I can figure out whether I should be looking at host configuration or the router / NAT.)

@MaxCWhitehead (Collaborator) commented:

@dignifiedquire re:

> browser, will likely have to be in some form of disabling this when running in wasm (for now)

Not sure if you are tracking this, but we do not support multiplayer in wasm; it's basically just a demo at the moment. So disabling all networking in wasm is a sufficient resolution; we just need to make sure the wasm target still builds.
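Disabling networking for wasm builds can be done with conditional compilation along these lines; the function name is hypothetical, only the `cfg` mechanism is the point:

```rust
/// Native builds: online multiplayer (iroh-net) is available.
#[cfg(not(target_arch = "wasm32"))]
fn networking_supported() -> bool {
    true
}

/// Browser builds are demo-only for now: all networking is compiled
/// out, but the crate still builds for wasm32.
#[cfg(target_arch = "wasm32")]
fn networking_supported() -> bool {
    false
}

fn main() {
    println!("multiplayer available: {}", networking_supported());
}
```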

@dignifiedquire dignifiedquire changed the title Implement p2p based networking using iroh-net feat: implement p2p based networking using iroh-net May 17, 2024

```rust
// Generate certificate
let (cert, key) = certs::generate_self_signed_cert()?;
let secret_key = iroh_net::key::SecretKey::generate();
```

@MaxCWhitehead (Collaborator) commented on this:

Thinking about how we deploy our matchmaker: if I understood correctly, we want a persistent secret key for the server so that the node ID (public key) stays the same. Then, if we set a default matchmaking server by node_id in Jumpy / on the game side, it will still be connectable if we restart or move the server.

Maybe read it from an env var, or generate one if not found, so we can configure a deployment with a persistent ID?

If I understood correctly, it looks like we are using the global discovery method mentioned here; the alternative would be to ship a default ticket? I think discovery sounds like a good bet, so we don't have to reissue a ticket if we move the server away from the closest DERP. I was curious about the mention that the DNS record must be republished because DHT nodes may go away: is this something we need to handle, or is there a service / loop somewhere here refreshing this occasionally while the server is up? Not sure how much of a concern this is.

@MaxCWhitehead (Collaborator) commented:

I read the other article on DNS again; it sounds like we may be using a centralized DNS server and the DHT stuff is not a concern here. Correct me if I misunderstood, though. Thanks!

@zicklag (Member) commented May 18, 2024:

Yeah, as far as I understand so far, it updates a centralized DNS record so that other nodes can resolve our node ID. So I think if we persist our key, then we can keep the same node ID and people will be able to find us, even on a different server/IP.

We should probably read the secret key from an environment variable, and I like the idea of generating one if it isn't supplied; we should just log a warning to make it clear that the node ID will be different every time the server is started.
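A minimal sketch of that env-var-or-generate startup logic, assuming a hypothetical `IROH_SECRET_KEY` variable holding 64 hex characters. In the real matchmaker the parsing and generation would go through `iroh_net::key::SecretKey` rather than raw bytes, and the fallback RNG here is a non-cryptographic stand-in:

```rust
use std::env;
use std::hash::{BuildHasher, Hasher};

/// Load a persistent 32-byte secret key from the environment, or fall
/// back to an ephemeral one with a warning, as discussed above.
fn server_secret_key() -> [u8; 32] {
    match env::var("IROH_SECRET_KEY") {
        Ok(hex) => parse_hex_key(&hex).expect("IROH_SECRET_KEY must be 64 hex chars"),
        Err(_) => {
            eprintln!(
                "warning: IROH_SECRET_KEY not set; generating an ephemeral key, \
                 so the node ID will change on every restart"
            );
            random_key()
        }
    }
}

/// Decode a 64-character hex string into 32 key bytes.
fn parse_hex_key(s: &str) -> Option<[u8; 32]> {
    let s = s.trim();
    if !s.is_ascii() || s.len() != 64 {
        return None;
    }
    let mut key = [0u8; 32];
    for (i, byte) in key.iter_mut().enumerate() {
        *byte = u8::from_str_radix(&s[2 * i..2 * i + 2], 16).ok()?;
    }
    Some(key)
}

/// Stand-in for a real CSPRNG (iroh-net's `SecretKey::generate` uses
/// one); NOT cryptographically secure, for illustration only.
fn random_key() -> [u8; 32] {
    let mut key = [0u8; 32];
    for chunk in key.chunks_mut(8) {
        // Each RandomState is seeded with fresh process-local entropy.
        let h = std::collections::hash_map::RandomState::new()
            .build_hasher()
            .finish();
        chunk.copy_from_slice(&h.to_le_bytes()[..chunk.len()]);
    }
    key
}

fn main() {
    let key = server_secret_key();
    println!("loaded server key ({} bytes)", key.len());
}
```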

@dignifiedquire (Contributor, author) commented:

In the main iroh library we store the keys on disk in the same format as OpenSSH does; this would be easy to add here, but we could also use an env variable.

@MaxCWhitehead MaxCWhitehead added this pull request to the merge queue May 17, 2024
Merged via the queue into fishfolk:main with commit 3cb6a88 May 17, 2024
10 of 11 checks passed

4 participants