
home nodes depend on static exit node ips #23

Open
jhpoelen opened this issue Mar 21, 2018 · 19 comments
@jhpoelen
Contributor

jhpoelen commented Mar 21, 2018

Currently, home nodes have configuration that is tied to a specific exit node IP (both mesh and internet), in addition to the required list of brokers for the tunneldigger client in /etc/config/tunneldigger.

I suggest removing dependencies on exit node IPs (mesh and internet) outside of the configuration in /etc/config/tunneldigger.

Why now? We are setting up a second (ed. third, actually...) exit node and would like to be able to bridge the mesh between the two. This means the exit nodes have to have distinct mesh IPs. Right now, home nodes are hardcoded to expect an exit node at 100.65.0.42 .
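For reference, the broker list that this issue proposes keeping as the only exit-node-specific configuration lives in /etc/config/tunneldigger. A minimal sketch of what such a file might look like, assuming the stock tunneldigger UCI schema (the broker address, UUID, and interface name below are illustrative, not actual sudomesh values):

```
config broker
        # One or more brokers (exit nodes) the client may tunnel to;
        # the tunneldigger client picks among them at connect time.
        list address 'exit.example.sudomesh.org:8942'
        option uuid 'node-00-00-00-00-00-00'
        option interface 'l2tp0'
        option enabled '1'
```

Keeping all exit node addresses in this one file is what would let the rest of the home node configuration stay exit-node-agnostic.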

jhpoelen pushed a commit to sudomesh/patches that referenced this issue Mar 21, 2018
jhpoelen pushed a commit to sudomesh/makenode that referenced this issue Mar 21, 2018
@jhpoelen
Contributor Author

This issue has been resolved in the latest version of makenode. Also, a patch is available at https://github.com/sudomesh/patches/tree/master/bug0023 .

@bennlich
Collaborator

I notice on my home node I have a file called /etc/resolv.conf.dnsmasq that also contains the exit node mesh ip:

root@pattyspuddles:~# cat /etc/resolv.conf.dnsmasq
# These are the upstream DNS servers used by dnsmasq

nameserver 100.64.0.42 # sudomesh exit server
nameserver 209.244.0.3 # Level3 primary DNS
nameserver 209.244.0.4 # Level3 secondary DNS
nameserver 84.200.69.80 # dns.watch primary DNS
nameserver 84.200.70.40 # dns.watch secondary DNS
nameserver 2001:1608:10:25::1c04:b12f # dns.watch primary IPv6 DNS
nameserver 2001:1608:10:25::9249:d69b # dns.watch secondary IPV6 DNS

See https://github.com/sudomesh/makenode/blob/master/configs/templates/files/etc/resolv.conf.dnsmasq#L3.

@bennlich
Collaborator

Looks like you addressed this in the patch, but maybe not in the makenode commit?

jhpoelen pushed a commit to sudomesh/makenode that referenced this issue Mar 22, 2018
@jhpoelen
Contributor Author

@bennlich nice catch!

@bennlich
Collaborator

bennlich commented Mar 23, 2018

I think this patch might need to include this change too:
sudomesh/makenode@f23342d

(because the patch deletes the $INETEXITIP variable)

@bennlich
Collaborator

I was able to apply, revert, and reapply this patch without issues besides the above! The above issue is non-catastrophic; it just results in the following error getting logged:

Thu Mar 22 22:42:04 2018 user.emerg syslog: Error: an inet prefix is expected rather than "/32".

@jhpoelen
Contributor Author

jhpoelen commented Mar 23, 2018

@bennlich great to hear that you were able to successfully apply patch https://github.com/sudomesh/patches/tree/master/bug0023 ! Thanks for testing it. I made the changes you suggested to fix the (non-breaking) patch omission, removing the routing rule from /etc/init.d/meshrouting .

@paidforby

paidforby commented Apr 9, 2018

Does anyone know if this change works with existing extender node firmware builds?

We attempted to complete a short mesh link with two Nanobridge M5 extender nodes connected to N600 home nodes, and were able to reach the internet (i.e. ping 8.8.8.8) but not able to resolve domain names (i.e. ping google.com). I'm concerned that this update to makenode conflicts with some DNS configuration on the extender nodes, preventing them from correctly routing DNS requests. More info about this can be found in #27

@jhpoelen
Contributor Author

jhpoelen commented May 2, 2018

Most nodes have been patched. Closing issue.

jhpoelen closed this as completed May 2, 2018
@bennlich
Collaborator

bennlich commented May 2, 2018

For clarity, this issue is still an issue, right? I.e. we're still using static exit node IPs in the home node config. Now we're just using more static IPs than before, correct?

@jhpoelen
Contributor Author

jhpoelen commented May 2, 2018

@bennlich as far as I know, we no longer use public exit node IPs. And you are right that we do use mesh exit node IPs (100.64.0.42 and *.43) for domain name services. Perhaps best to open a separate issue to investigate the specifics of the explicit DNS configuration. I leave it up to you to act on this.

@jhpoelen
Contributor Author

jhpoelen commented May 3, 2018

@bennlich after some more thought (and a cup of tea), I figured that re-opening this issue might be a good idea. Thanks for pointing this out.

Summary so far: we did remove the exit node IPs (both public and mesh), only to find out that this introduced issues in resolving domain names (see #27). In the meantime, two exit node mesh IPs have been added to the configuration. This way, we at least have some redundancy.

However, to fully decouple home nodes from specific exit node IPs, more research is needed.

@eenblam
Member

eenblam commented May 15, 2018

@jhpoelen what kind of dynamic (as opposed to static, per issue title) configuration do you have in mind?

  • IPs can be configured when first provisioning the node?
  • IPs can be configured through some (to be created) interface?
  • Nodes poll a broker for IPs of active exit nodes?

@jhpoelen
Contributor Author

Perhaps the easiest approach is to accept any mesh IP using a regex or some IP mask. Home nodes and exit nodes are not that different anyway.
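One possible reading of "some IP mask": rather than matching one hardcoded exit node address, a home node could accept any peer whose address falls within the mesh range. A minimal pure-shell sketch, assuming the mesh uses the 100.64.0.0/10 block (the function name is hypothetical, not part of any existing sudomesh script):

```shell
# Return success if the given IPv4 address falls in 100.64.0.0/10,
# i.e. 100.64.0.0 - 100.127.255.255, instead of matching one fixed IP.
in_mesh_range() {
  ip="$1"
  first=${ip%%.*}       # first octet
  rest=${ip#*.}
  second=${rest%%.*}    # second octet
  [ "$first" -eq 100 ] && [ "$second" -ge 64 ] && [ "$second" -le 127 ]
}

in_mesh_range 100.65.0.42 && echo "mesh"      # prints "mesh"
in_mesh_range 8.8.8.8 || echo "not mesh"      # prints "not mesh"
```

This would let either exit node's mesh IP (100.64.0.42 or *.43, or any future one) pass the same check.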

@eenblam
Member

eenblam commented May 15, 2018

take any mesh ip

At what point in the configuration process?

@paidforby

@eenblam what I believe @jhpoelen is suggesting is that home nodes should be able to figure out which IP to use for resolving domains by parsing some output after establishing a tunnel. I'm happy to look at how to do this and explain how it could be done. I'm thinking that a uci command in tunnel_hook should do it; check out the uci commands that are already there.
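A sketch of the kind of parsing described above: after the tunnel comes up, tunnel_hook could derive the exit node's mesh IP from the tunnel interface's peer address instead of using a hardcoded value. Everything below is an assumption for illustration — the interface name, the sample `ip addr` line, and the uci keys are hypothetical, not captured from a real node:

```shell
# Sample output line from `ip addr show dev l2tp0` (illustrative only).
sample_line="inet 100.64.0.42 peer 100.64.0.43/32 scope global l2tp0"

# Extract the peer (exit node) address, dropping the /32 prefix length.
exit_ip=$(echo "$sample_line" | sed -n 's/.*peer \([0-9.]*\).*/\1/p')
echo "$exit_ip"   # prints 100.64.0.43

# On a real OpenWrt node, tunnel_hook might then point dnsmasq at it:
#   uci set dhcp.@dnsmasq[0].server="$exit_ip"
#   uci commit dhcp && /etc/init.d/dnsmasq restart
```

This would make the DNS upstream follow whichever exit node the tunnel actually lands on, rather than the value baked in at makenode time.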

@jhpoelen
Contributor Author

@paidforby I agree with your statement above.

@eenblam also, I mistakenly thought you were talking about sudomesh/monitor#6 .

@eenblam
Member

eenblam commented May 15, 2018

@paidforby @jhpoelen Ah, I see. I was thinking in terms of the public IPs of the exit nodes themselves, not DNS, from the original issue comment ("Right now, home nodes are hardcoded to expect an exit node at 100.65.0.42").

@eenblam
Member

eenblam commented May 15, 2018

@jhpoelen Oh, yeah, that's actually a good move on monitor/issues/6.
