home nodes depend on static exit node ips #23
This issue has been resolved in the latest version of makenode. A patch is also available at https://github.com/sudomesh/patches/tree/master/bug0023 .
I noticed that my home node has a file called /etc/resolv.conf.dnsmasq that also contains the exit node mesh IP:
See https://github.com/sudomesh/makenode/blob/master/configs/templates/files/etc/resolv.conf.dnsmasq#L3.
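For context, a hedged sketch of what such a resolv.conf.dnsmasq might contain (the address shown is the mesh exit node IP mentioned later in this thread; the real file's contents may differ):

```
# /etc/resolv.conf.dnsmasq (illustrative sketch only)
nameserver 100.64.0.42   # exit node mesh IP -- the static dependency at issue
```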
Looks like you addressed this in the patch, but maybe not in the makenode commit?
@bennlich nice catch!
I think this patch might need to include this change too, because the patch deletes the $INETEXITIP variable:
I was able to apply, revert, and reapply this patch without issues besides the above! The above issue is non-catastrophic; it just results in the following error getting logged:
@bennlich great to hear that you were able to successfully apply the patch at https://github.com/sudomesh/patches/tree/master/bug0023 ! Thanks for testing it. I made the changes you suggested to fix the (non-breaking) patch omission to remove the routing rule from
Does anyone know if this change works with existing extender node firmware builds? We attempted to complete a short mesh link with two Nanobridge M5 extender nodes connected to N600 home nodes and were able to reach the internet (i.e. ping 8.8.8.8) but not to resolve domain names (i.e. ping google.com). I'm concerned that this update to makenode conflicts with some DNS configuration on the extender nodes, preventing them from correctly routing DNS requests. More info about this can be found in #27
Most nodes have been patched. Closing issue. |
For clarity, this issue is still an issue, right? I.e. we're still using static exit node IPs in home node config. Now we're just using more static IPs than before, correct? |
@bennlich as far as I know we no longer use public exit node IPs. And you are right that we do use mesh exit node IPs (100.64.0.42 and *.43) for domain name services. Perhaps best to open a separate issue to investigate the specifics of the explicit DNS configuration. I leave it up to you to act on this.
@bennlich after some more thought (and a cup of tea), I figured that re-opening this issue might be a good idea. Thanks for pointing this out. Summary so far: we removed the exit node IPs (both public and mesh), only to find out that this introduced issues in resolving domain names (see #27). In the meantime, two exit node mesh IPs have been added to the configuration, so we at least have some redundancy. However, to fully decouple home nodes from specific exit node IPs, more research is needed.
@jhpoelen what kind of dynamic (as opposed to static, per issue title) configuration do you have in mind?
Perhaps easiest to accept any mesh IP using a regex or an IP mask. Home nodes and exit nodes are not that different anyway.
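As a sketch of the "IP mask" idea: mesh addresses in this thread fall in 100.64.0.0/10 (the CGNAT range), so a home node could accept any candidate exit IP in that range rather than one hardcoded address. A minimal POSIX-shell check; the function name and the assumption that all mesh IPs live in 100.64.0.0/10 are illustrative:

```shell
#!/bin/sh
# Return success if $1 falls inside 100.64.0.0/10
# (i.e. 100.64.0.0 through 100.127.255.255).
in_mesh_range() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    [ "$a" -eq 100 ] && [ "$b" -ge 64 ] && [ "$b" -le 127 ]
}

in_mesh_range 100.64.0.42 && echo "mesh" || echo "not mesh"   # prints "mesh"
in_mesh_range 8.8.8.8     && echo "mesh" || echo "not mesh"   # prints "not mesh"
```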
At what point in the configuration process? |
@eenblam what I believe @jhpoelen is suggesting is that home nodes should be able to figure out which IP to use for resolving domains by parsing some output after establishing a tunnel. I'm happy to look at how to do this and explain how it could be done. I'm thinking a uci command in tunnel_hook should do it; check out the uci commands that are already there.
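A hedged sketch of what that tunnel_hook addition might look like. Everything here is an assumption, not sudomesh's actual code: the interface name l2tp0, the idea of reading the exit node's mesh IP from the tunnel's route, and the dnsmasq uci option name may all differ on a real node. To keep the example self-contained, it parses sample `ip route` output from a variable; on a node you would run `ip route show dev l2tp0` instead.

```shell
#!/bin/sh
# Illustrative only: discover the exit node's mesh IP from the freshly
# established tunnel instead of hardcoding it.
# Sample output of `ip route show dev l2tp0` (hypothetical):
route_output="default via 100.64.0.42 dev l2tp0
100.64.0.0/10 dev l2tp0 scope link"

# Take the gateway ("via") address of the first route that has one.
EXIT_IP=$(printf '%s\n' "$route_output" | awk '/via/ {print $3; exit}')
echo "$EXIT_IP"   # prints 100.64.0.42

# Then point dnsmasq at it (the uci path/option here is an assumption):
#   uci set dhcp.@dnsmasq[0].server="$EXIT_IP"
#   uci commit dhcp
#   /etc/init.d/dnsmasq restart
```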
@paidforby I agree with your statement above. @eenblam also, I mistakenly thought you were talking about sudomesh/monitor#6 .
@paidforby @jhpoelen Ah, I see. I was thinking in terms of the public IPs of the exit nodes themselves, not DNS, from the original issue comment ( |
@jhpoelen Oh, yeah, that's actually a good move on monitor/issues/6. |
Currently, home nodes have configuration that is tied to specific exit node IPs (mesh + internet), in addition to the required list of brokers for the tunneldigger client in /etc/config/tunneldigger.
I suggest removing all dependencies on exit node IPs (mesh + internet) outside of the configuration in /etc/config/tunneldigger.
Why now? We're setting up a second (ed.: third, actually...) exit node and would like to be able to bridge the mesh between the two. This means the exit nodes have to have distinct mesh IPs. Right now, home nodes are hardcoded to expect an exit node at 100.65.0.42 .
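To make the proposal concrete: the intent is that a broker list like the one below would be the only place an exit node address appears in home node configuration. This is an illustrative sketch only; the hostnames are made up and the exact tunneldigger UCI option names may differ from the real schema:

```
# /etc/config/tunneldigger (illustrative sketch -- option names are assumptions)
config broker
        list address 'exit1.example.mesh:8942'
        list address 'exit2.example.mesh:8942'   # redundancy across exit nodes
        option interface 'l2tp0'
        option enabled '1'
```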