
listening on new respondd multicast group #107

Open
anoymouserver opened this issue May 24, 2019 · 13 comments


@anoymouserver commented May 24, 2019

I can't get the new respondd multicast address ff05::2:1001 to work, but I don't know if it's nodejs or the network configuration.
The other multicast address ff02::2:1001 works without a problem.

{ Error: getaddrinfo ENOTFOUND ff05::2:1001%br-client
     at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:56:26)
  errno: 'ENOTFOUND',
  code: 'ENOTFOUND',
  syscall: 'getaddrinfo',
  hostname: 'ff05::2:1001%br-client' }

Excerpt of my (not working) config:

{
  "receiver": {
    "receivers": [
      {
        "module": "announced",
        "config": {
          "target": {
            "ip": "ff05::2:1001"
          }
        }
      }
    ],
    "ifaces": [
      "br-client"
    ]
  }
}
@T-X commented Sep 21, 2019

As remarked in nodejs/help#2073 (comment), too:

You can set a more specific multicast route for the multicast group and interface you want to send it to. Usually this should work:

$ ip -6 route add ff05::2/128 dev bat0
$ ip -6 route add ff05::2/128 dev bat0 table local

Also see: https://stackoverflow.com/a/13149495

But if hopglass could add a configuration option to bind the network socket to a specific interface, that would be the better solution, as it would also allow multiple instances on the same host (which the multicast route approach probably will not work with).

Anyway, getaddrinfo should only be called for "ff05::2:1001" and not "ff05::2:1001%br-client". A %<iface> suffix is only guaranteed to work for link-local addresses (ff02:..., fe80:..., etc.).
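
A quick way to see the difference with node's resolver (a small sketch; it assumes an interface named br-client exists on the host, and the ENOTFOUND result is the same error as in the log above):

'use strict'

const dns = require('dns')

// With a zone index on a non-link-local group the lookup fails,
// just like in the error log above (getaddrinfo ENOTFOUND).
dns.lookup('ff05::2:1001%br-client', { family: 6 }, function (err, address) {
  console.log('with zone index:', err ? err.code : address)
})

// The bare address resolves fine; the interface then has to be
// selected on the socket instead of in the address string.
dns.lookup('ff05::2:1001', { family: 6 }, function (err, address) {
  console.log('bare address:   ', err ? err.code : address)
})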

@T-X commented Sep 21, 2019

@anoymouserver: in "function retrieve()"
https://github.com/hopglass/hopglass-server/blob/master/modules/receiver/announced.js#L75

have you tried changing this:

collector.setMulticastInterface(ip + '%' + iface)
collector.send(req, 0, req.length, config.target.port, ip + '%' + iface, function (err) {

to this:

collector.setMulticastInterface('::%' + iface)
collector.send(req, 0, req.length, config.target.port, ip, function (err) {
...
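
For reference, a self-contained sketch of that approach (not the actual hopglass-server code; the interface name, port and request payload below are placeholders for illustration):

'use strict'

const dgram = require('dgram')

const iface = 'br-client'      // assumed mesh bridge interface
const group = 'ff05::2:1001'   // new respondd multicast group
const port  = 1001             // respondd port

const collector = dgram.createSocket('udp6')

collector.on('message', function (msg, rinfo) {
  console.log('reply from ' + rinfo.address + ': ' + msg.toString())
})

collector.bind(function () {
  // select the outgoing interface on the socket ...
  collector.setMulticastInterface('::%' + iface)

  // ... and pass the bare group address to send()
  const req = Buffer.from('GET nodeinfo')  // simplified request payload
  collector.send(req, 0, req.length, port, group, function (err) {
    if (err)
      console.error(err)
  })
})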
@anoymouserver commented Sep 23, 2019

Your suggested adjustment works fine using the old address, but with the new address I can only reach the servers running ext-respondd.

@T-X commented Sep 24, 2019

@anoymouserver: Hm, this sounds strange. Which Gluon version are those nodes running?

@rubo77 commented Oct 1, 2019

You can set a more specific multicast route for the multicast group and interface you want to send it to. Usually this should work:

$ ip -6 route add ff05::2/128 dev bat0
$ ip -6 route add ff05::2/128 dev bat0 table local

So is this a valid solution? I don't understand the ip commands, and we don't have a bat0 interface on our hopglass server.

For an ip-n00b: What would be the command to make old (2018.2.x) and new (2019.1) nodes work with hopglass? And on which server would we have to apply that rule?

@Adorfer commented Oct 3, 2019

  1. The really problematic cases are networks where (due to an "autoupdater turned off by default" policy in some communities) Gluon v2016 through v2019 run side by side, and will for quite some time.
  2. @rubo77: so far we have been addressing ff02 via the individual link-local networks, so we did not explicitly configure routing for the batman interface(s).
     Since we have map servers working for multiple layer-2 domains at the same time, once the same (non-link-local) group is used on different interfaces, separate routing tables will be needed, because normally you can route one prefix to only one interface. The hopglass servers would then have to be started per routing table, i.e. per interface.
@anoymouserver commented Nov 4, 2019

@T-X, most of the nodes are running Gluon v2017.1.8, some newer versions up to v2019.1.

I've narrowed it down to the iptables rules used by Gluon. They seem to block requests coming from 2a03: addresses. For some reason the data is requested using this global IP rather than the internal IP (which is used when requesting via ff02:).

@T-X commented Nov 5, 2019

That it uses a global source address makes sense: because it's a routable multicast destination, the source needs to be, or at least should be, contactable over multiple hops.

The 2a03: address, from which interface is it? Is it from br-wan or br-client?

@RalfJung commented Nov 6, 2019

We are also having trouble setting up the new multicast address on the hopglass server. I am not even sure if that's a hopglass thing; even just trying to use nc to send data to the new address fails:

$ nc -6 -q 3 -u "ff05::2:1001"%saarBR 1001
nc: getaddrinfo: Name or service not known

Doing the same with ff02 works fine. Seems like ff05 is a qualitatively different kind of multicast address and somehow it... doesn't exist here, or whatever that error message means?

EDIT:

so far we have been addressing ff02 via the individual link-local networks, so we did not explicitly configure routing for the batman interface(s).

Ah, this sounds like ff05 indeed needs some amount of extra configuration. Would be good to link to some guide for this from the gluon nodes; IPv6 multicast is a mystery even though I am otherwise fairly familiar with IPv6.^^

Also, if this is not link-local any more, is there some guidance on how to handle the multi-domain case? Our plan was to have a single routing table for all our domains, as they do not overlap. If this is a problem now, it seems like gluon really should support configuring that multicast address so that it can be deconflicted.

@RalfJung commented Nov 20, 2019

Actually in our case the only problem was that our hopglass-server was too old. After updating that to current master, the new address works just fine!

@petabyteboy commented Nov 21, 2019

Actually in our case the only problem was that our hopglass-server was too old. After updating that to current master, the new address works just fine!

Did you perform any additional IP/route configuration on the host? Can you post your hopglass-server configuration? I cannot reproduce it.

@rubo77 commented Nov 21, 2019

In Kiel, I upgraded to the latest hopglass, but still no 2019 nodes ;(

@RalfJung commented Nov 22, 2019

No, I did not do any route configuration. Also, the test here was whether any nodes still reported updates after the address change -- we haven't rolled out 2019.1 yet, so I can't say anything about those.

EDIT: The first few nodes updated to our experimental channel, and they also seem to work fine.

Our hopglass config template is:

{
  "core": {},
  "receiver": {
    "receivers": [
      {
        "module": "announced",
        "config": {
          "target": {
            "ip": "ff05::2:1001"
          }
        }
      },
      {
        "module": "aliases",
        "overlay": true
      }
    ],
    "ifaces": [
{%- for domain in pillar.domains %}
      "{{ domain.name }}BR"
      {%- if not loop.last %},{% endif -%}
{%- endfor %}
    ]
  },
  "provider": {
    "metricsOfflineTime": 65
  }
}
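
Rendered for a single (hypothetical) domain named "saar", so that the interface matches the saarBR used in the nc test above, this expands to:

{
  "core": {},
  "receiver": {
    "receivers": [
      {
        "module": "announced",
        "config": {
          "target": {
            "ip": "ff05::2:1001"
          }
        }
      },
      {
        "module": "aliases",
        "overlay": true
      }
    ],
    "ifaces": [
      "saarBR"
    ]
  },
  "provider": {
    "metricsOfflineTime": 65
  }
}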