as_macro not added to IX-F JSON if read from PeeringDB #113

Closed
bluikko opened this issue Oct 17, 2022 · 8 comments

@bluikko
Contributor

bluikko commented Oct 17, 2022

It would seem that the as_macro field is added to the generated IX-F JSON export only if it is passed in clients.yml, and is missing if it is read from the PeeringDB AS-set record.

Since we (and likely everyone else) try to automate this as much as possible, meaning PeeringDB is used wherever possible, the resulting IX-F JSON export will be missing as_macro for most clients. It would be nice if as_macro were added even when it is read from PeeringDB.

If the current limitation is there to avoid making queries to PeeringDB when generating the IX-F JSON: we typically generate the route server configs first, so the AS-set could already be cached.

@pierky
Owner

pierky commented Nov 19, 2022

Hi @bluikko,

I think the tool already allows dealing with the use case you mentioned, but I might be wrong, or maybe I misunderstood your request here.

When the clients-from-peeringdb command is used to build the clients.yml starting from PeeringDB records, a file like this is generated:

asns:
  AS112:
    as_sets:
    - AS112
  AS15169:
    as_sets:
    - RADB::AS-GOOGLE
clients:
- asn: 112
  ip:
  - 192.0.2.1
  - 2001:db8::1
- asn: 15169
  ip:
  - 192.0.2.2
  - 2001:db8::2
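
For reference, a clients.yml like the one above can be produced with an invocation along these lines (just a sketch: the trailing argument stands for the PeeringDB IX LAN ID, and the exact syntax should be verified with arouteserver clients-from-peeringdb --help):

arouteserver clients-from-peeringdb -o clients.yml 1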

When the command ixf-member-export is executed and instructed to use the clients.yml file generated above, this is what it produces:

{
  "version": "1.0",
  "timestamp": "2022-11-19T14:11:59Z",
  "ixp_list": [
    {
      "ixp_id": 0,
      "ixf_id": 1,
      "shortname": "test",
      "vlan": [
        {
          "id": 0
        }
      ]
    }
  ],
  "member_list": [
    {
      "asnum": 112,
      "connection_list": [
        {
          "ixp_id": 0,
          "vlan_list": [
            {
              "vlan_id": 0,
              "ipv4": {
                "address": "192.0.2.1",
                "as_macro": "AS112"
              }
            },
            {
              "vlan_id": 0,
              "ipv6": {
                "address": "2001:db8::1",
                "as_macro": "AS112"
              }
            }
          ]
        }
      ]
    },
    {
      "asnum": 15169,
      "connection_list": [
        {
          "ixp_id": 0,
          "vlan_list": [
            {
              "vlan_id": 0,
              "ipv4": {
                "address": "192.0.2.2",
                "as_macro": "RADB::AS-GOOGLE"
              }
            },
            {
              "vlan_id": 0,
              "ipv6": {
                "address": "2001:db8::2",
                "as_macro": "RADB::AS-GOOGLE"
              }
            }
          ]
        }
      ]
    }
  ]
}
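
For completeness, an export like the one above can be obtained with something like the following (a sketch: "test" and "1" stand for the IXP short name and its IXF ID, matching the values shown in the JSON):

arouteserver ixf-member-export --clients clients.yml -o ixf.json test 1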

The as_macro is present in the export file, set to the values retrieved from PeeringDB.

How far is this from what you were asking for? Am I missing anything? Maybe you're encountering some sort of bug?

@bluikko
Contributor Author

bluikko commented Nov 20, 2022

Am I missing anything? Maybe you're encountering some sort of bug?

Your understanding is exactly right. I am not seeing the as_macro fields in the resulting IX-F JSON file. Example for AS112:

    {
      "asnum": 112,
      "name": "AS112",
      "connection_list": [
        {
          "ixp_id": 1,
          "vlan_list": [
            {
              "vlan_id": 0,
              "ipv4": {
                "address": "192.0.2.1"
              }
            },
            {
              "vlan_id": 0,
              "ipv6": {
                "address": "2001:db8::1"
              }
            }
          ]
        }
      ]
    },

I have only anonymized the IPv4/6 addresses in the above extract from the generated IX-F JSON.

I just noticed that the run to generate the IX-F JSON prints only a few lines, as opposed to the few hundred lines from the run to generate the RS config files, which include several lines about PeeringDB etc.
Maybe the behavior I am seeing is due to how I am running arouteserver? First I run arouteserver to generate each RS BIRD config. Then the IX-F JSON is generated using the Docker image with something like:

arouteserver ixf-member-export --clients /root/clients.yml --merge-file /root/ix-f-staticdata.yml \
 --ixp_id 1 -o /root/ix-f.json <IXPNAME> <IXPID>

I then added the arouteserver.yml file after noticing that I passed it as a Docker volume to the RS config generation but not to the IX-F JSON generation; that did not help. I could not see anything else obviously wrong, or anything to explain why ixf-member-export would not query PeeringDB.

Since your example JSON includes as_macro, I cannot at this point rule out the possibility that I am doing something wrong. I will try to look at my particular setup first before claiming it is a bug.

@bluikko
Contributor Author

bluikko commented Nov 20, 2022

While working on troubleshooting this I noticed that run.sh can currently only be used if none of the options need to be changed from their default values - there is no way to pass arbitrary parameters to arouteserver. For example, I would like to pass a --cache-dir value to make sure a common Docker volume is used across the several runs of arouteserver.
At least I would find it very useful to have some env var for passing arbitrary parameters, say CUSTOM_PARAMETERS or similar, that is simply expanded somewhere around line 195 of run.sh (see the sketch below).
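
Purely as a sketch of the idea (CUSTOM_PARAMETERS is a hypothetical variable, and the arouteserver invocation is a stand-in for whatever run.sh actually executes around that point):

# Hypothetical run.sh fragment, not current behavior. The caller would
# set e.g. CUSTOM_PARAMETERS="--cache-dir /var/cache/arouteserver", and
# run.sh would splice it into the command it already builds; left
# unquoted on purpose so multiple options are split into separate words.
arouteserver bird ${CUSTOM_PARAMETERS:-}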

@pierky
Owner

pierky commented Nov 20, 2022

Hi @bluikko, thanks for the additional information.

I now understand that you build your RS config files using the Docker container, and that you would like to generate the Euro-IX file using the container as well: am I right?

Assuming this is the case, I see why you might want a way to customise the run.sh execution with additional options. However, I'd like to point out that while on one hand the Docker image can be used to easily get a "packaged" version of ARouteServer, with all the dependencies installed, on the other hand its entry point script (run.sh) is not really designed to offer a comprehensive way to interact with the tool. Nonetheless, the Docker image can still be used as a "wrapper" to easily run ARouteServer as if it were a local program running directly on the host.

A simple Docker command like docker run -it --rm pierky/arouteserver:latest arouteserver is enough to execute the program from within a container.

Starting from that, you can add options to docker run and to arouteserver to set up your environment, mounting all the volumes needed to obtain what you need.

For example, this...

docker run \
    -it \
    --rm \
    -v path-to-local/arouteserver.yml:/path-on-the-container/arouteserver.yml \
    -v path-to-local/cache_dir:/path-on-the-container/cache_dir:rw \
    pierky/arouteserver:latest \
    arouteserver COMMAND --cfg /path-on-the-container/arouteserver.yml

... would allow you to run ARouteServer using a custom arouteserver.yml file from the host, crafted so that custom paths on the container are used, for example for the cache directory. The file could be something like this...

arouteserver.yml

# ... other settings

cache_dir: /path-on-the-container/cache_dir

# ... other settings

... with cache_dir pointed towards a directory that you mounted via docker run.

Basically all the arouteserver xxx commands that you see in run.sh can be executed in a similar way, directly from the host.
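
For instance, the export discussed here could be run directly from the host like this (a sketch with placeholder paths; "test" and "1" again stand for the IXP short name and its IXF ID):

docker run \
    -it \
    --rm \
    -v $(pwd)/clients.yml:/root/clients.yml:ro \
    -v $(pwd)/out:/root/out \
    pierky/arouteserver:latest \
    arouteserver ixf-member-export --clients /root/clients.yml -o /root/out/ixf.json test 1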

Please let me know what you think about the solution I'm proposing here.

I do see how a customization of run.sh would make things easier for your specific use case, but at the same time I'm a bit reluctant to implement it, because I also see how difficult it could be to cover, by modifying the Docker image, all the potential use cases that ARouteServer usually deals with. It could easily become a "full mesh" of environment variables and CLI options, all condensed into a single Docker image and setup.

In addition to what I've proposed above, you can still edit the run.sh file and use your local version as the entry point for the container.

All this said, it's still unclear to me why your ixf-member-export command is not adding the IRR info to the export file. If you could share the input clients.yml with me, and also the exact command you run to have it translated into the Euro-IX JSON, I could try to reproduce the issue on my side.

Thanks

@bluikko
Contributor Author

bluikko commented Nov 20, 2022

Please let me know what you think about the solution I'm proposing here.

Yes, I had thought about just passing the docker and arouteserver options I need to arouteserver bird instead of relying on run.sh. If you think some kind of "pass arbitrary switches" mechanism for run.sh is not necessary, there are other ways to accomplish what's needed. I certainly see your point, and I totally agree that a clear vision is more important than trying to implement everything suggested. Also, I had not noticed that cache_dir can be set in the configuration file, so I'll use that; it seems cleanest.

All this said, it's still unclear to me why your ixf-member-export command is not adding the IRR info to the export file. If you could share the input clients.yml with me and also the exact command you run to have it translated into the Euro-IX JSON I could try to reproduce the issue on my side.

I just found a difficult-to-notice typo in the command; I will do some more tests tomorrow before bothering you further.

@bluikko
Contributor Author

bluikko commented Nov 21, 2022

It took a while to get things tested again, since I wanted to use the cache_dir parameter; for various reasons that necessitated running arouteserver (the Docker image) as non-root, which in turn meant no longer using run.sh.
The typo I mentioned in #113 (comment) was quite astonishing - I passed the PeeringDB API key to docker using -v (volume) and not -e (env var). I have no idea how it even worked in the first place. But fixing that did not make arouteserver ixf-member-export query PeeringDB.

It is run as (sanitized):

      docker run \
       -t --rm -u 42:42 \
       -e SECRET_PEERINGDB_API_KEY=${Y} \
       -v ${X}/custom-general.yml:/etc/arouteserver/general.yml:ro \
       -v ${X}/arouteserver.yml:/etc/arouteserver/arouteserver.yml:ro \
       -v ${X}/clients.yml:/etc/arouteserver/clients.yml:ro \
       -v ${Z}/ix.json:/tmp/ix.json \
       -v ${X}/static.yml:/tmp/static.yml:ro \
       -v ${X}/cache:/tmp/arouteserver_cache \
       ${IMAGE} arouteserver ixf-member-export --cache-dir /tmp/arouteserver_cache \
       --merge-file /tmp/static.yml --ixp-id 1 -o /tmp/ix.json IXPshort 42

And it displays no output. The generated ix.json file is otherwise good, but includes as_macro only for those clients that have it defined in clients.yml. Since there is no output whatsoever, it seems obvious that PeeringDB is not queried at all. Other tasks (RS config, HTML policy, IRR AS-set) work correctly with similar docker run parameters and otherwise the same config.

I have been trying to see if I have some silly problem somewhere, but have not found anything so far.

As a side note: using cache_dir dropped the typical total run time from 10 minutes to 5 minutes.

@pierky
Owner

pierky commented Nov 21, 2022

Hi @bluikko, thanks again for the details.

I think I'm starting to figure out what's going on: the ixf-member-export command is not designed to fetch information from PeeringDB at all. It just formats the information that is inside the clients.yml file into a Euro-IX JSON compatible format.

Following from your very first comment in the opening of this issue, the workflow I thought you were using was something like clients-from-peeringdb -> clients.yml -> ixf-member-export -> Euro-IX JSON file. So, I was under the assumption that your clients.yml file was generated starting from PeeringDB, so that it contained all the expected as_sets records, and that those records were then not included in the output Euro-IX JSON.

The fact that only the as_sets records which are part of the clients.yml file are included in the JSON file is by design. Basically, the ixf-member-export command does not infer any additional information before generating the JSON file: it does not "augment" the information already present in the clients.yml in any way. This is done on purpose, since the file generated this way is expected to be an authoritative source of information for the IXP; clients.yml is assumed to be the source of truth, containing information reviewed and approved by the IXP operator, and only what's in there ends up in the output JSON file. Fetching additional information automatically and using it to "enrich" the original recordset would break this assumption, since the new details would basically be unreviewed.

I hope this helps to clarify what you're seeing.

What I'd like to check with you is whether the workflow I've summarized above would fit your needs. You could use clients-from-peeringdb to build your clients.yml, and then use that for both config generation and Euro-IX JSON export. You could run clients-from-peeringdb to generate a "new" clients.yml file programmatically, like clients.yml.NEW, compare it with the official one (diff clients.yml clients.yml.NEW or similar) and, if you like the changes, "promote" the new one to official (basically just a mv clients.yml.NEW clients.yml, maybe after implementing backup and versioning if you want them, via git for example; see the sketch below). In this way, with the addition of --merge-file, you would probably be able to obtain the desired level of automation and customization. WDYT?
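
As a rough sketch of that workflow (paths and the trailing PeeringDB IX LAN ID "1" are placeholders):

# 1. build a candidate clients.yml from PeeringDB
arouteserver clients-from-peeringdb -o clients.yml.NEW 1

# 2. review the differences against the current authoritative file
diff -u clients.yml clients.yml.NEW

# 3. if the changes look good, promote the candidate
#    (optionally after versioning the old one, via git for example)
mv clients.yml.NEW clients.yml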

@bluikko
Contributor Author

bluikko commented Nov 22, 2022

Fetching additional information automatically and using them to "enrich" the original recordset would break this assumption, since the new details would be unreviewed basically.

Understood and I guess that makes sense.

clients.yml is generated during route server & switch configuration generation.

@bluikko bluikko closed this as completed Nov 22, 2022