Increase IPv4/6 prefix limits #671
Comments
Most routers support a 32 or 64-bit number in their route policies. Why not make it 0..4294967295? I don't think PeeringDB should set a limit on what we think is a reasonable value. Let the user decide that for themselves. Plus we'll always have to monitor the internet routing table size to adjust the limit. |
I don't mind there being some smarts to catch typos etc., especially from newbie networks. Clearly nobody (right now) needs 1 million IPv6 routes as a valid limit. Perhaps if we do have a limit it could be automated and tied to the size of the routing table at that time? 6939 is announcing north of 30k IPv6 routes these days, and as they participate on the route servers, once you add in a number of other peers, route server tables are starting to grow beyond 40k. Gavin |
@peeringdb/pc any ideas? |
I agree with Greg in principle, but in reality letting people put in any number with no safety check usually results in bad data. I think having an upper bound would be good, but doing it statically would be annoying. Two things to consider:
|
I think the proposed solution adds a lot of needless complexity for little gain. But there is a more fundamental issue here: PeeringDB should validate data to ensure it correctly matches the data type (IP addresses, ASNs (minus reserved ASNs), integer ranges, etc.), but we should not dictate what we think are valid values within the type. This may not be the best example, but I'm worried about setting a precedent where PeeringDB dictates how operators run their networks. I'm kind of surprised there are even limits that we need to increase. |
While I think that I understand @mcmanuss8's point about validating data as much as reasonable, I kind of agree with @ghankins. We're not going to catch errors where someone types 81000 instead of 18000, so this is only a mild check. Given that the "reasonable" limit requires care & feeding (as @mcmanuss8 presents), I don't think it is worth it. |
@ghankins there is a difference between dictating what we think are valid values and defining ranges for reasonable values. Values have to make operational sense, and hence putting in ranges is a safeguard. Having a config file as @mcmanuss8 proposes is an excellent idea. This data could be controlled by either @peeringdb/ac or @peeringdb/oc. Which committee makes more sense? |
Another approach here would be to shift from a hard error to a soft error. If you set the value out of the actual range (32-bit or 64-bit), we hard error. If you set it out of the configured range, we soft error: "The prefix limit of $your_input seems very high. Most are less than $what_we_have_in_config. Are you sure it is correct?" |
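The hard-error vs. soft-error distinction described above can be sketched roughly as follows. This is an illustrative sketch, not PeeringDB's actual implementation; the function name and the 500k soft ceiling are assumptions taken from the numbers discussed in this thread.

```python
# Illustrative sketch of hard vs. soft validation of a prefix limit.
# HARD_MAX is the actual 32-bit field range; SOFT_MAX is the configured
# "reasonable" ceiling (here the 500k IPv4 value mentioned in the thread).

HARD_MAX = 2**32 - 1
SOFT_MAX = 500_000

def validate_prefix_limit(value, soft_max=SOFT_MAX, hard_max=HARD_MAX):
    """Return (accepted, message): reject out-of-type-range values,
    accept but warn on values above the configured ceiling."""
    if not (0 <= value <= hard_max):
        return False, f"{value} is outside the valid range 0..{hard_max}"
    if value > soft_max:
        return True, (f"The prefix limit of {value} seems very high. "
                      f"Most are less than {soft_max}. Are you sure it is correct?")
    return True, ""
```

A soft error like this would let an operator who really means 600,000 confirm and proceed, while still catching obvious slips.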
There is already a config file with the settings 500000/50000. The limits in https://github.com/peeringdb/peeringdb/blob/master/config/facsimile/peeringdb.yaml are unchanged since that file was committed on Nov 8, 2018. Updating that periodically to the Potaroo counts, rounded up to the nearest 100k/10k (v4/v6), seems reasonable. I.e., for now 900k/90k, or even 1M/100k, would potentially be good for years. Out of curiosity, and not because I suggest using IRR data for this, here are the prefix counts of the as-sets of some backbones, from a route server operator perspective, along with their current setting in PeeringDB: (left is as-set count, right is info_prefixes{4,6})
Another data point, at present 5 networks in PeeringDB set their IPv4 prefix count to the max of 500k, while 68 networks in PeeringDB set their IPv6 prefix count to the max of 50k. I can't see what Gavin wrote in the ticket, but I am curious why there is even a need to go above the current limits. |
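The "round the Potaroo counts up to the nearest 100k/10k" rule suggested above is trivial to automate; a minimal sketch (the input counts here are illustrative snapshots, not live data):

```python
# Round a DFZ route count up to the nearest step (100k for IPv4, 10k for IPv6).
import math

def round_up(count, step):
    return math.ceil(count / step) * step

# e.g. with a snapshot of ~829k IPv4 / ~83k IPv6 routes:
round_up(829_067, 100_000)  # -> 900000
round_up(83_000, 10_000)    # -> 90000
```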
If you have AS6939 connected to your IX they already announce 30+k IPv6 routes. HE have set their value to 49k. So, calculating with 40k prefixes and adding headroom (+50%) that would sum up to 60k. Increasing to 10^6/10^5 for IPv4/IPv6 seems reasonable IMHO |
Very good point. Just realized my IRR as-set stats above are for aggregated prefixes, and don't account for the announcement of more specifics (e.g. HE aggregate count of 15,628 but a specifics count of 30+k).
Agreed. |
Can we set a reminder for 3 or so years from now to raise this value again? 😂 |
@peeringdb/pc could we please vote that @peeringdb/oc sets limits to
in the config file |
+1 |
+1 -- please don't PR that config, it's gone in a few days with #548 |
+1 |
@peeringdb/oc Mike Leber from HE suggests setting
|
IPv4 500k might be too low for a Tier 1. |
Putting the maximum at 70% of the current routing table sizes is probably safe for all involved. The PeeringDB limit should accommodate the largest networks but not be higher than (or close to) the actual DFZ size. My 2 cents. |
70% would be 600k for IPv4 and 80k for IPv6. Both are reasonable numbers, but they need manual checking every ~3 months unless we have access to a router to automate all the things. |
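The "70% of the current DFZ" rule works out roughly like this, using the bgpctl table sizes quoted later in this thread (829,067 IPv4 / 110,765 IPv6 routes); the exact results depend on the snapshot, and the rounding step is my own assumption:

```python
# Derive a max-prefix ceiling as a fraction of the current DFZ size,
# rounded down to a tidy 10k boundary (rounding choice is illustrative).

def cap_from_dfz(dfz_size, fraction=0.70, step=10_000):
    return int(dfz_size * fraction) // step * step

cap_from_dfz(829_067)  # -> 580000 (IPv4)
cap_from_dfz(110_765)  # -> 70000  (IPv6)
```

With slightly larger table snapshots, the same rule yields the 600k/80k figures mentioned above.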
Mike Leber was trying to key in 129k for their ASN. However, 129k >> 80k. So, @job's rule needs improvement. |
Mike is asking for too much (imho, from a global DFZ perspective). The point of this feature is to prevent full table leaks. HE does not have 129K prefixes in their customer cone. (I acknowledge HE is one of the world's largest IPv6 networks, but accommodating the experts at HE might have unintended consequences in other parts of the ecosystem.)
I can help automate an alert for a 3- or 6-month review. I'm also fine with a mechanism where we have a few manual exceptions. HE is somewhat exceptional. |
Learning, so bear with me. Are networks really using PDB to set prefix limits? How many? Why have this “feature”, and why would PDB unilaterally decide the limit if (for example) Mike Leber says 129k? Why is the user wrong, and do we know why he tried to set it to 129k? Thanks
…On Thu, Apr 15, 2021 at 12:22, Job Snijders wrote:
Mike is asking for too much (imho, from a global DFZ perspective). The point of this feature is to prevent full table leaks. HE does not have 129K prefixes in their customer cone. (I acknowledge HE is one of the world's largest IPv6 networks, but accommodating the experts at HE might have unintended consequences in other parts of the ecosystem.)
mieli$ bgpctl show rib inet | wc -l
829067
mieli$ bgpctl show rib inet6 | wc -l
110765
I can help automate an alert for a 3- or 6-month review. I'm also fine with a mechanism where we have a few manual exceptions. HE is somewhat exceptional.
|
Not a network, but an IXP (the SIX) is using PeeringDB per-network prefix max count data to inform route server prefix limits. At https://www.seattleix.net/route-servers we state: |
Yes there are several, e.g. we do: https://peering.anexia.com/ Networks can easily set their max prefix limits on their own. We can automate all the things and just fetch the data from PeeringDB to produce the config without human interaction. You will find more and more networks that require a well-kept PeeringDB record before you can peer with them. 129k is wrong because it's way over the current max seen prefixes in the DFZ (110k). Maybe they have a different use case for PeeringDB's prefix limit fields. |
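"Fetch the data from PeeringDB to produce the config" can be done against the public PeeringDB API, which exposes `info_prefixes4`/`info_prefixes6` on each `net` object. A minimal sketch, assuming the public endpoint at `https://www.peeringdb.com/api/net?asn=...` and no error handling; how the values feed into router config is left out:

```python
# Pull a network's declared prefix limits from the PeeringDB API.
# Sketch only: no retries, no auth, no handling of missing records.
import json
import urllib.request

def fetch_net(asn):
    """Return the first PeeringDB 'net' record for an ASN."""
    url = f"https://www.peeringdb.com/api/net?asn={asn}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["data"][0]

def prefix_limits(net):
    """Extract (ipv4_limit, ipv6_limit) from a PeeringDB 'net' record."""
    return net["info_prefixes4"], net["info_prefixes6"]
```

A generator could run this on a schedule for each peer ASN and emit max-prefix statements, which is the kind of human-free automation described above.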
70% seems reasonable to me, it would be fairly trivial to update the number every production release, so about once a month. |
Understood, and it is trivial. And I guess if you don't like it you can also just not use it.
…On Fri, Apr 16, 2021 at 1:03 AM, Stefan Funke wrote:
Learning, so bear with me. Are networks really using PDB to set prefix limits? How many? Why have this “feature” and why would PDB unilaterally decide the limit if (example) Mike Leber says 129k?
Yes there are several, e.g. we do: https://peering.anexia.com/
It is a security measure, see https://tools.ietf.org/html/bcp194 or
https://www.manrs.org/isps/guide/filtering/
Networks can easily set their max prefix limits on their own. We can automate all the things and just fetch the data from pdb to produce the config without human interaction. You will find more and more networks that require a well-kept pdb record before you can peer with them.
|
People still have issues with the max-prefix-limit field. Any progress on this issue? |
@funkestefan what are the issues? |
Verizon is using pdb to auto-configure values, but can't set a real-world max-prefix-limit for HE. (HE now > 100k, our max value) |
There might be a misunderstanding between HE and Verizon, unrelated to PeeringDB. I see 48,850 routes via HE on a BGP session in Amsterdam, and doubly confirmed on an IX Route Server in Canada. If HE is sending 100K+ IPv6 routes to Verizon, Verizon is configured as a 'full table customer' and not as a 'peer'. The PeeringDB "IPv6 Prefixes" field is meant to indicate the number of routes in the Customer Cone, not the total number of routes in the BGP Default-Free Zone. |
@peeringdb/oc, as of 28-12-2022 2247 I see on Potaroo
Given that we are already at 1M for IPv4 and 100k for IPv6, I suggest raising the values of
@peeringdb/pc and @peeringdb/ac: comments? |
+1 |
+1 |
This needs to be a new issue -- created at #1298. Another issue for auto-incrementing it on deploy would be nice. :) |
Increasing limits to
could make sense, looking at Potaroo. This ticket refers to #101. Increasing the limits was suggested by Gavin Tweedie from Megaport.