This repository has been archived by the owner on Feb 29, 2024. It is now read-only.

Can we avoid sending invalid ROAs #61

Open
Avinash825 opened this issue Apr 24, 2020 · 2 comments

Comments

@Avinash825

Hi,
I have seen that we can end up with invalid ROAs under the conditions below; it would be great if the cache server itself rejected such ROAs rather than leaving it to the router to handle them.

--> ROAs with a prefix length greater than the max length, e.g.
prefix: "1.1.1.0/32"
maxLength: 30
--> ROAs with a max length greater than allowed, e.g. 1.1.1.0/64, or a max length > 32 for IPv4 and > 128 for IPv6.
--> ROAs with non-zero bits after the prefix length, e.g. 1.1.1.1/24 or 3000:1:1:1::1/64.

I think it would be good if the cache server itself detected these and avoided publishing them to the clients. Just a suggestion; we faced issues in our code when we received such prefixes.
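The three conditions above can be sketched as a standalone check in Go. This is only an illustration of the requested validation; validateROA is a hypothetical helper, not GoRTR's actual API.

```go
package main

import (
	"fmt"
	"net"
)

// validateROA checks the three invalid-ROA conditions described in this
// issue. Hypothetical helper for illustration only.
func validateROA(prefix string, maxLength int) error {
	ip, ipnet, err := net.ParseCIDR(prefix)
	if err != nil {
		// Catches out-of-range prefix lengths such as 1.1.1.0/64.
		return fmt.Errorf("unparseable prefix %q: %v", prefix, err)
	}
	// Non-zero bits after the prefix length, e.g. 1.1.1.1/24:
	// ParseCIDR returns the original address and the masked network.
	if !ip.Equal(ipnet.IP) {
		return fmt.Errorf("%s has non-zero bits after the prefix length", prefix)
	}
	plen, bits := ipnet.Mask.Size()
	// Covers both remaining cases: a max length below the prefix length
	// (e.g. /32 with maxLength 30) and a max length above the
	// address-family limit (32 for IPv4, 128 for IPv6).
	if maxLength < plen || maxLength > bits {
		return fmt.Errorf("%s: max length %d outside [%d, %d]", prefix, maxLength, plen, bits)
	}
	return nil
}

func main() {
	fmt.Println(validateROA("1.1.1.0/24", 24)) // <nil>
	fmt.Println(validateROA("1.1.1.1/24", 30)) // non-zero bits after the prefix length
	fmt.Println(validateROA("1.1.1.0/32", 30)) // max length below prefix length
	fmt.Println(validateROA("1.1.1.0/24", 64)) // max length above 32 for IPv4
}
```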

@lspgn
Contributor

lspgn commented Apr 24, 2020

Hi @Avinash825,
Did a validator generate those invalid ROAs?
I'm already filtering for duplicates

gortr/cmd/gortr/gortr.go

Lines 453 to 459 in 5ff3fc1

for _, roa := range roas {
	if roa.Prefix.IP.To4() != nil {
		countv4_dup++
	} else if roa.Prefix.IP.To16() != nil {
		countv6_dup++
	}
}

as duplicates actually cause the RTR session to be closed.
I'm not sure how routers handle those invalid ROAs.
I guess I could add such a check.

@Avinash825
Author

Hi,

Yes, if I define local ROAs with the invalid conditions described above, the RPKI validator does not filter them and sends them as-is. It would be of great use to customers if you added such checks before advertising the ROAs; that would be an added benefit. Most likely we would run into these issues when an admin adds prefixes via the slurm.json file.

Example wrong ROAs being advertised, as defined in the slurm.json file below.

{
  "slurmVersion": 1,
  "validationOutputFilters": {
    "prefixFilters": [],
    "bgpsecFilters": []
  },
  "locallyAddedAssertions": {
    "prefixAssertions": [
      {
        "asn": 13336,
        "prefix": "1.1.1.1/24",     --> non-zero bits after the prefix length, i.e. 1.1.1.0 is defined as 1.1.1.1
        "maxPrefixLength": 30
      },
      {
        "asn": 13336,
        "prefix": "1.1.1.0/32",     --> prefix length /32 is greater than the max length /30
        "maxPrefixLength": 30
      },
      {
        "asn": 13336,
        "prefix": "1.1.1.0/24",
        "maxPrefixLength": 64       --> for IPv4 the prefix length range is <= 32, but it is defined as 64
      }
    ]
  }
}

Also, you have only added a duplicate check for prefixes coming from the server (not those locally defined through the slurm.json file); adding the checks above would help filter any such wrong updates.

Thanks,
Avinash C
