companion: add stronger validation for urls sent via URL plugin #2083
Conversation
I am not sure what the purpose of blocking IPs and requiring TLDs is, but it can easily be circumvented. For example, I can simply make my domain (evilshit.de) point to 127.0.0.1 and then use http://evilshit.de:8000/foo instead of http://localhost:8000/foo. Depending on your threat model, this may or may not be a problem.
@Acconut But I believe it can be fixed by doing a DNS resolve first, before calling the URL:

```js
const { hostname } = new URL(url)
dns.resolve(hostname, (err, dnsRecords) => {
  if (err) return // error in dns resolve
  for (const record of dnsRecords) {
    // compare the record with blacklisted IPs,
    // e.g. 127.0.0.1, 0.0.0.0, some internal services, etc.
    // if it matches, mark this url as infected
  }
})
```

What do you think?
We don't need the redirect prevention and TLD filtering anymore if we already do filtering at the DNS level, right? There are legitimate use cases for redirects and for loading files from IPs.
@goto-bus-stop The redirect check is essential, because I can host a script that redirects to the IP:

```php
<?php
header('Location: http://127.0.0.1');
```
@netcode The current implementation in this PR does the check after each DNS lookup, including when following redirects (instead of doing a separate lookup first). So if we prevent private IPs at that stage it should automatically address malicious redirects, while mundane ones still work, right?
I am not going to comment much on the Uppy code itself, but the IP filtering using the HTTP agent looks amazing! Would be nice to have a test for this, though!
@netcode @goto-bus-stop @Acconut I tested with redirects enabled, and it's true what @goto-bus-stop says: the DNS lookup (validation) is done for every URL, including the URL redirected to. So we can indeed enable redirects.
However, if we allow IPs (i.e. if we remove the FQDN validation), then no DNS lookup happens, so the validation is skipped — unless we do a check at a higher level to see if the URL is an IP literal, and then validate that it's not a private IP. But I'm not sure if there are other risks involved in allowing this, or how far we are willing to go to allow it.
👍 for that
Yeah in that case we can also do that check inside |
I have updated the PR to allow IP addresses and redirects, but validate that the IP address is not a private one. |
```js
ipPrefix.push('0.')
// Add IP V4 prefix for private addresses
// See https://en.wikipedia.org/wiki/Private_network
ipPrefix.push('10.')
```
I don't believe this to be correct; I think there are actual WAN addresses that start with 10, and the `10.` range should be more discriminating.
I just checked but couldn't find any such claims online, so it seems I was wrong.
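For what it's worth, the entire `10.0.0.0/8` block is reserved as private (RFC 1918), so the `'10.'` string prefix happens to be safe — but ranges like `172.16.0.0/12` can't be expressed as a single string prefix, so a numeric range check is easier to get right. A possible sketch (helper names are illustrative):

```javascript
// Parse a dotted-quad IPv4 string into a 32-bit unsigned integer.
function ipv4ToInt (ip) {
  return ip.split('.').reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0
}

// RFC 1918 private ranges plus loopback and "this network",
// expressed as [base, prefixLength] pairs.
const PRIVATE_RANGES = [
  ['0.0.0.0', 8],
  ['10.0.0.0', 8],
  ['127.0.0.0', 8],
  ['172.16.0.0', 12],
  ['192.168.0.0', 16],
]

// True if the address falls inside any listed CIDR range.
function isPrivateIPv4 (ip) {
  const addr = ipv4ToInt(ip)
  return PRIVATE_RANGES.some(([base, bits]) => {
    const mask = (~0 << (32 - bits)) >>> 0
    return ((addr & mask) >>> 0) === ((ipv4ToInt(base) & mask) >>> 0)
  })
}
```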
looks like we're missing an |
Oh. please remove the |
Save for CI and the package-lock, this LGTM!
(can we also add a test?)
Looks good to me
companion: add stronger validation for urls sent via URL plugin
This PR adds stronger validation of URLs sent via the URL plugin (for file downloads).
It blocks URLs like:

- `redis://domain.com/file-name` (because of bad protocol)
- `https://168.192.4.100/file-name` (because hostname is not a Fully Qualified Domain Name)
- `http://localhost:9000/file-name` (because hostname has no TLD, i.e. no `.com`-like suffix)

The last two above are only blocked when companion is not running in debug mode, so that testers running the app locally can test with local servers that may have such URLs.
UPDATE:
I have also added a DNS lookup to validate that download URLs do not point to a local IP; if one does, the request is aborted.
A follow-up PR will add an option to companion where users can set a blacklist of URLs, similar to the whitelist available for upload URLs.