Requesting this URL through wallabag exposes its content in the stored article.
There are different ways to exploit this depending on the hosting environment.
Blocking IP addresses does not suffice, since any domain can point to 127.0.0.1; HTTP redirects must also be taken into account.
I'm happy to contribute tests if you can help me identify the code responsible for this. Redirects and unusual URL schemes (file:///, php://, etc.) should be taken into account as well.
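To make the points above concrete, here is a minimal sketch of the kind of check being proposed: allowlist the URL scheme, then resolve the hostname and reject anything that lands in a private, loopback, link-local, or reserved range. This is illustrative Python, not wallabag's PHP; the function name `is_safe_url` and the injectable `resolver` parameter are hypothetical, chosen so the logic can be exercised without network access.

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Only plain web schemes; rejects file://, php://, gopher://, etc.
ALLOWED_SCHEMES = {"http", "https"}

def is_safe_url(url, resolver=socket.getaddrinfo):
    """Return False for disallowed schemes or hosts that resolve to
    private/loopback/link-local/reserved addresses."""
    parts = urlparse(url)
    if parts.scheme not in ALLOWED_SCHEMES:
        return False
    host = parts.hostname
    if host is None:
        return False
    try:
        infos = resolver(host, parts.port or 80)
    except socket.gaierror:
        return False  # unresolvable hosts are rejected, not fetched
    for info in infos:
        # getaddrinfo entries are (family, type, proto, canonname, sockaddr);
        # sockaddr[0] is the textual IP address.
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```

Note that the check runs on every resolved address, not on the hostname string, which is what defeats the "any domain can point to 127.0.0.1" trick; DNS rebinding between the check and the actual fetch is a separate problem that a real fix would also need to address.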
Thanks for creating this issue. I just read about the Pocket vulnerability this morning and was about to open an issue to check this.
At the moment, the code that handles the URL and fetches content isn't in wallabag; it's done by FullTextRSS. We just grab the URL given by the user, send it to this library, and retrieve readable content from it.
For v2, we plan to use graby, a fork of FullTextRSS that is more embeddable and has tests. So this check should probably be done in graby instead of wallabag.
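Wherever the check ends up living (graby or wallabag), it has to be re-applied on every redirect hop, since a safe-looking URL can 302 to an internal one. A minimal sketch of that pattern in Python, assuming some `is_safe` predicate like the one discussed above; `fetch_with_checked_redirects` and `NoRedirect` are hypothetical names, not part of any wallabag or graby API:

```python
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Refuse to auto-follow redirects so each hop can be validated."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # surfaces the redirect to the caller as an HTTPError

def fetch_with_checked_redirects(url, is_safe, max_hops=5):
    opener = urllib.request.build_opener(NoRedirect)
    for _ in range(max_hops):
        if not is_safe(url):
            raise ValueError("blocked unsafe URL: %s" % url)
        try:
            resp = opener.open(url)
        except urllib.error.HTTPError as e:
            if e.code in (301, 302, 303, 307, 308):
                # Validate the redirect target on the next loop iteration.
                url = e.headers["Location"]
                continue
            raise
        return resp.read()
    raise ValueError("too many redirects")
```

The key point is that the library's built-in redirect handling is disabled, so no request is ever issued to a hop that hasn't passed `is_safe` first.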
It is possible to request saving a URL, bypassing common restrictions.
Example:
For further hardening, I recommend looking at the SSRF Bible, Nicolas Grégoire's presentation Server Side Browsing Considered Harmful, and this report about similar problems with Pocket.