First of all, thanks for the great gem!
It seems that when inspecting pages that link to large files such as PDFs, the entire file is downloaded (HTTP GET) before the Content-Type header is checked, at which point MetaInspector discovers the response is not HTML and raises MetaInspector::ParserError.
It would be really nice if there were an option to make MetaInspector first issue an HTTP HEAD request for the URL, so it could check things like the HTTP status code and Content-Type from the response headers without having to download the entire large PDF file first.
Thanks!
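In the meantime, a caller-side workaround is possible: do the HEAD request yourself and only hand the URL to MetaInspector when the headers look like HTML. This is a minimal sketch using Ruby's standard Net::HTTP; the helper names (`html_content_type?`, `inspectable?`) are hypothetical, not part of MetaInspector's API.

```ruby
require "net/http"
require "uri"

# Pure check on a Content-Type header value, e.g. "text/html; charset=utf-8".
def html_content_type?(value)
  value.to_s.split(";").first.to_s.strip.casecmp?("text/html")
end

# Hypothetical pre-check (not a MetaInspector feature): issue a HEAD
# request and inspect the status code and Content-Type, so a full GET is
# only made for responses that claim to be HTML.
def inspectable?(url)
  uri = URI.parse(url)
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    response = http.head(uri.request_uri)
    response.is_a?(Net::HTTPSuccess) && html_content_type?(response["Content-Type"])
  end
end

# Usage sketch:
#   page = MetaInspector.new(url) if inspectable?(url)
```

Note that this costs one extra round trip per URL, and some servers mishandle HEAD (returning 405 or wrong headers), so a built-in option with a GET fallback would still be the nicer solution.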