Scan rules don't manage correctly gzip compressed response content #1871
Good find.
What's the best way to handle this?
This isn't necessarily limited to active scan. Passive scanners may also be missing findings in compressed responses.
Ugh :(
I'd go with toString() returning the content uncompressed, always (and throwing an exception if the encoding is not supported). BTW, this duplicates #408.
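The suggestion above can be sketched roughly as follows. The class and method names are illustrative stand-ins, not ZAP's actual ResponseBody API, and only gzip is handled for brevity:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

// Hypothetical sketch: a response body whose toString() always returns
// the decompressed content, throwing on unsupported Content-Encoding,
// while getBytes() keeps exposing the raw on-the-wire bytes.
class DecodingResponseBody {
    private final byte[] raw;             // body bytes as received on the wire
    private final String contentEncoding; // Content-Encoding header value, may be null

    DecodingResponseBody(byte[] raw, String contentEncoding) {
        this.raw = raw;
        this.contentEncoding = contentEncoding;
    }

    /** Raw (possibly compressed) bytes, for checks that need the wire form. */
    byte[] getBytes() {
        return raw;
    }

    @Override
    public String toString() {
        if (contentEncoding == null || contentEncoding.equalsIgnoreCase("identity")) {
            return new String(raw, StandardCharsets.UTF_8);
        }
        if (contentEncoding.equalsIgnoreCase("gzip")) {
            try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(raw))) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                return new String(out.toByteArray(), StandardCharsets.UTF_8);
            } catch (IOException e) {
                throw new IllegalStateException("Malformed gzip body", e);
            }
        }
        throw new UnsupportedOperationException(
                "Unsupported Content-Encoding: " + contentEncoding);
    }
}
```

With this shape, string-based scan rules transparently see the real text, while anything that genuinely needs the compressed form still has the bytes.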
@yhawke was the option "Modify/Remove 'Accept-Encoding' request-header" [1] enabled when that happened?
I like the suggestion that we …
If ResponseBody.toString() always returns the decompressed/inflated content, how would a plugin get the compressed/deflated content? Think about BREACH detection, for instance. Does "getBytes()" not return the de-compressed content? I thought it did. As thc202 points out, the "Modify/Remove 'Accept-Encoding' request-header" option may have been disabled, in which case, is this not expected functionality? (except perhaps that the GUI should also respect the option, which it appears not to do) Another option that might fix this issue: …
I can see arguments both ways. But my honest assumption was that an option for the local proxy only applies to proxied traffic, not necessarily the scanners (and yeah, I know what I get when I assume...). (Think HTTPSender and source definition... kinda: if source == proxy then Modify/Remove.) Re: …
Dear all, the option "Remove Accept Encoding" was enabled. I suppose things don't work because it's not something related to the local proxy. I had this behavior while using the AJAX Spider, because the Firefox browser manages things this way.
Any updates on this? I just noticed that ZAP isn't actually removing the "Accept-Encoding:" header, it just removes gzip... but still sends "Accept-Encoding: br", so it receives "Content-Encoding: br". |
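The observation above suggests that stripping only the gzip token is not enough: the whole Accept-Encoding request header has to go for the server to reply uncompressed. A minimal sketch of that idea, using a plain header map as a hypothetical stand-in for ZAP's real header handling:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch (not ZAP's HttpMessage API): drop the entire
// Accept-Encoding header instead of removing just "gzip", so the
// server cannot fall back to "Content-Encoding: br" or "deflate".
class AcceptEncodingFilter {
    static Map<String, String> stripAcceptEncoding(Map<String, String> headers) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : headers.entrySet()) {
            if (!e.getKey().equalsIgnoreCase("Accept-Encoding")) {
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }
}
```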
@kingthorin thanks! |
@pqyptixa would you mind checking the log file to see if there's any error? (file zap.log, located in ZAP's default directory or the directory manually specified [1]). Was it the GUI that hung (i.e. no repaints were being done)?
@thc202 No, there are no errors logged at the time ZAP hung; in fact, there is nothing... I killed java a minute or so after ZAP hung, and there is no indication of the hang.
Merging into #408, which will make the scan rules handle this transparently (at least for text/string checks; scan rules that want to check the "raw" content can use the content as bytes, which they should already be doing).
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
While doing a pentest I discovered that ZAP was unable to find an existing ExternalRedirect vulnerability.
After troubleshooting I discovered that the original request is sent with the Accept-Encoding: gzip header, so when the redirect occurs, the content returned by www.google.it (the site used for testing) is gzipped and the regexes don't match because the content isn't the actual text.
Checking the ZAP panel implementation, I saw that the response visualization panel un-gzips the content while showing it... so everything seems to work, but inside the core all plugins continue working with the encoded content.
This means that all plugins could be affected by this if the original request has the Accept-Encoding header set to gzip and the server supports it. Every time msg.getResponseBody().toString() is called, the resulting string cannot be used to search with a regex...
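The failure mode described above can be reproduced in isolation: a regex that matches the plain response text finds nothing once the same content has been gzipped and treated as a string, which is effectively what a scan rule does when toString() doesn't decompress. This is a minimal standalone sketch, not ZAP code:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.regex.Pattern;
import java.util.zip.GZIPOutputStream;

// Minimal reproduction sketch of the reported problem: the pattern is
// an arbitrary example check, not ZAP's actual ExternalRedirect regex.
class GzipRegexDemo {
    static final Pattern REDIRECT = Pattern.compile("<title>Google</title>");

    // Gzip the text, then view the compressed bytes as a String
    // (ISO-8859-1 maps bytes to chars 1:1), mimicking a scan rule
    // that runs a regex over a body that was never decompressed.
    static String gzipToString(String text) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(text.getBytes(StandardCharsets.UTF_8));
        }
        return new String(bos.toByteArray(), StandardCharsets.ISO_8859_1);
    }
}
```

The pattern matches the plain HTML but not its gzipped form, which is exactly why the ExternalRedirect check came up empty against the compressed response.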