
Integer Overflow in r_asm_massemble at libr/asm/asm.c #15543

Closed
mmmdzz opened this issue Dec 4, 2019 · 10 comments

Comments

@mmmdzz

mmmdzz commented Dec 4, 2019

Work environment

Questions                                          Answers
OS/arch/bits (mandatory)                           Ubuntu x86 64
File format of the file you reverse (mandatory)    None
Architecture/bits of the file (mandatory)          None
r2 -v full output, not truncated (mandatory)       radare2 4.1.0-git 23513 @ linux-x86-64 git.4.0.0-165-gcb60b5e8f commit: cb60b5e build: 2019-12-04__01:01:37

Expected behavior

$ cat poc.py
f = open("poc.r", "w")
f.write("/a " + ";" * (2 ** 31 + 16))
f.close()

$ python poc.py

$ r2 -i poc.r malloc://1024  # Expect No Crash

Actual behavior

$ r2 -i poc.r malloc://1024 
Segmentation fault

Steps to reproduce the behavior

  • Run the commands listed above

Additional Logs, screenshots, source-code, configuration dump, ...

In r_asm_massemble at libr/asm/asm.c, when r2 tries to assemble a long input with too many tokens, new_tokens_size overflows to zero. The subsequent realloc (tokens, sizeof (char*) * new_tokens_size) then actually frees tokens, leading to a use-after-free. Worse, the freed tokens buffer can be refilled with attacker-controlled data, which could be leveraged into remote code execution.

The buggy code is listed below; a quick fix would be to add an upper boundary check for new_tokens_size:

	/* Tokenize */
	for (tokens[0] = lbuf, ctr = 0;
			((ptr = strchr (tokens[ctr], ';')) ||
			(ptr = strchr (tokens[ctr], '\n')) ||
			(ptr = strchr (tokens[ctr], '\r')));) {
		ctr++;
		if (ctr >= tokens_size) {
			const int new_tokens_size = tokens_size * 2;
			char **new_tokens = realloc (tokens, sizeof (char*) * new_tokens_size);
			if (new_tokens) {
				tokens_size = new_tokens_size;
				tokens = new_tokens;
			}
		}
		*ptr = '\0';
		tokens[ctr] = ptr + 1;
	}
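
For illustration, here is a minimal standalone sketch (not radare2 code; the starting capacity of 256 is a hypothetical stand-in) of how repeated doubling of a 32-bit counter can wrap to zero. The real code uses a signed int, where the wrap is technically undefined behaviour, but in practice the effect is the same: once the counter reaches zero, realloc (tokens, sizeof (char*) * new_tokens_size) becomes realloc (tokens, 0), which glibc treats as a free of tokens.

/* Illustration only: 32-bit wraparound of a doubling capacity counter.
 * The starting value 256 is hypothetical, not the value used in asm.c. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
	uint32_t tokens_size = 256;
	int doublings = 0;
	while (tokens_size != 0 && doublings < 64) {
		tokens_size *= 2;	/* mirrors "tokens_size * 2" in the loop above */
		doublings++;
	}
	/* 256 * 2^24 == 2^32, which is 0 in 32-bit arithmetic */
	printf ("wrapped to %u after %d doublings\n", tokens_size, doublings);
	return 0;
}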
@XVilka XVilka added the crash label Dec 5, 2019
@XVilka XVilka added this to the 4.1.0 milestone Dec 5, 2019
@carnil

carnil commented Dec 5, 2019

This seems to have CVE-2019-19590 assigned.

radare added a commit that referenced this issue Dec 5, 2019
@radare
Collaborator

radare commented Dec 5, 2019

Can't reproduce. It's not crashing or failing for me, but I see the problem in the code and just pushed a blind fix. Can you confirm the issue is gone now?

@radare
Collaborator

radare commented Dec 5, 2019

btw:

$ rasm2 -f poc.r
zsh: segmentation fault  rasm2 -f poc.r

@radare
Collaborator

radare commented Dec 5, 2019

I fixed the rasm2 crash. Actually, I think r_file_slurp() has the bug, because it is unable to open >2GB files, and I don't think we even want that to happen.
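
A generic sketch of the kind of guard being described, using plain stdio rather than the r_file_slurp API; the 2GB cap and the function name are assumptions for illustration only:

/* Illustration only: refuse to slurp files larger than a cap so the size
 * always fits in an int. The cap and the helper name are hypothetical. */
#include <stdio.h>
#include <stdlib.h>

static char *slurp_capped(const char *path, int *out_len) {
	const long max_len = 0x7fffffffL;	/* cap: fits in a 32-bit int */
	FILE *fd = fopen (path, "rb");
	if (!fd) {
		return NULL;
	}
	fseek (fd, 0, SEEK_END);
	long sz = ftell (fd);
	if (sz < 0 || sz > max_len) {	/* too big (or ftell failed): bail out */
		fclose (fd);
		return NULL;
	}
	fseek (fd, 0, SEEK_SET);
	char *buf = malloc ((size_t)sz + 1);
	if (!buf) {
		fclose (fd);
		return NULL;
	}
	size_t n = fread (buf, 1, (size_t)sz, fd);
	fclose (fd);
	buf[n] = '\0';
	if (out_len) {
		*out_len = (int)n;
	}
	return buf;
}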

@mmmdzz
Author

mmmdzz commented Dec 5, 2019

Can't reproduce. It's not crashing or failing for me, but I see the problem in the code and just pushed a blind fix. Can you confirm the issue is gone now?

I have checked the PoC, and it seems the bug is still there. In the current patch, if tokens gets freed, it is still used afterwards, causing a UAF. I suggest the following patch:

	/* Tokenize */
	for (tokens[0] = lbuf, ctr = 0;
			((ptr = strchr (tokens[ctr], ';')) ||
			(ptr = strchr (tokens[ctr], '\n')) ||
			(ptr = strchr (tokens[ctr], '\r')));) {
		if (ctr + 1 >= tokens_size) {
			const int new_tokens_size = tokens_size * 2;
			char **new_tokens = realloc (tokens, sizeof (char*) * new_tokens_size);
			if (new_tokens) {
				tokens_size = new_tokens_size;
				tokens = new_tokens;
				ctr++;
			} else {
				// tokens has already been freed
                                 eprintf("Too many tokens");
				return NULL;
			}
		}
		*ptr = '\0';
		tokens[ctr] = ptr + 1;
	}

Or this kind of patch:

	/* Tokenize */
	for (tokens[0] = lbuf, ctr = 0;
			((ptr = strchr (tokens[ctr], ';')) ||
			(ptr = strchr (tokens[ctr], '\n')) ||
			(ptr = strchr (tokens[ctr], '\r')));) {
		ctr++;
		if (ctr >= tokens_size) {
			const int new_tokens_size = tokens_size * 2;
			// check integer overflow
			if (new_tokens_size < tokens_size) {
				eprintf ("Too many tokens");
				free (tokens);
				return NULL;
			}
			char **new_tokens = realloc (tokens, sizeof (char*) * new_tokens_size);
			if (new_tokens) {
				tokens_size = new_tokens_size;
				tokens = new_tokens;
			}
		}
		*ptr = '\0';
		tokens[ctr] = ptr + 1;
	}

Thanks for your quick fix and reply.

@mmmdzz
Author

mmmdzz commented Dec 6, 2019

@radare Moreover, regarding the integer overflow in r_file_slurp: because the parameter usz is an int *, the overflow still exists when *usz = (int)sz is executed with the current patch.
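
A minimal sketch of that truncation, assuming a 64-bit size_t and a 32-bit int; the 2^31 + 16 byte size is taken from the PoC above:

/* Illustration only: storing a >2GB size_t into an int truncates it,
 * typically producing a negative value on two's-complement platforms. */
#include <stdio.h>
#include <stddef.h>

int main(void) {
	size_t sz = ((size_t)1 << 31) + 16;	/* file size from the PoC */
	int usz = (int)sz;			/* implementation-defined conversion */
	printf ("sz = %zu, (int)sz = %d\n", sz, usz);
	return 0;
}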

@thestr4ng3r
Contributor

...
			} else {
				// tokens has already been freed
                                 eprintf("Too many tokens");
				return NULL;
			}
...

tokens would not be freed by the realloc here: "If realloc() fails, the original block is left untouched; it is not freed or moved" (from man realloc).
Or am I missing something?
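
For reference, a sketch of the error-handling pattern the man page implies, with hypothetical names rather than the actual radare2 fix: on failure the old block is still valid, so the caller frees it explicitly.

#include <stdlib.h>

/* Sketch: grow a pointer array, freeing the old block ourselves if realloc fails. */
static char **grow_tokens(char **tokens, int *tokens_size) {
	int new_size = *tokens_size * 2;
	char **new_tokens = realloc (tokens, sizeof (char *) * (size_t)new_size);
	if (!new_tokens) {
		free (tokens);	/* a failed realloc leaves the old block untouched */
		return NULL;
	}
	*tokens_size = new_size;
	return new_tokens;
}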

@thestr4ng3r
Contributor

I think I know why @radare was not able to reproduce. The command must look like "/a ;;;;..." instead of /a ;;;;..., otherwise the ; characters are interpreted as r2 command separators.
An updated reproducer would be:

$ cat poc.py
f = open("poc.r", "w")
f.write("\"/a " + ";" * (2 ** 31 + 16) + "\")
f.close()

$ python poc.py

$ r2 -i poc.r malloc://1024  # Expect No Crash

In addition, there was another bug that caused a segfault for me before even going into r_asm_massemble(), which will be fixed by 8bd4b78.

@thestr4ng3r
Contributor

The r_asm_massemble() overflow should be properly fixed now.

@mmmdzz
Author

mmmdzz commented Dec 7, 2019

...
			} else {
				// tokens has already been freed
                                 eprintf("Too many tokens");
				return NULL;
			}
...

tokens would not be freed by the realloc here: "If realloc() fails, the original block is left untouched; it is not freed or moved" (from man realloc).
Or am I missing something?

Thanks for your fix. I have checked the PoC and the bug is gone.

BTW, for realloc(void *ptr, size_t size), when size == 0 it effectively calls free(ptr).
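
That behaviour is implementation-defined in the C standard; a minimal sketch assuming glibc, where a zero-size realloc does free the pointer:

/* Illustration only: on glibc, realloc (p, 0) frees p and returns NULL,
 * so any later use of p would be a use-after-free. */
#include <stdlib.h>
#include <stdio.h>

int main(void) {
	char *p = malloc (16);
	char *q = realloc (p, 0);	/* glibc: acts like free (p) */
	printf ("realloc (p, 0) returned %p\n", (void *)q);
	/* p must not be used past this point */
	free (q);	/* safe: free (NULL) is a no-op */
	return 0;
}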

I guess the issue could be closed?
