Segmentation fault with large indent #501
What did you do?
```
python3 -c 'import ujson; print(ujson.encode({"a": True}, indent=65539))' >/dev/null
```
This is the smallest value that triggers the segfault on my machine with this build of ujson. I'm sure it's no coincidence that it's slightly larger than 64 KiB.
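For reference, here is a hypothetical probe (not part of the original report) for confirming the smallest crashing value on a given build. Each attempt runs in a subprocess so the segfault doesn't kill the probe itself; the bisection assumes crashes are monotone in the indent value.

```python
import subprocess
import sys

def crashes(indent: int) -> bool:
    """Return True if ujson.encode dies with a signal for this indent value."""
    proc = subprocess.run(
        [sys.executable, "-c",
         f'import ujson; ujson.encode({{"a": True}}, indent={indent})'],
        capture_output=True,
    )
    # A negative return code means the child was killed by a signal (e.g. SIGSEGV).
    return proc.returncode < 0

# Bisect for the smallest crashing indent, assuming monotonicity:
# 1 is known good, 65539 is known bad on the reporter's build.
lo, hi = 1, 65539
while lo < hi:
    mid = (lo + hi) // 2
    if crashes(mid):
        hi = mid
    else:
        lo = mid + 1
print("smallest crashing indent:", lo)
```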
What did you expect to happen?
Properly (if poorly) formatted output
What actually happened?
SIGSEGV
What versions are you using?
I think the reason might lie in the fact that the `Buffer_Reserve` call in `encode` does not appear to account for indentation at all. I wouldn't be surprised if other things could also trigger buffer overruns in certain conditions, e.g. the absence of `JSON_NO_EXTRA_WHITESPACE` causing the insertion of extra spaces. #334 and #402 might be symptoms of the same underlying bug. Note that they both use indentation.
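To make the arithmetic concrete, here is a rough sketch in Python (not ujson's actual C code) of why a reservation that ignores indentation can be overrun. The fixed 64 KiB headroom, the constant name, and the helper are assumptions chosen to match the observed threshold:

```python
# Hypothetical model of the suspected bug: the encoder reserves a fixed
# amount of headroom without counting indentation bytes. None of these
# names come from ujson itself.
RESERVED = 64 * 1024  # assumed headroom reserved before writing a value

def pretty_printed_bytes(value: str, depth: int, indent: int) -> int:
    # Each pretty-printed line is preceded by a newline plus depth * indent spaces.
    return 1 + depth * indent + len(value)

# With indent=65539, even the single nested line '"a": true' at depth 1
# needs more bytes than the reservation covers:
needed = pretty_printed_bytes('"a": true', depth=1, indent=65539)
print(needed, ">", RESERVED, "->", needed > RESERVED)  # 65549 > 65536 -> True
```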
Comments

We're aware of this one. I'm supposed to be in the process of fixing it (although I haven't been particularly proactive about it).

Hello! file1 is 78M - no Segmentation fault. Python 3.10.4. I would appreciate it if anybody can help me solve this problem.

I'm guessing that you're using pandas? They've got their own copy of ujson, so you should take this up with them.