Deficiencies in processing JSON objects in REST/API services #36666
Comments
Thanks for the report. Jackson is a very widely used JSON processor in the JVM ecosystem. What would be your suggestion to use instead? If you find security issues, please report them responsibly via https://github.com/spring-projects/spring-boot/security/policy
Thanks for reporting these problems but they're really out of Spring Boot's control. As @mhalbritter has said, Jackson is a very widely used JSON processor in the JVM ecosystem. While it is used in Spring Boot by default and we do not have any plans to change this default, other processors such as Gson are also supported. The problems that you have described should be reported to the Jackson team. I would do so by following the process described in the project's README.
It's also worth noting that properties such as |
Users should always set a maximum size on the JSON input; Jackson currently leaves this up to users. Jackson is concentrating on issues where comparatively short inputs can cause problems (like deeply nested documents or long numbers). The suggestion by @philwebb (
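Such a cap can be set since Jackson 2.15 through `StreamReadConstraints`. A minimal sketch, assuming `jackson-databind` 2.15+ is on the classpath; the limit values here are arbitrary illustrations, not recommendations:

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.StreamReadConstraints;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ConstraintsSketch {
    // Build a mapper whose parser enforces explicit input-size limits.
    public static ObjectMapper buildMapper() {
        JsonFactory factory = JsonFactory.builder()
                .streamReadConstraints(StreamReadConstraints.builder()
                        .maxStringLength(1_000_000) // cap individual string values
                        .maxNestingDepth(500)       // cap document nesting
                        .maxNumberLength(100)       // cap numeric token length
                        .build())
                .build();
        return new ObjectMapper(factory);
    }

    public static void main(String[] args) throws Exception {
        // A short document parses fine under these limits.
        System.out.println(buildMapper().readTree("{\"ok\":true}"));
    }
}
```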
Note, too, that the first two issues can be handled by Jackson configuration, although Jackson defaults to both features being disabled.
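The specific settings referenced in this comment appear to have been lost during extraction; assuming they are Jackson's `STRICT_DUPLICATE_DETECTION` and `FAIL_ON_TRAILING_TOKENS` features (both off by default), a minimal sketch of enabling them:

```java
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StrictMapperSketch {
    // Mapper with both strictness features switched on (both default to off).
    static ObjectMapper strictMapper() {
        ObjectMapper mapper = new ObjectMapper();
        // Reject duplicate field names instead of silently keeping the last one.
        mapper.enable(JsonParser.Feature.STRICT_DUPLICATE_DETECTION);
        // Reject content that continues after the root JSON value.
        mapper.enable(DeserializationFeature.FAIL_ON_TRAILING_TOKENS);
        return mapper;
    }

    // Returns true when the mapper refuses to bind the given document.
    static boolean rejects(ObjectMapper mapper, String json) {
        try {
            mapper.readValue(json, java.util.Map.class);
            return false;
        } catch (Exception e) {
            return true;
        }
    }

    public static void main(String[] args) {
        ObjectMapper strict = strictMapper();
        System.out.println(rejects(strict, "{\"a\":1,\"a\":2}")); // duplicate key
        System.out.println(rejects(strict, "{\"a\":1} [2,3]"));   // trailing tokens
    }
}
```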
Environment
The problem
Spring Boot handles the interpretation of JSON objects in HTTP requests very poorly; for example, it can produce forced and unexpected errors that are not controlled by Spring.
To test this problem we will use a minimal Spring Boot project with only the web component from org.springframework.boot:spring-boot-starter-web, plus a single DTO and controller:

/src/main/java/com/example/demo/MainController.java
/src/main/java/com/example/demo/ViewModel.java
Problem 1: Non-strict structure - duplicated fields
Example:
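The example content appears to have been stripped from this page; a minimal sketch of the behavior described below, assuming Jackson's default `ObjectMapper`:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DuplicateKeySketch {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // RFC 8259 only says object names SHOULD be unique, so this parses.
        JsonNode node = mapper.readTree("{\"role\":\"user\",\"role\":\"admin\"}");
        // Jackson keeps only the last occurrence of the duplicated key.
        System.out.println(node.get("role").asText()); // prints "admin"
    }
}
```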
RFC 8259 indicates that duplicate keys may appear within the same JSON object, but how can that be reflected in a Java object? The answer is that it cannot: a Java object cannot contain the same field twice, so what com.fasterxml.jackson does is simply overwrite the value with the last occurrence, and this causes some serious problems.

Problem 2: Non-strict structure - the library can process garbage content that is not JSON
Example:
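The example content appears to have been stripped here as well; a minimal sketch of the default leniency described below, assuming Jackson's default `ObjectMapper`:

```java
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;

public class TrailingContentSketch {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // With default settings Jackson binds the first JSON value and
        // silently ignores the well-formed tokens that follow it.
        Map<?, ?> bound = mapper.readValue("{\"a\":1} [2,3] null", Map.class);
        System.out.println(bound); // prints {a=1}
    }
}
```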
The library loops over the input in byte blocks, storing each field and value in maps created on the fly while the stream is processed. When the last character of the JSON document is reached this loop ends, but the stream does not: it continues to receive information in blocks until the client decides to stop sending. This can affect the service by holding connections open longer than expected, because it bypasses the maximum JSON object size check; it still cannot bypass the maximum size configured for the POST request body. This makes the service more susceptible to slowloris-type attacks.
It is true that the impact is limited, but what I want to show is the deficiency in processing content that is not standard JSON: the library should have raised an error to Spring indicating that a non-standard document was sent, even if it can still be parsed.
Problem 3: Uncontrolled key index size
In the original code of jackson: https://github.com/FasterXML/jackson-core/blob/2.16/src/main/java/com/fasterxml/jackson/core/sym/ByteQuadsCanonicalizer.java#L1109-L1126

The stream is processed in 4096-byte blocks and each key is stored in a dictionary for each level of the tree. The problem is that the library correctly controls the size of the value but not the size of the key, and because of this it is possible to evade the library's general limit checks by sending a very large number of bytes and overflowing the index into the byte array. I understand that Java prevents such an overflow from giving an attacker access to memory, but it still sounds dangerous.
The text of an item's key is processed through a low-level byte array whose index is bounded by MAX_INT; if we send a larger number of bytes the index overflows, wrapping around to a negative value and triggering a native Java error. This error is not handled by the jackson library, and even less by Spring, so the server simply drops the connection without returning any HTTP response. This can be useful to attackers to check whether Spring Boot sits in the path between the client and the final REST service, either as a proxy gateway or as a service layer.

The script to test it:
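The script itself was not included in this page. As a rough, hypothetical sketch of the kind of client described (streaming an enormous field name to the service), where the /test path, port 8080, and chunk size are assumptions of mine:

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class HugeKeySketch {
    // Build the HTTP request head for a body of the given length.
    static String head(long contentLength) {
        return "POST /test HTTP/1.1\r\n"
             + "Host: localhost\r\n"
             + "Content-Type: application/json\r\n"
             + "Content-Length: " + contentLength + "\r\n\r\n";
    }

    public static void main(String[] args) throws Exception {
        long keyBytes = 5_000_000_000L;          // key length from the report
        long bodyLen = keyBytes + "{\"\":1}".length(); // wrapper characters
        try (Socket s = new Socket("localhost", 8080);
             OutputStream out = s.getOutputStream()) {
            out.write(head(bodyLen).getBytes(StandardCharsets.US_ASCII));
            out.write('{');
            out.write('"');
            byte[] chunk = new byte[8192];
            Arrays.fill(chunk, (byte) 'a');
            // Stream roughly keyBytes worth of 'a' bytes as one field name.
            for (long sent = 0; sent < keyBytes; sent += chunk.length) {
                out.write(chunk);
            }
            out.write("\":1}".getBytes(StandardCharsets.US_ASCII));
        }
    }
}
```

Running main requires a listening server; the head() helper can be exercised on its own.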
When running the script you may get the expected error message:
The interesting thing is that if I reduce the number of bytes sent from 5000000000 to 4307148800 the error does not occur; if I then increase the number, the error returns. But when I stop the Spring Boot application and start it again, the threshold at which the error occurs changes: sometimes it is higher, sometimes lower, and sometimes it changes without restarting the service. It seems to depend on the number and type of bytes sent at startup, which shapes how future HTTP requests behave. But why does the index overflow seem to persistently affect future HTTP requests? Java is supposed to be protected against memory overflow attacks, but this behavior is typical when residue is left in regions of memory where it shouldn't be.
My point
My point is that the library that processes JSON content is not rigorous about structural issues. There may be many other problems, but they all stem from the same thing: the library is not strict in many ways. I think you should consider changing your JSON document processor. I have not raised this as a security issue because I don't yet know the impact all of this may have on a production service, but I am warning you so that you can take care early. It worries me that these content-processing problems could lead to a bigger input-validation problem, or even to compromising the Spring core itself.