Spring's data binder allows you to cap the size of automatically created List<>s, e.g. at 3 items. It's quite easy to bypass this limit and cause Spring to create a List of 3000+ items simply by modifying the HTTP content sent to the server.
In other words: while testing my webapp I was able, by crafting a malicious HTTP request, to force Spring's data binder to create a List<> of 4000 items even though I had set the limit to 3 items. This can easily lead to OutOfMemoryErrors on any app server.
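For context, the limit referred to above is typically configured per controller via WebDataBinder (the controller and method names here are made up for illustration):

```java
@Controller
public class ItemController {

    @InitBinder
    void configureBinder(WebDataBinder binder) {
        // Intended to cap auto-grown collections at 3 elements
        binder.setAutoGrowCollectionLimit(3);
    }
}
```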
It turns out that this is actually by design: autoGrowCollectionLimit only kicks in for auto-growing, i.e. for filling an array/collection with empty/dummy elements up to the index specified by an incoming parameter. This prevents growing to arbitrary collection sizes based on a single incoming parameter with a (faked) high index.
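The auto-grow scenario can be sketched in plain Java (a simplified model of the documented semantics, not Spring's actual implementation; the class name and helper are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: a single parameter such as "items[2999].name=x" would require
// padding the list with dummy elements up to index 2999; the auto-grow
// limit caps the index that may be reached this way.
public class AutoGrowSketch {

    static final int AUTO_GROW_LIMIT = 3; // as set via setAutoGrowCollectionLimit(3)

    // Grow the list with null placeholders so that `index` becomes valid.
    static List<String> growToIndex(List<String> items, int index) {
        if (index >= AUTO_GROW_LIMIT) {
            throw new IllegalStateException(
                    "Index " + index + " exceeds auto-grow limit " + AUTO_GROW_LIMIT);
        }
        while (items.size() <= index) {
            items.add(null); // dummy element, replaced once the value is bound
        }
        return items;
    }

    public static void main(String[] args) {
        System.out.println(growToIndex(new ArrayList<>(), 2).size()); // 3
        try {
            growToIndex(new ArrayList<>(), 2999); // faked high index
        } catch (IllegalStateException ex) {
            System.out.println("rejected: " + ex.getMessage());
        }
    }
}
```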
Fully populating an array/collection with explicitly specified elements in a single pass, on the other hand, is a different scenario: the incoming request contains the full set of elements in this case, and we're simply binding it to a Java data structure. While this can theoretically be turned into a memory-filling attack, the primary problem then is the request's large parameter structure itself, with the data binding being secondary.
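The single-pass case can be sketched the same way (again a simplified model, not Spring internals; the class and method names are made up): when the request supplies the complete set of values, e.g. "items=v0&items=v1&...", the whole value array is converted to a List in one go, so no index-by-index auto-growing happens and the auto-grow limit is never consulted.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: one-shot conversion of all submitted values, as for a
// multi-value request parameter bound to a List<String> property.
public class FullPopulationSketch {

    static List<String> bindAll(String[] submittedValues) {
        // No padding with dummy elements, hence no auto-grow check here.
        return new ArrayList<>(Arrays.asList(submittedValues));
    }

    public static void main(String[] args) {
        String[] values = new String[4000];
        for (int i = 0; i < values.length; i++) {
            values[i] = "v" + i;
        }
        List<String> items = bindAll(values);
        System.out.println(items.size()); // 4000, regardless of any auto-grow limit
    }
}
```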
So it looks like what you have in mind is a general collection size limit for data binding, even for fully populated arrays/collections. We could introduce a separate setting with such semantics; however, I wonder whether that specific case is a real problem in practice. After all, large memory-consuming values can also be specified for simple Strings etc.; general concerns about large incoming HTTP request bodies would have to be dealt with much earlier, before the request even reaches an MVC controller.