SQL Server: The incoming request has too many parameters. The server supports a maximum of 2100 parameters. Reduce the number of parameters and resend the request.
#18203
Comments
In further research, I found out that I also get this error:
Does the param limit count for all queries within the transaction? I also noticed that the query that errors out tries to limit the params to 2099, so I'm not quite sure why MSSQL thinks this query contains more than 2100 parameters:
|
I'm running into the same issue. |
same issue here |
On my end: v4.4.0, v4.12.0, v4.1.0 |
Can you help us to get to a reproduction (for SQL Server)? Any minimal project that leads to this behavior would help, so we can reproduce, debug and hopefully fix. Thanks. |
Hi @janpio, I was able to reproduce this error in isolation. You can reproduce this yourself by cloning and running the project within the following repo: https://github.com/JaapWeijland/prisma-mssql-issue The case is a bit different than I initially explained, but I was able to reproduce the same error using a smaller conceptual domain nonetheless. Happy to help. If there are any questions, please let me know. |
Hi. I'm getting this too in production code. We've batched our queries, and Prisma seems to manage the number of parameters to stay just below 2100, for example this one with 2094: https://gist.github.com/jonalport/9d4e594780a85f05d03eef6aabaaac47 But perhaps there is a mistake in certain scenarios where it goes over? |
Amazing @JaapWeijland, I can reproduce with this:
|
Everyone else who posted here (@jonalport @joshkay @mhesham93 @thugzook), can you please share the queries where you experienced this? The current theory is that it has to do with doing things one level deep in the query vs. on the top level. Example from @JaapWeijland:
|
From my personal experience, the error appears when I use a wider date range; otherwise it works fine. Here's my query: for example, if I want to generate data for a week it works just fine, but if I run the same query to pull data for a month it doesn't work as expected.
|
Not sure if the issue is related to that, here's my query that I had to refactor to using
|
AFAIK the issue is not unique to MSSQL, although the parameter-count error message probably is. This behavior is far from optimal in any situation where the database is non-trivial:
It usually ends in unexpected errors once the system is live and the data grows past a certain point. I've seen this happen in both MySQL and MSSQL (not because other databases are immune, but because these are the ones I routinely use). While I firmly believe that Prisma is generally superior, in this one aspect it is still trailing well behind Sequelize, whose join queries are excellent. The workaround I usually adopt for the typical application (where you have a list of rows and then do CRUD operations on the single record) is to create views for the lists, add them as entities to the schema, and perform the list queries against those views.
Con:
I believe this is a good compromise, but I'd really like to know when the Prisma team plans to fix this problem. |
This issue here is specifically about the SQL Server problem, which should not happen even with our current logic. For more general requests about changing our joining behavior, you can for example see #5184 |
I believe this issue is due to the batch size Prisma uses when chunking queries. It's possible to use the `QUERY_BATCH_SIZE` environment variable to lower it. |
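For example, a minimal `.env` sketch (the value 2000 is an arbitrary choice safely under SQL Server's 2100-parameter cap):

```
# Lower Prisma's query chunking threshold below SQL Server's 2100-parameter cap
QUERY_BATCH_SIZE=2000
```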
Same issue here. |
Any progress with this one? |
This worked for me, thank you @dmitri-gb. |
Thanks for the comments and investigation here, I can confirm that in a reproduction lowering the value to 2098 via an environment variable indeed works in certain situations. |
Setting the batch size to 2098 will work in some cases, but does not solve the root problem, because it is possible to add more parameters to the query. A fixed batch size is a fundamentally flawed approach; the batch size should instead be dynamically determined based on the number of other parameters. For example, if the batch size is 2098 and you wanted to get all compatible red doors:
prisma.findUnique({
  where: {
    productId,
  },
  select: {
    compatibleDoors: true,
    where: {
      color: 'red'
    }
  },
})
For your 'compatible doors' batched query, you'll end up with SQL something like this:
exec sp_executesql
  N'select * from Doors where color = @p1 and id in (..., @P2199)'
  , N'@p1 varchar(255), ... @P2199 int'
  , @p1='red', ... @P2199=12345
Here you have sp_executesql with 2101 parameters:
- 1 SQL string
- 1 parameter list
- 1 color
- 2098 ids
As a workaround I've set QUERY_BATCH_SIZE=2000 in my .env |
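The dynamic sizing suggested in the comment above could be sketched like this. This is a hypothetical helper, not Prisma internals; the constant names are illustrative. The overhead of 2 comes from the comment's own accounting: the SQL text and the parameter-declaration list each count toward the 2100 limit.

```typescript
// Hypothetical sketch of dynamic batch sizing (names are illustrative,
// not Prisma internals). SQL Server caps a request at 2100 parameters,
// and sp_executesql itself consumes two of them: the SQL text and the
// parameter-declaration list.
const MSSQL_MAX_PARAMS = 2100;
const SP_EXECUTESQL_OVERHEAD = 2;

// How many ids fit in one batch, given the other parameters in the query.
function maxIdsPerBatch(otherParamCount: number): number {
  return MSSQL_MAX_PARAMS - SP_EXECUTESQL_OVERHEAD - otherParamCount;
}

// Split a list of ids into batches of the computed size.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// With one extra filter parameter (e.g. color = 'red'), only 2097 ids fit,
// one fewer than the fixed 2098 batch size that triggers the error above.
const safeBatch = maxIdsPerBatch(1); // 2097
```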
Yup, I've run into this same issue. I think I had issues even at 2000, so I've downgraded to 1000. Probably an unnecessary performance hit, but it works.
|
Of course @thedrewguy, but using the environment variable confirms what the problem is (the automatically used number that determines when and how to chunk queries). Lowering the number by 1 will already improve the situation before we are able to properly fix this logic. Any suggestions, or ideally examples, for logic that can handle this? |
This is definitely a good step and I'm sure it'll fix most cases! I've seen other ORMs do this without parameterizing the "join" field (just a comma-separated list), but that leaves you open to SQL injection. There are two ways I could see to solve this:
1. Keep query batching but limit the batch size to fix the parameter issue: you'd have a MAX_BATCH_SIZE constant instead of QUERY_BATCH_SIZE, default 2098.
2. Do away with query batching and use temp tables to temporarily store join fields (my personal dream): I would guess this is a much more performant option for large datasets, and it's injection-safe.
Say you have a prisma query:
you could aim for sql like this.
Email query:
Multi-column relations and temp table indexing are important considerations for this approach. |
The incoming request has too many parameters. The server supports a maximum of 2100 parameters. Reduce the number of parameters and resend the request.
Posting this without a reproducible example at hand to offer (apologies), but we did run into a situation today where keeping our batch size lowered still triggered this error. Checking the process log, there are no other blocking queries or activities, as I'm the only activity on the DB. For reference, if it matters: the header table has 12,533,371 rows and the lines table has 46,731,122. (Edit: I should also note we aren't pulling back that many rows; I just wanted to note the table size, as maybe it's trying to search in an inefficient way? The actual result set we're pulling using a raw query is ~3,700 records, as it's filtered on a few different statuses/etc.) Sadly it seems the workaround is only a partial workaround 🥲 So it's back to writing some ugly code to split things up and do it manually on our end 😅 Hope we have made progress/are closer to a resolution on this one! 🙏🕯️ |
Wanted to flag that this issue is also happening in groupBy:
// ✅ runs using the QUERY_BATCH_SIZE in .env
await prisma.orderlines.findMany({
where: {
order_number: { in: ordersList.map((order) => order.orderNumber) },
},
});
// ❌ fails with 'too many params' error despite QUERY_BATCH_SIZE in .env
await prisma.orderlines.groupBy({
by: ['order_number', 'box_id'],
where: {
order_number: { in: ordersList.map((order) => order.orderNumber) },
},
}); |
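The "split things up and do it manually" workaround mentioned earlier in the thread can be sketched generically. This is a hypothetical helper, not a Prisma API; the query function is injected so the sketch stays database-agnostic, and `SAFE_BATCH` is an assumed constant matching the `QUERY_BATCH_SIZE=2000` workaround.

```typescript
// Sketch of a manual chunking workaround: run a parameter-heavy query in
// slices that stay under SQL Server's 2100-parameter limit, then merge.
// The query function is injected so this helper stays database-agnostic.
const SAFE_BATCH = 2000; // conservative, matching the QUERY_BATCH_SIZE workaround

async function queryInChunks<T, R>(
  values: T[],
  run: (slice: T[]) => Promise<R[]>,
  batchSize: number = SAFE_BATCH,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < values.length; i += batchSize) {
    results.push(...(await run(values.slice(i, i + batchSize))));
  }
  return results;
}
```

With Prisma this could wrap the failing call, e.g. running `prisma.orderlines.groupBy` once per slice of order numbers and concatenating the results. Note that for `groupBy`, groups whose rows span multiple slices would then need an extra merge step afterwards.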
This has been changed in 5.14.0 today: prisma/prisma-engines#4747 |
Bug description
When requesting a model including the relations to another model, and when more than 2100 relations exist, SQL Server throws an error:
Even when I try to batch fetch them with skip and take, no matter how small the batch, I get this error. It seems that under the hood, Prisma translates these relation queries into a query with thousands of query parameters. I think this should be handled more carefully by the client, because we can't control how Prisma builds the query, and apparently there is no failsafe in this case when using SQL Server.
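The skip/take batching attempt described above might look like the following sketch. The page-fetching function is injected; with Prisma it would wrap a `findMany` with `skip`/`take` (the model names in the comment are hypothetical). As the report notes, this does not avoid the error, since the relation query's parameter count does not depend on the page size.

```typescript
// Sketch of skip/take pagination as described in the report.
// The page fetcher is injected so this helper stays ORM-agnostic;
// with Prisma it would wrap something like
//   prisma.addOn.findMany({ skip, take, include: { compatibleDoors: true } })
// (model names here are hypothetical).
async function fetchAllPaged<T>(
  fetchPage: (skip: number, take: number) => Promise<T[]>,
  pageSize: number,
): Promise<T[]> {
  const all: T[] = [];
  for (let skip = 0; ; skip += pageSize) {
    const page = await fetchPage(skip, pageSize);
    all.push(...page);
    if (page.length < pageSize) break; // last (possibly short) page
  }
  return all;
}
```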
How to reproduce
compatibleAddOns
Expected behavior
Return an AddOn with 10.000 compatibleDoors.
Prisma information
Environment & setup
Prisma Version