Batch not cleared on failure when using bulk update #1767
Comments
…atch via bulk update. Closes microsoft#1767 Signed-off-by: Ian Roberts <i.roberts@dcs.shef.ac.uk>
Hi @ianroberts, Thank you for the report and for the potential fix in the pull request. We'll take a look at both and get back to you as soon as we can.
Hi @ianroberts, Thank you again for your submissions. We've taken a look at both the issue presented and the proposed solution in the pull request. We would like to spend some more time reviewing both and will get back to you with a final decision at a later date.
…atch, whether via bulk update or via traditional batch insert. Closes microsoft#1767 Signed-off-by: Ian Roberts <i.roberts@dcs.shef.ac.uk>
EDIT: Ignore what I posted, I see where you're coming from. I'll keep looking into the issue.
Hi @ianroberts, Another update. I agree with you that there is a problem when you look at the code path, and how
I can't share my exact code that revealed this issue (it's buried deep inside a proprietary Grails web application), but the scenario is that I have a client uploading a stream of data items, and on the server side I'm processing them in chunks with a `PreparedStatement`:

```java
PreparedStatement ps = conn.prepareStatement(sql);
int idx = 0;
int numErrors = 0;
for (Item item : uploadedItems) {
    // set statement params from item data - in the real code this is more dynamic
    ps.setLong(1, item.getIdentifier());
    ps.setString(2, item.getSomethingElse());
    // add this row to the current batch
    ps.addBatch();
    // execute in chunks of 1000 rows
    if ((++idx % 1000) == 0) {
        try {
            numErrors += countFails(ps.executeBatch());
        } catch (BatchUpdateException e) {
            numErrors += countFails(e.getUpdateCounts());
        } catch (Exception e) {
            // handle other types of exception
        }
    }
}
// and do a final executeBatch when we run out of items
```

The batch is cleared if we are not using bulk copy, or the bug can be worked around by adding
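The snippet above calls a `countFails` helper that isn't shown. A plausible implementation (an assumption for illustration, not code from the actual application) counts the `Statement.EXECUTE_FAILED` entries in the update-count array returned by `executeBatch()` or `BatchUpdateException.getUpdateCounts()`:

```java
import java.sql.Statement;

// Hypothetical sketch of the countFails helper referenced above: counts the
// entries in an update-count array that signal a failed row.
final class BatchErrors {
    static int countFails(int[] updateCounts) {
        int fails = 0;
        for (int count : updateCounts) {
            if (count == Statement.EXECUTE_FAILED) { // -3: this command failed
                fails++;
            }
        }
        return fails;
    }
}
```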
I suppose a simple unit test could be
@Jeffery-Wasty I've added exactly this test to
… batch (#1869)

* Ensure that batchParamValues is cleared in all cases when running a batch, whether via bulk update or via traditional batch insert. Closes #1767 Signed-off-by: Ian Roberts <i.roberts@dcs.shef.ac.uk>
* Added regression test for #1767 Signed-off-by: Ian Roberts <i.roberts@dcs.shef.ac.uk>
* Cleaned up tests slightly.
* Minor formatting cleanup Signed-off-by: Ian Roberts <i.roberts@dcs.shef.ac.uk>

Co-authored-by: Jeff Wasty <v-jeffwasty@microsoft.com>
Driver version

All (tested with 9.2.1, but the same code is present in the latest `main` branch).

Problem description
The normal behaviour of `executeBatch` is to clear the pending batch of operations, regardless of whether the batch execution completes successfully, succeeds for some rows and fails for others, or fails entirely with an exception:

mssql-jdbc/src/main/java/com/microsoft/sqlserver/jdbc/SQLServerPreparedStatement.java
Lines 2174 to 2176 in d9a07bd
(the code to reset `batchParamValues` to `null` is in a `finally` block and fires in all cases, even when an exception is thrown). However, in the case where the bulk update API is used, `batchParamValues` is not cleared in the case of an exception:

mssql-jdbc/src/main/java/com/microsoft/sqlserver/jdbc/SQLServerPreparedStatement.java
Lines 2113 to 2122 in d9a07bd
(`batchParamValues = null` is not in a `finally`, so it does not run if any of the `bcOperation` methods were to throw an exception). This means that subsequent `addBatch` calls following a failure will append to the existing batch rather than starting a new one, and the next `executeBatch` will re-submit the previous batch of rows along with the new data.

Expected behavior
The logic should be the same in all cases: `executeBatch` should always clear the batch state so subsequent `addBatch` calls begin a new batch.

Actual behavior
As described above: in the bulk update case, the batch is not cleared on failure.
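The re-submission effect can be illustrated with a small stand-alone model. This is not driver code; `MiniBatch` and all of its members are invented for illustration. It simply shows why clearing the batch only on the success path (rather than in a `finally`) causes failed rows to be sent again on the next execute:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the driver's batching state; every name here is invented
// for illustration and is NOT the driver's actual code.
final class MiniBatch {
    private final boolean clearInFinally; // true = the expected behaviour
    private List<String> batchParamValues = new ArrayList<>();
    final List<String> submittedRows = new ArrayList<>(); // rows the "server" saw

    MiniBatch(boolean clearInFinally) {
        this.clearInFinally = clearInFinally;
    }

    void addBatch(String row) {
        batchParamValues.add(row);
    }

    // Simulates executeBatch(); when fail is true, this stands in for a
    // bcOperation method throwing mid-execution.
    void executeBatch(boolean fail) {
        try {
            submittedRows.addAll(batchParamValues);
            if (fail) {
                throw new IllegalStateException("simulated bulk copy failure");
            }
            batchParamValues = new ArrayList<>(); // only reached on success
        } finally {
            if (clearInFinally) {
                batchParamValues = new ArrayList<>(); // fires even on failure
            }
        }
    }
}
```

With `clearInFinally` false, a batch that fails stays queued, so the next `executeBatch` re-submits the old rows together with the new ones; with it true, each `executeBatch` starts the next batch from a clean slate.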
Any other details that can be helpful
Since `clearBatch` is idempotent, a workaround would be for user code to do an explicit `stmt.clearBatch()` call (in a `finally`) after every call to `executeBatch()`.