# Deadlock with SQL Server when inserting a large number of related rows #21899
@mrpmorris What version of Microsoft.Data.SqlClient are you using? Some deadlock issues were fixed in the last few months, so make sure you are using 1.1.3 or 2.0.0. (This may require adding an explicit package reference to your project.)
Thanks for your quick reply! If I type …, … But I do not reference this package myself, so it must be referenced indirectly by …. Could you tell me how to update to V2? Is it as simple as adding the NuGet package to my project? It doesn't seem to be.
It should be. For example, adding an explicit package reference to Microsoft.Data.SqlClient 2.0.0 in your project file.
@ajcvickers I tried adding …
I've managed to repro this based on your code (see self-contained repro below). I'm not an expert in SQL Server or its locking/deadlocking behavior, but there do indeed seem to be cases where INSERTs alone can trigger a deadlock scenario (e.g. see this question). I don't think EF Core is doing anything wrong here, i.e. you'd see the same errors if you executed these inserts concurrently without EF Core.

As a workaround, you can enable retry on failure in your context's OnConfiguring - this will automatically retry SaveChanges when a deadlock error occurs (I've confirmed this makes the code sample pass):

```csharp
builder.UseSqlServer("Server=DESKTOP-G05BF1U;Database=EFCoreConcurrencyTest;Trusted_Connection=True;",
    o => o.EnableRetryOnFailure());
```

There may be other, better solutions for avoiding the deadlock situation - I'd recommend searching for SQL Server solutions to this without any connection to EF Core.

Complete repro sample:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;

class Program
{
    const int Tasks = 5;

    static async Task Main(string[] args)
    {
        await using var ctx = new AppDbContext();
        await ctx.Database.EnsureDeletedAsync();
        await ctx.Database.EnsureCreatedAsync();

        var trigger = new ManualResetEvent(false);
        var readySignals = new List<ManualResetEvent>();
        var processingTasks = new List<Task>();

        foreach (int index in Enumerable.Range(1, Tasks))
        {
            var readySignal = new ManualResetEvent(false);
            readySignals.Add(readySignal);
            var task = CreateDataAsync(trigger, readySignal);
            processingTasks.Add(task);
        }

        // Wait until every task has built its data, then release them all at once.
        WaitHandle.WaitAll(readySignals.ToArray());
        trigger.Set();
        await Task.WhenAll(processingTasks.ToArray());
        Console.WriteLine("Finished");
    }

    private static async Task CreateDataAsync(ManualResetEvent trigger, ManualResetEvent signalReady)
    {
        await Task.Yield();
        using (var context = new AppDbContext())
        {
            var incomingFile = new IncomingFile();
            for (int i = 1; i <= 1000; i++)
            {
                // The constructor wires the event into incomingFile.Events.
                new IncomingFileEvent(incomingFile);
            }
            context.IncomingFile.Add(incomingFile);
            signalReady.Set();
            trigger.WaitOne();
            await context.SaveChangesAsync().ConfigureAwait(false);
        }
    }
}

public class AppDbContext : DbContext
{
    public DbSet<IncomingFile> IncomingFile { get; set; }

    static ILoggerFactory ContextLoggerFactory
        => LoggerFactory.Create(b => b.AddConsole().AddFilter("", LogLevel.Information));

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder
            .UseSqlServer(@"Server=localhost;Database=test;User=SA;Password=Abcd5678;Connect Timeout=60;ConnectRetryCount=0")
            .EnableSensitiveDataLogging()
            .UseLoggerFactory(ContextLoggerFactory);
}

public abstract class EntityBase
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public Guid Id { get; set; } = Guid.NewGuid();
}

public class IncomingFile : EntityBase
{
    public virtual ICollection<IncomingFileEvent> Events { get; private set; } = new List<IncomingFileEvent>();
}

public class IncomingFileEvent : EntityBase
{
    public Guid IncomingFileId { get; private set; }
    public virtual IncomingFile IncomingFile { get; private set; }

    [Obsolete("Serialization only")]
    public IncomingFileEvent() { }

    public IncomingFileEvent(IncomingFile incomingFile)
    {
        if (incomingFile is null)
            throw new ArgumentNullException(nameof(incomingFile));

        IncomingFile = incomingFile;
        IncomingFileId = incomingFile.Id;
        IncomingFile.Events.Add(this);
    }
}
```
Hi @roji, I have EnableRetryOnFailure in my app already; it doesn't fix the problem, it just retries more times before it fails. I wrote some code to do the same thing using SqlCommand and I don't get a deadlock.
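For context, the retry strategy being discussed is tunable beyond its defaults - a minimal sketch using the SQL Server provider's overload, with arbitrary values:

```csharp
optionsBuilder.UseSqlServer(connectionString,
    o => o.EnableRetryOnFailure(
        maxRetryCount: 10,                       // default is 6
        maxRetryDelay: TimeSpan.FromSeconds(60), // default is 30 seconds
        errorNumbersToAdd: null));               // extra SQL error numbers to treat as transient
```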
At the end of the day, EF Core is just sending SqlCommands here - can you please turn on EF logging and compare what it sends to your own SqlCommand attempt? My "complete repro sample" above does this (it's very close to your original code). If you really can't repro this with raw SqlCommand, and are convinced EF is doing something specific here which triggers the deadlock, I can also look at reproducing with SqlCommand.
EF Core creates temporary tables and merges them; this is something I didn't do in my SqlCommand code. Perhaps that is the problem? Running 64 concurrent tasks, each with 3000 child rows, caused a deadlock for me with EF Core but not using SqlCommand. Does this help?
What are you referring to here? Looking at the logs produced by EF for the code sample, there aren't any temporary tables or merges... I'd recommend turning on logging (again, look at my full code sample above) and comparing that to whatever you're doing with raw SqlCommand. Note that as this is a concurrency issue, timing is extremely important here, and the SqlCommand sample should be as close as possible to the EF Core sample. For example, your original EF code above has …
You can ensure EF doesn't create temporary tables by setting MaxBatchSize to 1.
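A minimal sketch of where that option lives (connection string here is a placeholder):

```csharp
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    => optionsBuilder.UseSqlServer(
        "Server=localhost;Database=test;Trusted_Connection=True;", // placeholder
        o => o.MaxBatchSize(1)); // one statement per batch: plain INSERTs, no temporary table/MERGE
```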
@roji In my real app I see EF Core is inserting into …
@AndriySvyryd That prevents the bug, but it takes over twice as long to insert. I can set the max batch size to 64 and it is close to the SqlCommand speed and doesn't error - but how can I know which setting is safe to use across my whole DB?
@mrpmorris You'll just have to fine-tune it for your scenario. In EF Core 5.0 we'll set it to 42 by default as that is close to the sweet spot in typical scenarios.
@AndriySvyryd How is that tuning done? Simply changing it and running lots of scenarios?
I cannot possibly emulate every scenario of a production system. With so many different people uploading so many data files, there are just too many combinations. This concerns me. It means I have to test as much as I can, release the app into production, and then keep watching it and releasing a new version every time I want to tweak the value. It's a magic number.

@roji Could you please explain why this ticket is considered closed-external?
Maybe consider using SqlBulkCopy?
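A minimal sketch of that approach, using the table shape from the repro (the method and class names here are illustrative). Note that by default SqlBulkCopy does not check foreign key constraints; passing SqlBulkCopyOptions.CheckConstraints re-enables them:

```csharp
using System;
using System.Data;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

public static class BulkInsert
{
    // Bulk-loads child rows for one parent instead of sending a multi-row INSERT.
    public static async Task InsertEventsAsync(SqlConnection connection, Guid parentId, int rowCount)
    {
        var table = new DataTable();
        table.Columns.Add("Id", typeof(Guid));
        table.Columns.Add("IncomingFileId", typeof(Guid));
        for (var i = 0; i < rowCount; i++)
            table.Rows.Add(Guid.NewGuid(), parentId);

        using var bulk = new SqlBulkCopy(connection) { DestinationTableName = "dbo.IncomingFileEvent" };
        await bulk.WriteToServerAsync(table);
    }
}
```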
Hi @ErikEJ, is that a suggestion for me or the EF Core team? I'm using EF Core's SaveChangesAsync to persist domain objects; the SqlCommand code was just to compare behaviour.
@mrpmorris There isn't much else we can do here. Perhaps with #18990 the performance would be good enough that we don't need to create a temporary table. But until then you'll have to pick a conservative batch size.
@mrpmorris I'm not an expert here so I may well be missing something - would love to be corrected if so - but here's my reasoning.

The code sample in #21899 (comment) (which is almost identical to your own original code sample) reliably reproduces a deadlock, but inspecting the actual SQL shows only plain old INSERT statements - no temporary tables or merging. Since it's possible to trigger deadlocks with such basic SQL on SQL Server, I think it makes sense to consider this as non-EF-related, hence the closed-external label.

Now, in other scenarios EF Core does indeed make use of temporary tables and merging, and the fact that it does so might increase the likelihood of deadlocks - or it may not. If I didn't have a deadlock repro with plain INSERTs, it might have made sense to consider not using merging, but at the moment it seems like the deadlock may occur in any case.

I hope the above makes sense. At the end of the day EF is only sending SQL statements to SQL Server; at the moment I can't see anything inherently wrong in what EF is sending, and the problem you're encountering seems like it's reproducible without using EF at all. But if you see this differently, I'd be happy to know more.

(BTW, note that the issue itself isn't closed yet, even though I put closed-external on it - that means we're still discussing it and haven't yet reached a final conclusion.)
Can you suggest where I can report an issue for SQL Server? I doubt I'll find it on GitHub :)
I'd be surprised if this is considered a bug in SQL Server, rather than expected behavior... I'd recommend first producing a repro which purely uses SqlCommands (based on the SQL EF produces for the repro above), and posting that as a question on Stack Overflow. Some people will probably be able to help you out (this question may be relevant, as I wrote above). Otherwise, this is the official page for getting help for SQL Server.
This can't be expected behaviour for a world-class DB. It didn't happen in Firebird or Postgres.
@mrpmorris you may be right and we may be missing something here - all I know is that I saw the simple INSERTs from your EF repro triggering the deadlock, so the best way forward is to reproduce that without EF Core. If you can't manage to do that for some reason, I can try to help.
To make sure I wasn't muddying the waters at all, I have just created a new console app inserting using SqlCommand. It uses the same code to ensure tasks start to insert data at the same time. 64 concurrent tasks each inserting 1 parent row and 3000 child rows is taking a (very) long time, but it hasn't errored yet.
@mrpmorris I'll take a look in the next few days.
@mrpmorris your code sample sends separate commands for each child row, whereas EF Core inserts all child rows in a single INSERT command with multiple values. The below reproduces the same deadlock without using EF Core.

Repro code without EF:

```csharp
private static async Task CreateDataWithSqlCommand(ManualResetEvent trigger, ManualResetEvent readySignal)
{
    await Task.Yield();

    using var connection = new SqlConnection(AppDbContext.ConnectionString);
    await connection.OpenAsync().ConfigureAwait(false);
    var transaction = (SqlTransaction)await connection.BeginTransactionAsync(System.Data.IsolationLevel.ReadCommitted).ConfigureAwait(false);

    readySignal.Set();
    trigger.WaitOne();

    Guid parentId = Guid.NewGuid();

    // Insert the parent row.
    string fileCommandSql = "insert into IncomingFile (Id) values (@Id)";
    using var fileCommand = new SqlCommand(fileCommandSql, connection, transaction);
    fileCommand.Parameters.Add("@Id", System.Data.SqlDbType.UniqueIdentifier).Value = parentId;
    await fileCommand.ExecuteNonQueryAsync().ConfigureAwait(false);

    // Insert all child rows in a single INSERT with multiple value rows, as EF Core does.
    using var fileEventCommand = new SqlCommand
    {
        Connection = connection,
        Transaction = transaction
    };

    var commandTextBuilder = new StringBuilder("INSERT INTO [IncomingFileEvent] ([Id], [IncomingFileId]) VALUES ");
    for (var i = 1; i <= NumberOfChildRows * 2; i += 2)
    {
        commandTextBuilder.Append($"(@p{i}, @p{i + 1})");
        if (i < NumberOfChildRows * 2 - 1)
            commandTextBuilder.Append(',');
        fileEventCommand.Parameters.AddWithValue($"@p{i}", Guid.NewGuid());
        fileEventCommand.Parameters.AddWithValue($"@p{i + 1}", parentId);
    }
    fileEventCommand.CommandText = commandTextBuilder.ToString();
    await fileEventCommand.ExecuteNonQueryAsync().ConfigureAwait(false);

    await transaction.CommitAsync().ConfigureAwait(false);
}
```

I admit I'm also a bit surprised that this can cause SQL Server to deadlock - it would be good to understand exactly what's going on.

As @AndriySvyryd mentioned above, when we have a proper ADO.NET batching API (dotnet/runtime#28633), and it's implemented by SqlClient (dotnet/SqlClient#19), then the EF saving mechanism for SQL Server should be re-thought (#18990). At least in theory, we may want to evaluate switching to (batched) command-per-row instead of a single command for all rows. The current mechanism probably also has the disadvantage of producing query plan cache pollution, when different numbers of rows are being inserted. But all this would need to be carefully analyzed and measured.
Thank you for the repro, @roji! Thank you everyone, I really appreciate how much effort you have put into this on my behalf!
@ErikEJ Can you provide some more details about the required index?
Sure - as I briefly mentioned 6 days ago, there is no index on the foreign key, as also noted by Dan Guzman in his SO answer. This causes table scans, leading to deadlocks, as rows are added to the child table.
@ErikEJ I've added the following:

```sql
CREATE INDEX IX_FK ON IncomingFileEvent(IncomingFileId);
```

And I'm still getting the deadlock. Did I misunderstand? Below is the full repro without EF - can you tweak it to pass?

Full repro code:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

class Program
{
    const string ConnectionString =
        @"Server=localhost;Database=test;User=SA;Password=Abcd5678;Connect Timeout=60;ConnectRetryCount=0";
    const int Tasks = 5;
    const int NumberOfChildRows = 1_000;

    static async Task Main(string[] args)
    {
        // Setup: recreate the tables, including the FK index suggested above.
        await using (var conn = new SqlConnection(ConnectionString))
        {
            await conn.OpenAsync();
            await using var cmd = conn.CreateCommand();
            cmd.CommandText = @"
IF OBJECT_ID('dbo.IncomingFileEvent', 'U') IS NOT NULL
    DROP TABLE IncomingFileEvent;
IF OBJECT_ID('dbo.IncomingFile', 'U') IS NOT NULL
    DROP TABLE IncomingFile;

CREATE TABLE IncomingFile (
    Id uniqueidentifier NOT NULL,
    CONSTRAINT PK_IncomingFile PRIMARY KEY (Id)
);

CREATE TABLE IncomingFileEvent (
    Id uniqueidentifier NOT NULL,
    IncomingFileId uniqueidentifier NOT NULL,
    CONSTRAINT PK_IncomingFileEvent PRIMARY KEY (Id),
    CONSTRAINT FK_IncomingFileEvent_IncomingFile_IncomingFileId FOREIGN KEY (IncomingFileId) REFERENCES IncomingFile (Id) ON DELETE CASCADE
);

CREATE INDEX IX_FK ON IncomingFileEvent(IncomingFileId);";
            await cmd.ExecuteNonQueryAsync();
        }

        var trigger = new ManualResetEvent(false);
        var readySignals = new List<ManualResetEvent>();
        var processingTasks = new List<Task>();

        for (var i = 0; i < Tasks; i++)
        {
            var readySignal = new ManualResetEvent(false);
            readySignals.Add(readySignal);
            var task = CreateDataWithSqlCommand(trigger, readySignal);
            processingTasks.Add(task);
        }

        // Release all tasks at the same moment to maximize contention.
        WaitHandle.WaitAll(readySignals.ToArray());
        Console.WriteLine("Starting inserts...");
        trigger.Set();
        await Task.WhenAll(processingTasks.ToArray());
        Console.WriteLine("Finished");
    }

    private static async Task CreateDataWithSqlCommand(ManualResetEvent trigger, ManualResetEvent readySignal)
    {
        await Task.Yield();

        using var connection = new SqlConnection(ConnectionString);
        await connection.OpenAsync().ConfigureAwait(false);
        var transaction = (SqlTransaction)await connection.BeginTransactionAsync(IsolationLevel.ReadCommitted).ConfigureAwait(false);

        readySignal.Set();
        trigger.WaitOne();

        var parentId = Guid.NewGuid();

        var fileCommandSql = "INSERT INTO IncomingFile (Id) VALUES (@Id)";
        using var fileCommand = new SqlCommand(fileCommandSql, connection, transaction);
        fileCommand.Parameters.Add("@Id", SqlDbType.UniqueIdentifier).Value = parentId;
        await fileCommand.ExecuteNonQueryAsync().ConfigureAwait(false);

        using var fileEventCommand = new SqlCommand
        {
            Connection = connection,
            Transaction = transaction
        };

        var commandTextBuilder = new StringBuilder("INSERT INTO [IncomingFileEvent] ([Id], [IncomingFileId]) VALUES ");
        for (var i = 1; i <= NumberOfChildRows * 2; i += 2)
        {
            commandTextBuilder.Append($"(@p{i}, @p{i + 1})");
            if (i < NumberOfChildRows * 2 - 1)
                commandTextBuilder.Append(',');
            fileEventCommand.Parameters.AddWithValue($"@p{i}", Guid.NewGuid());
            fileEventCommand.Parameters.AddWithValue($"@p{i + 1}", parentId);
        }
        // commandTextBuilder.Append(" OPTION (LOOP JOIN)");
        fileEventCommand.CommandText = commandTextBuilder.ToString();
        await fileEventCommand.ExecuteNonQueryAsync().ConfigureAwait(false);

        await transaction.CommitAsync().ConfigureAwait(false);
    }
}
```
@ErikEJ And if the schema was created from the EF model (i.e. Migrations), then we would have added that index, right? So this will only be an issue if the schema is created outside of EF and the index isn't added?
@ajcvickers correct
@ErikEJ how do you mean? Are you referring to generating separate INSERT statements per row instead of one statement for multiple rows?
@roji Yes, I mean a full old-school INSERT statement per row.
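To make the distinction concrete, a sketch of the two statement shapes under discussion (table and parameter names taken from the repro above; values elided):

```sql
-- What EF Core sends today: one INSERT carrying many row values.
INSERT INTO [IncomingFileEvent] ([Id], [IncomingFileId])
VALUES (@p1, @p2), (@p3, @p4) /* ... hundreds more rows ... */;

-- The old-school shape: one INSERT statement per row.
INSERT INTO [IncomingFileEvent] ([Id], [IncomingFileId]) VALUES (@p1, @p2);
INSERT INTO [IncomingFileEvent] ([Id], [IncomingFileId]) VALUES (@p3, @p4);
```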
I think that's covered by option 2 under #21899 (comment) - this seems to degrade perf considerably (on SQL Server). But I don't see any connection to foreign key indexes so far.
@ErikEJ I added an index to IncomingFileEvent.IncomingFileId and it still happened. I also made the PKs non-clustered - no change. I also gave the child table a composite PK - no change. I'm in touch with MS support at the moment; I'll provide an update once we know what is happening.
Thanks @mrpmorris! Any further info on this would be useful indeed.
Reply from the product group was the following:

> The deadlocks thrown by the code are due to locks being acquired during the PK lookup operation to ensure referential integrity. As the code inserts thousands of records in a single batch, instead of using a Nested Loop join + Seek operator the optimizer decides to use a Merge Join and Index Scan. The merge join plan is much cheaper from the cost perspective. The solutions the customer may consider are: …

My thoughts: …
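For concreteness, the commented-out hint in the repro above is the kind of thing that forces the nested-loop plan for the FK check - a sketch; whether this exact hint was among the solutions the product group listed is an assumption:

```sql
-- Appending a query hint so the FK validation uses a Nested Loop join + Seek
-- instead of the Merge Join + Index Scan the optimizer prefers for large batches.
INSERT INTO [IncomingFileEvent] ([Id], [IncomingFileId])
VALUES (@p1, @p2), (@p3, @p4) /* ... remaining rows ... */
OPTION (LOOP JOIN);
```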
Thanks for posting these details... We'll take another look at this, especially once the better batching API is implemented in SqlClient.
@roji Are you referring to SqlBulkCopy? If so, I tried it and it also suffers from the deadlock problem unless you disable referential integrity checks (which then runs the risk of invalid data).
@mrpmorris no, I'm referring to dotnet/runtime#28633, and to its planned implementation in SqlClient (dotnet/SqlClient#19). This hopefully would allow us to use one INSERT statement per row (but many of those batched in the same command), rather than one INSERT statement per multiple rows, without degrading performance. This may even improve performance as it would reduce query plan cache pollution - the current mechanism sends a different INSERT statement each time, since the number of rows is variable.
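For illustration, a sketch of what (batched) command-per-row could look like under the API shape proposed in dotnet/runtime#28633; the SqlBatch/SqlBatchCommand names follow that proposal and its planned SqlClient implementation, so treat them as assumptions rather than a shipped API. The connection, transaction, parentId and rowCount variables are from the repro above:

```csharp
// One INSERT statement per row, but all statements sent in a single round trip.
using var batch = new SqlBatch(connection, transaction);
for (var i = 0; i < rowCount; i++)
{
    var insert = new SqlBatchCommand(
        "INSERT INTO IncomingFileEvent (Id, IncomingFileId) VALUES (@id, @fileId)");
    insert.Parameters.Add(new SqlParameter("@id", Guid.NewGuid()));
    insert.Parameters.Add(new SqlParameter("@fileId", parentId));
    batch.BatchCommands.Add(insert);
}
await batch.ExecuteNonQueryAsync();
```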
Thanks for the link @roji - it looks like this will make things faster, and I look forward to it, but I don't think it will fix the deadlock problem.
@mrpmorris I think we've already seen above (see #21899 (comment)) that when one-statement-per-row is used, the deadlock disappears - it only seems to happen when a single INSERT statement is used with many row VALUES, or am I mistaken?
@roji You are correct, but I can't help but be frightened :) I really hope it works, good luck! If I am reading this correctly, it should be in the November release. Is that correct?
With good reason :)

Me too. In any case, we wouldn't be making any changes in the EF Core SQL Server provider without carefully measuring and making sure it makes sense to switch.

You're referring to the new batching API? No, that's definitely not going into 5.0 - but it's one of the things I want to push early for 6.0.
I've just finished my final support call with MS regarding this issue. I understand the cause of the problem, and hopefully I will use the correct terminology to describe it. When SQL Server inserts into the child table, it checks the referential integrity using one of two methods: …

The engineer said that batching will not solve this problem, as it is related to how SQL Server checks referential integrity when inserting child rows rather than to how the rows are being inserted.

So, that's the technical bit over - I'll offer my layman opinions :)

(EF Core) …

(SQL Server) … However, this logic obviously fails when the table is low-volume due to being new but will be high-volume as soon as it goes into production. All large tables are empty at some point.

If you have any channels open to you to influence SQL Server, I would like to make a suggestion. I would have mentioned it on the call, but it has only just come to me as I am writing this out. The suggestion is to only use the MERGE JOIN (scan) option if the table is small enough AND the index hasn't been modified for X amount of time. This way SQL Server can be cautious and use LOOP JOIN when it suspects that the table is undergoing a period of data changes, and use MERGE JOIN when no updates have taken place for a while. It's essentially a small window of time during which the use of MERGE JOIN is disabled on any table that has been updated recently. I think that would solve the problem not only for EF Core but for all users of the product.
@mrpmorris thanks again for continuing to follow up on this and for providing all the valuable info. Your explanation makes a lot of sense.

I'm still a bit confused by this - we've seen above that doing one-statement-per-row makes the deadlock go away. It could be that when all rows are in the same INSERT statement, the lock is held for much longer, whereas when doing one-statement-per-row it gets constantly released and retaken, avoiding the deadlock (or something along those lines...).

Yeah, we do have #6717 in general for tracking OPTIONS/hints - that issue is more focused on queries, but also discusses general table hints. We can definitely work on docs/guidance if we see people hitting this more.

I'm personally a bit skeptical of heuristic mechanisms such as this... It sounds like it would make the manifestation of the issue even rarer, which could also be viewed as a bad thing (as it's even harder to detect before production, or to repro reliably...).
@roji I don't know enough to comment myself on the batch insert, which is why I asked the engineer, who said it wouldn't make a difference. I can double-check this if you wish. As for my suggestion, SQL Server is already making this assessment, which is why it selects a MERGE JOIN (based on volume only). My suggestion would prevent a false positive in the case of high-volume tables that are new (and thus empty or small); e.g. only use MERGE JOIN if the table is empty, or if it is small and hasn't been updated for a suitable amount of time.
I definitely think there's a point here to be explored/understood, but we can also postpone the investigation and come back to it once the new batching API is actually implemented... Of course, if you have the spare time and are interested :)
I am interested, and will make time :) PS: You have all been extremely helpful. Thank you very much!
Thank you for being so involved in this, it's rare (and very valuable) to get this kind of very in-depth analysis!
Description of problem
I've created a server-side system that is updated very heavily and runs on SQL Server on Azure. I am seeing deadlocks all over the place when inserting completely unrelated data, even in the simplest application, and even when READ_COMMITTED_SNAPSHOT is enabled.
If this cannot be fixed we will have to switch to another persistence framework, which will cost us a lot of time and money and possibly cause our project to overrun (it is imperative this does not happen as there is a hard deadline to enter the market).
I am currently seeing this on locally run Azure Functions updating a local SQL Server 2019 Developer edition database.
Steps to reproduce
Create the following .NET Core console app and run it: …

Having turned on READ_COMMITTED_SNAPSHOT, I also tried every IsolationLevel available (except Chaos) and none solved the problem.
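For reference, that database-level setting is enabled with a statement along these lines (the database name is a placeholder):

```sql
ALTER DATABASE [EFCoreConcurrencyTest] SET READ_COMMITTED_SNAPSHOT ON;
```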
Exception

```
System.InvalidOperationException
  HResult=0x80131509
  Message=An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call.
  Source=Microsoft.EntityFrameworkCore.SqlServer
  StackTrace:
   at Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerExecutionStrategy.d__7`2.MoveNext()
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
   at Microsoft.EntityFrameworkCore.DbContext.d__54.MoveNext()
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
   at EFCoreConcurrencyTest.Program.d__2.MoveNext() in C:\Users\x\source\repos\EFCoreConcurrencyTest\EFCoreConcurrencyTest\Program.cs:line 45
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
   at EFCoreConcurrencyTest.Program.d__1.MoveNext() in C:\Users\x\source\repos\EFCoreConcurrencyTest\EFCoreConcurrencyTest\Program.cs:line 28
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
   at EFCoreConcurrencyTest.Program.Main(String[] args)

This exception was originally thrown at this call stack:
   Microsoft.Data.SqlClient.SqlCommand.ExecuteDbDataReaderAsync.AnonymousMethod__164_0(System.Threading.Tasks.Task<Microsoft.Data.SqlClient.SqlDataReader>)
   System.Threading.Tasks.ContinuationResultTaskFromResultTask<TAntecedentResult, TResult>.InnerInvoke()
   System.Threading.Tasks.Task..cctor.AnonymousMethod__274_0(object)
   System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, object)
   System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, object)
   System.Threading.Tasks.Task.ExecuteWithThreadLocal(ref System.Threading.Tasks.Task, System.Threading.Thread)
   System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(System.Threading.Tasks.Task)
   System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task)
   ...
   [Call Stack Truncated]

Inner Exception 1:
DbUpdateException: An error occurred while updating the entries. See the inner exception for details.

Inner Exception 2:
SqlException: Transaction (Process ID 52) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
```
Further technical details
EF Core version: 3.1.6
Database provider: Microsoft.EntityFrameworkCore.SqlServer
Target framework: .NET Core 3.1
Operating system: Windows 10
IDE: Visual Studio 2019 16.6.5
DB settings: …