Strategy for inserting rows with table-valued parameters #2484

GSPP opened this Issue Jun 27, 2015 · 6 comments

@GSPP
GSPP commented Jun 27, 2015

With SQL Server you can use table-valued parameters (TVPs) to perform bulk DML very quickly and elegantly. Unfortunately, this is tedious (a rough sketch of the manual steps follows the list below):

  1. Create a table type
  2. Write an INSERT statement
  3. Create a DataTable or an IEnumerable
  4. Execute a command
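
A hedged sketch of what these manual steps look like today; the table, type and column names (dbo.Orders, dbo.OrderRowType, CustomerId, Total) are invented for illustration:

```csharp
// Minimal sketch of the manual TVP workflow (names are illustrative).
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

static class ManualTvpInsert
{
    // Step 1 (one-time DDL):
    //   CREATE TYPE dbo.OrderRowType AS TABLE
    //     (CustomerId int NOT NULL, Total decimal(18, 2) NOT NULL);

    // conn is assumed to be open.
    public static void BulkInsert(SqlConnection conn, IEnumerable<(int CustomerId, decimal Total)> rows)
    {
        // Step 3: build a DataTable whose columns match the table type.
        var table = new DataTable();
        table.Columns.Add("CustomerId", typeof(int));
        table.Columns.Add("Total", typeof(decimal));
        foreach (var row in rows)
            table.Rows.Add(row.CustomerId, row.Total);

        // Steps 2 and 4: write the INSERT and execute it with the TVP as a structured parameter.
        using (var cmd = new SqlCommand(
            "INSERT INTO dbo.Orders (CustomerId, Total) SELECT CustomerId, Total FROM @rows;", conn))
        {
            var p = cmd.Parameters.AddWithValue("@rows", table);
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.OrderRowType";
            cmd.ExecuteNonQuery();
        }
    }
}
```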

Here is a proposal for how EF could pull off all of this transparently for inserts and deletes in SaveChanges:

Generate a table type programmatically for all tables that require it. 3 issues:

  1. The table type probably should be created in a separate transaction so that concurrent SaveChanges calls do not contend. But really this is optional because type creation is a one-time initialization.
  2. A type name must be chosen and the type's structure might change over time. This can be solved by appending a strong hash of the type's structure to its name (see the sketch after this list). That way table types are immutable. If the table structure changes, a new type will be created. The hash would include column names, their order, data types, nullability and the primary key. Old TVP types are simply never cleaned up. They only arise on schema change, which is rare.
  3. This requires DDL permission. EF could check on startup for these permissions. Alternatively, the feature could be opt-in.
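
A minimal sketch of what the hash-based naming from issue 2 might look like; the exact hash inputs, the SHA-256 choice and the TVP_<table>_<hash> naming scheme are assumptions for illustration. EF would then issue CREATE TYPE under that name only if it does not already exist:

```csharp
// Hypothetical sketch of deriving an immutable TVP type name from the table's structure.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static class TvpTypeNaming
{
    public static string BuildTypeName(
        string tableName,
        IEnumerable<(string Name, string SqlType, bool IsNullable, bool IsPrimaryKey)> columnsInOrder)
    {
        // The signature covers column names, order, data types, nullability and the
        // primary key, so any schema change produces a new, independent type.
        var signature = string.Join("|",
            columnsInOrder.Select(c => $"{c.Name}:{c.SqlType}:{c.IsNullable}:{c.IsPrimaryKey}"));

        using (var sha = SHA256.Create())
        {
            var hash = sha.ComputeHash(Encoding.UTF8.GetBytes(signature));
            var suffix = BitConverter.ToString(hash).Replace("-", "").Substring(0, 16);
            return $"TVP_{tableName}_{suffix}";   // e.g. TVP_Orders_<16 hex chars>
        }
    }
}
```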

I think all of this would work for deletes as well. Updates are trickier because there is a great variety of columns that might change or not change. Maybe EF can use a single type for all updates and simply ignore some columns for some updates.

Non-issues:

  1. Generated values (identity, defaults). The OUTPUT clause can return them elegantly (see the sketch after this list).
  2. Performance. For SaveChanges calls with few rows the existing row-by-row strategy should be used. Over the network the TVP strategy is probably better starting with 2-3 rows due to roundtrip times. On the same machine I measured the threshold to be 10 for a particular workload.
  3. Semantics. I don't think the semantics of a SaveChanges call would be affected in any way. It's simply a "go faster" feature.
  4. "Do we really need this given that SqlBulkCopy exists?": TVPs can kind of compete with SqlBulkCopy. They are an integer factor slower but like 2 orders of magnitude faster than row-by-row inserts. 2 OOM go a long way. Often, this will be good enough. I think SqlBulkCopy usage will drop dramatically once this feature is available in EF.
  5. Topological ordering of DML. I don't see any issues here. Doing an entire table at once should always result in a valid topological order. Alternatively, EF could detect safe cases and fall back to row-by-row otherwise.
  6. Row order. Index uniqueness validation logically happens at the end of a DML statement. Therefore the order of rows in the TVP does not matter for correctness.
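
Regarding non-issue 1, a hedged sketch of how OUTPUT could return store-generated values for a TVP-based insert; the table/type names and the int identity column Id are assumptions:

```csharp
// Sketch of reading back store-generated values with OUTPUT (names are illustrative).
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

static class TvpInsertWithOutput
{
    public static List<int> InsertAndReturnIds(SqlConnection conn, DataTable rows)
    {
        var ids = new List<int>();
        using (var cmd = new SqlCommand(
            "INSERT INTO dbo.Orders (CustomerId, Total) " +
            "OUTPUT INSERTED.Id " +                      // one identity value per inserted row
            "SELECT CustomerId, Total FROM @rows;", conn))
        {
            var p = cmd.Parameters.AddWithValue("@rows", rows);
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.OrderRowType";

            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    ids.Add(reader.GetInt32(0));
        }
        return ids;
    }
}
```

One subtlety is correlating the returned values back to the right in-memory entities, since the order of OUTPUT rows is not guaranteed to match the input order; an actual implementation would need to handle that correlation carefully.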

Performance benefits:

  1. Many-row inserts are much more efficient because SQL Server can "see" all rows at once and maintain all indexes optimally (usually by writing to one index at a time in sorted order).
  2. Fewer round-trips. Over a network a round-trip might cost 0.5 ms.
  3. Less CPU on both client and server.

Who says that EF is not suitable for bulk inserts? Right now that might be the case but it does not have to be so. This would be awesome.

The point of this ticket is to present a viable plan for how this could be implemented in EF. Some inspiration for the dev team.

@ErikEJ
Contributor
ErikEJ commented Jun 27, 2015

Would make a great PR! Did you test the batch update/insert feature that is already implemented?

@divega
Member
divega commented Jun 28, 2015

@GSPP I agree with @ErikEJ, this sounds like a great PR. Currently EF7 supports batching, so things are already factored in a way that multiple operations of the same type are grouped and submitted to the server together, and that could make a TVP-based strategy easier to plug in. I would expect TVPs to have a perf advantage over what we do, but I don't know how much.

@ErikEJ
Contributor
ErikEJ commented Jun 29, 2015

I think TVPs would only have a measurable perf advantage when inserting 100s or 1000s of rows (which is not what you would normally do with EF in OLTP systems).

@GSPP
GSPP commented Jun 29, 2015

@ErikEJ I understand why you might think that, but measurements turn out not to support this view.

This TVP feature is meant for "bulky" inserts (100-1M rows). It is not primarily targeted at OLTP inserts (dozens of rows), but the perf benefits are visible there as well. I can only encourage you to try it because the gains can be staggering.

I'd like to quote this piece:

  1. Many-row inserts are much more efficient because SQL Server can "see" all rows at once and maintain all indexes optimally (usually by writing to one index at a time in sorted order).

Do not underestimate the difference that a better plan can make. Per-index maintenance in bigger batches can be significantly faster than issuing new queries and round-trips all the time. Even without indexes, imagine a single-row insert (small row, no indexes). 90% of the execution time is per-statement and per-batch overhead. The useful work (the per-row work) is small. With TVPs the per-statement and per-batch overheads are paid once for many rows (granted, they will be a little higher, but not by much).
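
To make that concrete with purely illustrative numbers (assumptions, not measurements): if a single-row insert costs about 0.45 ms of per-statement and round-trip overhead plus 0.05 ms of per-row work, then 1,000 row-by-row inserts take roughly 500 ms, while a single TVP insert of those 1,000 rows pays the overhead once, roughly 0.5 + 1,000 × 0.05 ≈ 50 ms, about a 10x improvement before any index-maintenance gains.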

I will not be able to issue a PR for this but I hope that this ticket might spark one!

@roji
Contributor
roji commented Jun 29, 2015

Not related but FYI I was just thinking about a similar trick for PostgreSQL/Npgsql: switch to PostgreSQL's COPY (bulk data transfer) protocol for modification batches with many inserts.
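
For illustration only, a very rough sketch of what such a COPY-based path could look like with Npgsql's binary import API (this assumes the Npgsql 4.x API surface; the table and column names are made up):

```csharp
// Hedged sketch of a COPY-based bulk insert via Npgsql binary import.
using System.Collections.Generic;
using Npgsql;
using NpgsqlTypes;

static class PostgresBulkInsert
{
    // conn is assumed to be open.
    public static void CopyOrders(NpgsqlConnection conn, IEnumerable<(int CustomerId, decimal Total)> rows)
    {
        using (var importer = conn.BeginBinaryImport(
            "COPY orders (customer_id, total) FROM STDIN (FORMAT BINARY)"))
        {
            foreach (var row in rows)
            {
                importer.StartRow();
                importer.Write(row.CustomerId, NpgsqlDbType.Integer);
                importer.Write(row.Total, NpgsqlDbType.Numeric);
            }
            importer.Complete();   // commits the COPY; without it the data is discarded
        }
    }
}
```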

@ErikEJ
Contributor
ErikEJ commented Jun 29, 2015

@GSPP Sounds great, I have actually implemented this for doing INSERTs into a single table with 3-10 rows, and it works great. DDL permissions could be a showstopper, however.

@rowanmiller rowanmiller added this to the Backlog milestone Jun 29, 2015