API for bulk inserting records into a table #1866
Error handling is really important here. What should happen if you submit 100 records and one of them has some kind of validation error? How should that error be reported back to you? I'm inclined to say that it defaults to all-or-nothing in a transaction - but there should be a …
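Roughly the behaviour I have in mind, sketched at the `sqlite3` level - the table, the validation rule and the helper function are invented purely for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table docs (id integer primary key, title text)")

def bulk_insert(db, rows):
    # All-or-nothing: the "with db" block commits only if every insert
    # succeeds, and rolls back the whole batch if any row raises.
    with db:
        for row in rows:
            if not row.get("title"):
                raise ValueError("title is required")
            db.execute("insert into docs (title) values (?)", [row["title"]])

rows = [{"title": "one"}, {"title": ""}, {"title": "three"}]
try:
    bulk_insert(db, rows)
except ValueError as ex:
    print(ex)  # the valid rows are rolled back along with the bad one

print(db.execute("select count(*) from docs").fetchone())  # (0,)
```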
Should this API accept CSV/TSV etc. in addition to JSON? I'm torn on this one. My initial instinct is that it should not - and there should instead be a Datasette client library / CLI tool you can use that knows how to turn CSV into batches of JSON calls for when you want to upload a CSV file. I don't think the usability of …
So for the moment I'm just going to concentrate on the JSON API. I can consider CSV variants later on, or as plugins, or both.
Likewise for newline-delimited JSON. While it's tempting to want to accept that as an ingest format (because it's nice to generate and stream) I think it's better to have a client application that can turn a stream of newline-delimited JSON into batched JSON inserts.
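Something like this hypothetical client - the endpoint URL, the `{"rows": ...}` payload shape and the use of `httpx` are all placeholders, since the API hasn't been designed yet:

```python
import itertools
import json
import sys

import httpx

BATCH_SIZE = 1000  # rows per request; a per-request limit is discussed below

def batches(iterable, size):
    # Yield lists of up to `size` items from any iterator
    it = iter(iterable)
    while batch := list(itertools.islice(it, size)):
        yield batch

def insert_ndjson(stream, url):
    # Turn a stream of newline-delimited JSON into batched JSON POSTs
    rows = (json.loads(line) for line in stream if line.strip())
    for batch in batches(rows, BATCH_SIZE):
        response = httpx.post(url, json={"rows": batch})
        response.raise_for_status()

if __name__ == "__main__":
    # e.g. cat data.ndjson | python client.py
    insert_ndjson(sys.stdin, "http://localhost:8001/data/mytable/-/insert")
```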
There's one catch with batched inserts: if your CLI tool fails halfway through you could end up with a partially populated table - since a bunch of batches will have succeeded first. I think that's OK. In the future I may want to come up with a way to run multiple batches of inserts inside a single transaction, but I can ignore that for the first release of this feature.
If people care about that kind of thing they could always push all of their inserts to a table called …
I'm going to set the limit at 1,000 rows inserted at a time. I'll make this configurable using a new …
Nasty catch on this one: I wanted to return the IDs of the freshly inserted rows. But... SQLite doesn't make it easy to get those back from a single multi-row insert. Two options then: …
That third option might be the way to go here. I should benchmark first to figure out how much of a difference this actually makes.
Quick crude benchmark:

```python
import sqlite3

db = sqlite3.connect(":memory:")

def create_table(db, name):
    db.execute(f"create table {name} (id integer primary key, title text)")

create_table(db, "single")
create_table(db, "multi")
create_table(db, "bulk")

def insert_singles(titles):
    inserted = []
    for title in titles:
        cursor = db.execute("insert into single (title) values (?)", [title])
        inserted.append((cursor.lastrowid, title))
    return inserted

def insert_many(titles):
    db.executemany("insert into multi (title) values (?)", ((t,) for t in titles))

def insert_bulk(titles):
    db.execute("insert into bulk (title) values {}".format(
        ", ".join("(?)" for _ in titles)
    ), titles)

titles = ["title {}".format(i) for i in range(1, 10001)]
```

Then in IPython I ran these:
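Presumably the timing calls looked something like this - only the results are quoted below, so the exact commands are a guess:

```python
%timeit insert_singles(titles)  # roughly 24 ms per loop (see below)
%timeit insert_many(titles)
%timeit insert_bulk(titles)     # roughly 3 ms per loop (see below)
```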
So the bulk insert really is a lot faster - 3ms compared to 24ms for single inserts, so ~8x faster.
This needs to support the following: …
I wonder if there's something clever I could do here within a transaction? Start a transaction. Write out a temporary in-memory table with all of the existing primary keys in the table. Run the bulk insert. Then run a query for every row whose primary key is not in that temporary table, to get back the rows that were just inserted.

I don't think that's going to work well for large tables. I'm going to go with not returning inserted rows by default, unless you pass a special option requesting that.
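Roughly this, sketched with `sqlite3` - the table, column and temp-table names are invented:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table docs (id integer primary key, title text)")
db.execute("insert into docs (title) values ('existing')")

new_titles = ["one", "two", "three"]

with db:
    # Snapshot the primary keys that existed before the bulk insert
    db.execute("create temp table existing_ids as select id from docs")
    db.execute(
        "insert into docs (title) values {}".format(
            ", ".join("(?)" for _ in new_titles)
        ),
        new_titles,
    )
    # Anything not in the snapshot must be a freshly inserted row
    inserted = db.execute(
        "select id, title from docs where id not in (select id from existing_ids)"
    ).fetchall()
    db.execute("drop table existing_ids")

print(inserted)  # [(2, 'one'), (3, 'two'), (4, 'three')]
```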
I changed my mind about the …
Similar to https://github.com/simonw/datasette-insert/blob/0.8/README.md#inserting-data-and-creating-tables
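For comparison, that plugin takes a JSON array of objects POSTed to a `/-/insert/...` URL - something like this, going by that README (details approximate):

```python
import httpx

rows = [
    {"id": 1, "title": "First post"},
    {"id": 2, "title": "Second post"},
]

# Creates the table if it doesn't already exist; ?pk= picks the primary key
response = httpx.post(
    "http://localhost:8001/-/insert/data/posts?pk=id",
    json=rows,
)
response.raise_for_status()
```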
I expect this to become by far the most common way that data gets into a Datasette instance - more so than the individual row API in: