Refactor .csv to be an output renderer - and teach register_output_renderer to stream all rows #1062
Comments
Relevant code: `datasette/datasette/views/base.py`, lines 258 to 345 at commit d6f9ff7.
Implementing this would make #1356 a whole lot more interesting.
I can get regular […] for teaching […]:

```python
with sqlite_timelimit(conn, time_limit_ms):
    c.execute(query)
    for chunk in c.fetchmany(chunk_size):
        yield from chunk
```
If you went this route:

```python
with sqlite_timelimit(conn, time_limit_ms):
    c.execute(query)
    for chunk in c.fetchmany(chunk_size):
        yield from chunk
```

then I wonder if this was why you were thinking this feature would need a dedicated connection? Reading more, there's no real limit I can find on the number of active cursors (or, more precisely, active prepared statement objects, because SQLite doesn't really have cursors). Maybe something like this would be okay?

```python
with sqlite_timelimit(conn, time_limit_ms):
    c.execute(query)
    # step through at least one row to evaluate the statement; not sure if this is necessary
    yield c.fetchone()
    for chunk in c.fetchmany(chunk_size):
        yield from chunk
```

It seems quite weird that there isn't more of a limit on the number of active prepared statements, but I haven't been able to find one.
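One caveat with the snippets above: a single `for chunk in c.fetchmany(chunk_size)` call only consumes one batch (and iterates over the rows within it), so it would not stream the whole result set. A minimal self-contained sketch of a fully-draining chunked generator, using plain `sqlite3` (the `stream_rows` helper name is my own, not from Datasette):

```python
import sqlite3

def stream_rows(conn, query, chunk_size=100):
    # Hypothetical helper: yield every row of a query in fixed-size
    # batches, so the full result set is never held in memory at once.
    c = conn.execute(query)
    while True:
        chunk = c.fetchmany(chunk_size)
        if not chunk:  # fetchmany returns an empty list when exhausted
            break
        yield from chunk

# Demo against an in-memory database with more rows than one batch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(250)])
rows = list(stream_rows(conn, "SELECT n FROM t ORDER BY n"))
print(len(rows))  # 250
```

Because the cursor stays open across batches, this is exactly the situation where the question about active prepared statements (and whether a dedicated connection is needed) comes up.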
This can drive the upgrade of the `register_output_renderer` hook to be able to handle streaming all rows in a large query.
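To make the proposed upgrade concrete, here is a rough sketch of the shape a streaming-aware renderer registration might take. This is an assumption-laden illustration, not Datasette's actual API: the `can_stream` flag and the simplified `render_tsv(rows)` callback signature are hypothetical (the real hook's render callback takes keyword arguments), and the block deliberately avoids importing `datasette` so it stays self-contained:

```python
def render_tsv(rows):
    # Hypothetical renderer: format an iterable of row tuples as
    # tab-separated lines, one line per row.
    return "\n".join("\t".join(str(v) for v in row) for row in rows)

def register_output_renderer(datasette):
    # Sketch of what the upgraded hook might return: a renderer that
    # declares it can accept a row *iterator* rather than a fully
    # materialized list, so all rows of a large query can be streamed.
    return {
        "extension": "tsv",
        "render": render_tsv,
        "can_stream": True,  # assumed flag signalling streaming support
    }

print(render_tsv([(1, "a"), (2, "b")]))
```

The key design point is that `render_tsv` only iterates its input once, so it would work equally well when handed a chunked generator instead of a list.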