To increase the throughput of loading submissions into BigQuery, switch to loading them in large chunks pulled from PostgreSQL, while still using load jobs.
The streaming mechanism is troublesome in our case: its buffers must be flushed before any DELETE or UPDATE operation can run on the table, there is no way to force a flush, and a full flush can take up to 90 minutes. The TRUNCATE operation, which would otherwise have suited us, currently has the same problem. This prevents us from always loading via streaming, as it would break tests that need to empty the database repeatedly.
Doing this will also help us move closer to making the BigQuery dataset public: data in PostgreSQL will be largely de-duplicated, making partitioning of the BigQuery table more viable and reducing the cost of queries.
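The chunked load-job approach described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: the chunk size, the NDJSON serialization, and the table names are assumptions, and only the `google-cloud-bigquery` load-job call is taken from the real client library.

```python
# Sketch: load submissions into BigQuery in large chunks via load jobs,
# instead of streaming inserts. Chunk size and names are illustrative.
import io
import json
from itertools import islice

CHUNK_SIZE = 50_000  # rows per load job; tune against job quotas


def chunked(rows, size=CHUNK_SIZE):
    """Yield lists of up to `size` rows from any row iterator
    (e.g. a server-side PostgreSQL cursor)."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch


def rows_to_ndjson(batch):
    """Serialize a batch of dict rows to newline-delimited JSON bytes,
    the format accepted by BigQuery load jobs."""
    return io.BytesIO(
        "\n".join(json.dumps(row) for row in batch).encode("utf-8")
    )


def load_chunks(bq_client, rows, table_id):
    """Submit one BigQuery load job per chunk of rows."""
    from google.cloud import bigquery  # google-cloud-bigquery client

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    for batch in chunked(rows):
        job = bq_client.load_table_from_file(
            rows_to_ndjson(batch), table_id, job_config=job_config
        )
        job.result()  # block until this load job completes
```

Because each chunk is a separate load job rather than a streaming insert, nothing lands in the streaming buffer, so DELETE, UPDATE, and TRUNCATE on the destination table remain available immediately after loading.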
Remove loading submissions into BigQuery until we come up with a way to
increase throughput (likely pulling chunks from PostgreSQL). This should
help us deal with the backlog in the production submission queue.
We might also need either to trigger Cloud Functions directly from
queue messages, to reduce latency, or simply to switch to a persistent
Cloud Run service.
Concerns: #541
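For the direct-triggering option mentioned above, a Pub/Sub-triggered Cloud Function would receive each queue message as it arrives instead of polling. The sketch below uses the standard background-function signature for Pub/Sub triggers; the function name and payload shape are assumptions.

```python
# Sketch: Pub/Sub-triggered Cloud Function entry point (background
# function signature). The payload structure is hypothetical.
import base64
import json


def handle_submission(event, context):
    """Called once per Pub/Sub message; `event["data"]` carries the
    base64-encoded message body."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    # ... process the submission described by `payload` here ...
    return payload
```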