For queries whose purpose is to create or replace tables (e.g. `CREATE OR REPLACE TABLE`), `read_gbq` currently downloads the full query result, which can eat up a lot of memory when the table is huge.
It turns out that since pandas-gbq==0.12.0, a new argument `max_results` limits the number of rows in the result dataframe; setting it to 0 for DDL queries avoids downloading any rows and saves that memory.
It would be nice for pandas to expose `max_results` in `read_gbq`, as pandas-gbq does.
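A minimal sketch of the workaround described above, calling pandas-gbq directly (assumes pandas-gbq >= 0.12.0; the project and table names are placeholders):

```python
def run_ddl(sql, project_id):
    """Execute a DDL query, discarding any result rows to save memory."""
    import pandas_gbq  # deferred so the sketch imports without GCP deps

    # max_results=0 (added in pandas-gbq 0.12.0) skips downloading rows,
    # so a huge CREATE OR REPLACE TABLE does not fill local memory.
    return pandas_gbq.read_gbq(sql, project_id=project_id, max_results=0)


# Example usage (hypothetical dataset/table names):
# run_ddl(
#     "CREATE OR REPLACE TABLE dataset.big AS SELECT * FROM dataset.src",
#     project_id="my-project",
# )
```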
Since max_results is a new kwarg (added in pandas-gbq 0.12.0), it
is handled and tested in the same way as use_bqstorage_api,
using the "new kwargs" mechanism to maintain backwards
compatibility with older pandas-gbq versions.