Closed
Labels: api: bigquery (issues related to the googleapis/python-bigquery-pandas API); status: will not fix (invalid, inconsistent with product, or not on roadmap)
Description
Environment details
- OS type and version: Ubuntu 22.04
- Python version: 3.9.13
- pip version: 22.04
- pandas-gbq version: 1.5.3
Steps to reproduce
- Download a very large dataset, such as this dataset
- Convert it to a pandas.DataFrame
- Try to upload it using pandas_gbq
Code example
pandas_gbq.to_gbq(df, 'datasets.table', project_id='project_id', if_exists='replace', chunksize=100)
Stack trace
raise GenericGBQException("Reason: {0}".format(ex))
pandas_gbq.exceptions.GenericGBQException: Reason: 400 Resources exceeded during query execution: The query could not be executed in the allotted memory. Peak usage: 155% of limit.
Top memory consumer(s):
input table/file scan: 100%
Considerations
Even though I am using the chunksize parameter with a low value, I always get the allotted-memory error.
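Note that chunksize only controls how many rows pandas-gbq streams per request; the "Resources exceeded" error above is raised server-side when BigQuery materializes the replaced table. One possible workaround (a sketch, not an official fix) is to split the DataFrame manually and load the pieces as separate jobs: replace the table with the first piece, then append the rest. The table and project names below are placeholders, and the uploader argument is a hypothetical injection point added here so the chunking logic can be exercised without credentials.

```python
import numpy as np
import pandas as pd

try:  # pandas-gbq is only needed for the real upload
    import pandas_gbq
except ImportError:
    pandas_gbq = None


def upload_in_chunks(df, destination_table, project_id, n_chunks, uploader=None):
    """Replace the destination table with the first chunk, append the rest.

    `uploader` defaults to pandas_gbq.to_gbq; it is injectable for testing.
    """
    if uploader is None:
        uploader = pandas_gbq.to_gbq
    # np.array_split yields n_chunks DataFrames of (nearly) equal length
    for i, piece in enumerate(np.array_split(df, n_chunks)):
        uploader(
            piece,
            destination_table,
            project_id=project_id,
            if_exists="replace" if i == 0 else "append",
        )


# Example with placeholder names:
# upload_in_chunks(df, "datasets.table", "project_id", n_chunks=20)
```

Each piece becomes its own load job, so no single job has to materialize the whole table at once; whether this avoids the server-side memory limit depends on the data.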