Set expiry time for spark created temp table #310

Closed

karimEssawi opened this issue Feb 17, 2021 · 1 comment
karimEssawi commented Feb 17, 2021

Hi,

I've noticed that when I have a job reading from BigQuery, a temp table prefixed with _sbc_ is created. I gather this temp table is used to stage the query results, which the job then reads later on. The problem is that we have multiple jobs running on a scheduler (e.g. every 5 minutes), so they generate quite a lot of temp tables and clutter up BigQuery itself. The temp tables do have an expiry time of 24 hours, but is there an option to control that expiry, e.g. to set it to 15 minutes?
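For reference, this is the shape of job where we see it. A minimal sketch, assuming a hypothetical materialization dataset and query; reading via the `query` option is one of the paths that makes the connector materialize results into a `_sbc_`-prefixed temp table:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scheduled-bq-read").getOrCreate()

# Running a query (rather than loading a table directly) makes the
# connector materialize the results into a temporary _sbc_* table
# in the materialization dataset before Spark reads them.
df = (
    spark.read.format("bigquery")
    .option("viewsEnabled", "true")
    .option("materializationDataset", "staging")  # hypothetical dataset
    .option("query", "SELECT id, ts FROM my_project.events.clicks")  # hypothetical query
    .load()
)

df.show()
```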

@davidrabinowitz davidrabinowitz self-assigned this Feb 22, 2021
davidrabinowitz added a commit to davidrabinowitz/spark-bigquery-connector that referenced this issue Feb 24, 2021
davidrabinowitz (Member) commented

Fixed in version 0.19.0
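For anyone landing here, a hedged sketch of how the new setting can be applied. The option name below, `materializationExpirationTimeInMinutes`, is the one documented in the connector README (default 1440, i.e. 24 hours); the dataset and query are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scheduled-bq-read").getOrCreate()

# Ask the connector to expire its _sbc_* materialization tables after
# 15 minutes instead of the 24-hour default. Requires connector 0.19.0+.
df = (
    spark.read.format("bigquery")
    .option("viewsEnabled", "true")
    .option("materializationDataset", "staging")  # hypothetical dataset
    .option("materializationExpirationTimeInMinutes", "15")
    .option("query", "SELECT id, ts FROM my_project.events.clicks")  # hypothetical query
    .load()
)
```

Note the connector may re-use a temporary table across reads to reduce BigQuery computation, so very low expiry values can cause errors.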
