
DynamicAllocation is ignored when I use the connector. #51

Closed
jgustave opened this issue Aug 17, 2019 · 1 comment

@jgustave
If I don't use the BigQuery connector, my job uses dynamic allocation and scales to make use of the cluster automatically.
Once I add the call to the BigQuery connector, the job is no longer dynamic and sticks to the specified spark.executor.cores, etc. I verified that the configs in both environments are identical and have dynamicAllocation enabled.
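For reference, the dynamic-allocation settings in question are standard Spark configuration properties; the values below are illustrative, not taken from the reporter's cluster:

```
# spark-defaults.conf (or passed via --conf on spark-submit)
spark.dynamicAllocation.enabled        true
spark.shuffle.service.enabled          true
spark.dynamicAllocation.minExecutors   1
spark.dynamicAllocation.maxExecutors   50
```

With these set, Spark should add and remove executors based on the pending task backlog instead of honoring a fixed spark.executor.instances.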

@Gaurangi94
Contributor

Hello,

Could you answer the following questions about your setup? It will help us resolve the issue faster.

  1. How do you enable dynamic allocation on your cluster? Have you added this setting in spark-defaults.conf, or do you enable it for each job?
  2. What parameters are you using to run the job WITH and WITHOUT the spark-bigquery-connector?

Thanks!
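As a sketch of what question 2 is asking for, the two invocations would typically differ only in the connector jar; the jar path, script name, and configs below are placeholders, not the reporter's actual command lines:

```
# WITHOUT the connector: dynamic allocation scales executors as expected
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  my_job.py

# WITH the connector: same configs, plus the BigQuery connector jar
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --jars gs://spark-lib/bigquery/spark-bigquery-latest.jar \
  my_job.py
```

Comparing the full command lines (and any per-job --conf overrides such as spark.executor.instances, which disables dynamic allocation when set) is usually the fastest way to spot why one run scales and the other does not.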
