Saving to an EU location via the spark API #44
Comments
@dennishuo is this project still being maintained? No commits since Dec and no replies on the issues :(
Sorry for the delay; indeed, this project is still being maintained. I believe the location is a property of the destination BigQuery dataset, and IIRC the connector doesn't auto-create an output dataset if it doesn't exist yet. The configuration location also has to be set to match the destination dataset, because the configuration location is used for the temporary dataset that holds uncommitted results before they are appended into the destination table at commit time. You should make sure the destination dataset you pre-created is already in EU and keep setting the configuration key to match. Incidentally, are you using the older version of the connector or the newer one?
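For concreteness, here is a hedged sketch (in Scala, against the Hadoop configuration that saveAsNewAPIHadoopDataset picks up) of pointing the connector at a pre-created EU dataset and setting the location to match. All project, dataset, and table names are placeholders, and the exact name of the location key is an assumption; check BigQueryConfiguration in your connector version for the real constants.

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("bq-eu-write"))
val hconf = sc.hadoopConfiguration

// The destination must be a dataset that was pre-created in the EU location;
// the connector will not create it for you.
hconf.set("mapred.bq.project.id", "my-project")            // placeholder: project running the job
hconf.set("mapred.bq.output.project.id", "my-project")     // placeholder: project owning the dataset
hconf.set("mapred.bq.output.dataset.id", "my_eu_dataset")  // pre-created in EU
hconf.set("mapred.bq.output.table.id", "my_output_table")  // placeholder table name

// Assumed key name for the location setting used for the temporary dataset;
// verify the exact key against BigQueryConfiguration in your connector version.
hconf.set("mapred.bq.location", "EU")
```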
Thanks for the reply! I was using the older version, so perhaps that was the reason; I will check if that sorts the problem out. Out of interest, will using the newer version give any performance improvements? What's the difference between the old and the new?
Indeed, the newer version has been measured to have better performance. The difference between the old and the new is that the old version tried to get too fancy and write straight into BigQuery temporary tables, calling "CopyTable" at commit time to move the results into the destination table. The new version writes to GCS first via the GCS connector for Hadoop, and at commit time loads those files into BigQuery.

We're still in the process of switching documentation over to recommending the new connector version as the default; since it's newer, it's possible there are new bugs we haven't found yet, but generally, since it's just built on the well-tested GCS connector for Hadoop, it's expected to be fairly stable nonetheless.

Note however that neither the old nor the new version creates a new BigQuery "dataset", and location is still determined by the "dataset" location more so than by the config key. If you use the new version, the GCS staging location you specify should also be in EU if you're going to load it into an EU BigQuery dataset.
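As a rough illustration of the newer, GCS-staged path described above, the sketch below stages output in an EU bucket and routes the write through the newer output format. The bucket name is a placeholder, and the staging-bucket key and output-format class name are assumptions to verify against the connector version you are actually running.

```scala
import com.google.cloud.hadoop.io.bigquery.output.IndirectBigQueryOutputFormat

val hconf = sc.hadoopConfiguration

// Staging bucket for the intermediate files; create this bucket in EU so the
// staged data and the EU destination dataset are co-located.
hconf.set("mapred.bq.gcs.bucket", "my-eu-staging-bucket")  // placeholder bucket, created in EU

// Route the job through the newer, GCS-staged output format
// (verify the class name against your connector version).
hconf.set("mapreduce.job.outputformat.class",
  classOf[IndirectBigQueryOutputFormat[_, _]].getName)
```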
I see, well thanks for the clarification and of course your help!
Hi,
When saving to BQ using saveAsNewAPIHadoopDataset it defaults to the US location; is there any way to save to the EU location? Setting the Hadoop configuration to EU doesn't seem to be picked up by the connector.
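For reference, a minimal sketch of the kind of write being described: an RDD of (key, JsonObject) pairs saved with saveAsNewAPIHadoopDataset, which picks up whatever mapred.bq.* settings are in the SparkContext's Hadoop configuration. The record shape and JSON parsing here are illustrative assumptions rather than the connector's only supported form.

```scala
import com.google.gson.JsonParser
import org.apache.hadoop.io.NullWritable

// Each record becomes one BigQuery row; the connector reads its destination
// (project/dataset/table) and location settings from the mapred.bq.* keys
// already set on the Hadoop configuration.
val rows = sc.parallelize(Seq("""{"word": "hello", "count": 1}"""))
  .map(json => (NullWritable.get(), new JsonParser().parse(json).getAsJsonObject))

rows.saveAsNewAPIHadoopDataset(sc.hadoopConfiguration)
```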