Accessing Google storage by Spark from outside Google cloud #48
Unfortunately not; the GCS connector integrates strictly through the Hadoop FileSystem interfaces, which have no clear notion of masters/workers and no way to broadcast metadata to all workers. Anything that could be implemented would end up fairly specific to a particular stack, e.g. relying on YARN, Spark, HDFS, or ZooKeeper to do some kind of keyfile distribution. Was it impractical because you need to specify different credentials per job, or something like that? If continuously syncing keyfile directories across your workers is difficult, one way to make it easier would be to hold the keyfiles on an NFS mount shared across all your nodes.
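As a rough sketch of the shared-mount idea: assuming a hypothetical keyfile at /mnt/nfs/keys/sa.json that is visible at the same path on every node, the GCS connector's service-account properties can be passed through Spark's Hadoop config prefix. The path and jar name here are placeholders.

```shell
# Sketch: every JVM (driver and executors) reads the keyfile from the same
# NFS-backed path, so no per-node keyfile syncing is needed.
# /mnt/nfs/keys/sa.json and my-job.jar are hypothetical placeholders.
spark-submit \
  --conf spark.hadoop.google.cloud.auth.service.account.enable=true \
  --conf spark.hadoop.google.cloud.auth.service.account.json.keyfile=/mnt/nfs/keys/sa.json \
  --conf spark.hadoop.fs.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem \
  my-job.jar
```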
@dennishuo It was really impractical when submitting Spark jobs from a Windows client to a Linux cluster: the Spark driver was running on Windows while the Spark cluster was hosted on Linux machines, so it was impossible to use the same credentials path on both Windows and Linux.
Hadoop offers a distributed cache: https://hadoop.apache.org/docs/r2.6.3/api/org/apache/hadoop/filecache/DistributedCache.html
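Spark's closest analog to the DistributedCache is shipping the keyfile with `--files`, which copies it into each executor's working directory. A sketch, with placeholder paths and jar name; note that the driver resolves relative paths against its own working directory, so it may still need a separate driver-side setting:

```shell
# Sketch: distribute the keyfile via Spark's file distribution instead of
# pre-placing it on every worker. Executors see --files payloads in their
# working directory, so a bare filename can work executor-side.
spark-submit \
  --files /local/path/gcs-key.json \
  --conf spark.hadoop.google.cloud.auth.service.account.enable=true \
  --conf spark.hadoop.google.cloud.auth.service.account.json.keyfile=gcs-key.json \
  my-job.jar
```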
@dennishuo We have a very similar problem and are solving it in the suggested manner -- making the keyfiles available as local files on workers. However, this approach currently has a few problems:
While 3) seems resolvable by using LinuxContainerExecutor, 1) and 2) do not seem to have an easy solution at this time. I'm thinking about a mechanism that would provide different sets of auth properties depending on the principal name. Something like this:
Do you believe this type of functionality could become part of the upstream driver?
@dennishuo We have a very similar problem. I wanted to set up a Dataproc cluster for multiple users. The compute engine uses default or custom service account credentials to connect to the storage bucket, and these have no relation to the user principals who submit the jobs (or I couldn't find an option to control this). That makes the Dataproc cluster insecure and creates the problem mentioned by @chemikadze: it introduces another level of indirection in a multi-user environment, where the key file used does not correspond to the principal. Is there any workaround or solution available?
@krishnabigdata In my case, we solved that indirection by implementing a wrapper around the GCS Hadoop driver which maps users to keys according to a configured mapping: users are mapped to groups, and groups are mapped to a particular "group" service account.
@chemikadze Thanks for your reply. In my case we are submitting the job using
All we need is the following for Hadoop MapReduce jobs submitted by users.
Current: a user who has access to submit jobs to the Dataproc cluster can use any storage bucket the service account has access to.
Required: a user who has access to submit jobs to the Dataproc cluster can only use the storage buckets their own user account has access to.
So far I couldn't find a way to do it. Can you please help me with it? Is there any workaround or solution available to this problem?
@krishnabigdata you can use GCP Token Broker in conjunction with Kerberos to secure a Dataproc cluster for the multi-user use case with per-user GCS authentication.
Hi Medb, do you have any ideas about connecting on-prem PySpark to a GCP bucket, so that I can read the bucket data on-prem?
Hi guys,
We're trying to access Google storage from within a Spark on YARN job (writing to gs://...) on a cluster that resides outside Google Cloud.
We have set up the correct service account and credentials but are still facing some issues:
The spark.hadoop.google.cloud.auth.service.account.keyfile property points to the credentials file on the Spark driver, but the Spark code (workers running on different servers) still tries to access the same file path, which doesn't exist there. We got it to work correctly by placing the credentials file at the exact same location on both the driver and the workers, but this is not practical and was only a temporary workaround. Is there any delegation token mechanism by which the driver authenticates with Google Cloud and sends the token to the workers, so they don't need to have the same credential key at the exact same path?
We also tried uploading the credential file (p12 or json) to the workers and setting spark.executorEnv.GOOGLE_APPLICATION_CREDENTIALS or spark.executor.extraJavaOptions to the file path (different from the driver file path), but we're getting:
Is there any documentation for this use case that we missed?
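For reference, the executor-side settings described above could be expressed roughly as follows. All paths are placeholders, and whether GOOGLE_APPLICATION_CREDENTIALS is honored depends on how the connector's authentication is configured, so this is a sketch of the attempted configuration rather than a known-working setup:

```shell
# Sketch: separate credential paths for driver and executors.
# spark.executorEnv.* sets an environment variable in each executor;
# extraJavaOptions passes a JVM system property (the connector may only
# honor the env var, not the -D property). Paths are placeholders.
spark-submit \
  --conf spark.executorEnv.GOOGLE_APPLICATION_CREDENTIALS=/worker/path/key.json \
  --conf spark.hadoop.google.cloud.auth.service.account.json.keyfile=/driver/path/key.json \
  my-job.jar
```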
Thanks,