This how-to provides general instructions on how to deploy Worker and UDF (User-Defined Function) binaries, including which environment variables to set and some commonly used parameters when launching applications with `spark-submit`.
When deploying workers and writing UDFs, there are a few commonly used environment variables that you may need to set:
| Environment Variable | Description |
| --- | --- |
| `DOTNET_WORKER_DIR` | Path where the `Microsoft.Spark.Worker` binary has been generated. It is used by the Spark driver and is passed to the Spark executors. If this variable is not set, the Spark executors search the paths specified in the `PATH` environment variable. e.g. `C:\bin\Microsoft.Spark.Worker` |
| `DOTNET_ASSEMBLY_SEARCH_PATHS` | Comma-separated paths from which `Microsoft.Spark.Worker` loads assemblies. Note that if a path starts with `.`, the working directory is prepended; in YARN mode, `.` represents the container's working directory. e.g. `C:\Users\<user name>\<mysparkapp>\bin\Debug\<dotnet version>` |
| `DOTNET_WORKER_DEBUG` | Set this environment variable to `1` before running `spark-submit` if you want to debug a UDF. |
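On Linux or macOS, the variables above can be set in the shell before launching `spark-submit`. A minimal sketch (the paths are illustrative, not your actual install locations):

```shell
# Illustrative paths -- substitute the location where you extracted the
# worker and the folder where your app assemblies are built.
export DOTNET_WORKER_DIR="$HOME/bin/Microsoft.Spark.Worker"
export DOTNET_ASSEMBLY_SEARCH_PATHS="./bin/Debug/netcoreapp3.1"
export DOTNET_WORKER_DEBUG=1   # only needed while debugging a UDF

echo "$DOTNET_WORKER_DIR"
```

On Windows, the equivalent is `setx` (persistent) or `set` (current session) in a command window.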
Once the Spark application is bundled, you can launch it using `spark-submit`. The following table shows some of the commonly used options:
| Parameter Name | Description |
| --- | --- |
| `--class` | The entry point for your application. e.g. `org.apache.spark.deploy.dotnet.DotnetRunner` |
| `--master` | The master URL for the cluster. e.g. `yarn` |
| `--deploy-mode` | Whether to deploy your driver on the worker nodes (`cluster`) or locally as an external client (`client`). Default: `client` |
| `--conf` | Arbitrary Spark configuration property in `key=value` format. e.g. `spark.yarn.appMasterEnv.DOTNET_WORKER_DIR=.\worker\Microsoft.Spark.Worker` |
| `--files` | Comma-separated list of files to be placed in the working directory of each executor. e.g. `myLocalSparkApp.dll#appSeen.dll`: your application should use the name `appSeen.dll` to reference `myLocalSparkApp.dll` when running on YARN. |
| `--archives` | Comma-separated list of archives to be extracted into the working directory of each executor. e.g. `hdfs://<path to your worker file>/Microsoft.Spark.Worker.zip#worker`: this copies and extracts the zip file into a folder named `worker`. |
| `application-jar` | Path to a bundled jar including your application and all dependencies. e.g. `hdfs://<path to your jar>/microsoft-spark-<version>.jar` |
| `application-arguments` | Arguments passed to the main method of your main class, if any. e.g. `hdfs://<path to your app>/<your app>.zip <your app name> <app args>` |
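As a sketch of the `--files` aliasing described above, a client-mode submission against YARN might look like the following (the file names, jar name, and version are placeholders, not a definitive command for your environment):

```shell
# Sketch: client-mode submission to YARN. myLocalSparkApp.dll is shipped
# to each executor's working directory under the alias appSeen.dll, so the
# application must refer to it as appSeen.dll at runtime.
spark-submit \
  --class org.apache.spark.deploy.dotnet.DotnetRunner \
  --master yarn \
  --deploy-mode client \
  --files myLocalSparkApp.dll#appSeen.dll \
  microsoft-spark-<version>.jar \
  dotnet mySparkApp.dll
```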
Note: Please specify all the `--option` arguments before `application-jar` when launching applications with `spark-submit`; otherwise, they will be ignored. Please see more `spark-submit` options here and details on running Spark on YARN here.
1. Question: After submitting my Spark application, why do I get the following error?
Error: [ ] [ ] [Error] [TaskRunner] [0] ProcessStream() failed with exception: System.IO.FileNotFoundException: Assembly 'mySparkApp, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' file not found: 'mySparkApp.dll'
Answer: Please check whether the DOTNET_ASSEMBLY_SEARCH_PATHS environment variable is set correctly. It should point to the directory that contains your mySparkApp.dll.
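One quick sanity check is to confirm that the assembly actually exists in the directory the variable names. A minimal sketch (the path and file created here are hypothetical stand-ins for your real build output):

```shell
# Hypothetical layout: recreate the expected folder and confirm the
# assembly is visible at the path DOTNET_ASSEMBLY_SEARCH_PATHS names.
export DOTNET_ASSEMBLY_SEARCH_PATHS="./bin/Debug/netcoreapp3.1"
mkdir -p "$DOTNET_ASSEMBLY_SEARCH_PATHS"
touch "$DOTNET_ASSEMBLY_SEARCH_PATHS/mySparkApp.dll"

# The worker can only find mySparkApp.dll if this listing shows it.
ls "$DOTNET_ASSEMBLY_SEARCH_PATHS"
```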
2. Question: After I upgraded my .NET for Apache Spark version and reset the DOTNET_WORKER_DIR environment variable, why do I still get the following error?
Error: Lost task 0.0 in stage 11.0 (TID 24, localhost, executor driver): java.io.IOException: Cannot run program "Microsoft.Spark.Worker.exe": CreateProcess error=2, The system cannot find the file specified.
Answer: Please try restarting your PowerShell window (or other command windows) first so that it picks up the latest environment variable values, then start your program.
3. Question: After submitting my Spark application, I get the error System.TypeLoadException: Could not load type 'System.Runtime.Remoting.Contexts.Context'.
Error: [ ] [ ] [Error] [TaskRunner] [0] ProcessStream() failed with exception: System.TypeLoadException: Could not load type 'System.Runtime.Remoting.Contexts.Context' from assembly 'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=...'.
Answer: Please check the Microsoft.Spark.Worker version you are using. We currently provide two versions: .NET Framework 4.6.1 and .NET Core 2.1.x. In this case, Microsoft.Spark.Worker.net461.win-x64-<version> (which you can download here) should be used, since System.Runtime.Remoting.Contexts.Context is only available in .NET Framework.
4. Question: How do I run my Spark application with UDFs on YARN? Which environment variables and parameters should I use?
Answer: To launch a Spark application on YARN, environment variables should be specified as spark.yarn.appMasterEnv.[EnvironmentVariableName]. Please see the following example using `spark-submit`:
```shell
spark-submit \
  --class org.apache.spark.deploy.dotnet.DotnetRunner \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.appMasterEnv.DOTNET_WORKER_DIR=./worker/Microsoft.Spark.Worker-<version> \
  --conf spark.yarn.appMasterEnv.DOTNET_ASSEMBLY_SEARCH_PATHS=./udfs \
  --archives hdfs://<path to your files>/Microsoft.Spark.Worker.net461.win-x64-<version>.zip#worker,hdfs://<path to your files>/mySparkApp.zip#udfs \
  hdfs://<path to jar file>/microsoft-spark-2.4.x-<version>.jar \
  hdfs://<path to your files>/mySparkApp.zip mySparkApp
```
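To see why `DOTNET_WORKER_DIR` is set to `./worker/Microsoft.Spark.Worker-<version>` above: YARN extracts each `--archives` entry into a folder named after its `#` alias inside the container's working directory. A rough local emulation of that layout (the version number `1.0.0` is a hypothetical stand-in):

```shell
# Emulate YARN's handling of --archives ...Microsoft.Spark.Worker.zip#worker:
# the archive's contents land under a folder named after the alias, inside
# the container's working directory. "1.0.0" is only for illustration.
mkdir -p container/worker/Microsoft.Spark.Worker-1.0.0
touch container/worker/Microsoft.Spark.Worker-1.0.0/Microsoft.Spark.Worker

# DOTNET_WORKER_DIR must then point inside the extracted folder, relative
# to the container's working directory: ./worker/Microsoft.Spark.Worker-1.0.0
ls container/worker
```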