Search before asking
What would you like to be improved?
The Spark optimizer job bundles all the dependencies required for task execution. However, when a task's dependencies conflict with libraries already present in the Spark jars directory, the Spark optimizer cannot function properly.
We should fix this because the Spark jars directory often contains extra dependencies, such as the commonly bundled iceberg-spark-runtime package.
How should we improve?
Add support for the spark.executor.userClassPathFirst and spark.driver.userClassPathFirst parameters in the Spark optimizer, so that user-provided dependencies take precedence over the classes shipped with the Spark distribution.
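A minimal sketch of the proposed configuration: the two property names are standard (experimental) Spark settings, but wiring them into the optimizer's Spark configuration is the change this issue proposes, and where exactly they are set is an assumption.

```properties
# Proposed: pass these standard Spark properties through to the optimizer's
# Spark job, so classes from the user/task classpath are loaded before
# classes from the Spark distribution's jars directory
# (e.g. a bundled iceberg-spark-runtime that conflicts with the task's version).
spark.driver.userClassPathFirst=true
spark.executor.userClassPathFirst=true
```

Both properties are marked experimental in Spark's documentation, so they would likely need to be opt-in rather than defaults.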
Are you willing to submit PR?
Subtasks
No response
Code of Conduct