[enhancement](spark-load) support dynamically setting env #12276
Conversation
@@ -103,19 +104,22 @@ public static DeployMode fromString(String deployMode) {
    // broker username and password
    @SerializedName(value = "brokerProperties")
    private Map<String, String> brokerProperties;
    @SerializedName(value = "envConfigs")
    private Map<String, String> envConfigs;
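The diff above adds an `envConfigs` map alongside `brokerProperties` on the Spark resource. As a rough illustration of the "env." prefix convention this PR describes, the sketch below collects prefixed properties into such a map; the class and method names are hypothetical, not Doris's actual code.

```java
import java.util.HashMap;
import java.util.Map;

public class EnvConfigExtractor {
    // Hypothetical constant matching the "env." prefix convention from the PR description.
    private static final String ENV_PREFIX = "env.";

    // Collect "env."-prefixed entries into an env map, stripping the prefix
    // so the remainder can be exported as a real environment variable name.
    public static Map<String, String> extractEnvConfigs(Map<String, String> properties) {
        Map<String, String> envConfigs = new HashMap<>();
        for (Map.Entry<String, String> entry : properties.entrySet()) {
            if (entry.getKey().startsWith(ENV_PREFIX)) {
                envConfigs.put(entry.getKey().substring(ENV_PREFIX.length()),
                        entry.getValue());
            }
        }
        return envConfigs;
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("env.HADOOP_USER_NAME", "alice");
        props.put("broker.username", "bob");
        System.out.println(extractEnvConfigs(props)); // {HADOOP_USER_NAME=alice}
    }
}
```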
Can the envConfigs be overridden in users' load SQLs? If so, please add the corresponding variable and update logic in ResourceDesc.
They can be overridden; the code is in the updateProperties function.
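To illustrate the override behavior the reviewer is asking about, here is a minimal sketch of a resource whose env settings from the load SQL take precedence over resource-level defaults. This is an assumption about the shape of the logic; the actual implementation is in Doris's updateProperties, and the class name here is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: per-job properties override the resource's env defaults.
public class ResourceEnvOverride {
    private final Map<String, String> envConfigs = new HashMap<>();

    // Defaults configured on the resource itself (keys without the "env." prefix).
    public ResourceEnvOverride(Map<String, String> defaults) {
        envConfigs.putAll(defaults);
    }

    // "env."-prefixed properties from the user's load SQL overwrite the defaults.
    public void updateProperties(Map<String, String> userProperties) {
        for (Map.Entry<String, String> e : userProperties.entrySet()) {
            if (e.getKey().startsWith("env.")) {
                envConfigs.put(e.getKey().substring("env.".length()), e.getValue());
            }
        }
    }

    public Map<String, String> getEnvConfigs() {
        return new HashMap<>(envConfigs);
    }
}
```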
LGTM
PR approved by at least one committer and no changes requested.
PR approved by anyone and no changes requested.
@chenlinzhong rebase
Force-pushed from 838af77 to 802503c
done
Force-pushed from 802503c to 6c6c851
LGTM
* [enhancement](spark-load)support dynamic set env and display spark appid
* [enhancement](spark-load)support dynamic set env
Proposed changes
Issue Number: close #xxx
Problem summary
FE submits tasks to Spark through the spark-submit client, which needs two environment variables: HADOOP_USER_NAME and HADOOP_USER_PASSWORD. These two variables do not support dynamic setting now. I want them to support dynamic setting so that different jobs can be submitted with different accounts.
Variables whose names start with the prefix env. are treated as environment variables. If the Hadoop authentication method is not simple, they do not need to be set.
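To show how such per-job environment variables could reach the spark-submit child process, here is a minimal sketch using the standard ProcessBuilder API. The class and method names are hypothetical; this is not the PR's actual launcher code.

```java
import java.io.IOException;
import java.util.Map;

public class SparkSubmitLauncher {
    // Start a child process with extra per-job environment variables merged in.
    // In the PR's scenario, `command` would be the spark-submit invocation and
    // envConfigs would carry e.g. HADOOP_USER_NAME / HADOOP_USER_PASSWORD.
    public static Process launch(String[] command, Map<String, String> envConfigs)
            throws IOException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.environment().putAll(envConfigs); // inherit parent env, then override per job
        pb.redirectErrorStream(true);
        return pb.start();
    }

    public static void main(String[] args) throws Exception {
        // Demo with a shell echo instead of spark-submit.
        Process p = launch(new String[] {"sh", "-c", "echo $HADOOP_USER_NAME"},
                Map.of("HADOOP_USER_NAME", "alice"));
        String out = new String(p.getInputStream().readAllBytes()).trim();
        System.out.println(out); // prints the injected value
        p.waitFor();
    }
}
```

Because each load job builds its own ProcessBuilder, two concurrent jobs can run under different accounts without touching the FE process's own environment.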
Checklist (Required)
Further comments
If this is a relatively large or complex change, kick off the discussion at dev@doris.apache.org by explaining why you chose the solution you did and what alternatives you considered, etc...