[SPARK-21775][Core] Dynamic log level settings for executors #18987
lvdongr wants to merge 10 commits into apache:master
Conversation
… be configured when you use DirectKafkaInputDStream to connect to Kafka in a Spark Streaming application that has been successfully created.
Can one of the admins verify this patch?
This seems like a ton of complexity. I don't think this functionality is worth nearly this much.
I agree with @srowen, and I'm not a fan of the UI either.
This seems like a very specific requirement. AFAIK no Hadoop-related project supports dynamically changing the log level. Do we really need this feature in Spark? Do we have another workaround that doesn't require changing the code? From my perspective, changing this amount of code to support such a specific requirement is usually hard to justify to the community.
The log level setting is a very useful function. Our team runs a Spark application, and whenever we want to see the debug log we have to restart the application. So we developed this function.
We won't merge this; it's too much overhead for little gain. You can close this.
OK. Thank you all the same for your review, @srowen @jerryshao @ajbozarth.
It is foolish that Spark does not support this feature.
What changes were proposed in this pull request?
Sometimes we want to change an executor's log level after the application has already been deployed, either to see more detailed information or to reduce the volume of log output. Editing the log4j configuration file is not convenient, so we add the ability to change the log level of a running executor.
https://issues.apache.org/jira/browse/SPARK-21775
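The underlying mechanism the PR relies on is that a JVM logger's level can be changed on a live process, without a restart. A minimal sketch of that idea, using `java.util.logging` as a stand-in for Spark's log4j (the actual PR code is not shown in this conversation, and the class and method names below are illustrative, not from the patch):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DynamicLogLevel {
    // Change a named logger's level on a running JVM. In the PR's setting,
    // a command like this would be delivered to each live executor.
    static void setLogLevel(String loggerName, Level level) {
        Logger.getLogger(loggerName).setLevel(level);
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("executor");

        // At WARNING, fine-grained (debug-like) messages are suppressed.
        setLogLevel("executor", Level.WARNING);
        System.out.println(log.isLoggable(Level.FINE));   // false

        // Flip to FINE while the process keeps running: debug output
        // becomes visible without restarting the application.
        setLogLevel("executor", Level.FINE);
        System.out.println(log.isLoggable(Level.FINE));   // true
    }
}
```

Note that on the driver side Spark already exposes `SparkContext.setLogLevel(String)` for this kind of runtime change; the gap this PR targets is propagating such a change to already-running executors.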
How was this patch tested?
manual tests
Please review http://spark.apache.org/contributing.html before opening a pull request.