[SPARK-22425][CORE][SQL] record inputs/outputs imported/generated by DataFrameReader/DataFrameWriter to event log #21642
Conversation
The code style problem can be found by running the UTs locally, so maybe we should do that first. We'd better also add a corresponding test in AppStatusListenerSuite.
 */
@DeveloperApi
case class SparkListenerInputUpdate(format: String,
    options: Map[String, String],
 */
@DeveloperApi
case class SparkListenerOutputUpdate(format: String,
    mode: String,
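Taken together, the two events this diff introduces could look roughly like the sketch below. Only the leading parameters of each case class appear in the excerpts above, so the field lists shown here are a partial sketch, and the small in-memory `RecordingListener` is our own stand-in for the real `AppStatusListener` handling, not code from the PR:

```scala
import scala.collection.mutable.ArrayBuffer

// Partial sketch of the proposed events; only the parameters visible in
// the diff excerpts are reproduced here.
case class SparkListenerInputUpdate(
    format: String,
    options: Map[String, String])

case class SparkListenerOutputUpdate(
    format: String,
    mode: String,
    options: Map[String, String])

// Hypothetical in-memory listener standing in for AppStatusListener:
// it just records each event it receives.
class RecordingListener {
  private val inputs = ArrayBuffer.empty[SparkListenerInputUpdate]
  private val outputs = ArrayBuffer.empty[SparkListenerOutputUpdate]

  def onInputUpdate(event: SparkListenerInputUpdate): Unit = inputs += event
  def onOutputUpdate(event: SparkListenerOutputUpdate): Unit = outputs += event

  def inputCount: Int = inputs.size
  def outputCount: Int = outputs.size
}

// Feed one read and one write through the stand-in listener.
val counts: (Int, Int) = {
  val l = new RecordingListener
  l.onInputUpdate(SparkListenerInputUpdate("parquet", Map("path" -> "/tmp/in")))
  l.onOutputUpdate(SparkListenerOutputUpdate("json", "overwrite", Map("path" -> "/tmp/out")))
  (l.inputCount, l.outputCount)
}
```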
ditto
@@ -19,6 +19,7 @@ package org.apache.spark.status

 import java.util.Date
 import java.util.concurrent.ConcurrentHashMap
 import java.util.concurrent.atomic.AtomicLong
Import order error here.
@@ -73,6 +74,10 @@ private[spark] class AppStatusListener(
   // around liveExecutors.
   @volatile private var activeExecutorCount = 0

   private val inputDataSetId = new AtomicLong(0)
   private val outputDataSetId = new AtomicLong(0)
   private val maxRecords = conf.getInt("spark.data.maxRecords", 1000)
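The `AtomicLong` counters in this hunk hand out thread-safe, monotonically increasing dataset IDs. A minimal sketch of how they might interact with the `maxRecords` cap; the `DataSetIdTracker` class, its `nextInputId` helper, and the `Option` return type are our own illustration, not code from the PR:

```scala
import java.util.concurrent.atomic.AtomicLong

// Hypothetical wrapper: assigns IDs with getAndIncrement (safe under
// concurrent callers) and stops handing out IDs once the cap is hit.
class DataSetIdTracker(maxRecords: Int = 1000) {
  private val inputDataSetId = new AtomicLong(0)

  /** Returns Some(id) while under maxRecords, None once the cap is reached. */
  def nextInputId(): Option[Long] = {
    val id = inputDataSetId.getAndIncrement()
    if (id < maxRecords) Some(id) else None
  }
}

// With a cap of 2, the third request is rejected.
val tracker = new DataSetIdTracker(maxRecords = 2)
val ids = Seq(tracker.nextInputId(), tracker.nextInputId(), tracker.nextInputId())
```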
What's this spark.data.maxRecords for? Maybe you should follow the config pattern in core/src/main/scala/org/apache/spark/status/config.scala.
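For reference, a hedged sketch of what that suggestion might look like, following the `ConfigBuilder` entry style used in status/config.scala; the entry name `MAX_DATA_RECORDS` and its placement are assumptions, not code from that file:

```scala
// Hypothetical typed config entry replacing the ad-hoc conf.getInt call.
private[spark] val MAX_DATA_RECORDS =
  ConfigBuilder("spark.data.maxRecords")
    .intConf
    .createWithDefault(1000)
```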
Test build #4393 has finished for PR 21642 at commit
Can one of the admins verify this patch?
We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
[SPARK-22425][CORE][SQL] record inputs/outputs imported/generated by DataFrameReader/DataFrameWriter to event log
What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
How was this patch tested?
(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
(If this patch involves UI changes, please attach a screenshot; otherwise, remove this)
Please review http://spark.apache.org/contributing.html before opening a pull request.