
Conversation

@uncleGen
Contributor

What changes were proposed in this pull request?

Run a streaming application that sources data from Kafka and let many batches queue up in the job list, then stop the application. When the application is restarted from the checkpoint file, the Spark UI shows an input size of 0 for the queued batches recovered from the checkpoint.
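
For context, a minimal reproduction sketch of the scenario above (not part of this patch). The checkpoint directory, the socket source standing in for the Kafka source, and the artificial sleep are assumptions chosen only so that batches queue up before the restart.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CheckpointRecoveryRepro {
  // Assumed checkpoint location for this sketch.
  val checkpointDir = "/tmp/streaming-checkpoint"

  def createContext(): StreamingContext = {
    // local[2]: the socket receiver needs one core, batch processing needs another.
    val conf = new SparkConf().setAppName("checkpoint-recovery-repro").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(1))
    ssc.checkpoint(checkpointDir)

    // A socket stream stands in for the Kafka source described above.
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.foreachRDD { rdd =>
      Thread.sleep(5000) // slow the batches down so new ones queue up
      rdd.count()
    }
    ssc
  }

  def main(args: Array[String]): Unit = {
    // On the second run this recovers from the checkpoint; before this patch the
    // recovered queued batches show an input size of 0 in the Streaming UI.
    val ssc = StreamingContext.getOrCreate(checkpointDir, () => createContext())
    ssc.start()
    ssc.awaitTermination()
  }
}
```

Start it once, kill it while several batches are still queued, then start it again so it recovers from the checkpoint and check the Streaming tab of the UI.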

How was this patch tested?

Updated an existing unit test.

@SparkQA

SparkQA commented Jan 20, 2017

Test build #71708 has started for PR 16656 at commit 547ecb3.

@uncleGen
Contributor Author

process was terminated by signal 9

@uncleGen
Contributor Author

retest this please.

@SparkQA

SparkQA commented Jan 20, 2017

Test build #71709 has finished for PR 16656 at commit 547ecb3.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

ois.defaultReadObject()
generatedRDDs = new HashMap[Time, RDD[T]]()
// Stash the recovered input info until it can be reported to the InputInfoTracker.
recoveredReports = new HashMap[Time, StreamInputInfo]()
}
Contributor Author


Use recoveredReports to hold the recovered report information. We cannot report it to the inputInfoTracker here, because the jobScheduler has not been initialized yet at this point.
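
To make the intent concrete, a rough sketch (not the exact code in this patch) of how the stashed reports could be replayed once the jobScheduler is up. The helper name replayRecoveredInputInfo and its placement inside DStream are assumptions; InputInfoTracker.reportInfo(batchTime, inputInfo) is the existing reporting API.

```scala
import scala.collection.mutable.HashMap

import org.apache.spark.streaming.Time
import org.apache.spark.streaming.scheduler.{InputInfoTracker, StreamInputInfo}

// Hypothetical helper, assumed to live inside DStream (same package tree, so the
// private[streaming] scheduler classes are visible): replay the reports that
// readObject() stashed in recoveredReports, then drop them.
def replayRecoveredInputInfo(
    recoveredReports: HashMap[Time, StreamInputInfo],
    tracker: InputInfoTracker): Unit = {
  recoveredReports.foreach { case (batchTime, inputInfo) =>
    tracker.reportInfo(batchTime, inputInfo)
  }
  recoveredReports.clear()
}
```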

@uncleGen
Copy link
Contributor Author

uncleGen commented Feb 15, 2017

@zsxwing Could you please take a look? This PR has been sitting idle for a long time.

@uncleGen
Copy link
Contributor Author

uncleGen commented Mar 5, 2017

ping @zsxwing

@uncleGen closed this Sep 19, 2017