[FLINK-19503][state] Add DFS-based StateChangelog #15371
Conversation
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request.
Automated Checks: last check on commit 202eb50 (Thu Mar 25 12:28:06 UTC 2021). Warnings:
Mention the bot in a comment to re-run the automated checks.
Review Progress: please see the Pull Request Review Guide for a full explanation of the review process. The bot is tracking the review progress through labels; labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
Bot commands: the @flinkbot bot supports the following commands.
Thanks for the PR.
I took a look and I think there are some concurrency and other issues.
If my concerns are valid, I'd drop at least the "simplify" commit, which introduces them.
task.getResultFuture()
        .thenApplyAsync(
                (results) -> {
                    results.forEach(e -> uploaded.put(e.sequenceNumber, e));
I think we need to check here whether the upload result is still relevant (or whether another upload was started in the meantime). For example, it could have been aborted, or another checkpoint (with an overlapping change range) could have been started.
(This was one of my concerns when I published #15322.)
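Roughly the kind of guard I have in mind (just a sketch; lock, isStillRelevant and UploadResult are made-up names, not from this PR):

task.getResultFuture()
        .thenApplyAsync(
                results -> {
                    synchronized (lock) { // same lock that guards 'uploaded'
                        for (UploadResult e : results) {
                            // Drop results superseded by a newer upload or whose
                            // checkpoint was aborted while the upload was in flight.
                            if (isStillRelevant(e.sequenceNumber)) {
                                uploaded.put(e.sequenceNumber, e);
                            }
                        }
                    }
                    return results;
                });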
I think you are right. This mailbox WIP simplified the re-upload tracking logic too much.
On the other hand, maybe fixing it now would be wasted effort, as we are not sure how the final re-uploading logic will be handled; it might even be removed completely in the final version. Maybe let's say notifyCheckpointAbort
is not supported (and throw an exception) until we figure out a solution for the JM/TM changelog ownership question?
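I.e. something as blunt as this for now (sketch; assuming the writer receives the abort notification through a method like the one below):

// Stop-gap until the JM/TM ownership question is settled: fail loudly
// instead of silently mis-tracking re-uploads after an abort.
public void notifyCheckpointAborted(long checkpointId) {
    throw new UnsupportedOperationException(
            "Checkpoint aborts are not yet supported by the DFS-based changelog");
}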
I'm afraid that not supporting aborts can have performance consequences (more uploads than needed).
Replacing the newer upload with an older one makes future snapshots incorrect.
So this solution would be neither correct nor informative for estimating efficiency/performance.
As for changing the ownership of checkpoints or logs: that seems like a much bigger effort. If we decide not to do it (quite likely, I suppose), then this code will change significantly and become more complex (adding pre-uploads also adds complexity).
(sequenceNumber, changeSetAndResult) -> {
    changeSets.remove(sequenceNumber);
    uploaded.remove(sequenceNumber);
    confirmed.put(sequenceNumber, changeSetAndResult);
I think we need to check whether this range is still relevant (or was truncated in the meantime).
(This was one of my concerns when I published #15322.)
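E.g. something like this (sketch; lowestSequenceNumber is a made-up name for the truncation watermark):

(sequenceNumber, changeSetAndResult) -> {
    // Ignore confirmations for ranges truncated while the upload was in flight.
    if (sequenceNumber.compareTo(lowestSequenceNumber) < 0) {
        return;
    }
    changeSets.remove(sequenceNumber);
    uploaded.remove(sequenceNumber);
    confirmed.put(sequenceNumber, changeSetAndResult);
}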
private boolean isOverSizeThresholdAndCancellationSucceded() {
    return scheduledSizeInBytes >= sizeThresholdBytes && scheduledFuture.cancel(false);
}
I think the 2nd check may result in the threshold being exceeded but no upload being scheduled:
- T1: adds 1 byte, schedules future 1
- Executor: waits and eventually runs future 1, which calls scheduleUploadIfNeeded and returns without scheduling; now the thread gets suspended by the OS (so future 1 isn't completed)
- T1: adds more bytes, exceeding the threshold; tries to cancel future 1 and fails => future 2 is not scheduled by T1
- Executor also doesn't schedule any future
WDYT?
My intention was that if cancellation doesn't succeed, it means the upload is already running, and drainAndSave
will take care of scheduling the next upload. Isn't that happening?
IIUC, if drainAndSave
is being executed but has already passed the re-scheduling section (2), then nothing will be scheduled.
private void scheduleUploadIfNeeded() {
    checkState(holdsLock(scheduled));
    if (scheduledFuture.isDone() || isOverSizeThresholdAndCancellationSucceded()) {
        scheduleUpload();
I think there is a race condition here which could lead to an upload not being scheduled:
- The future is running in drainAndSave(), enters scheduleUploadIfNeeded() and sees no need to reschedule (no data added)
- It exits the synchronized section but not the task yet (so the future isn't finished yet)
- Another thread adds data and sees future.isDone() == false, so it doesn't schedule either

WDYT?
(The related issue below is about scheduledFuture.cancel, this one is about scheduledFuture.isDone(); I guess the fix will address both of them.)
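One way to address both might be to track the scheduling state in an explicit flag that is only touched under the lock, instead of inferring it from the future (a sketch; uploadScheduled is made up, and it glosses over the delayed-vs-immediate upload distinction):

private boolean uploadScheduled; // guarded by 'scheduled'

private void scheduleUploadIfNeeded() {
    checkState(holdsLock(scheduled));
    // Both the writer thread and the drain task decide under the same lock,
    // so they can never both conclude that "someone else" will schedule.
    if (!uploadScheduled && scheduledSizeInBytes > 0) {
        uploadScheduled = true;
        scheduleUpload();
    }
}

private void drainAndSave() {
    synchronized (scheduled) {
        uploadScheduled = false; // from here on, newly added data must re-schedule
        // ... drain 'scheduled' into the current batch ...
        scheduleUploadIfNeeded();
    }
    // ... perform the actual upload outside the lock ...
}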
    tasks.forEach(task -> task.fail(error.get()));
    return;
}
delegate.save(tasks);
This is single-threaded upload now; I think we need more threads, as discussed offline.
Yes, we do. I wanted to simplify the first version for reviewing and later add more sophisticated logic for handling concurrent writes. At the moment I'm not sure how multi-threaded writes should actually work in the first place (each one writing to an independent file? I'm a bit worried about that approach from the perspective of recovery).
Yes, each upload should go to an independent file, so it shouldn't affect recovery.
I think we should have it in the first version (not sure whether in this PR or not), especially if we ignore aborts.
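For the concurrent version I'd imagine something as plain as this (sketch; numUploadThreads, UploadTask and writeToNewFile are illustrative names, not from this PR):

// Each batch goes to its own fresh DFS file, so concurrent uploads never
// share a stream and recovery only ever sees complete, independent files.
private final ExecutorService uploadPool =
        Executors.newFixedThreadPool(numUploadThreads);

private void save(Collection<UploadTask> tasks) {
    for (UploadTask task : tasks) {
        uploadPool.execute(
                () -> {
                    try {
                        // writeToNewFile: open a new file, write the batch,
                        // close it and return the resulting handle.
                        task.complete(writeToNewFile(task.getChangeSets()));
                    } catch (Throwable t) {
                        task.fail(t);
                    }
                });
    }
}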
throw t;
error.compareAndSet(null, t);
Could you explain the removal of throw t;?
Without it:
- the error may not be logged at all (if no more uploads are scheduled, or if upload errors are not logged later)
- or it may be logged with a delay (which complicates debugging)
This is a top-level method called from the uploader/executor thread pool, and the only meaningful way of returning the error to the task thread is via error.
If we throw here, it would have no effect, or am I missing something?
You're right (in my version, the method was also called directly).
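What I'd still suggest then: keep error as the channel to the task thread, but log at the failure site so the root cause is visible immediately (sketch; assuming a LOG field like the one shown below):

try {
    delegate.save(tasks);
} catch (Throwable t) {
    error.compareAndSet(null, t); // fail the next attempt on the task thread
    LOG.warn("Changelog upload failed", t); // log now, not only later
    tasks.forEach(task -> task.fail(t));
}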
class RetryingExecutor implements AutoCloseable {
    private static final Logger LOG = LoggerFactory.getLogger(RetryingExecutor.class);
I'd rather see this removal as a separate commit.
I'm also not sure that it should be removed. Are you planning to resurrect it in a subsequent PR?
synchronized (scheduled) {
    scheduleUploadIfNeeded();
}
This won't be executed in case of an exception (but I guess that will change once the concurrency issues are addressed).
More importantly, no future uploads will start, because this.error is set later in the catch block. I guess the assumption is that a single failure leads to a whole-job failover?
But that shouldn't be the case: tasks should tolerate checkpoint failures, and it's the JM that decides how to handle them.
I think you are right; should this be moved to a finally block?
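I.e. restructure the end of drainAndSave roughly like this (sketch; this only addresses the exception path, not the this.error question):

private void drainAndSave() {
    try {
        // ... drain the queue and upload ...
    } catch (Throwable t) {
        error.compareAndSet(null, t);
    } finally {
        // Runs even when the upload throws, so queued data still gets
        // a chance to be scheduled on the next attempt.
        synchronized (scheduled) {
            scheduleUploadIfNeeded();
        }
    }
}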
Yes, I think this would solve the 1st issue.
WDYT about the 2nd, more important one?
rollover();
Collection<StoreResult> readyToReturn = confirmed.tailMap(from, true).values();
Collection<StateChangeSet> toUpload = changeSets.tailMap(from, true).values();
I'm thinking about how we could later implement pre-emptive upload (i.e. call persist() from append()).
In this version (#15371 or #15322, as opposed to #14839), we need to filter out already uploaded changes (uploaded.subMap) from toUpload.
But we also don't want to include the changes from the previous, not yet confirmed nor aborted checkpoint, which will also reside in uploaded. Do you have any idea how to implement this?
First and foremost, I'm not sure how much longer we will have this re-uploading logic at all, and if we do, in what form.
But assuming no changes to the ownership of the changelog, why couldn't we just move the code from persist() into a preEmptiveUpload() and call it from there? One issue is that we would have to store the result of preEmptiveUpload() as FsStateChangelogWriter state instead of just returning it, so that we can combine the final StateChangelogHandleStreamImpl in the persist() call, but isn't that all?
So for pre-emptive uploads there will be a separate collection, which will also be updated by a separate callback. We'll also have to truncate it.
Sounds a bit complex, but I think it should work.
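So the writer would end up with roughly this shape (sketch; all names here are hypothetical):

// Pre-emptive uploads tracked separately from confirmed ones, updated by
// their own completion callback and truncated like the rest of the state.
private final NavigableMap<SequenceNumber, StoreResult> preUploaded = new TreeMap<>();

void onPreUploadCompleted(SequenceNumber sqn, StoreResult result) {
    synchronized (lock) {
        // Ignore results whose range was truncated while the upload ran.
        if (sqn.compareTo(lowestSequenceNumber) >= 0) {
            preUploaded.put(sqn, result);
        }
    }
}

void truncate(SequenceNumber to) {
    synchronized (lock) {
        preUploaded.headMap(to, false).clear();
        // ... truncate changeSets / uploaded / confirmed as before ...
    }
}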
This PR supersedes #15322.

What is the purpose of the change

Add a DFS-based state changelog implementation.

Verifying this change

Added unit tests: BatchingStateChangeStoreTest, FsStateChangelogWriterSqnTest, FsStateChangelogWriterTest
Added integration test: StateChangelogClientTest
Still not covered: RetryingExecutor (TBD)
Covered by integration test only: StateChangeFormat, StateChangelogHandleStreamImpl (TBD)

Does this pull request potentially affect one of the following parts:

@Public(Evolving): no

Documentation