
[FLINK-19503][state] Add DFS-based StateChangelog #15371

Closed
wants to merge 6 commits

Conversation

pnowojski
Contributor

This PR supersedes #15322

What is the purpose of the change

Add a DFS-based state changelog implementation.

Verifying this change

Added unit tests: BatchingStateChangeStoreTest, FsStateChangelogWriterSqnTest, FsStateChangelogWriterTest
Added integration test: StateChangelogClientTest

Still not covered: RetryingExecutor (TBD)
Covered by integration test only: StateChangeFormat, StateChangelogHandleStreamImpl (TBD)


Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): no
  • The public API, i.e., is any changed class annotated with @Public(Evolving): no
  • The serializers: no
  • The runtime per-record code paths (performance sensitive): no
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: yes
  • The S3 file system connector: no

Documentation

  • Does this pull request introduce a new feature? no
  • If yes, how is the feature documented? not applicable

@flinkbot
Collaborator

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit 202eb50 (Thu Mar 25 12:28:06 UTC 2021)

Warnings:

  • 3 pom.xml files were touched: Check for build and licensing issues.
  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot
Collaborator

flinkbot commented Mar 25, 2021

CI report:

Bot commands

The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

Contributor

@rkhachatryan left a comment


Thanks for the PR.
I took a look and I think there are some concurrency and other issues.
If my concerns are valid, I'd drop at least the "simplify" commit, which introduces them.

task.getResultFuture()
        .thenApplyAsync(
                (results) -> {
                    results.forEach(e -> uploaded.put(e.sequenceNumber, e));
Contributor

I think we need to check here whether the upload result is still relevant (or whether another upload was started in the meantime). For example, it could have been aborted, or another checkpoint (with an overlapping change range) could have been started.

(this was one of the concerns when I published #15322)
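
Purely as an illustration (none of these names are from the PR), a minimal sketch of such a relevance check: the completion callback records a result only if its sequence number is still tracked as in-flight, so results of uploads that were aborted or superseded in the meantime are dropped.

import java.util.List;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class UploadResultGuardSketch {

    static final class UploadResult {
        final long sequenceNumber;

        UploadResult(long sequenceNumber) {
            this.sequenceNumber = sequenceNumber;
        }
    }

    // sequence numbers of uploads that are still expected; aborts/truncations remove entries
    private final Set<Long> inFlight = ConcurrentHashMap.newKeySet();
    private final ConcurrentHashMap<Long, UploadResult> uploaded = new ConcurrentHashMap<>();

    void trackCompletion(CompletableFuture<List<UploadResult>> resultFuture) {
        resultFuture.thenAccept(
                results ->
                        results.forEach(
                                r -> {
                                    // only record results that are still relevant;
                                    // stale results (aborted or re-uploaded ranges) are ignored
                                    if (inFlight.remove(r.sequenceNumber)) {
                                        uploaded.put(r.sequenceNumber, r);
                                    }
                                }));
    }
}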

Contributor Author

I think you are right. This mailbox WIP simplified the re-uploading tracking logic too much.

On the other hand, maybe fixing it now would be wasted effort, as we are not sure how the final re-uploading logic will be handled; it might even be removed completely in the final version. Maybe let's say notifyCheckpointAbort is not supported (and throw an exception) until we figure out a solution for the JM/TM changelog ownership question?

Contributor

I'm afraid that not supporting aborts can have performance consequences (more uploads than needed).
Replacing a newer upload with an older one makes future snapshots incorrect.
So this solution would be neither correct nor informative for estimating efficiency/performance.

As for changing the ownership of checkpoints or logs, that seems like a much bigger effort. If we decide not to do that (quite likely, I suppose), then this code will change significantly and become more complex (adding pre-uploads also adds complexity).

(sequenceNumber, changeSetAndResult) -> {
    changeSets.remove(sequenceNumber);
    uploaded.remove(sequenceNumber);
    confirmed.put(sequenceNumber, changeSetAndResult);
Contributor

I think we need to check whether this range is still relevant (or whether it was truncated in the meantime).

(this was one of the concerns when I published #15322)

Comment on lines +116 to +118
private boolean isOverSizeThresholdAndCancellationSucceded() {
    return scheduledSizeInBytes >= sizeThresholdBytes && scheduledFuture.cancel(false);
}
Contributor

I think the 2nd check may result in the threshold being exceeded but no upload being scheduled:

  1. T1: adds 1 byte, schedules future 1
  2. Executor: waits and eventually runs future 1, calls scheduleUploadIfNeeded and returns without scheduling; now the thread gets suspended by the OS (so future 1 isn't completed)
  3. T1: adds more bytes, exceeding the threshold; tries to cancel future 1 and fails => future 2 is not scheduled by T1
  4. Executor also doesn't schedule any future

WDYT?

Contributor Author

My intention was that if cancellation doesn't succeed, it means the upload is already running, and drainAndSave will take care of scheduling the next upload. Isn't that happening?

Contributor

IIUC, if drainAndSave is being executed but has already passed the re-scheduling section (2), then nothing will be scheduled.

Comment on lines +109 to +112
private void scheduleUploadIfNeeded() {
    checkState(holdsLock(scheduled));
    if (scheduledFuture.isDone() || isOverSizeThresholdAndCancellationSucceded()) {
        scheduleUpload();
Contributor

I think there is a race condition here which could lead to an upload not being scheduled:

  1. The future is running in drainAndSave(), enters scheduleUploadIfNeeded() and sees no need to reschedule (no data added)
  2. It exits the synchronized section but not the task yet (so the future isn't finished yet)
  3. Another thread adds data and sees future.isDone == false, so it doesn't schedule either

WDYT?

(the related issue below is about scheduledFuture.cancel, this one is about scheduledFuture.isDone(); I guess the fix will address both of them)
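
As an illustration of the direction implied by this thread (all names below are hypothetical, not the PR's), a minimal sketch that keeps an explicit uploadScheduled flag which is only read and written under the lock, instead of deriving the state from ScheduledFuture.isDone()/cancel(); with that, neither the writer thread nor the executor thread can wrongly assume that the other will schedule the next upload.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class BatchingSchedulerSketch {

    private final Object lock = new Object();
    private final List<byte[]> pending = new ArrayList<>(); // guarded by lock
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final long delayMs = 50;
    private final long sizeThresholdBytes = 1024 * 1024;
    private long pendingSizeBytes;   // guarded by lock
    private boolean uploadScheduled; // guarded by lock

    void add(byte[] change) {
        synchronized (lock) {
            long before = pendingSizeBytes;
            pending.add(change);
            pendingSizeBytes += change.length;
            if (before < sizeThresholdBytes && pendingSizeBytes >= sizeThresholdBytes) {
                // just crossed the size threshold: drain as soon as possible
                scheduleDrain(0);
            } else if (!uploadScheduled) {
                // nothing scheduled yet: drain after the regular delay
                scheduleDrain(delayMs);
            }
        }
    }

    private void scheduleDrain(long delay) {
        uploadScheduled = true;
        scheduler.schedule(this::drainAndSave, delay, TimeUnit.MILLISECONDS);
    }

    private void drainAndSave() {
        List<byte[]> batch;
        synchronized (lock) {
            batch = new ArrayList<>(pending);
            pending.clear();
            pendingSizeBytes = 0;
            uploadScheduled = false; // from here on, add() schedules the next drain itself
        }
        if (!batch.isEmpty()) {
            upload(batch); // a redundant drain (from the threshold path) just finds an empty batch
        }
    }

    private void upload(List<byte[]> batch) {
        // the actual DFS write is out of scope for this sketch
    }
}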

    tasks.forEach(task -> task.fail(error.get()));
    return;
}
delegate.save(tasks);
Contributor

This is a single-threaded upload now; I think we need more threads, as discussed offline.

Contributor Author

Yes, we do. I wanted to simplify the first version for review and later add more sophisticated logic for handling concurrent writes. At the moment I'm not sure how multi-threaded writes should actually work in the first place (each one writing to an independent file? I'm a bit worried about this approach from the recovery perspective).

Contributor

Yes, each upload should go to an independent file, so it shouldn't affect recovery.
I think we should have it in the first version (not sure whether in this PR or not), especially if we ignore aborts.
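
Purely for illustration of the one-file-per-upload idea (hypothetical names, not the PR's classes): a small thread pool writes each batch to its own uniquely named file, so concurrent uploads never interleave within a file and recovery can read each file independently.

import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ConcurrentUploaderSketch {

    private final ExecutorService uploadPool = Executors.newFixedThreadPool(4);
    private final Path baseDir;

    ConcurrentUploaderSketch(Path baseDir) {
        this.baseDir = baseDir;
    }

    /** Writes one batch to its own file and returns the file's path asynchronously. */
    CompletableFuture<Path> upload(List<byte[]> batch) {
        return CompletableFuture.supplyAsync(
                () -> {
                    // a new, uniquely named file per upload: concurrent batches never share a file
                    Path target = baseDir.resolve("changelog-" + UUID.randomUUID());
                    try (OutputStream out = Files.newOutputStream(target)) {
                        for (byte[] change : batch) {
                            out.write(change);
                        }
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                    return target;
                },
                uploadPool);
    }
}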

Comment on lines 153 to 152
throw t;
error.compareAndSet(null, t);
Contributor

Could you explain the removal of throw t;?
Without it:

  1. the error might not be logged at all, if no further upload is scheduled
  2. or if upload errors are not logged later
  3. it might be logged with a delay (which complicates debugging)

Contributor Author

This is a top-level method run from the uploader/executor thread pool, and the only meaningful way of returning the error to the task thread is via error. If we throw here, it would have no effect, or am I missing something?

Contributor

You're right (in my version, the method was also called directly).
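
A minimal sketch of the error-propagation pattern being discussed (illustrative names only, not the PR's code): a failure on the uploader thread cannot usefully be thrown there, so it is recorded in an AtomicReference and re-thrown on the task thread the next time it interacts with the writer.

import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

class UploadErrorPropagationSketch {

    private final AtomicReference<Throwable> error = new AtomicReference<>();

    /** Runs on an uploader/executor thread; throwing here would only reach the thread pool. */
    void runUpload(Runnable upload) {
        try {
            upload.run();
        } catch (Throwable t) {
            error.compareAndSet(null, t); // remember only the first failure
        }
    }

    /** Called from the task thread, e.g. before appending changes or persisting. */
    void checkNoError() throws IOException {
        Throwable t = error.get();
        if (t != null) {
            throw new IOException("A previous upload failed", t);
        }
    }
}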

Comment on lines 38 to 39
class RetryingExecutor implements AutoCloseable {
    private static final Logger LOG = LoggerFactory.getLogger(RetryingExecutor.class);
Contributor

I'd rather see this removal as a separate commit.
I'm also not sure that it should be removed. Are you planning to resurrect it in a subsequent PR?

Comment on lines +144 to +146
synchronized (scheduled) {
    scheduleUploadIfNeeded();
}
Contributor

This won't be executed in case of an exception (but I guess that will change once the concurrency issues are addressed).

More importantly, no future uploads will start, because this.error is set later, in the catch block. I guess the assumption is that a single failure leads to a whole-job failover?

But that shouldn't be the case: tasks should tolerate checkpoint failures, and it's the JM who decides how to handle them.

Contributor Author

I think you are right; should this be moved to a finally block?

Contributor

Yes, I think this would solve the 1st issue.
WDYT about the 2nd, more important one?
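
A minimal sketch of the finally-based variant suggested above (illustrative names, not the PR's code): the re-scheduling check runs on both the success and the failure path, and a failed upload fails only the writers waiting for that batch instead of setting a global error that blocks all future uploads.

import java.util.ArrayList;
import java.util.List;

class DrainAndSaveSketch {

    private final Object scheduled = new Object();
    private final List<byte[]> pending = new ArrayList<>(); // guarded by 'scheduled'

    void drainAndSave() {
        List<byte[]> batch;
        synchronized (scheduled) {
            batch = new ArrayList<>(pending);
            pending.clear();
        }
        try {
            save(batch);
        } catch (Throwable t) {
            // fail only the writers waiting for this batch; do not poison future uploads,
            // since the JM decides how to react to individual checkpoint failures
            failWaiters(batch, t);
        } finally {
            synchronized (scheduled) {
                scheduleUploadIfNeeded(); // executed even if save() threw
            }
        }
    }

    private void save(List<byte[]> batch) { /* DFS write omitted in this sketch */ }

    private void failWaiters(List<byte[]> batch, Throwable cause) { /* notify waiting writers */ }

    private void scheduleUploadIfNeeded() { /* see the scheduling sketch earlier in this thread */ }
}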


rollover();
Collection<StoreResult> readyToReturn = confirmed.tailMap(from, true).values();
Collection<StateChangeSet> toUpload = changeSets.tailMap(from, true).values();
Contributor

I'm thinking about how we could later implement pre-emptive upload (i.e. call persist() from append()).

In this version (#15371 or #15322, as opposed to #14839), we need to filter out already uploaded changes (uploaded.subMap) from toUpload.

But we also don't want to include the changes from the previous, not yet confirmed nor aborted checkpoint, which will also reside in uploaded. Do you have any idea how to implement this?

Contributor Author

First and foremost, I'm not sure how much longer we will have this re-uploading logic and, if we keep it, in what form.

But assuming no changes to the ownership of the changelog, why couldn't we just move/call the code from persist() in preEmptiveUpload()? One issue is that we would have to store the result of preEmptiveUpload() as FsStateChangelogWriter state instead of just returning it, so that we can combine the final StateChangelogHandleStreamImpl in the persist() call, but isn't that all?

Contributor

So for pre-emptive uploads there would be a separate collection, which would also be updated by a separate callback. We'd also have to truncate it.
Sounds a bit complex, but I think it should work.
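
A minimal sketch of that bookkeeping (all names are illustrative, not from the PR): pre-emptively uploaded results live in their own map, persist() combines them with the regular uploads from the requested sequence number onwards, and truncation clears both maps so changes of an older, unconfirmed checkpoint do not leak into the next snapshot. Synchronization is omitted here.

import java.util.NavigableMap;
import java.util.TreeMap;

class PreUploadBookkeepingSketch {

    /** Placeholder for the handle/offsets of an uploaded change set. */
    static final class StoreResult {}

    // results of uploads triggered eagerly from append(), keyed by sequence number
    private final NavigableMap<Long, StoreResult> preUploaded = new TreeMap<>();
    // results of uploads triggered by persist(), keyed by sequence number
    private final NavigableMap<Long, StoreResult> uploaded = new TreeMap<>();

    /** Separate callback for pre-emptive uploads, as discussed above. */
    void onPreUploadCompleted(long sequenceNumber, StoreResult result) {
        preUploaded.put(sequenceNumber, result);
    }

    void onUploadCompleted(long sequenceNumber, StoreResult result) {
        uploaded.put(sequenceNumber, result);
    }

    /** Everything from 'from' onwards, for building the snapshot handle in persist(). */
    NavigableMap<Long, StoreResult> resultsFrom(long from) {
        NavigableMap<Long, StoreResult> combined = new TreeMap<>(uploaded.tailMap(from, true));
        combined.putAll(preUploaded.tailMap(from, true));
        return combined;
    }

    /** Drop everything strictly below 'upTo', e.g. after materialization or confirmation. */
    void truncate(long upTo) {
        preUploaded.headMap(upTo, false).clear();
        uploaded.headMap(upTo, false).clear();
    }
}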

@pnowojski closed this Aug 13, 2021