Release 1.8 #8432
Closed
Conversation
The test did not actually run since the class was refactored to use JUnit's Parameterized runner, because it always ran into an NPE, and the NPE was then silently swallowed in a shutdown catch block. (cherry picked from commit 168660a)
(cherry picked from commit c0149ba)
The cause of the instability seems to be a not-so-rare timing issue: the thread that calls `interrupt()` on the main thread may still be running after its original test finishes, and then calls `interrupt()` during execution of the next test. This causes the normal execution (or the `sleep()` in this case) to be interrupted.
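The leftover-interrupter problem described above can be sketched as follows. This is a minimal illustration, not the actual test code; the class and variable names are made up. The point is that joining the interrupter thread (and clearing the interrupt flag) before the test returns keeps the interrupt from leaking into the next test.

```java
public class InterrupterRace {
    public static void main(String[] args) {
        final Thread mainThread = Thread.currentThread();
        // A helper thread whose only job is to interrupt the "test" thread.
        Thread interrupter = new Thread(mainThread::interrupt);
        interrupter.start();
        try {
            // Without this join, the interrupter could outlive this "test"
            // and deliver its interrupt during whatever runs next.
            interrupter.join();
        } catch (InterruptedException e) {
            // The interrupt landed while we were joining; the flag is
            // already cleared by the thrown exception.
        }
        // Clear any still-pending interrupt flag so the "next test" starts clean.
        Thread.interrupted();
        System.out.println("done");
    }
}
```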
…ocksDBIncrementalRestoreOperation (cherry picked from commit 6b9ec27)
… local state This corrects a problem that was introduced with the refactorings in FLINK-10043. This closes #7841. (cherry picked from commit 7a078a6)
(cherry picked from commit 5bc04d2)
…t has not been started
Add a dedicated onStart method to the RpcEndpoint which is called when the RpcEndpoint is started via the start() method. With this change it is no longer necessary for the user to override the start() method, which was error-prone because it always required calling super.start(). Now this contract is explicitly enforced. Moreover, it allows the setup logic to be executed in the RpcEndpoint's main thread.
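The contract change above is the classic template-method pattern. Here is a minimal sketch (these are placeholder classes, not Flink's actual RpcEndpoint): start() is final and invokes an onStart() hook, so a subclass can no longer forget to call super.start().

```java
abstract class Endpoint {
    // Final: subclasses cannot override start() and break the contract.
    public final void start() {
        // Framework setup would run here, in the endpoint's main thread.
        onStart();
    }

    // Subclasses override this hook instead of start().
    protected void onStart() {
        // Default: no setup logic.
    }
}

class MyEndpoint extends Endpoint {
    boolean started = false;

    @Override
    protected void onStart() {
        started = true; // setup logic, guaranteed to run inside start()
    }
}

public class OnStartDemo {
    public static void main(String[] args) {
        MyEndpoint e = new MyEndpoint();
        e.start();
        System.out.println("started=" + e.started);
    }
}
```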
…ce size of method
… its behaviour This closes #7808.
…ecovery Wait until the Dispatcher has been started before adding new JobGraphs to the SubmittedJobGraphStore
…ompatibility.md. This closes #7802
…gnature Prior to this commit, the CompositeTypeSerializerSnapshot class signature was a bit confusing and contained raw types. Moreover, it required subclasses to always erase types and re-cast. This closes #7818.
…ot field / method names in InternalTimersSnapshot This renaming corresponds to the fact that TypeSerializerConfigSnapshot is now deprecated, and is fully replaced by TypeSerializerSnapshot.
…lization compatibility APIs for key / namespace serializer checks This commit lets the InternalTimerServiceImpl properly use TypeSerializerSchemaCompatibility / TypeSerializerSnapshot#resolveSchemaCompatibility when attempting to check the compatibility of new key and namespace serializers. This also fixes the fact that this check was previously broken, in that the key / namespace serializers were not reassigned to the reconfigured ones.
…uld not be serializing timers' key / namespace serializers anymore The changes made to managed state, where we no longer Java-serialize serializers and only write the serializer snapshot, were not reflected in how we snapshot timers. This was mainly because timers were not handled by state backends in the past (and were therefore not managed state), and were handled in an isolated manner by the InternalTimerServiceSerializationProxy. This closes #7849.
…in CompositeTypeSerializerConfigSnapshot We often want to get only the restored serializer snapshots from a legacy CompositeTypeSerializerConfigSnapshot when attempting to redirect compatibility checks to new snapshots. This commit adds a getNestedSerializerSnapshots utility method for that purpose.
…lity method with SelfResolvingTypeSerializer implementation
… method using SelfResolvingTypeSerializer interface Only the TtlSerializer needs to implement the SelfResolvingTypeSerializer interface, because all other subclasses of CompositeSerializer are test serializers.
…state loss for chained keyed operators Change the local data path from `.../local_state_root/allocation_id/job_id/jobvertex_id_subtask_id/chk_id/rocksdb` to `.../local_state_root/allocation_id/job_id/jobvertex_id_subtask_id/chk_id/operator_id`. When preparing the local directory, Flink deletes the local directory for each subtask if it already exists. If more than one stateful operator is chained in a single task, they all share the same local directory path, so the local directory gets deleted unexpectedly and we lose data. This closes #8263. (cherry picked from commit ee60846)
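The path-layout change above can be illustrated with a small sketch. The identifiers below are placeholders, not real Flink IDs, and the helper is hypothetical; the key idea is that keying the leaf directory by operator_id instead of a shared `rocksdb` name gives each chained operator its own directory, so preparing one operator's directory cannot delete another's state.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class LocalStatePath {
    // Hypothetical helper mirroring the fixed layout: the last segment is the
    // operator id, so chained operators in one task get distinct directories.
    static Path localStateDir(String root, String allocationId, String jobId,
                              String vertexAndSubtask, long checkpointId,
                              String operatorId) {
        return Paths.get(root, allocationId, jobId, vertexAndSubtask,
                "chk_" + checkpointId, operatorId);
    }

    public static void main(String[] args) {
        // Two chained operators of the same subtask now map to different paths.
        System.out.println(localStateDir("/tmp/local_state_root", "alloc-1",
                "job-1", "vertex-1_0", 42, "op-a"));
        System.out.println(localStateDir("/tmp/local_state_root", "alloc-1",
                "job-1", "vertex-1_0", 42, "op-b"));
    }
}
```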
Jar caching is not required since they are rebuilt in the test profiles anyway.
Run dependency convergence in the main compile run. Invoking maven once per module requires significant time. Run convergence in the install phase (i.e. after the shade plugin) to work against dependency-reduced poms.
- fix find -mindepth parameter
- pass PROFILE to maven to prevent downloads of modules that weren't built beforehand
- add -maxdepth parameter for pom.xml searches
…ersion This closes #8313.
…r#jobReachedGloballyTerminalState fails FutureUtils#assertNoException will assert that the given future has not been completed exceptionally. If it has been completed exceptionally, then it will call the FatalExitExceptionHandler. This commit uses assertNoException to assert that the Dispatcher#jobReachedGloballyTerminalState method has not failed. This closes #8334.
…ontainer requests Flink's YarnResourceManager sets a faster heartbeat interval when it is requesting containers from Yarn's ResourceManager. Since requests and responses are transported via heartbeats, this speeds up requests. However, it can also put additional load on Yarn due to excessive container requests. Therefore, this commit introduces a config option which allows controlling this heartbeat interval.
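The shape of such a config option can be sketched as below. This is a simplified stand-in, not Flink's actual configuration classes, and the key name and default value here are illustrative assumptions, not confirmed by this commit message: a dedicated key controls the faster heartbeat used only while container requests are pending.

```java
import java.util.HashMap;
import java.util.Map;

public class HeartbeatConfig {
    // Hypothetical key and default; the real option name/default live in
    // Flink's YARN configuration, which this sketch does not reproduce.
    static final String KEY = "yarn.heartbeat.container-request-interval";
    static final long DEFAULT_MS = 500L;

    static long containerRequestHeartbeatMs(Map<String, String> conf) {
        return Long.parseLong(conf.getOrDefault(KEY, Long.toString(DEFAULT_MS)));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(containerRequestHeartbeatMs(conf)); // default
        conf.put(KEY, "2000"); // user slows the request heartbeat down
        System.out.println(containerRequestHeartbeatMs(conf));
    }
}
```

A larger interval reduces load on Yarn's ResourceManager at the cost of slower container allocation, which is exactly the trade-off the option exposes.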
We now use Scala reflection because it correctly deals with Scala language features.
Collaborator
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community track review progress. Please see the Pull Request Review Guide for a full explanation of the review process. The bot tracks the review progress through labels, which are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. The @flinkbot bot also supports a number of review commands.
…eout and race TaskExecutor registration is an asynchronous process, which allows a retry issued after a timeout to be processed ahead of the earlier request. Such a delayed, timed-out request can accidentally unregister a valid task manager, whose slots are then permanently not reported to the job manager. This patch introduces ongoing task executor futures to prevent such a race.
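The stale-timeout guard described above can be sketched with a per-executor map of in-flight futures. The class and method names here are made up for illustration; the essential trick is a conditional remove: a timed-out attempt may only unregister the task executor if its future is still the current one, so a newer registration that has superseded it is left untouched.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class RegistrationTracker {
    // One in-flight registration future per task executor id.
    private final Map<String, CompletableFuture<Void>> ongoing = new ConcurrentHashMap<>();

    CompletableFuture<Void> register(String taskExecutorId) {
        CompletableFuture<Void> f = new CompletableFuture<>();
        ongoing.put(taskExecutorId, f); // a retry replaces the earlier future
        return f;
    }

    // Unregister only if the timed-out future is still the tracked one;
    // ConcurrentHashMap.remove(key, value) does this check atomically.
    boolean onTimeout(String taskExecutorId, CompletableFuture<Void> timedOut) {
        return ongoing.remove(taskExecutorId, timedOut);
    }

    public static void main(String[] args) {
        RegistrationTracker t = new RegistrationTracker();
        CompletableFuture<Void> first = t.register("tm-1");
        CompletableFuture<Void> retry = t.register("tm-1"); // retry supersedes first
        // The stale timeout of the first attempt must NOT unregister tm-1.
        System.out.println("stale timeout unregisters: " + t.onTimeout("tm-1", first));
        // A timeout of the current attempt legitimately does.
        System.out.println("current timeout unregisters: " + t.onTimeout("tm-1", retry));
    }
}
```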
…ayedRegisterTaskExecutor Use latches instead of timeouts/sleeps to test problematic thread interleaving. This closes #8415.
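The latches-instead-of-sleeps pattern mentioned above can be shown in a few lines (generic demo code, not the actual test): a CountDownLatch forces the intended thread interleaving deterministically, whereas a sleep only makes it likely.

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch ready = new CountDownLatch(1);
        StringBuilder order = new StringBuilder();

        Thread worker = new Thread(() -> {
            order.append("worker;");
            ready.countDown(); // signal completion instead of relying on timing
        });
        worker.start();

        ready.await();         // block until the worker has actually run
        order.append("main");  // guaranteed to come second

        worker.join();
        System.out.println(order);
    }
}
```

The latch's countDown/await pair also establishes a happens-before edge, so the main thread safely sees the worker's write to the shared builder.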
Member
@sxganapa could you please close this PR, which wants to merge release-1.8 into master?
What is the purpose of the change
(For example: This pull request makes task deployment go through the blob server, rather than through RPC. That way we avoid re-transferring them on each deployment (during recovery).)
Brief change log
(for example:)
Verifying this change
(Please pick either of the following options)
This change is a trivial rework / code cleanup without any test coverage.
(or)
This change is already covered by existing tests, such as (please describe tests).
(or)
This change added tests and can be verified as follows:
(example:)
Does this pull request potentially affect one of the following parts:
@Public(Evolving): (yes / no)
Documentation