Release 1.14 #17477
Conversation
…r slow with AdaptiveScheduler This closes #16229
This commit hardens the RunnablesTest.testExecutorService_uncaughtExceptionHandler to not rely on timeouts to check whether the uncaught exception handler was called. This closes #16262.
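The hardening pattern described above can be sketched in a minimal, self-contained example (this is not Flink's actual test code): instead of sleeping for a fixed timeout and then checking a flag, block on a CountDownLatch that the handler itself releases, so the timeout is only a safety net against hangs, never the success criterion.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class UncaughtHandlerSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch handlerCalled = new CountDownLatch(1);
        AtomicReference<Throwable> caught = new AtomicReference<>();

        Thread t = new Thread(() -> { throw new RuntimeException("boom"); });
        t.setUncaughtExceptionHandler((thread, error) -> {
            caught.set(error);
            handlerCalled.countDown(); // signal completion instead of relying on a sleep
        });
        t.start();

        // Block until the handler has actually run; the timeout here only
        // guards against a hang, it is not what the test asserts on.
        if (!handlerCalled.await(30, TimeUnit.SECONDS)) {
            throw new AssertionError("uncaught exception handler was never called");
        }
        System.out.println("handler saw: " + caught.get().getMessage());
    }
}
```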
…rvice_uncaughtExceptionHandler
The query doesn't impose any ordering, so the results may legitimately be reordered.
… writeInt to serialize the length This closes #16258.
…le in "Time Attribute" page This closes #16301
Also make flink-sql-connector-kinesis use the Guava library coming transitively from the connector-kinesis dependency.
…ing in e2e test failure
…mon dependencies. Connectors get shaded into the user jar and as such should contain no unnecessary dependencies on Flink. However, connector-base exposes `flink-core`, which then by default gets shaded into the user jar. Besides 6 MB of extra size, the dependency also causes class-loading issues when `classloader.parent-first-patterns` does not include `o.a.f`.
OperatorChain creates the outputs and owns them, so it should also close them. Specific operators should not close the outputs. Also, ChainingOutput should never close the chained operator; it does not own the operator.
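The ownership rule above can be illustrated with a small sketch (the Output/Chain classes here are hypothetical stand-ins, not Flink's real interfaces): whatever a component creates, it records and closes; nothing closes what it does not own.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class OwnershipSketch {
    // Hypothetical stand-ins; only the ownership rule is the point.
    static class Output implements AutoCloseable {
        final String name;
        boolean closed;
        Output(String name) { this.name = name; }
        @Override public void close() { closed = true; }
    }

    static class Chain implements AutoCloseable {
        private final Deque<Output> ownedOutputs = new ArrayDeque<>();

        Output createOutput(String name) {
            Output out = new Output(name); // created here ...
            ownedOutputs.push(out);        // ... so ownership is recorded here
            return out;
        }

        @Override public void close() {
            // Close only what this chain owns, in reverse creation order;
            // never the operators wired behind the outputs.
            while (!ownedOutputs.isEmpty()) {
                ownedOutputs.pop().close();
            }
        }
    }

    public static void main(String[] args) {
        Chain chain = new Chain();
        Output out = chain.createOutput("records");
        chain.close(); // the chain, not any operator, closes what it created
        System.out.println(out.name + " closed: " + out.closed); // prints "records closed: true"
    }
}
```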
…on until checkpoint confirmed Currently, the test job makes too many attempts with little progress and eventually times out. Too many attempts are made because each checkpoint can be failed by any FailingMapper that reaches its randomly chosen threshold, and if some subtask becomes back-pressured, any of the three others will likely fail the checkpoint, reverting the progress. This change makes the sources pause until the checkpoint is confirmed, and fixes static fields so that the test runs more reliably in a loop locally.
…e job is canceling or failing Cancel all pending requests of a canceled/failed execution version; otherwise the requests may be fulfilled with a slot released by a previously fulfilled task and then released immediately. That release chain is executed recursively and may cause a StackOverflowException at large scale.
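A minimal illustration of why such recursive release chains are dangerous (the `drain` function is hypothetical, not the actual scheduler code): replacing self-recursion with an explicit work queue keeps the stack depth constant no matter how many releases chain together.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class IterativeRelease {
    // Each "release" triggers one more pending release; a naive recursive
    // implementation would consume one stack frame per step and overflow
    // at large scale. The loop below does the same work iteratively.
    public static int drain(int pendingRequests) {
        Deque<Integer> work = new ArrayDeque<>();
        work.push(pendingRequests);
        int released = 0;
        while (!work.isEmpty()) {          // loop instead of self-recursion
            int remaining = work.pop();
            released++;
            if (remaining > 1) {
                work.push(remaining - 1);  // defer the follow-up release
            }
        }
        return released;
    }

    public static void main(String[] args) {
        // A depth that would overflow a typical recursive implementation.
        System.out.println(drain(1_000_000)); // prints 1000000
    }
}
```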
…ouldn't merge properties for alter DB operation
…DB operation This closes #16335
…rs semantics change in MiniClusterJobClient FLINK-18685 changed the semantics of the MiniClusterJobClient. This commit updates the 1.13 release notes accordingly. This closes #16256.
…mation.svg display issue This closes #16364
…e schema in properties This closes #16149
…est_flat_aggregate
…t on checkpoint and disable it if group ID is not specified (cherry picked from commit ca8bff2)
…ace for validating offset initializer in KafkaSourceBuilder (cherry picked from commit 2da73ed)
… lead to full failover, not JobManager failure. Instead of letting exceptions during the creation of the Source Enumerator bubble up (and ultimately fail the JobManager / Scheduler creation), we now catch those exceptions and trigger a full (global) failover for that case.
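A rough sketch of that error-routing change (the interface and method names here are hypothetical, not the actual scheduler API): catch the creation failure at the boundary and hand it to the normal failover path instead of rethrowing.

```java
public class EnumeratorStartSketch {
    // Hypothetical failover hook; only the catch-and-route pattern is the point.
    interface FailoverHandler { void triggerGlobalFailover(Throwable cause); }

    static boolean startEnumerator(Runnable createEnumerator, FailoverHandler handler) {
        try {
            createEnumerator.run();
            return true;
        } catch (Exception e) {
            // Do not let the exception bubble up and kill the scheduler;
            // route it into the job's global-failover path instead.
            handler.triggerGlobalFailover(e);
            return false;
        }
    }

    public static void main(String[] args) {
        boolean started = startEnumerator(
                () -> { throw new RuntimeException("bad source config"); },
                cause -> System.out.println("global failover: " + cause.getMessage()));
        System.out.println("scheduler still alive, started=" + started);
    }
}
```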
…sm is higher than partitions in KafkaSource Before this commit, the enumerator signalled the leftover source readers without a partition to finish. As a result, checkpointing was no longer possible, because it is only supported if all tasks are running or FLIP-147 is enabled. This closes #17330
…ion may mislead users
…g memory sizes. This closes #17335
A fatal error will be thrown if the retry fails. This closes #17336.
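The retry-then-fatal behaviour can be sketched roughly as follows (the helper name is hypothetical; the summary does not show the real component): retry a bounded number of times, and escalate to a fatal `Error` once attempts are exhausted.

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Hypothetical helper: run the action up to maxAttempts times,
    // then throw a fatal Error carrying the last failure as its cause.
    static <T> T retryOrFail(Callable<T> action, int maxAttempts) {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e; // remember the failure and try again
            }
        }
        throw new Error("retries exhausted after " + maxAttempts + " attempts", last);
    }

    public static void main(String[] args) {
        int[] calls = {0};
        String result = retryOrFail(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient");
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts"); // prints "ok after 3 attempts"
    }
}
```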
…ty string. This closes #17322.
…imeout. This closes #17372
… providers This closes #17389.
This commit hardens the common_ha.sh#ha_tm_watchdog function by always starting the expected number of TaskManagers. This will ensure that the e2e tests that are using this function also pass if a TaskManager dies accidentally.
…ter_datastream.sh and test_ha_datastream.sh This commit reduces the number of JM kills in test_ha_per_job_cluster_datastream.sh and test_ha_datastream.sh because on CI killing the JM 3 times can take more than the current timeout (15 minutes) (2 minutes per successful checkpoint, 8 successful checkpoints required to pass). This closes #17139.
State that only the MemoryStateBackend does not support local recovery.
…FO configured. (#17417)
* FLINK-24431 Stop consumer deregistration when EAGER EFO configured.
* FLINK-24431 Verify criteria for stream deregistration in unit tests.
* FLINK-24431 Update documentation to reflect new EAGER EFO strategy changes.
* FLINK-24431 Remove Java 11 language feature usage in consumer unit test.
* FLINK-24431 Untranslated zh documentation updated to match kinesis documentation changes.
Pull apache flink release-1.14
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community.
Automated Checks: last check on commit a58dd48 (Thu Oct 14 09:22:47 UTC 2021).
Mention the bot in a comment to re-run the automated checks.
Review Progress: please see the Pull Request Review Guide for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
Bot commands: the @flinkbot bot supports the following commands:
What is the purpose of the change
(For example: This pull request makes task deployment go through the blob server, rather than through RPC. That way we avoid re-transferring them on each deployment (during recovery).)
Brief change log
(for example:)
Verifying this change
(Please pick either of the following options)
This change is a trivial rework / code cleanup without any test coverage.
(or)
This change is already covered by existing tests, such as (please describe tests).
(or)
This change added tests and can be verified as follows:
(example:)
Does this pull request potentially affect one of the following parts:
@Public(Evolving): (yes / no)
Documentation