1.9.0 #10258
Conversation
FLINK-13249 was a bug where a deadlock occurred when the network thread got blocked on a lock while requesting partitions to be read by remote channels. The test mimics that situation to guard the fix applied in an earlier commit.
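As a rough illustration of the pattern the test guards against (not the actual Flink test; the class name, payload, and timeout below are invented), a caller holds a lock while a request must be served on a single network thread, and a bounded wait turns a hang into a test failure instead of an indefinite block:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class PartitionRequestDeadlockSketch {
    public static void main(String[] args) throws Exception {
        ReentrantLock lock = new ReentrantLock();
        ExecutorService networkThread = Executors.newSingleThreadExecutor();

        lock.lock(); // the caller holds the lock while requesting partitions
        try {
            // After the fix, serving the request must not require `lock`;
            // before it, a future like this one never completed.
            Future<String> request = networkThread.submit(() -> "partition data");
            // Bounded wait: a deadlock surfaces as a TimeoutException.
            System.out.println(request.get(10, TimeUnit.SECONDS));
        } finally {
            lock.unlock();
            networkThread.shutdown();
        }
    }
}
```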
…03 jar This closes #9223
…stUtils The method Unsafe.defineClass() is removed in Java 11. To support Java 11, we rework the method CommonTestUtils.createClassNotInClassPath() to use a different mechanism: the commit now writes the class byte code out to a temporary file and creates a new URLClassLoader that loads the class from that file. That solution is not a complete drop-in replacement, because it cannot add the class to an existing class loader; it can only create a new pair of (class loader & new-class-in-that-class-loader). Because of that, the commit also adjusts the existing tests to work with the new mechanism. This closes #9251
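A minimal sketch of that mechanism, assuming the caller already has the class's byte code in hand (the method and class names below are illustrative, not the actual CommonTestUtils code):

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class ClassNotInClassPathSketch {

    /**
     * Writes {@code byteCode} for {@code className} to a temporary directory
     * and returns a class loader that can see it. The class is visible only
     * through the returned loader, i.e. the (class loader & class) pair
     * described above.
     */
    static ClassLoader writeAndLoad(String className, byte[] byteCode) throws IOException {
        Path tempDir = Files.createTempDirectory("class-not-in-classpath");
        Path classFile = tempDir.resolve(className.replace('.', '/') + ".class");
        Files.createDirectories(classFile.getParent());
        Files.write(classFile, byteCode);
        // Parent = null, so the class cannot leak into the application class loader.
        return new URLClassLoader(new URL[] {tempDir.toUri().toURL()}, null);
    }
}
```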
… strategy This closes #9241.
…ing partition request On the producer side, the netty handler receives the CancelPartitionRequest for releasing the SubpartitionView resource. In the previous implementation we tried to find the corresponding view via the available queue in PartitionRequestQueue. But in reality the view does not always stay in this queue, so the view would never be released. Furthermore, the release of ResultPartition/ResultSubpartitions is based on the reference counter in ReleaseOnConsumptionResultPartition, but while handling the CancelPartitionRequest in PartitionRequestQueue, the ReleaseOnConsumptionResultPartition is never notified of the consumed subpartition. That means the reference counter would never decrease to 0 to trigger the partition release, which would leak file resources in the case of BoundedBlockingSubpartition. To fix both issues, the corresponding view is now released via the all-readers queue instead, and ReleaseOnConsumptionResultPartition#onConsumedSubpartition is called at the same time.
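Conceptually, the reference counting works as in the sketch below; the class and method names are simplified stand-ins for Flink's internals, not the real implementation:

```java
import java.util.concurrent.atomic.AtomicInteger;

class ReleaseOnConsumptionSketch {
    private final AtomicInteger pendingConsumers;

    ReleaseOnConsumptionSketch(int numSubpartitions) {
        this.pendingConsumers = new AtomicInteger(numSubpartitions);
    }

    /**
     * Must be called for every consumed (or cancelled) subpartition. If one
     * notification is missed, the counter never reaches 0 and file-backed
     * subpartitions leak their resources.
     */
    void onConsumedSubpartition() {
        if (pendingConsumers.decrementAndGet() == 0) {
            release();
        }
    }

    void release() {
        // free buffers, close spill files, etc.
    }
}
```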
…led input channel IDs
Currently, test cases fail when trying to close the output stream: even if all data has been written, a ClosedByInterruptException can occur in the ending phase. This commit fixes it. This closes #9235
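A hedged sketch of the tolerated close path (the helper and its flag are invented for illustration, not the actual Flink code): the exception is swallowed only when the written content is already complete.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.channels.ClosedByInterruptException;

public final class CloseSketch {

    /**
     * Closes {@code out}, ignoring a ClosedByInterruptException only when all
     * data has already been written, since the stream's content is complete
     * at that point.
     */
    static void close(OutputStream out, boolean allDataWritten) throws IOException {
        try {
            out.close();
        } catch (ClosedByInterruptException e) {
            if (!allDataWritten) {
                throw e; // the data may be truncated, so surface the failure
            }
        }
    }
}
```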
… of any one This closes #9249
…witch case This closes #9227.
…nd OptimizerConfigOptions This closes #9203
…nge of artifact of flink-python module (#9270)
…ame & withBuiltinDatabaseName
…experimental annotation
… size on the TM side
Only kill the Yarn application if it does not terminate properly. This closes #9175.
…ed memory size into wrong configuration instance.
[FLINK-13241][yarn][test] Update YarnResourceManagerTest#testCreateSlotsPerWorker to compute tmCalculatedResourceProfile based on the RM-altered configuration.
[FLINK-13241][yarn][test] Update YarnConfigurationITCase to verify that TMs are started with the correct managed memory size.
[FLINK-13241][runtime] Calculate and set the managed memory size outside of the ResourceManager.
[FLINK-13241][runtime/yarn][test] Move YarnResourceManagerTest#testCreateSlotsPerWorker to ResourceManagerTest#testCreateWorkerSlotProfiles, and update it to verify slot profile calculation with a determinate managed memory size.
[FLINK-13241][runtime] Move getResourceManagerConfiguration from ResourceManagerFactory to ResourceManagerUtil.
This closes #9246.
… function and DIV(), DIV_INT() function from blink planner This commit removes the BITAND, BITOR, BITNOT, BITXOR scalar functions because they are not standard. It also removes DIV() and DIV_INT() because we already have the "/" and "/INT" operators.
… keep it compatible with old planner The AVG aggregate function in the blink planner always returned a double/decimal type, which is not standard.
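Illustratively, assuming the old planner's Calcite-style typing (where AVG over an INT input yields INT rather than always widening), the compatibility fix means the result type now follows the input. The snippet below uses the Flink 1.9 blink planner setup; the expected schema is an assumption based on that typing:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AvgTypeSketch {
    public static void main(String[] args) {
        // Flink 1.9 blink planner, batch mode.
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());

        // With the compatibility fix, the result type of AVG follows the old
        // planner's input-driven typing instead of always being DOUBLE/DECIMAL.
        tEnv.sqlQuery("SELECT AVG(x) FROM (VALUES (1), (2)) AS T(x)")
            .printSchema(); // assumed: INT (old-planner behavior), not DOUBLE
    }
}
```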
…de of "explainTerms" to generate operator names
…t in blink planner This closes #9363
…link planner in scala shell This closes #9389
…g docs-and-source profile
…intFailureManager This closes #9364.
…to keep it compatible with old planner CONCAT(string1, string2, ...) should return NULL if any argument is NULL. CONCAT_WS(sep, string1, string2, ...) should return NULL if sep is NULL and automatically skip NULL arguments.
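A short sketch of those semantics as queries (Flink 1.9 blink planner setup; the expected results in the comments follow the description above):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ConcatSemanticsSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());

        // CONCAT returns NULL as soon as any argument is NULL.
        tEnv.sqlQuery(
            "SELECT CONCAT('a', CAST(NULL AS VARCHAR), 'c') FROM (VALUES (0)) AS T(x)");  // -> NULL

        // CONCAT_WS returns NULL only if the separator itself is NULL ...
        tEnv.sqlQuery(
            "SELECT CONCAT_WS(CAST(NULL AS VARCHAR), 'a', 'c') FROM (VALUES (0)) AS T(x)"); // -> NULL

        // ... and otherwise skips NULL arguments.
        tEnv.sqlQuery(
            "SELECT CONCAT_WS('-', 'a', CAST(NULL AS VARCHAR), 'c') FROM (VALUES (0)) AS T(x)"); // -> 'a-c'
    }
}
```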
…RING type instead of BINARY This fixes the behavior of FROM_BASE64() to align with the old planner.
…TE() function with old planner
…ALUE(), SUBSTR() builtin functions, which are not standard. LENGTH, SUBSTR, and KEYVALUE can be covered by existing functions, e.g. CHAR_LENGTH, SUBSTRING, STR_TO_MAP(str)[key].
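The replacements map one-to-one, as sketched below (Flink 1.9 blink planner setup; the sample values are invented, and STR_TO_MAP is assumed to use its default ',' and '=' delimiters):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class StandardReplacementsSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());

        // Standard replacements for the removed non-standard functions:
        tEnv.sqlQuery(
            "SELECT CHAR_LENGTH(s) FROM (VALUES ('abc')) AS T(s)");             // was LENGTH(s)
        tEnv.sqlQuery(
            "SELECT SUBSTRING(s FROM 1 FOR 2) FROM (VALUES ('abc')) AS T(s)");  // was SUBSTR(s, 1, 2)
        tEnv.sqlQuery(
            "SELECT STR_TO_MAP(s)['k1'] FROM (VALUES ('k1=v1,k2=v2')) AS T(s)"); // was KEYVALUE(...)
    }
}
```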
…common.md, queryable_state.md) This closes #9384
…nCallResolver for class name more meaningful. This closes #9281
…6c6b48) into Chinese documents
…Chinese This closes #9348
… stream group aggregate in FlinkRelMdColumnInterval This closes #9346
…nt toString method to explain more info This closes #9347
…for blink planner This closes #9396
…base crashes sql-client Avoid crashing the sql-client when switching to a non-existing catalog or database. This closes #9399.
Hive documentation is currently spread across a number of pages and fragmented. In particular:
- An example was added to getting-started/examples; however, this section is being removed.
- There is a dedicated page on Hive integration, but a lot of Hive-specific information is also on the catalog page.
This closes #9308.
…2e test This closes #9391.
Fix the issue that Flink cannot access Hive table with decimal columns. This closes #9390.
…in blink planner to fix the TPC-H e2e test failure This closes #9427