Release 1.9 FLINK-13461 #9338
Conversation
…ng to PubSubSink serializer and emulator settings
…tests for ResourceProfile.
…rofileTest and ResourceSpecTest
… and ResourceProfile
- Add some JavaDoc comments.
- Make the class final, because several methods are not designed to handle inheritance well.
- Avoid repeated string concatenation/building (see the sketch below).
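As a hedged illustration of the last point, a toString() that appends to one StringBuilder instead of concatenating strings repeatedly; the class below is a stand-in, not the actual ResourceProfile.

```java
// Stand-in example: a final class (not designed for inheritance) whose toString()
// appends to a single StringBuilder rather than repeatedly concatenating strings.
final class ResourceSummary {

    private final java.util.Map<String, Double> resources;

    ResourceSummary(java.util.Map<String, Double> resources) {
        this.resources = resources;
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder("ResourceSummary{");
        for (java.util.Map.Entry<String, Double> e : resources.entrySet()) {
            sb.append(e.getKey()).append('=').append(e.getValue()).append(", ");
        }
        if (!resources.isEmpty()) {
            sb.setLength(sb.length() - 2); // drop the trailing ", "
        }
        return sb.append('}').toString();
    }
}
```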
…ices have configured resource profiles
…lot is oversubscribed
…resource profile This change is covered by various existing integration tests that failed prior to this fix.
…ke it compile with Scala 2.12
Before, exceptions that occurred after cancelling a source (as the KafkaConsumer did, for example) would make a job fail when attempting a "stop-with-savepoint". Now we ignore those exceptions.
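A hedged sketch of the behavioural change described above: once the source has been cancelled as part of stop-with-savepoint, exceptions from the source loop are ignored instead of failing the job. The class and method names here are illustrative stand-ins, not the actual task classes.

```java
// Illustrative stand-in: after cancel() has been called (e.g. during stop-with-savepoint),
// exceptions thrown by the source loop are swallowed instead of failing the job.
final class SourceRunner {

    private volatile boolean canceled = false;

    void cancel() {
        canceled = true;
    }

    void run(Runnable sourceLoop) throws Exception {
        try {
            sourceLoop.run();
        } catch (Exception e) {
            if (canceled) {
                // The exception was caused by cancelling the source (as the KafkaConsumer
                // does, for example); ignore it so stop-with-savepoint can complete.
                return;
            }
            throw e;
        }
    }
}
```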
…eStreamTask checkpoint injecting thread
…ial revert of FLINK-11458): use single threaded Task's dispatcher thread pool
…on in the blocking method in case of spurious wakeups
This commit reworks the JSON format to use a runtime converter created from the given TypeInformation. Prior to this commit, the conversion logic was based on reference comparison of TypeInformation, which did not work after serialization of the format. This also introduces a builder pattern to ensure future immutability of schemas. This closes #7932.
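A minimal, self-contained Java sketch of the idea described above: an immutable schema built via a builder, whose per-field converters are chosen once at build time and reused at runtime. The names (JsonRowSchema, field converters) are hypothetical stand-ins, not Flink's actual classes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical stand-in illustrating the builder pattern for an immutable schema
// plus runtime converters created up front (names are not Flink's).
public final class JsonRowSchema {

    private final List<String> fieldNames;
    private final List<Function<String, Object>> fieldConverters;

    private JsonRowSchema(List<String> names, List<Function<String, Object>> converters) {
        this.fieldNames = List.copyOf(names);
        this.fieldConverters = List.copyOf(converters);
    }

    // Runtime conversion: each field converter was chosen when the schema was built,
    // so no type comparison is needed after (de)serialization of the format.
    public Object[] convert(String[] rawFields) {
        Object[] row = new Object[rawFields.length];
        for (int i = 0; i < rawFields.length; i++) {
            row[i] = fieldConverters.get(i).apply(rawFields[i]);
        }
        return row;
    }

    public static Builder builder() {
        return new Builder();
    }

    public static final class Builder {
        private final List<String> names = new ArrayList<>();
        private final List<Function<String, Object>> converters = new ArrayList<>();

        public Builder field(String name, Function<String, Object> converter) {
            names.add(name);
            converters.add(converter);
            return this;
        }

        public JsonRowSchema build() {
            return new JsonRowSchema(names, converters);
        }
    }
}
```

A caller would build the schema once, e.g. `JsonRowSchema.builder().field("id", Long::valueOf).field("name", s -> s).build()`, and reuse it for every record.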
This PR makes HiveTableSink implement OverwritableTableSink. This closes #9067.
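A hedged sketch of what an "overwritable" sink can look like: a single overwrite flag that the write path consults. The interface and class below are standalone stand-ins written for illustration, not the real OverwritableTableSink or HiveTableSink code.

```java
// Hypothetical stand-in for an overwritable sink; the real HiveTableSink wires the
// flag into the Hive write path, which is omitted here.
interface OverwritableSink {
    void setOverwrite(boolean overwrite);
}

final class SimpleHiveLikeSink implements OverwritableSink {

    private boolean overwrite = false;

    @Override
    public void setOverwrite(boolean overwrite) {
        // Remember whether an INSERT OVERWRITE was requested; the writer consults this
        // flag to decide between replacing and appending to the target partition.
        this.overwrite = overwrite;
    }

    void write(String partition, Iterable<String> rows) {
        if (overwrite) {
            // replace existing data for the partition (details elided)
        } else {
            // append to the partition (details elided)
        }
    }
}
```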
…talog when creating sink for CatalogTable. The planner should first try getting the table factory from the catalog when creating table sinks for a CatalogTable. This closes #9039.
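The lookup-order change can be pictured with a small hedged sketch: prefer the catalog's own table factory and fall back to a generically discovered one. The names below (SinkFactoryResolver, CatalogLike, SinkFactory) are illustrative stand-ins, not the planner's actual classes.

```java
import java.util.Optional;

// Illustrative only: CatalogLike and SinkFactory are hypothetical stand-ins for the
// catalog and table-factory abstractions referred to in the commit message.
final class SinkFactoryResolver {

    interface SinkFactory {
        Object createTableSink(Object catalogTable);
    }

    interface CatalogLike {
        // A catalog may ship its own factory (e.g. a Hive catalog for Hive tables).
        Optional<SinkFactory> getTableFactory();
    }

    Object createSink(CatalogLike catalog, Object catalogTable, SinkFactory discoveredFactory) {
        // First preference: the factory provided by the catalog itself.
        return catalog.getTableFactory()
                .map(f -> f.createTableSink(catalogTable))
                // Fallback: the factory found via the generic discovery mechanism.
                .orElseGet(() -> discoveredFactory.createTableSink(catalogTable));
    }
}
```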
This PR adds comprehensive documentation for unified catalog APIs and catalogs. The ticket for corresponding Chinese documentation is FLINK-13086. This closes #8976.
This PR integrates FunctionCatalog with Catalog APIs. This closes #8920.
…LI SessionContext. This PR supports remembering the current catalog and database that users set in the SQL CLI SessionContext. This closes #9049.
Add an areTypesCompatible() method to LogicalTypeChecks. It compares two LogicalTypes without considering field names and other logical attributes (e.g. description, isFinal).
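A minimal sketch of a "compatible ignoring names and attributes" comparison: two row types are compatible when their field types match position by position. The tiny row model below is a hypothetical stand-in, not Flink's LogicalType.

```java
import java.util.List;

// Hypothetical stand-in types: only the structural parts that matter for compatibility.
final class TypeCompatibility {

    record Field(String name, String type) {}
    record RowType(List<Field> fields, String description) {}

    // Two row types are "compatible" when their field types match position by position;
    // field names and descriptive attributes are deliberately ignored.
    static boolean areTypesCompatible(RowType a, RowType b) {
        if (a.fields().size() != b.fields().size()) {
            return false;
        }
        for (int i = 0; i < a.fields().size(); i++) {
            if (!a.fields().get(i).type().equals(b.fields().get(i).type())) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        RowType left = new RowType(
                List.of(new Field("id", "BIGINT"), new Field("name", "STRING")), "orders");
        RowType right = new RowType(
                List.of(new Field("order_id", "BIGINT"), new Field("n", "STRING")), "other");
        System.out.println(areTypesCompatible(left, right)); // true: names differ, types match
    }
}
```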
…re row type field names
This commit combines HBaseTableSourceITCase, HBaseLookupFunctionITCase, and HBaseConnectorITCase into one class. This saves significant cluster initialization time. This closes #9275.
…name has upper-case characters This closes #9254
…emantics fixed per partition type

In the long term we do not need auto-release semantics for blocking (persistent) partitions. We expect them always to be released externally by the JM and assume they can be consumed multiple times. Pipelined partitions always have only one consumer and one consumption attempt; afterwards they can always be released automatically.

ShuffleDescriptor.ReleaseType was introduced to make release semantics more flexible, but it is not needed in the long term. FORCE_PARTITION_RELEASE_ON_CONSUMPTION was introduced as a safety net to be able to fall back to the 1.8 behaviour without the partition tracker and the JM taking care of blocking partition release. We can make this option specific to NettyShuffleEnvironment, which was the only existing shuffle service before. If it is activated, the blocking partition is also auto-released on a consumption attempt, as before. In that case the fine-grained recovery will simply not find the partition after the job restart and will restart the producer.
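A hedged sketch of the decision described above: whether a partition is auto-released on consumption depends on whether it is blocking and whether the force-release option is enabled. The enum and option names are simplified stand-ins; the real ResultPartitionType and shuffle-environment configuration are more involved.

```java
// Simplified stand-in for the per-partition-type release decision; not the actual
// Flink classes.
final class ReleasePolicy {

    enum PartitionType {
        BLOCKING(true),
        PIPELINED(false);

        private final boolean blocking;

        PartitionType(boolean blocking) {
            this.blocking = blocking;
        }

        boolean isBlocking() {
            return blocking;
        }
    }

    static boolean releaseOnConsumption(PartitionType type, boolean forceReleaseOnConsumption) {
        if (!type.isBlocking()) {
            // Pipelined partitions have exactly one consumer and one consumption attempt,
            // so they can always be released automatically afterwards.
            return true;
        }
        // Blocking partitions are normally released externally (by the JM and the partition
        // tracker); the force option restores the pre-1.9 auto-release behaviour.
        return forceReleaseOnConsumption;
    }
}
```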
… based on ResultPartitionType.isBlocking
… configurations This closes #9277
Make MultiTaskSlot unavailable for allocation while it is releasing its children, to avoid a ConcurrentModificationException. This closes #9288.
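A small hedged sketch of that guard: a slot that is in the middle of releasing its children reports itself as unavailable, so no new child can be added while the children collection is being iterated. Class and field names are illustrative, not the actual MultiTaskSlot implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the "not available while releasing children" guard.
final class MultiSlotLike {

    private final List<Runnable> children = new ArrayList<>();
    private boolean releasingChildren = false;

    // Callers check this before allocating into the slot; returning false while the
    // children are being released prevents concurrent modification of the list.
    boolean isAvailableForAllocation() {
        return !releasingChildren;
    }

    void addChild(Runnable releaseCallback) {
        if (!isAvailableForAllocation()) {
            throw new IllegalStateException("Slot is currently releasing its children");
        }
        children.add(releaseCallback);
    }

    void releaseChildren() {
        releasingChildren = true;
        try {
            for (Runnable releaseCallback : children) {
                releaseCallback.run(); // must not re-enter addChild()
            }
            children.clear();
        } finally {
            releasingChildren = false;
        }
    }
}
```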
Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community review your pull request.

Automated Checks: last check on commit bf99b26 (Tue Aug 06 15:59:02 UTC 2019). Warnings: none.

Mention the bot in a comment to re-run the automated checks.

Review Progress: please see the Pull Request Review Guide for a full explanation of the review process.

Details: the bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands: the @flinkbot bot supports the following commands:
I'm so sorry about that. I didn't mean to; it was a misoperation on my part.
What is the purpose of the change
(For example: This pull request makes task deployment go through the blob server, rather than through RPC. That way we avoid re-transferring them on each deployment (during recovery).)
Brief change log
(for example:)
Verifying this change
(Please pick either of the following options)
This change is a trivial rework / code cleanup without any test coverage.
(or)
This change is already covered by existing tests, such as (please describe tests).
(or)
This change added tests and can be verified as follows:
(example:)
Does this pull request potentially affect one of the following parts:
@Public(Evolving): (yes / no)
Documentation