
Update web site for 1.6.0 release and add 1.6.0 release announcement #117

Merged
merged 3 commits into apache:asf-site from tillrohrmann:release-1.6 on Aug 9, 2018

@tillrohrmann (Contributor) commented Aug 7, 2018:

This PR updates the Flink web site for the Flink 1.6.0 release and adds the Flink 1.6.0 release announcement.

@tillrohrmann tillrohrmann requested review from aljoscha and zentol Aug 7, 2018

@aljoscha (Contributor) left a comment:

Overall, I really like this announcement. 👍

I had some nitpicks about grammar, and the changes should be split into one commit that has the actual changes and a second commit for rebuilding the website.

In Flink 1.6.0 we continue the groundwork we laid out in earlier versions: Enabling Flink users to seamlessly run fast data processing and build data-driven and data-intensive applications effortlessly.

* Flink's state support is one of the key features which makes Flink so versatile and powerful when it comes to implementing all kinds of use cases.
To make it even easier, the community added **native support for state TTL** ([FLINK-9510](https://issues.apache.org/jira/browse/FLINK-9510)).

@aljoscha (Contributor) commented Aug 7, 2018:

Maybe mention the umbrella issue here (https://issues.apache.org/jira/browse/FLINK-3089) or at least mention https://issues.apache.org/jira/browse/FLINK-9938, because FLINK-9510 only mentions TTL on access.

This feature allows to specify a time-to-live (TTL) for Flink state.
Once the time-to-live has been exceeded Flink will no longer give access to the respective state values.
The expired data is cleaned up on access so that the operator keyed state doesn’t grow infinitely.
This feature fully complies with new data protection regulations (e.g. GDPR).

@aljoscha (Contributor) commented Aug 7, 2018:

Also mention here the deletion-on-scan, I think.
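The clean-up-on-access behavior described in the announcement can be sketched with a small self-contained example. This is plain Python, not Flink's actual state TTL API; the class and method names here are purely illustrative:

```python
import time


class TtlValueState:
    """Toy keyed value state with a time-to-live.

    Expired entries are removed the next time they are read, so state for
    keys that are still accessed does not grow without bound. This only
    illustrates the TTL-on-access idea; in Flink, TTL is configured
    declaratively on a state descriptor, not implemented by hand.
    """

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock      # injectable for testing
        self._entries = {}      # key -> (value, last_update_timestamp)

    def update(self, key, value):
        self._entries[key] = (value, self.clock())

    def value(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        stored, ts = entry
        if self.clock() - ts > self.ttl:
            del self._entries[key]  # clean up expired state on access
            return None
        return stored
```

With an injected fake clock, reading a key after its TTL has elapsed returns nothing and also physically removes the entry, which is the property the announcement text is describing.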


* **Job Cluster Container Entrypoint** ([FLINK-9488](https://issues.apache.org/jira/browse/FLINK-9488)):
Flink 1.6.0 provides an easy-to-use container entrypoint to bootstrap a job cluster.
Combining this entrypoint with a user code jar creates a self-contained image which automatically executes the contained Flink job when deployed.

@aljoscha (Contributor) commented Aug 7, 2018:

Nitpick, but I think it should be "user-code jar"

Avoiding additional communication steps with the client reduces the number of moving parts and, improves operations in a container environment significantly.
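As a rough illustration of the self-contained image idea, a Dockerfile for such an image might look like the sketch below. The base image tag, jar path, and entrypoint script name are hypothetical and not necessarily the exact ones shipped with Flink 1.6.0:

```dockerfile
# Hypothetical sketch: bundle a user-code jar into a Flink image so that
# the container boots a job cluster running exactly that job.
# (Paths and the entrypoint script name are illustrative.)
FROM flink:1.6.0

# Place the user-code jar where the job cluster entrypoint can find it.
COPY target/my-flink-job.jar /opt/flink/usrlib/my-flink-job.jar

# Start the job-cluster entrypoint instead of a plain session cluster.
ENTRYPOINT ["/opt/flink/bin/standalone-job.sh"]
```

Because the job is baked into the image, deploying the container is all that is needed; no separate client submission step remains.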

* **Fully RESTified Job Submission** ([FLINK-9280](https://issues.apache.org/jira/browse/FLINK-9280)):
The Flink client now sends all job relevant content via a single POST call to the server.

@aljoscha (Contributor) commented Aug 7, 2018:

Same here, I think it should be "job-relevant content"


* **Support for INSERT INTO Statements in SQL Client CLI** ([FLINK-8858](https://issues.apache.org/jira/browse/FLINK-8858)):
By supporting SQL’s INSERT INTO statements, the SQL Client CLI can be used to submit long-running SQL queries to Flink that sink their results in external systems.
The SQL Client itself can be shutdown after submission.

@aljoscha (Contributor) commented Aug 7, 2018:

I think it should be "can be shut down"


* **Faster Timer Deletions** ([FLINK-9423](https://issues.apache.org/jira/browse/FLINK-9423)):
Improving Flink’s internal timer data structure such that the deletion complexity is reduced from O(n) to O(log n).
This significantly improves Flink jobs using Flink’s timers.

@zentol (Contributor) commented Aug 8, 2018:

would remove second "Flink"

Deleting timers are also exposed through a user-facing API now.

@zentol (Contributor) commented Aug 8, 2018:

is also exposed
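The data-structure idea behind the O(n) → O(log n) deletion improvement can be sketched as a binary heap augmented with a position index: finding a timer becomes an O(1) dictionary lookup, and repairing the heap after removal costs O(log n). This is a conceptual sketch in plain Python, not Flink's actual implementation; all names are illustrative:

```python
class TimerHeap:
    """Min-heap of (timestamp, timer_id) pairs plus a position index, so an
    arbitrary timer can be deleted in O(log n) instead of an O(n) scan.
    (Conceptual sketch only, not Flink's internal timer service.)
    """

    def __init__(self):
        self._heap = []   # binary heap of (timestamp, timer_id)
        self._pos = {}    # timer_id -> current index in self._heap

    def add(self, timer_id, timestamp):
        self._heap.append((timestamp, timer_id))
        self._pos[timer_id] = len(self._heap) - 1
        self._sift_up(len(self._heap) - 1)

    def delete(self, timer_id):
        """Remove an arbitrary timer: O(1) lookup + O(log n) heap repair."""
        i = self._pos.pop(timer_id)
        last = self._heap.pop()
        if i < len(self._heap):
            self._heap[i] = last
            self._pos[last[1]] = i
            self._sift_down(self._sift_up(i))

    def pop(self):
        """Remove and return the earliest (timestamp, timer_id)."""
        top = self._heap[0]
        del self._pos[top[1]]
        last = self._heap.pop()
        if self._heap:
            self._heap[0] = last
            self._pos[last[1]] = 0
            self._sift_down(0)
        return top

    def peek(self):
        return self._heap[0] if self._heap else None

    def _sift_up(self, i):
        while i > 0:
            parent = (i - 1) // 2
            if self._heap[parent] <= self._heap[i]:
                break
            self._swap(i, parent)
            i = parent
        return i

    def _sift_down(self, i):
        n = len(self._heap)
        while True:
            child = 2 * i + 1
            if child + 1 < n and self._heap[child + 1] < self._heap[child]:
                child += 1
            if child >= n or self._heap[i] <= self._heap[child]:
                break
            self._swap(i, child)
            i = child

    def _swap(self, i, j):
        self._heap[i], self._heap[j] = self._heap[j], self._heap[i]
        self._pos[self._heap[i][1]] = i
        self._pos[self._heap[j][1]] = j
```

The key trick is keeping the position index consistent on every swap, so `delete` never has to search the heap array for the timer it is removing.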

Since the image already contains the Flink job, client communication is no longer necessary.
Avoiding additional communication steps with the client reduces the number of moving parts and, improves operations in a container environment significantly.

@zentol (Contributor) commented Aug 8, 2018:

either "moving parts and improves operations" or "moving parts, improving operations"


* **Unified Table Sinks and Formats** ([FLINK-8866](https://issues.apache.org/jira/browse/FLINK-8866), [FLINK-8558](https://issues.apache.org/jira/browse/FLINK-8558)):
In the past, table sinks had to be configured programmatically.
They were tied to a specific format and implementation.

@zentol (Contributor) commented Aug 8, 2018:

combine with previous sentence, "and were tied to a ..."

This release reworked how table sinks are discovered and configured.
It also decouples connectors and formats.

@zentol (Contributor) commented Aug 8, 2018:

combine with previous sentence

Table sinks can be defined in a YAML file using the new unified table sink properties.
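For illustration, a YAML-defined sink in a SQL Client environment file might look roughly like the fragment below. The exact property keys of the unified connector/format properties should be checked against the Flink documentation; the table name, field names, and path here are hypothetical:

```yaml
# Hypothetical sketch of a YAML-defined table sink using the unified
# connector and format properties (property keys are illustrative).
tables:
  - name: TaxiRidesSink
    type: sink
    update-mode: append
    connector:
      type: filesystem
      path: "/tmp/taxi-rides-output.csv"
    format:
      type: csv
      fields:
        - name: rideId
          type: BIGINT
        - name: fare
          type: DOUBLE
```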


@zentol (Contributor) commented Aug 8, 2018:

This section reads like a series of bullet items.

@tillrohrmann (Author, Contributor) commented Aug 8, 2018:

I'll reword it a bit.

This release reworked how table sinks are discovered and configured.

@zentol (Contributor) commented Aug 8, 2018:

this part is effectively the inverse of the above which we can write in a nicer way. "This release reworked how table sink are discovered and configured so that not a single line of code has to be written."?

The Kafka table sink now uses the new unified APIs and supports both JSON and Avro formats.

* **Full SQL Avro Support** ([FLINK-9444](https://issues.apache.org/jira/browse/FLINK-9444)):
Flink’s Table & SQL API understands now the full spectrum of Avro types including generic/specific records and logical types.

@zentol (Contributor) commented Aug 8, 2018:

"now understands"

The types are automatically mapped from and to Flink-equivalent types allowing to specify end-to-end ETL pipelines in SQL.

* **Improved Expressiveness of SQL and Table API** ([FLINK-5878](https://issues.apache.org/jira/browse/FLINK-5878), [FLINK-8688](https://issues.apache.org/jira/browse/FLINK-8688), [FLINK-6810](https://issues.apache.org/jira/browse/FLINK-6810)):
Flink’s Table & SQL API supports left, right, and full outer joins that allow for continuous result updating queries.

@zentol (Contributor) commented Aug 8, 2018:

"result-updating"?

Exactly-once is supported through integration of the sink with Flink’s checkpointing mechanism.
The new sink is built upon Flink’s own `FileSystem` abstraction and it supports local file system and HDFS, with plans for S3 support in the near future.
It exposes pluggable file rolling and bucketing policies.
Apart from row-wise encoding formats, it exposes APIs for bulk-encoding formats like Parquet/ORC and it already has support for Parquet.

@zentol (Contributor) commented Aug 8, 2018:

mentioning parquet twice reads weird, remove the first instance

@tillrohrmann (Author, Contributor) commented Aug 8, 2018:

True.
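The pluggable rolling policy mentioned in the quoted passage can be illustrated with a minimal sketch: a policy object decides, per write, whether the current part file should be closed and a new one started. This is plain Python with illustrative names, not Flink's actual sink API:

```python
class SizeRollingPolicy:
    """Rolls to a new part file once the current one would exceed a size
    limit. (Conceptual sketch, not Flink's actual rolling-policy interface;
    real policies can also roll on time, inactivity, etc.)
    """

    def __init__(self, max_part_size):
        self.max_part_size = max_part_size

    def should_roll(self, current_size, record_size):
        return current_size + record_size > self.max_part_size


class BucketWriter:
    """Accumulates records into 'part files' (modeled here as lists),
    rolling to a new part whenever the policy says so."""

    def __init__(self, policy):
        self.policy = policy
        self.finished_parts = []   # closed part files
        self.current_part = []     # in-progress part file
        self.current_size = 0

    def write(self, record):
        size = len(record)
        if self.current_part and self.policy.should_roll(self.current_size, size):
            self._roll()
        self.current_part.append(record)
        self.current_size += size

    def _roll(self):
        self.finished_parts.append(self.current_part)
        self.current_part = []
        self.current_size = 0
```

Because the policy is a separate object, swapping in a time-based or inactivity-based rule changes when files roll without touching the writer itself, which is the point of making the policy pluggable.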

---
layout: post
title: "Apache Flink 1.6.0 Release Announcement"
date: 2018-08-08 18:30:00

@zentol (Contributor) commented Aug 8, 2018:

AFAIK we wanted to use a consistent time (12:00:00), but I don't really know why. Just saw this comment in another PR.

@tillrohrmann (Author, Contributor) commented Aug 8, 2018:

Alright, will update the blog post accordingly.

@tillrohrmann tillrohrmann force-pushed the tillrohrmann:release-1.6 branch from 4d0393d to de97de8 Aug 8, 2018

@tillrohrmann (Author, Contributor) commented Aug 8, 2018:

Thanks for the review @aljoscha and @zentol. I've addressed your comments and split the original commit into a new commit and the rebuild commit.

@zentol (Contributor) approved these changes Aug 8, 2018:

+1

@twalthr (Contributor) approved these changes Aug 8, 2018:

+1

Once the time-to-live has been exceeded Flink will no longer give access to the respective state values.
The expired data is cleaned up on access so that the operator keyed state doesn’t grow infinitely and it won't be included in subsequent checkpoints.
This feature fully complies with new data protection regulations (e.g. GDPR).

@twalthr (Contributor) commented Aug 8, 2018:

Remove empty line?

tillrohrmann added some commits Aug 8, 2018

@tillrohrmann tillrohrmann force-pushed the tillrohrmann:release-1.6 branch from de97de8 to d68df51 Aug 9, 2018

@asfgit asfgit merged commit d68df51 into apache:asf-site Aug 9, 2018
