From 0504bcec52dd390acafd538842d8a8839cf5168f Mon Sep 17 00:00:00 2001 From: Aljoscha Krettek Date: Fri, 13 Nov 2020 16:41:04 +0100 Subject: [PATCH 1/6] [FLINK-20153] Add documentation for BATCH execution mode This adds documentation for the new `BATCH` execution mode. We also explain `STREAMING` execution mode because there is no central page that explains the basic behavior, so far. --- docs/dev/datastream_execution_mode.md | 264 ++++++++++++++++++++++ docs/dev/stream/operators/index.md | 122 +++++----- docs/fig/datastream-example-job-graph.svg | 1 + 3 files changed, 325 insertions(+), 62 deletions(-) create mode 100644 docs/dev/datastream_execution_mode.md create mode 100644 docs/fig/datastream-example-job-graph.svg diff --git a/docs/dev/datastream_execution_mode.md b/docs/dev/datastream_execution_mode.md new file mode 100644 index 0000000000000..1ab32d2a68bc3 --- /dev/null +++ b/docs/dev/datastream_execution_mode.md @@ -0,0 +1,264 @@ +--- +title: "Execution Mode (Batch/Streaming)" +nav-id: datastream_execution_mode +nav-parent_id: streaming +nav-pos: 1 +--- + + +The DataStream API supports different runtime execution modes from which you +can choose depending on the requirements of your use case and the +characteristics of your job. + +There is the "classic" execution behavior of the DataStream API, which we call +`STREAMING` execution mode. This should be used for unbounded jobs that require +continuous incremental processing and are expected to stay online indefinitely. + +Additionally, there is a batch-style execution mode that we call `BATCH` +execution mode. This executes jobs in a way that is more reminiscent of batch +processing frameworks such as MapReduce. This should be used for bounded jobs +for which you have a known fixed input and which do not run continuously. + +Apache Flink's unified approach to stream and batch processing means that a +DataStream application executed over bounded input will produce the same +results regardless of the configured execution mode. By enabling `BATCH` +execution, we allow Flink to apply additional optimizations that we can only do +when we know that our input is bounded. For example, different join/aggregation +strategies can be used, in addition to a different shuffle implementation that +allows more efficient task scheduling and failure recovery behavior. We will go +into some of the details of the execution behavior below. + +* This will be replaced by the TOC +{:toc} + +## When can/should I use BATCH execution mode? + +The `BATCH` execution mode can only be used for Jobs/Flink Programs that are +_bounded_. Boundedness is a property of a data source that tells us whether all +the input coming from that source is known before execution or whether new data +will show up, potentially indefinitely. A job, in turn, is bounded if all its +sources are bounded, and unbounded otherwise. + +`STREAMING` execution mode, on the other hand, can be used for both bounded and +unbounded jobs. + +As a rule of thumb, you should be using `BATCH` execution mode when your program +is bounded because this will be more efficient. You have to use `STREAMING` +execution mode when your program is unbounded because only this mode is general +enough to be able to deal with continuous data streams. + +One obvious outlier is when you want to use a bounded job to bootstrap some job +state that you then want to use in an unbounded job. 
For example, by running a +bounded job using `STREAMING` mode, taking a savepoint, and then restoring that +savepoint on an unbounded job. This is a very specific use case and one that +might soon become obsolete when we allow producing a savepoint as additional +output of a `BATCH` execution job. + +Another case where you might run a bounded job using `STREAMING` mode is when +writing tests for code that will eventually run with unbounded sources. For +testing it can be more natural to use a bounded source in those cases. + +## Configuring BATCH execution mode + +The execution mode can be configured via the `execution.runtime-mode` setting. +There are three possible values: + + - `STREAMING`: The classic DataStream execution mode (default) + - `BATCH`: Batch-style execution on the DataStream API + - `AUTOMATIC`: Let the system decide based on the boundedness of the sources + +This can be configured via command line parameters of `bin/flink run ...`, or +programmatically when creating/configuring the `StreamExecutionEnvironment`. + +Here's how you can configure the execution mode via the command line: + +```bash +$ bin/flink run -Dexecution.runtime-mode=BATCH examples/streaming/WordCount.jar +``` + +This example shows how you can configure the execution mode in code: + + ```java +StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); +env.setRuntimeMode(RuntimeExecutionMode.BATCH); + ``` + +
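+If you use `AUTOMATIC`, Flink picks the mode based on the boundedness of the
+sources. As a rough sketch (not part of the original example set), the
+following program should end up running in `BATCH` mode because
+`fromElements()` creates a bounded source:
+
+```java
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// Let Flink decide: with only bounded sources this should resolve to BATCH.
+env.setRuntimeMode(RuntimeExecutionMode.AUTOMATIC);
+
+env.fromElements(1, 2, 3)
+        .map(x -> x * 2)
+        .print();
+
+env.execute();
+```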
+We recommend that you do NOT set the runtime mode in your program but instead
+set it on the command line when submitting the application. Keeping the
+application code configuration-free allows for more flexibility, as the same
+application can be executed in any execution mode.
+
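+For example, the following sketch (the class name and input data are made up
+for illustration) keeps the program itself configuration-free. The same
+packaged job can then be submitted with `-Dexecution.runtime-mode=BATCH` or
+`-Dexecution.runtime-mode=STREAMING` without touching the code:
+
+```java
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+public final class ConfigurationFreeJob {
+
+    public static void main(String[] args) throws Exception {
+        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+        // No env.setRuntimeMode(...) here: the mode is taken from the
+        // execution.runtime-mode setting passed at submission time.
+        env.fromElements("a", "b", "a", "c")
+                .map(value -> Tuple2.of(value, 1))
+                .returns(Types.TUPLE(Types.STRING, Types.INT))
+                .keyBy(value -> value.f0)
+                .sum(1)
+                .print();
+
+        env.execute("configuration-free-job");
+    }
+}
+```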
+
+## Execution Behavior
+
+This section provides an overview of the execution behavior of `BATCH`
+execution mode and contrasts it with `STREAMING` execution mode. For more
+details, please refer to the FLIPs that introduced this feature:
+[FLIP-134](https://cwiki.apache.org/confluence/x/4i94CQ) and
+[FLIP-140](https://cwiki.apache.org/confluence/x/kDh4CQ).
+
+### Task Scheduling And Network Shuffle
+
+Flink jobs consist of different operations that are connected together in a
+dataflow graph. The system decides how to schedule the execution of these
+operations on different processes/machines (TaskManagers) and how data is
+shuffled (sent) between them.
+
+Multiple operations/operators can be chained together using a feature called
+[chaining]({% link dev/stream/operators/index.md
+%}#task-chaining-and-resource-groups). A group of one or multiple (chained)
+operators that Flink considers a unit of scheduling is called a _task_.
+Often the term _subtask_ is used to refer to the individual instances of tasks
+that are running in parallel on multiple TaskManagers, but we will only use the
+term _task_ here.
+
+Task scheduling and network shuffles work differently in `BATCH` and
+`STREAMING` execution mode, mostly because in `BATCH` execution mode Flink
+knows that the input data is bounded, which allows it to use more efficient
+data structures and algorithms.
+
+We will use this example to explain the differences in task scheduling and
+network transfer:
+
+```java
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+DataStreamSource source = env.fromElements(...);
+
+source.name("source")
+    .map(...).name("map1")
+    .map(...).name("map2")
+    .rebalance()
+    .map(...).name("map3")
+    .map(...).name("map4")
+    .keyBy((value) -> value)
+    .map(...).name("map5")
+    .map(...).name("map6")
+    .sinkTo(...).name("sink");
+```
+
+Operations that imply a 1-to-1 connection pattern between operations, such as
+`map()`, `flatMap()`, or `filter()`, can just forward data straight to the next
+operation, which allows these operations to be chained together. This means
+that Flink would not normally insert a network shuffle between them.
+
+Operations such as `keyBy()` or `rebalance()`, on the other hand, require data
+to be shuffled between different parallel instances of tasks. This induces a
+network shuffle.
+
+For the above example, Flink would group operations together as tasks like this:
+
+- Task1: `source`, `map1`, and `map2`
+- Task2: `map3`, `map4`
+- Task3: `map5`, `map6`, and `sink`
+
+There is a network shuffle between Tasks 1 and 2, and also between Tasks 2 and 3.
+This is a visual representation of that job:
+
+Example Job Graph
+
+#### STREAMING Execution Mode
+
+In `STREAMING` execution mode, all tasks need to be online/running all the
+time. This allows Flink to immediately process new records through the whole
+pipeline, which we need for continuous and low-latency stream processing. This
+also means that the TaskManagers that are allotted to a job need to have enough
+resources to run all the tasks at the same time.
+
+Network shuffles are _pipelined_, meaning that records are immediately sent to
+downstream tasks, with some buffering on the network layer. Again, this is
+required because when processing a continuous stream of data there are no
+natural points (in time) where data could be materialized between tasks (or
+pipelines of tasks). This contrasts with `BATCH` execution mode where
+intermediate results can be materialized, as explained below.
+
+#### BATCH Execution Mode
+
+In `BATCH` execution mode, the tasks of a job can be separated into stages that
+can be executed one after another. We can do this because the input is bounded
+and Flink can therefore fully process one stage of the pipeline before moving
+on to the next. In the above example, the job would have three stages that
+correspond to the three tasks that are separated by the shuffle barriers.
+
+Instead of sending records immediately to downstream tasks, as explained above
+for `STREAMING` mode, processing in stages requires Flink to materialize
+intermediate results of tasks to some non-ephemeral storage, which allows
+downstream tasks to read them after upstream tasks have already gone offline.
+This will increase the latency of processing but comes with other interesting
+properties. For one, this allows Flink to backtrack to the latest available
+results when a failure happens instead of restarting the whole job. Another
+side effect is that `BATCH` jobs can execute on fewer resources (in terms of
+available slots at TaskManagers) because the system can execute tasks
+sequentially one after the other.
+
+TaskManagers will keep intermediate results at least as long as downstream
+tasks have not consumed them. (Technically, they will be kept until the
+consuming *pipelined regions* have produced their output.) After that, they
+will be kept for as long as space allows in order to allow the aforementioned
+backtracking to earlier results in case of a failure.
+
+### State Backends / State
+
+In `STREAMING` mode, Flink uses a [StateBackend]({% link
+dev/stream/state/state_backends.md %}) to control how state is stored and how
+checkpointing works.
+
+In `BATCH` mode, the configured state backend is ignored. Instead, the input of
+a keyed operation is grouped by key (using sorting) and then we process all
+records of a key in turn. This allows keeping state for only one key at a time.
+State for a given key will be discarded when moving on to the next key.
+
+See [FLIP-140](https://cwiki.apache.org/confluence/x/kDh4CQ) for background
+information on this.
+
+### Event Time / Watermarks
+
+### Processing Time
+
+### Failure Recovery
+
+In `STREAMING` execution mode, Flink uses checkpoints for failure recovery.
+Take a look at the [checkpointing documentation]({% link
+dev/stream/state/checkpointing.md %}) for hands-on documentation about this and
+how to configure it. There is also a more introductory section about [fault
+tolerance via state snapshots]({% link learn-flink/fault_tolerance.md %}) that
+explains the concepts at a higher level.
+
+One of the characteristics of checkpointing for failure recovery is that Flink
+will restart all the running tasks from a checkpoint in case of a failure. This
+can be more costly than what we have to do in `BATCH` mode (as explained
+below), which is one of the reasons that you should use `BATCH` execution mode
+if your job allows it.
+
+In `BATCH` execution mode, Flink will try to backtrack to previous processing
+stages for which intermediate results are still available. Potentially, only
+the tasks that failed (or their predecessors in the graph) will have to be
+restarted, which can improve processing efficiency and overall processing time
+of the job compared to restarting all tasks from a checkpoint.
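+
+To make the keyed-state behavior described in the "State Backends / State"
+section above a bit more tangible, here is a minimal sketch (the class and
+state names are made up for illustration, and imports are omitted as in the
+other snippets on this page) of a keyed operation that counts records per key.
+The same code runs unchanged in both execution modes; in `BATCH` mode the
+`count` state only ever holds the value for the key that is currently being
+processed and is discarded once Flink moves on to the next key:
+
+```java
+public static class CountPerKey
+        extends KeyedProcessFunction<String, String, Tuple2<String, Long>> {
+
+    private transient ValueState<Long> count;
+
+    @Override
+    public void open(Configuration parameters) {
+        count = getRuntimeContext().getState(
+                new ValueStateDescriptor<>("count", Long.class));
+    }
+
+    @Override
+    public void processElement(
+            String value,
+            Context ctx,
+            Collector<Tuple2<String, Long>> out) throws Exception {
+        // Read and update the state for the current key only.
+        Long current = count.value();
+        long updated = (current == null) ? 1L : current + 1;
+        count.update(updated);
+        out.collect(Tuple2.of(ctx.getCurrentKey(), updated));
+    }
+}
+
+// Usage sketch: events.keyBy(value -> value).process(new CountPerKey())
+```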
+ +## Important Considerations + +What doesn't work: + - broadcast state/pattern + - iterations + - operations that rely on checkpointing + - this includes most "regular" sinks diff --git a/docs/dev/stream/operators/index.md b/docs/dev/stream/operators/index.md index 55030500f25f2..8f4713a881cef 100644 --- a/docs/dev/stream/operators/index.md +++ b/docs/dev/stream/operators/index.md @@ -49,7 +49,7 @@ partitioning after applying those as well as insights into Flink's operator chai - Map
DataStream → DataStream +
Map
DataStream → DataStream

Takes one element and produces one element. A map function that doubles the values of the input stream:

{% highlight java %} @@ -65,7 +65,7 @@ dataStream.map(new MapFunction() { - FlatMap
DataStream → DataStream +
FlatMap
DataStream → DataStream

Takes one element and produces zero, one, or more elements. A flatmap function that splits sentences to words:

{% highlight java %} @@ -82,7 +82,7 @@ dataStream.flatMap(new FlatMapFunction() { - Filter
DataStream → DataStream +
Filter
DataStream → DataStream

Evaluates a boolean function for each element and retains those for which the function returns true. A filter that filters out zero values: @@ -98,7 +98,7 @@ dataStream.filter(new FilterFunction() { - KeyBy
DataStream → KeyedStream +

KeyBy
DataStream → KeyedStream

Logically partitions a stream into disjoint partitions. All records with the same key are assigned to the same partition. Internally, keyBy() is implemented with hash partitioning. There are different ways to specify keys.

@@ -119,7 +119,7 @@ dataStream.keyBy(value -> value.f0) // Key by the first element of a Tuple - Reduce
KeyedStream → DataStream +

Reduce
KeyedStream → DataStream

A "rolling" reduce on a keyed data stream. Combines the current element with the last reduced value and emits the new value. @@ -139,7 +139,7 @@ keyedStream.reduce(new ReduceFunction() { - Aggregations
KeyedStream → DataStream +

Aggregations
KeyedStream → DataStream

Rolling aggregations on a keyed data stream. The difference between min and minBy is that min returns the minimum value, whereas minBy returns @@ -159,7 +159,7 @@ keyedStream.maxBy("key"); - Window
KeyedStream → WindowedStream +

Window
KeyedStream → WindowedStream

Windows can be defined on already partitioned KeyedStreams. Windows group the data in each key according to some characteristic (e.g., the data that arrived within the last 5 seconds). @@ -171,7 +171,7 @@ dataStream.keyBy(value -> value.f0).window(TumblingEventTimeWindows.of(Time.seco - WindowAll
DataStream → AllWindowedStream +

WindowAll
DataStream → AllWindowedStream

Windows can be defined on regular DataStreams. Windows group all the stream events according to some characteristic (e.g., the data that arrived within the last 5 seconds). @@ -184,7 +184,7 @@ dataStream.windowAll(TumblingEventTimeWindows.of(Time.seconds(5))); // Last 5 se - Window Apply
WindowedStream → DataStream
AllWindowedStream → DataStream +

Window Apply
WindowedStream → DataStream
AllWindowedStream → DataStream

Applies a general function to the window as a whole. Below is a function that manually sums the elements of a window.

Note: If you are using a windowAll transformation, you need to use an AllWindowFunction instead.

@@ -218,7 +218,7 @@ allWindowedStream.apply (new AllWindowFunction, Integer, - Window Reduce
WindowedStream → DataStream +
Window Reduce
WindowedStream → DataStream

Applies a functional reduce function to the window and returns the reduced value.

{% highlight java %} @@ -231,7 +231,7 @@ windowedStream.reduce (new ReduceFunction>() { - Aggregations on windows
WindowedStream → DataStream +
Aggregations on windows
WindowedStream → DataStream

Aggregates the contents of a window. The difference between min and minBy is that min returns the minimum value, whereas minBy returns @@ -251,7 +251,7 @@ windowedStream.maxBy("key"); - Union
DataStream* → DataStream +

Union
DataStream* → DataStream

Union of two or more data streams creating a new stream containing all the elements from all the streams. Note: If you union a data stream with itself you will get each element twice in the resulting stream.

@@ -261,7 +261,7 @@ dataStream.union(otherStream1, otherStream2, ...); - Window Join
DataStream,DataStream → DataStream +
Window Join
DataStream,DataStream → DataStream

Join two data streams on a given key and a common window.

{% highlight java %} @@ -273,7 +273,7 @@ dataStream.join(otherStream) - Interval Join
KeyedStream,KeyedStream → DataStream +
Interval Join
KeyedStream,KeyedStream → DataStream

Join two elements e1 and e2 of two keyed streams with a common key over a given time interval, so that e1.timestamp + lowerBound <= e2.timestamp <= e1.timestamp + upperBound

{% highlight java %} @@ -288,7 +288,7 @@ keyedStream.intervalJoin(otherKeyedStream) - Window CoGroup
DataStream,DataStream → DataStream +
Window CoGroup
DataStream,DataStream → DataStream

Cogroups two data streams on a given key and a common window.

{% highlight java %} @@ -300,7 +300,7 @@ dataStream.coGroup(otherStream) - Connect
DataStream,DataStream → ConnectedStreams +
Connect
DataStream,DataStream → ConnectedStreams

"Connects" two data streams retaining their types. Connect allowing for shared state between the two streams.

@@ -313,7 +313,7 @@ ConnectedStreams connectedStreams = someStream.connect(otherStr - CoMap, CoFlatMap
ConnectedStreams → DataStream +
CoMap, CoFlatMap
ConnectedStreams → DataStream

Similar to map and flatMap on a connected data stream

{% highlight java %} @@ -346,7 +346,7 @@ connectedStreams.flatMap(new CoFlatMapFunction() { - Iterate
DataStream → IterativeStream → DataStream +
Iterate
DataStream → IterativeStream → DataStream

Creates a "feedback" loop in the flow, by redirecting the output of one operator @@ -354,7 +354,6 @@ connectedStreams.flatMap(new CoFlatMapFunction() { continuously update a model. The following code starts with a stream and applies the iteration body continuously. Elements that are greater than 0 are sent back to the feedback channel, and the rest of the elements are forwarded downstream. - See iterations for a complete description. {% highlight java %} IterativeStream iteration = initialStream.iterate(); DataStream iterationBody = iteration.map (/*do something*/); @@ -393,7 +392,7 @@ DataStream output = iterationBody.filter(new FilterFunction(){ - Map
DataStream → DataStream +

Map
DataStream → DataStream

Takes one element and produces one element. A map function that doubles the values of the input stream:

{% highlight scala %} @@ -403,7 +402,7 @@ dataStream.map { x => x * 2 } - FlatMap
DataStream → DataStream +
FlatMap
DataStream → DataStream

Takes one element and produces zero, one, or more elements. A flatmap function that splits sentences to words:

{% highlight scala %} @@ -412,7 +411,7 @@ dataStream.flatMap { str => str.split(" ") } - Filter
DataStream → DataStream +
Filter
DataStream → DataStream

Evaluates a boolean function for each element and retains those for which the function returns true. A filter that filters out zero values: @@ -423,7 +422,7 @@ dataStream.filter { _ != 0 } - KeyBy
DataStream → KeyedStream +

KeyBy
DataStream → KeyedStream

Logically partitions a stream into disjoint partitions, each partition containing elements of the same key. Internally, this is implemented with hash partitioning. See keys on how to specify keys. @@ -435,7 +434,7 @@ dataStream.keyBy(_._1) // Key by the first element of a Tuple - Reduce
KeyedStream → DataStream +

Reduce
KeyedStream → DataStream

A "rolling" reduce on a keyed data stream. Combines the current element with the last reduced value and emits the new value. @@ -449,7 +448,7 @@ keyedStream.reduce { _ + _ } - Aggregations
KeyedStream → DataStream +

Aggregations
KeyedStream → DataStream

Rolling aggregations on a keyed data stream. The difference between min and minBy is that min returns the minimum value, whereas minBy returns @@ -469,7 +468,7 @@ keyedStream.maxBy("key") - Window
KeyedStream → WindowedStream +

Window
KeyedStream → WindowedStream

Windows can be defined on already partitioned KeyedStreams. Windows group the data in each key according to some characteristic (e.g., the data that arrived within the last 5 seconds). @@ -481,7 +480,7 @@ dataStream.keyBy(_._1).window(TumblingEventTimeWindows.of(Time.seconds(5))) // L - WindowAll
DataStream → AllWindowedStream +

WindowAll
DataStream → AllWindowedStream

Windows can be defined on regular DataStreams. Windows group all the stream events according to some characteristic (e.g., the data that arrived within the last 5 seconds). @@ -494,7 +493,7 @@ dataStream.windowAll(TumblingEventTimeWindows.of(Time.seconds(5))) // Last 5 sec - Window Apply
WindowedStream → DataStream
AllWindowedStream → DataStream +

Window Apply
WindowedStream → DataStream
AllWindowedStream → DataStream

Applies a general function to the window as a whole. Below is a function that manually sums the elements of a window.

Note: If you are using a windowAll transformation, you need to use an AllWindowFunction instead.

@@ -508,7 +507,7 @@ allWindowedStream.apply { AllWindowFunction } - Window Reduce
WindowedStream → DataStream +
Window Reduce
WindowedStream → DataStream

Applies a functional reduce function to the window and returns the reduced value.

{% highlight scala %} @@ -517,7 +516,7 @@ windowedStream.reduce { _ + _ } - Aggregations on windows
WindowedStream → DataStream +
Aggregations on windows
WindowedStream → DataStream

Aggregates the contents of a window. The difference between min and minBy is that min returns the minimum value, whereas minBy returns @@ -537,7 +536,7 @@ windowedStream.maxBy("key") - Union
DataStream* → DataStream +

Union
DataStream* → DataStream

Union of two or more data streams creating a new stream containing all the elements from all the streams. Note: If you union a data stream with itself you will get each element twice in the resulting stream.

@@ -547,7 +546,7 @@ dataStream.union(otherStream1, otherStream2, ...) - Window Join
DataStream,DataStream → DataStream +
Window Join
DataStream,DataStream → DataStream

Join two data streams on a given key and a common window.

{% highlight scala %} @@ -559,7 +558,7 @@ dataStream.join(otherStream) - Window CoGroup
DataStream,DataStream → DataStream +
Window CoGroup
DataStream,DataStream → DataStream

Cogroups two data streams on a given key and a common window.

{% highlight scala %} @@ -571,7 +570,7 @@ dataStream.coGroup(otherStream) - Connect
DataStream,DataStream → ConnectedStreams +
Connect
DataStream,DataStream → ConnectedStreams

"Connects" two data streams retaining their types, allowing for shared state between the two streams.

@@ -584,7 +583,7 @@ val connectedStreams = someStream.connect(otherStream) - CoMap, CoFlatMap
ConnectedStreams → DataStream +
CoMap, CoFlatMap
ConnectedStreams → DataStream

Similar to map and flatMap on a connected data stream

{% highlight scala %} @@ -600,7 +599,7 @@ connectedStreams.flatMap( - Iterate
DataStream → IterativeStream → DataStream +
Iterate
DataStream → IterativeStream → DataStream

Creates a "feedback" loop in the flow, by redirecting the output of one operator @@ -608,7 +607,6 @@ connectedStreams.flatMap( continuously update a model. The following code starts with a stream and applies the iteration body continuously. Elements that are greater than 0 are sent back to the feedback channel, and the rest of the elements are forwarded downstream. - See iterations for a complete description. {% highlight java %} initialStream.iterate { iteration => { @@ -648,7 +646,7 @@ is not supported by the API out-of-the-box. To use this feature, you should use - Map
DataStream → DataStream +

Map
DataStream → DataStream

Takes one element and produces one element. A map function that doubles the values of the input stream:

{% highlight python %} @@ -659,7 +657,7 @@ data_stream.map(lambda x: 2 * x, output_type=Types.INT()) - FlatMap
DataStream → DataStream +
FlatMap
DataStream → DataStream

Takes one element and produces zero, one, or more elements. A flatmap function that splits sentences to words:

{% highlight python %} @@ -669,7 +667,7 @@ data_stream.flat_map(lambda x: x.split(' '), result_type=Types.STRING()) - Filter
DataStream → DataStream +
Filter
DataStream → DataStream

Evaluates a boolean function for each element and retains those for which the function returns true. A filter that filters out zero values: @@ -681,7 +679,7 @@ data_stream.filter(lambda x: x != 0) - KeyBy
DataStream → KeyedStream +

KeyBy
DataStream → KeyedStream

Logically partitions a stream into disjoint partitions. All records with the same key are assigned to the same partition. Internally, keyBy() is implemented with hash partitioning. There are different ways to specify keys.

This transformation returns a KeyedStream, which is, among other things, required to use keyed state.

@@ -696,7 +694,7 @@ data_stream.key_by(lambda x: x[1], key_type_info=Types.STRING()) // Key by the r - Reduce
KeyedStream → DataStream +
Reduce
KeyedStream → DataStream

A "rolling" reduce on a keyed data stream. Combines the current element with the last reduced value and emits the new value. @@ -711,7 +709,7 @@ data_stream.key_by(lambda x: x[1]).reduce(lambda a, b: (a[0] + b[0], b[1])) - Union
DataStream* → DataStream +

Union
DataStream* → DataStream

Union of two or more data streams creating a new stream containing all the elements from all the streams. Note: If you union a data stream with itself you will get each element twice in the resulting stream.

@@ -721,7 +719,7 @@ data_stream.union(otherStream1, otherStream2, ...) - Connect
DataStream,DataStream → ConnectedStreams +
Connect
DataStream,DataStream → ConnectedStreams

"Connects" two data streams retaining their types, allowing for shared state between the two streams.

@@ -733,7 +731,7 @@ connected_streams = stream_1.connect(stream_2) - CoMap, CoFlatMap
ConnectedStreams → DataStream +
CoMap, CoFlatMap
ConnectedStreams → DataStream

Similar to map and flatMap on a connected data stream

{% highlight python %} @@ -784,7 +782,7 @@ The following transformations are available on data streams of Tuples: - Project
DataStream → DataStream +
Project
DataStream → DataStream

Selects a subset of fields from the tuples {% highlight java %} @@ -811,7 +809,7 @@ DataStream> out = in.project(2,0); - Project
DataStream → DataStream +

Project
DataStream → DataStream

Selects a subset of fields from the tuples {% highlight python %} @@ -848,7 +846,7 @@ via the following functions. - Custom partitioning
DataStream → DataStream +

Custom partitioning
DataStream → DataStream

Uses a user-defined Partitioner to select the target task for each element. @@ -860,7 +858,7 @@ dataStream.partitionCustom(partitioner, 0); - Random partitioning
DataStream → DataStream +

Random partitioning
DataStream → DataStream

Partitions elements randomly according to a uniform distribution. @@ -871,7 +869,7 @@ dataStream.shuffle(); - Rebalancing (Round-robin partitioning)
DataStream → DataStream +

Rebalancing (Round-robin partitioning)
DataStream → DataStream

Partitions elements round-robin, creating equal load per partition. Useful for performance @@ -883,7 +881,7 @@ dataStream.rebalance(); - Rescaling
DataStream → DataStream +

Rescaling
DataStream → DataStream

Partitions elements, round-robin, to a subset of downstream operations. This is @@ -927,7 +925,7 @@ dataStream.rescale(); - Broadcasting
DataStream → DataStream +

Broadcasting
DataStream → DataStream

Broadcasts elements to every partition. @@ -955,7 +953,7 @@ dataStream.broadcast(); - Custom partitioning
DataStream → DataStream +

Custom partitioning
DataStream → DataStream

Uses a user-defined Partitioner to select the target task for each element. @@ -967,7 +965,7 @@ dataStream.partitionCustom(partitioner, 0) - Random partitioning
DataStream → DataStream +

Random partitioning
DataStream → DataStream

Partitions elements randomly according to a uniform distribution. @@ -978,7 +976,7 @@ dataStream.shuffle() - Rebalancing (Round-robin partitioning)
DataStream → DataStream +

Rebalancing (Round-robin partitioning)
DataStream → DataStream

Partitions elements round-robin, creating equal load per partition. Useful for performance @@ -990,7 +988,7 @@ dataStream.rebalance() - Rescaling
DataStream → DataStream +

Rescaling
DataStream → DataStream

Partitions elements, round-robin, to a subset of downstream operations. This is @@ -1035,7 +1033,7 @@ dataStream.rescale() - Broadcasting
DataStream → DataStream +

Broadcasting
DataStream → DataStream

Broadcasts elements to every partition. @@ -1063,7 +1061,7 @@ dataStream.broadcast() - Custom partitioning
DataStream → DataStream +

Custom partitioning
DataStream → DataStream

Uses a user-defined Partitioner to select the target task for each element. {% highlight python %} @@ -1074,7 +1072,7 @@ data_stream.partition_custom(lambda key, num_partition: key % partition, lambda - Random partitioning
DataStream → DataStream +

Random partitioning
DataStream → DataStream

Partitions elements randomly according to a uniform distribution. @@ -1085,7 +1083,7 @@ data_stream.shuffle() - Rebalancing (Round-robin partitioning)
DataStream → DataStream +

Rebalancing (Round-robin partitioning)
DataStream → DataStream

Partitions elements round-robin, creating equal load per partition. Useful for performance @@ -1097,7 +1095,7 @@ data_stream.rebalance() - Rescaling
DataStream → DataStream +

Rescaling
DataStream → DataStream

Partitions elements, round-robin, to a subset of downstream operations. This is @@ -1141,7 +1139,7 @@ data_stream.rescale() - Broadcasting
DataStream → DataStream +

Broadcasting
DataStream → DataStream

Broadcasts elements to every partition. diff --git a/docs/fig/datastream-example-job-graph.svg b/docs/fig/datastream-example-job-graph.svg new file mode 100644 index 0000000000000..b6a9b446f809d --- /dev/null +++ b/docs/fig/datastream-example-job-graph.svg @@ -0,0 +1 @@ + \ No newline at end of file From 2660fada677542e3a741ea7b17b4df0180269ef7 Mon Sep 17 00:00:00 2001 From: Dawid Wysakowicz Date: Wed, 18 Nov 2020 10:16:52 +0100 Subject: [PATCH 2/6] [FLINK-20153] Add important considerations in execution mode docs --- docs/dev/datastream_execution_mode.md | 87 +++++++++++++++++++++++++-- 1 file changed, 82 insertions(+), 5 deletions(-) diff --git a/docs/dev/datastream_execution_mode.md b/docs/dev/datastream_execution_mode.md index 1ab32d2a68bc3..4f12a124507e4 100644 --- a/docs/dev/datastream_execution_mode.md +++ b/docs/dev/datastream_execution_mode.md @@ -257,8 +257,85 @@ of the job compared to restarting all tasks from a checkpoint. ## Important Considerations -What doesn't work: - - broadcast state/pattern - - iterations - - operations that rely on checkpointing - - this includes most "regular" sinks +Compared to classic `STREAMING` execution mode, in `BATCH` mode some things +might not work as expected. Some features will work slightly differently while +others are not supported. + +Behavior Change in BATCH mode: + +* "Rolling" operations such as [reduce()]({% link dev/stream/operators/index.md + %}#reduce) or [sum()]({% link dev/stream/operators/index.md %}#aggregations) + emit an incremental update for every new record that arrives in `STREAMING` + mode. In `BATCH` mode, these operations are not "rolling". They emit only the + final result. + + +Unsupported in BATCH mode: + +* [Checkpointing]({% link concepts/stateful-stream-processing.md + %}#stateful-stream-processing) and any operations that depend on + checkpointing do not work. +* [Broadcast State]({% link dev/stream/state/broadcast_state.md %}) +* [Iterations]({% link dev/stream/operators/index.md %}#iterate) + +Custom operators should be implemented with care, otherwise they might behave +improperly. See also additional explanations below for more details. + +### Checkpointing + +As explained [above](#failure-recovery), failure recovery for batch programs +does not use checkpointing. + +It is important to remember that because there are no checkpoints, certain +features such as [CheckpointListener]({{ site.javadocs_baseurl +}}/api/java/org/apache/flink/api/common/state/CheckpointListener.html) and, as +a result, Kafka's [EXACTLY_ONCE]({% link dev/connectors/kafka.md %} +#kafka-producers-and-fault-tolerance) mode or `StreamingFileSink`'s +[OnCheckpointRollingPolicy]({% link dev/connectors/streamfile_sink.md +%}#rolling-policy) won't work. If you need a transactional sink that works in +`BATCH` mode make sure it uses the Unified Sink API as proposed in +[FLIP-143](https://cwiki.apache.org/confluence/x/KEJ4CQ). + +You can still use all the [state primitives]({% link dev/stream/state/state.md +%}#working-with-state), it's just that the mechanism used for failure recovery +will be different. + +### Broadcast State + +This feature was introduced to allow users to implement use-cases where a +“control” stream needs to be broadcast to all downstream tasks, and the +broadcast elements, e.g. rules, need to be applied to all incoming elements +from another stream. + +In this pattern, Flink provides no guarantees about the order in which the +inputs are read. 
Use cases like the one above make sense in the streaming
+world, where jobs are expected to run for a long period with input data that is
+not known in advance. In these settings, requirements may change over time
+depending on the incoming data.
+
+In the batch world though, we believe that such use cases do not make much
+sense, as the input (both the elements and the control stream) is static and
+known in advance.
+
+In the future, we plan to support a variation of that pattern for `BATCH`
+processing where the broadcast side is processed entirely first.
+
+### Writing Custom Operators
+

+Custom operators are an advanced usage pattern of Apache Flink. For most +use-cases, consider using a (keyed-)process function instead. +
+
+
+It is important to remember the assumptions made for `BATCH` execution mode
+when writing a custom operator. Otherwise, an operator that works just fine for
+`STREAMING` mode might produce wrong results in `BATCH` mode. Operators are
+never scoped to a particular key, which means they directly observe some
+properties of `BATCH` processing that Flink tries to leverage.
+
+First of all, you should not cache the last seen watermark within an operator.
+In `BATCH` mode we process records key by key. As a result, the Watermark will
+switch from `MAX_VALUE` to `MIN_VALUE` between each key. You should not assume
+that the Watermark will always be ascending in an operator. For the same
+reasons, timers will fire first in key order and then in timestamp order within
+each key. Moreover, operations that change a key manually are not supported.

From 70c746375b5b0f7ddfe044df430ef84875453127 Mon Sep 17 00:00:00 2001
From: Kostas Kloudas
Date: Wed, 18 Nov 2020 14:47:23 +0100
Subject: [PATCH 3/6] [FLINK-20153] Describe time behaviour in execution mode
 docs

---
 docs/dev/datastream_execution_mode.md | 46 +++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/docs/dev/datastream_execution_mode.md b/docs/dev/datastream_execution_mode.md
index 4f12a124507e4..2f166d0b2514e 100644
--- a/docs/dev/datastream_execution_mode.md
+++ b/docs/dev/datastream_execution_mode.md
@@ -232,8 +232,54 @@ information on this.
 
 ### Event Time / Watermarks
 
+When it comes to supporting [event time]({% link dev/event_time.md %}), Flink’s
+streaming runtime builds on the pessimistic assumption that events may come
+out-of-order, _i.e._ an event with timestamp `t` may come after an event with
+timestamp `t+1`. Because of this, the system can never be sure that no more
+elements with timestamp `t < T` for a given timestamp `T` will come in the
+future. To amortise the impact of this out-of-orderness on the final result
+while making the system practical, in `STREAMING` mode, Flink uses a heuristic
+called [Watermarks]({% link concepts/timely-stream-processing.md
+%}#event-time-and-watermarks). A watermark with timestamp `T` signals that no
+element with timestamp `t < T` will follow.
+
+In `BATCH` mode, where the input dataset is known in advance, there is no need
+for such a heuristic as, at the very least, elements can be sorted by timestamp
+so that they are processed in temporal order. For readers familiar with
+streaming, in `BATCH` we can assume “perfect watermarks”.
+
+Given the above, in `BATCH` mode, we only need a `MAX_WATERMARK` at the end of
+the input associated with each key, or at the end of input if the input stream
+is not keyed. Based on this scheme, all registered timers will fire at the *end
+of time* and user-defined `WatermarkAssigners` or `WatermarkStrategies` are
+ignored.
+
 ### Processing Time
 
+Processing Time is the wall-clock time on the machine that a record is
+processed, at the specific instant that the record is being processed. Based
+on this definition, we see that the results of a computation that is based on
+processing time are not reproducible. This is because the same record processed
+twice will have two different timestamps.
+
+Despite the above, using processing time in `STREAMING` mode can be useful. The
+reason has to do with the fact that streaming pipelines often ingest their
+unbounded input in *real time*, so there is a correlation between event time and
+processing time.
In addition, because of this, in `STREAMING` mode `1h` in
+event time can often be almost `1h` in processing time, or wall-clock time. So
+processing time can be used for early (incomplete) firings that give hints
+about the expected results.
+
+This correlation does not exist in the batch world, where the input dataset is
+static and known in advance. Given this, in `BATCH` mode we allow users to
+request the current processing time and register processing time timers, but,
+as in the case of Event Time, all the timers are going to fire at the end of
+the input.
+
+Conceptually, we can imagine that processing time does not advance during the
+execution of a job and we fast-forward to the *end of time* when the whole
+input is processed.
+
 ### Failure Recovery
 

From 3c4905bde4e003d1c2f617ec85f68c4b6a86cb65 Mon Sep 17 00:00:00 2001
From: Aljoscha Krettek
Date: Mon, 23 Nov 2020 17:34:37 +0100
Subject: [PATCH 4/6] [FLINK-20153] Add glossary entry for runtime execution
 mode

---
 docs/concepts/glossary.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/docs/concepts/glossary.md b/docs/concepts/glossary.md
index 83f0a1de486ce..c45cf1fc052de 100644
--- a/docs/concepts/glossary.md
+++ b/docs/concepts/glossary.md
@@ -142,6 +142,12 @@ or [partitions](#partition) of data streams or data sets.
 Records are the constituent elements of a data set or data stream. [Operators](#operator) and
 [Functions](#Function) receive records as input and emit records as output.
 
+#### (Runtime) Execution Mode
+
+DataStream API programs can be executed in one of two execution modes: `BATCH`
+or `STREAMING`. See [Execution Mode]({% link dev/datastream_execution_mode.md
+%}) for more details.
+
 #### Flink Session Cluster
 
 A long-running [Flink Cluster](#flink-cluster) which accepts multiple [Flink Jobs](#flink-job) for

From f363079934dd488db5f048021dbf98120830a873 Mon Sep 17 00:00:00 2001
From: Aljoscha Krettek
Date: Mon, 23 Nov 2020 17:42:59 +0100
Subject: [PATCH 5/6] [FLINK-20302] Recommend DataStream API with BATCH
 execution mode in DataSet docs

---
 docs/_includes/note.html | 23 +++++++++++++++++++++++
 docs/dev/batch/index.md  | 11 +++++++++++
 2 files changed, 34 insertions(+)
 create mode 100644 docs/_includes/note.html

diff --git a/docs/_includes/note.html b/docs/_includes/note.html
new file mode 100644
index 0000000000000..a0ac41357bcbb
--- /dev/null
+++ b/docs/_includes/note.html
@@ -0,0 +1,23 @@
+
+
+
diff --git a/docs/dev/batch/index.md b/docs/dev/batch/index.md
index 51900055c96b3..d488571a0303a 100644
--- a/docs/dev/batch/index.md
+++ b/docs/dev/batch/index.md
@@ -42,6 +42,17 @@ and gradually add your own [transformations](#dataset-transformations). The
 remaining sections act as references for additional operations and advanced
 features.
 
+{% capture deprecation_note %}
+Starting with Flink 1.12, the DataSet API has been soft-deprecated.
+We recommend that you use the DataStream API with BATCH [execution mode]({%
+link dev/datastream_execution_mode.md %}). The linked section also outlines
+cases where it still makes sense to use the DataSet API, but those cases will
+become rarer as development progresses and the DataSet API will eventually be
+removed. Please also see [FLIP-131](https://cwiki.apache.org/confluence/x/NR14CQ)
+for background information on this decision.
+{% endcapture %} +{% include note.html content=deprecation_note %} + * This will be replaced by the TOC {:toc} From 5fb8fb7dba52d56dae9722e2875b051b204a0c52 Mon Sep 17 00:00:00 2001 From: Aljoscha Krettek Date: Tue, 24 Nov 2020 17:14:09 +0100 Subject: [PATCH 6/6] fixup! add license to svg --- docs/fig/datastream-example-job-graph.svg | 22 +++++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/docs/fig/datastream-example-job-graph.svg b/docs/fig/datastream-example-job-graph.svg index b6a9b446f809d..3482b638d9bb5 100644 --- a/docs/fig/datastream-example-job-graph.svg +++ b/docs/fig/datastream-example-job-graph.svg @@ -1 +1,21 @@ - \ No newline at end of file + + + +