Changes from all commits
92 commits
3859968
[FLINK-10168] [Core/DataStream] Add FileFilter to FileInputFormat and…
bowenli86 Nov 15, 2018
3eac419
[FLINK-11045][table] Set correct UserCodeClassLoader for RuntimeUDFCo…
hequn8128 Dec 2, 2018
600bd9c
[FLINK-10522][fs-connector] Check if RecoverableWriter supportsResume…
kl0u Dec 4, 2018
2e1cbf2
[hotfix] Fixing the broken code examples
Dec 4, 2018
cc19bb0
[hotfix][docs] Fix typo in Windows documentation
KarmaGYZ Dec 5, 2018
b3a378a
[FLINK-9555][scala-shell] Support table api in scala shell.
shuiqiangchen Nov 16, 2018
3e9f494
[hotfix][docs] Fix invalid link in schema_evolution doc
KarmaGYZ Dec 2, 2018
339ec17
[FLINK-10997][formats] Bundle kafka-scheme-registry-client
zentol Nov 23, 2018
ed8ff14
[FLINK-10987] Add LICENSE & NOTICE files for flink-avro-confluent-reg…
zentol Nov 27, 2018
64f6d0b
[FLINK-11011][E2E][JM] Log error messages about null CheckpointCoordi…
azagrebin Dec 6, 2018
114cb2c
[FLINK-10482] Fix double counting of checkpoint stat
azagrebin Nov 15, 2018
eaead20
[hotfix][documentation] Mention limitation of local recovery with Roc…
StefanRRichter Dec 7, 2018
5716e4d
[FLINK-10543][table] Leverage efficient timer deletion in relational …
hequn8128 Oct 24, 2018
1a0d49e
[FLINK-10865] Add Aliyun OSS file systems without Hadoop dependencies
wujinhu Nov 16, 2018
375c428
[FLINK-10985][tests] Enable submission of multiple jobs. (#7166)
GJL Dec 10, 2018
1438051
[FLINK-10986][tests] Enable fine-grained configuration of required dbs
GJL Nov 26, 2018
c6d0446
[FLINK-10986][tests] Implement DB to setup Apache Kafka
GJL Nov 26, 2018
e6d98a3
[hotfix] Move code from nemesis to generator
GJL Nov 26, 2018
a6a2ffd
[hotfix] Rename keys-as-allowed-values-help-text to keys->allowed-val…
GJL Nov 26, 2018
be80fcc
[FLINK-10986][tests] Add example on how to run a Jepsen test with Kafka
GJL Nov 26, 2018
1dd68ec
[hotfix] Remove redundant keyword conversion when validating CLI args
GJL Nov 26, 2018
9bcad11
[hotfix][docs] Add and update description of local-recovery config op…
KarmaGYZ Dec 2, 2018
0382830
[FLINK-9552][iterations] fix not syncing on checkpoint lock before em…
Oct 30, 2018
156c09c
[FLINK-11123][docs] fix the import of the class is missing in ml quic…
sunjincheng121 Dec 11, 2018
2ab4d41
[FLINK-10743] [runtime] Use 0 processExitCode for ApplicationStatus.C…
uce Oct 31, 2018
bc41b75
[hotfix] [state backend, tests] Certain StateBackendMigrationTestBase…
tzulitai Dec 7, 2018
3b16b4e
[hotfix] [state backends] New namespace serializer in HeapKeyedStateB…
tzulitai Dec 7, 2018
7fe1743
[hotfix] [state backends, tests] Make StateBackendMigrationTestBase m…
tzulitai Dec 8, 2018
b879a49
[FLINK-11094] [rocksdb] Eagerly create meta infos for restored state …
tzulitai Dec 9, 2018
ebc46c5
[FLINK-11094] [state backends] Let meta infos always lazily access re…
tzulitai Dec 9, 2018
97f556e
[FLINK-11094] [state backends] State backends no longer need separate…
tzulitai Dec 10, 2018
735e2f8
[hotfix] [tests] Remove unused enum from StateBackendTestBase
tzulitai Dec 9, 2018
34ca8b7
[hotfix] Cleanup unused methods / appropriate method renames in State…
tzulitai Dec 9, 2018
8b898cd
[FLINK-11087] [state] Incorrect broadcast state K/V serializer snapsh…
tzulitai Dec 6, 2018
a30164b
[FLINK-11087] [docs] Amend compatibility table to notify issue with r…
tzulitai Dec 6, 2018
7067798
[hotfix] [docs] Fix typo in Joining documentation
Dec 11, 2018
30a7c5a
[hotfix] [docs] Fix tEnv in tableApi docs
Dec 6, 2018
0078a7c
[hotfix] [docs] Correct the parameter in Operators Overview doc
KarmaGYZ Dec 3, 2018
de454e5
[FLINK-11029] [docs] Fixed incorrect parameter in Working with state doc
KarmaGYZ Nov 29, 2018
2f7bb1c
[FLINK-10359] [docs] Scala example in DataSet docs is broken
yanghua Dec 10, 2018
927936d
[hotfix][test][streaming] Fix invalid testNotSideOutputXXX in WindowO…
Dec 3, 2018
70b2029
[FLINK-11041][test] ReinterpretDataStreamAsKeyedStreamITCase source s…
StefanRRichter Dec 10, 2018
8c6ff79
[FLINK-11090][streaming api] Remove unused parameter in WindowedStrea…
hequn8128 Dec 11, 2018
7e6feea
[FLINK-10252][metrics] Pass akkaFrameSize to MetricQueryService
yanghua Oct 18, 2018
1cf6d30
[FLINK-10252][metrics] Handle oversized metric messages
yanghua Oct 18, 2018
8492d00
[hotfix][metrics][tests] Ensure ActorSystem is terminated
zentol Dec 7, 2018
f8aee29
[hotfix][metrics][tests] Refactor MetricQueryServiceTest
zentol Dec 6, 2018
a79fb6f
[hotfix][docs][table] Fix string concatenation in avro example
maqingxiang Dec 11, 2018
547a503
[hotfix][build] Add Aliyun FS dependency to flink-dist
zentol Dec 11, 2018
3c5eb92
[FLINK-11080][ES] Rework shade-plugin filters
zentol Dec 5, 2018
183ca20
[hotfix] [docs] Fix typo in dynamic tables documentation
zzchun Dec 11, 2018
ff42f87
[FLINK-11125] clean up useless imports
hequn8128 Dec 11, 2018
ea8373e
[FLINK-11085][build] Fix inheritance of shading filters
zentol Dec 7, 2018
f670cac
[hotfix] [docs] Fix typos in Table and SQL docs
Dec 12, 2018
c8dc83c
[hotfix] [docs] Improve DataSet.partitionCustom() documentation.
KarmaGYZ Dec 10, 2018
ed0aefa
[FLINK-11136] [table] Fix the merge logic of DISTINCT aggregates.
dianfu Dec 10, 2018
ff9b7f1
[FLINK-11001] [table] Fix alias on window rowtime attribute in Java T…
hequn8128 Dec 12, 2018
41d4bf4
[hotfix] [docs] Add notice about buggy DATE_FORMAT function
twalthr Dec 13, 2018
cec17d0
[hotfix] [docs] Add notice about buggy dateFormat() function
twalthr Dec 13, 2018
e546069
[FLINK-11122][core] Change signature of WrappingProxyUtil#stripProxy(T)
GJL Dec 10, 2018
ca28085
[hotfix][core] Fix typo in WrappingProxyUtil Javadoc
GJL Dec 12, 2018
dcddf0b
[FLINK-11144][tests] Make tests runnable on Java 9
GJL Dec 12, 2018
d29848d
[FLINK-11152][core] Use asm 6 in ClosureCleaner
GJL Dec 13, 2018
951ae9a
[FLINK-11145] Fix Hadoop version handling in binary release script
tweise Dec 11, 2018
bc4194a
[FLINK-11040][docs] fixed the Incorrect generic type of output in bro…
KarmaGYZ Dec 14, 2018
2870c7a
[FLINK-10566] Fix exponential planning time of large programs
mxm Dec 11, 2018
1f9f13e
[hotfix] [docs] Improve the correctness in Detecting Patterns doc
KarmaGYZ Dec 14, 2018
ff848d0
[hotfix] [docs] Correct the field name in Connect to External Systems…
KarmaGYZ Dec 14, 2018
9a45fca
[FLINK-11074] [table][tests] Enable harness tests with RocksdbStateBa…
dianfu Dec 6, 2018
eab3ce7
[FLINK-7599][table] Refactored AggregateUtil#transformToAggregateFunc…
dawidwys Nov 26, 2018
04c44f3
[hotfix][table] Move generate runner functions to MatchCodeGenerator …
dawidwys Dec 5, 2018
1458caf
[FLINK-7599][table, cep] Support for aggregates in MATCH_RECOGNIZE
dawidwys Nov 26, 2018
185be98
[FLINK-7599][docs] Updated MATCH_RECOGNIZE documentation with aggrega…
dawidwys Nov 27, 2018
41b3a3b
[FLINK-7599][table] Throw exception on DISTINCT in aggregations in MA…
dawidwys Nov 29, 2018
31665d6
[hotfix][docs] Unified indentation
dawidwys Dec 16, 2018
45f9ccc
[FLINK-11083] [Table&SQL] CRowSerializerConfigSnapshot is not instant…
Dec 10, 2018
b078c80
[FLINK-10457][fs-connector] Add SequenceFile support to StreamingFile…
Sep 28, 2018
cf88590
[FLINK-10457][fs-connector] Add SequenceFile support to StreamingFile…
kl0u Dec 11, 2018
030743f
[FLINK-11100][s3][tests] Add FS type argument to s3_setup
zentol Dec 7, 2018
a431f76
[FLINK-11101][S3] Ban openjdk.jol dependencies
zentol Dec 12, 2018
84b6616
[FLINK-11151][rest] Create parent directories in FileUploadHandler
zentol Dec 13, 2018
68323de
[FLINK-11168][tests] Relax time-constraints for LargePlanTest
mxm Dec 18, 2018
eb7039d
[FLINK-11026][ES6] Rework creation of fat sql-client jar
zentol Dec 18, 2018
4616c60
[FLINK-10784][tests] Update FlinkKafkaConsumerBaseMigrationTest for 1.7
yanghua Dec 18, 2018
aeabfc5
[FLINK-10788][tests] Update ContinuousFileProcessingMigrationTest for…
yanghua Dec 18, 2018
d1e8dd9
[FLINK-10782][tests] Update AbstractKeyedOperatorRestoreTestBase for 1.7
yanghua Dec 18, 2018
ab33429
[FLINK-10787][tests] Update AbstractNonKeyedOperatorRestoreTestBase f…
yanghua Dec 18, 2018
e954ae8
[FLINK-10786][tests][cep] Update CEPMigrationTest for 1.7
yanghua Dec 18, 2018
9d62dbc
[hotfix][docs] Fix typos in documentation
zzchun Dec 18, 2018
6dfeb1a
[hotfix][docs][table] Fix various examples
maqingxiang Dec 18, 2018
660a71d
[FLINK-10168] [Core/DataStream] Add FileFilter to FileInputFormat and…
bowenli86 Nov 15, 2018
630980b
Merge branch 'FLINK-10168' of https://github.com/bowenli86/flink into…
bowenli86 Dec 18, 2018
4 changes: 2 additions & 2 deletions docs/_includes/generated/checkpointing_configuration.html
@@ -30,7 +30,7 @@
<tr>
<td><h5>state.backend.local-recovery</h5></td>
<td style="word-wrap: break-word;">false</td>
<td></td>
<td>This option configures local recovery for this state backend. By default, local recovery is deactivated. Local recovery currently only covers keyed state backends. Currently, MemoryStateBackend does not support local recovery and ignore this option.</td>
</tr>
<tr>
<td><h5>state.checkpoints.dir</h5></td>
@@ -50,7 +50,7 @@
<tr>
<td><h5>taskmanager.state.local.root-dirs</h5></td>
<td style="word-wrap: break-word;">(none)</td>
<td></td>
<td>The config parameter defining the root directories for storing file-based state for local recovery. Local recovery currently only covers keyed state backends. Currently, MemoryStateBackend does not support local recovery and ignore this option</td>
</tr>
</tbody>
</table>
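
As a hedged illustration of the two options documented above, the same keys can be set programmatically (or equivalently in flink-conf.yaml); the directory path below is an assumption, and MemoryStateBackend ignores these settings:

```java
import org.apache.flink.configuration.Configuration;

// Sketch only: keys taken from the table above, the path is a placeholder.
Configuration conf = new Configuration();
conf.setBoolean("state.backend.local-recovery", true);
conf.setString("taskmanager.state.local.root-dirs", "/tmp/flink-local-recovery");
```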
4 changes: 2 additions & 2 deletions docs/dev/batch/dataset_transformations.md
@@ -712,7 +712,7 @@ class MyCombinableGroupReducer
out: Collector[String]): Unit =
{
val r: (String, Int) =
in.asScala.reduce( (a,b) => (a._1, a._2 + b._2) )
in.iterator.asScala.reduce( (a,b) => (a._1, a._2 + b._2) )
// concat key and sum and emit
out.collect (r._1 + "-" + r._2)
}
@@ -722,7 +722,7 @@
out: Collector[(String, Int)]): Unit =
{
val r: (String, Int) =
in.asScala.reduce( (a,b) => (a._1, a._2 + b._2) )
in.iterator.asScala.reduce( (a,b) => (a._1, a._2 + b._2) )
// emit tuple with key and sum
out.collect(r)
}
15 changes: 9 additions & 6 deletions docs/dev/batch/index.md
@@ -401,12 +401,14 @@ DataSet<Integer> result = in.partitionByRange(0)
<tr>
<td><strong>Custom Partitioning</strong></td>
<td>
<p>Manually specify a partitioning over the data.
<p>Assigns records based on a key to a specific partition using a custom Partitioner function.
The key can be specified as position key, expression key, and key selector function.
<br/>
<i>Note</i>: This method works only on single field keys.</p>
<i>Note</i>: This method only works with a single field key.</p>
{% highlight java %}
DataSet<Tuple2<String,Integer>> in = // [...]
DataSet<Integer> result = in.partitionCustom(Partitioner<K> partitioner, key)
DataSet<Integer> result = in.partitionCustom(partitioner, key)
.mapPartition(new PartitionMapper());
{% endhighlight %}
</td>
</tr>
@@ -703,13 +705,14 @@ val result = in.partitionByRange(0).mapPartition { ... }
<tr>
<td><strong>Custom Partitioning</strong></td>
<td>
<p>Manually specify a partitioning over the data.
<p>Assigns records based on a key to a specific partition using a custom Partitioner function.
The key can be specified as position key, expression key, and key selector function.
<br/>
<i>Note</i>: This method works only on single field keys.</p>
<i>Note</i>: This method only works with a single field key.</p>
{% highlight scala %}
val in: DataSet[(Int, String)] = // [...]
val result = in
.partitionCustom(partitioner: Partitioner[K], key)
.partitionCustom(partitioner, key).mapPartition { ... }
{% endhighlight %}
</td>
</tr>
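
For illustration, a hedged sketch of the custom-partitioning call shown above with a concrete single-field position key; the hash-based Partitioner and the MyPartitionMapper name are assumptions, not part of the documented change:

```java
import org.apache.flink.api.common.functions.Partitioner;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.tuple.Tuple2;

// Illustrative partitioner: routes each record by the hash of its String key field.
public class StringHashPartitioner implements Partitioner<String> {
    @Override
    public int partition(String key, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

DataSet<Tuple2<String, Integer>> in = // [...]
DataSet<Tuple2<String, Integer>> result = in
    .partitionCustom(new StringHashPartitioner(), 0)  // position key: field 0
    .mapPartition(new MyPartitionMapper());           // assumed MapPartitionFunction preserving the tuple type
```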
20 changes: 7 additions & 13 deletions docs/dev/connectors/streamfile_sink.md
@@ -60,17 +60,14 @@ Basic usage thus looks like this:
<div class="codetabs" markdown="1">
<div data-lang="java" markdown="1">
{% highlight java %}
import org.apache.flink.api.common.serialization.Encoder;
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

DataStream<String> input = ...;

final StreamingFileSink<String> sink = StreamingFileSink
.forRowFormat(new Path(outputPath), (Encoder<String>) (element, stream) -> {
PrintStream out = new PrintStream(stream);
out.println(element.f1);
})
.forRowFormat(new Path(outputPath), new SimpleStringEncoder<>("UTF-8"))
.build();

input.addSink(sink);
@@ -79,19 +76,16 @@ input.addSink(sink);
</div>
<div data-lang="scala" markdown="1">
{% highlight scala %}
import org.apache.flink.api.common.serialization.Encoder
import org.apache.flink.api.common.serialization.SimpleStringEncoder
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink

val input: DataStream[String] = ...

final StreamingFileSink[String] sink = StreamingFileSink
.forRowFormat(new Path(outputPath), (element, stream) => {
val out = new PrintStream(stream)
out.println(element.f1)
})
.build()

val sink: StreamingFileSink[String] = StreamingFileSink
.forRowFormat(new Path(outputPath), new SimpleStringEncoder[String]("UTF-8"))
.build()

input.addSink(sink)

{% endhighlight %}
2 changes: 2 additions & 0 deletions docs/dev/libs/ml/quickstart.md
@@ -153,6 +153,8 @@ A conversion can be done using a simple normalizer mapping function:

{% highlight scala %}

import org.apache.flink.ml.math.Vector

def normalizer : LabeledVector => LabeledVector = {
lv => LabeledVector(if (lv.label > 0.0) 1.0 else -1.0, lv.vector)
}
4 changes: 2 additions & 2 deletions docs/dev/stream/operators/index.md
@@ -434,14 +434,14 @@ IterativeStream<Long> iteration = initialStream.iterate();
DataStream<Long> iterationBody = iteration.map (/*do something*/);
DataStream<Long> feedback = iterationBody.filter(new FilterFunction<Long>(){
@Override
public boolean filter(Integer value) throws Exception {
public boolean filter(Long value) throws Exception {
return value > 0;
}
});
iteration.closeWith(feedback);
DataStream<Long> output = iterationBody.filter(new FilterFunction<Long>(){
@Override
public boolean filter(Integer value) throws Exception {
public boolean filter(Long value) throws Exception {
return value <= 0;
}
});
2 changes: 1 addition & 1 deletion docs/dev/stream/operators/joining.md
@@ -101,7 +101,7 @@ orangeStream.join(greenStream)
</div>

## Sliding Window Join
When performing a sliding window join, all elements with a common key and common sliding window are joined are pairwise combinations and passed on to the `JoinFunction` or `FlatJoinFunction`. Elements of one stream that do not have elements from the other stream in the current sliding window are not emitted! Note that some elements might be joined in one sliding window but not in another!
When performing a sliding window join, all elements with a common key and common sliding window are joined as pairwise combinations and passed on to the `JoinFunction` or `FlatJoinFunction`. Elements of one stream that do not have elements from the other stream in the current sliding window are not emitted! Note that some elements might be joined in one sliding window but not in another!

<img src="{{ site.baseurl }}/fig/sliding-window-join.svg" class="center" style="width: 80%;" />

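
A hedged sketch of the sliding window join described in the corrected sentence, mirroring the tumbling-window example earlier in that page; the key selectors and window sizes are assumptions chosen to match the figure:

```java
import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

// orangeStream and greenStream are assumed DataStream<Integer>s keyed on their values.
orangeStream.join(greenStream)
    .where(value -> value)
    .equalTo(value -> value)
    .window(SlidingEventTimeWindows.of(Time.milliseconds(2), Time.milliseconds(1)))
    .apply(new JoinFunction<Integer, Integer, String>() {
        @Override
        public String join(Integer first, Integer second) {
            return first + "," + second;
        }
    });
```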
4 changes: 2 additions & 2 deletions docs/dev/stream/operators/windows.md
@@ -783,7 +783,7 @@ When the window is closed, the `ProcessWindowFunction` will be provided with the
This allows it to incrementally compute windows while having access to the
additional window meta information of the `ProcessWindowFunction`.

<span class="label label-info">Note</span> You can also the legacy `WindowFunction` instead of
<span class="label label-info">Note</span> You can also use the legacy `WindowFunction` instead of
`ProcessWindowFunction` for incremental window aggregation.

#### Incremental Window Aggregation with ReduceFunction
@@ -1034,7 +1034,7 @@ different keys and events for all of them currently fall into the *[12:00, 13:00*
then there will be 1000 window instances that each have their own keyed per-window state.

There are two methods on the `Context` object that a `process()` invocation receives that allow
access two the two types of state:
access to the two types of state:

- `globalState()`, which allows access to keyed state that is not scoped to a window
- `windowState()`, which allows access to keyed state that is also scoped to the window
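
A hedged sketch of how the two accessors named above are typically used inside process(); the element type, state names, and counting logic are assumptions for illustration:

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class CountingWindowFunction
        extends ProcessWindowFunction<String, String, String, TimeWindow> {

    @Override
    public void process(String key, Context ctx, Iterable<String> elements,
                        Collector<String> out) throws Exception {
        // keyed state scoped to the current window instance
        ValueState<Long> perWindow = ctx.windowState().getState(
                new ValueStateDescriptor<>("per-window-count", Long.class));
        // keyed state shared by all windows of this key
        ValueState<Long> global = ctx.globalState().getState(
                new ValueStateDescriptor<>("global-count", Long.class));

        long seen = 0;
        for (String ignored : elements) {
            seen++;
        }
        perWindow.update(seen);
        global.update((global.value() == null ? 0L : global.value()) + seen);
        out.collect(key + ": " + seen + " in window, " + global.value() + " overall");
    }
}
```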
4 changes: 2 additions & 2 deletions docs/dev/stream/state/broadcast_state.md
@@ -94,7 +94,7 @@ The exact type of the function depends on the type of the non-broadcasted stream
</div>

{% highlight java %}
DataStream<Match> output = colorPartitionedStream
DataStream<String> output = colorPartitionedStream
.connect(ruleBroadcastStream)
.process(

@@ -107,7 +107,7 @@ DataStream<Match> output = colorPartitionedStream
new KeyedBroadcastProcessFunction<Color, Item, Rule, String>() {
// my matching logic
}
)
);
{% endhighlight %}

### BroadcastProcessFunction and KeyedBroadcastProcessFunction
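
The "my matching logic" placeholder above usually expands into the two callbacks sketched below; the rule-state descriptor name and the Rule#name field are assumptions borrowed from the broadcast-state pattern, not part of this change:

```java
import java.util.Map;

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
import org.apache.flink.util.Collector;

new KeyedBroadcastProcessFunction<Color, Item, Rule, String>() {

    // assumed descriptor; it must match the one used when broadcasting the rule stream
    private final MapStateDescriptor<String, Rule> ruleStateDescriptor =
            new MapStateDescriptor<>("RulesBroadcastState", String.class, Rule.class);

    @Override
    public void processElement(Item value, ReadOnlyContext ctx,
                               Collector<String> out) throws Exception {
        // read-only view of the broadcast state on the keyed (non-broadcast) side
        for (Map.Entry<String, Rule> entry :
                ctx.getBroadcastState(ruleStateDescriptor).immutableEntries()) {
            // ... my matching logic, emitting results via out.collect(...)
        }
    }

    @Override
    public void processBroadcastElement(Rule value, Context ctx,
                                        Collector<String> out) throws Exception {
        // read-write access: remember the incoming rule for later matching
        ctx.getBroadcastState(ruleStateDescriptor).put(value.name, value);
    }
};
```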
2 changes: 1 addition & 1 deletion docs/dev/stream/state/schema_evolution.md
@@ -74,7 +74,7 @@ serialization schema than the previous serializer; if so, the previous serialize
and written back to bytes again with the new serializer.

Further details about the migration process is out of the scope of this documentation; please refer to
[here]({{ site.baseurl }}/dev/stream/state/custom_serialization).
[here]({{ site.baseurl }}/dev/stream/state/custom_serialization.html).

## Supported data types for schema evolution

2 changes: 1 addition & 1 deletion docs/dev/stream/state/state.md
@@ -142,7 +142,7 @@ is available in a `RichFunction` has these methods for accessing state:
* `ValueState<T> getState(ValueStateDescriptor<T>)`
* `ReducingState<T> getReducingState(ReducingStateDescriptor<T>)`
* `ListState<T> getListState(ListStateDescriptor<T>)`
* `AggregatingState<IN, OUT> getAggregatingState(AggregatingState<IN, OUT>)`
* `AggregatingState<IN, OUT> getAggregatingState(AggregatingStateDescriptor<IN, ACC, OUT>)`
* `FoldingState<T, ACC> getFoldingState(FoldingStateDescriptor<T, ACC>)`
* `MapState<UK, UV> getMapState(MapStateDescriptor<UK, UV>)`

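
A hedged sketch of the corrected accessor signature in use: the descriptor carries the AggregateFunction plus the accumulator type, and getAggregatingState() is called from within a RichFunction. The running-average function is an assumption for illustration:

```java
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.common.state.AggregatingState;
import org.apache.flink.api.common.state.AggregatingStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;

AggregatingStateDescriptor<Long, Tuple2<Long, Long>, Double> descriptor =
    new AggregatingStateDescriptor<>(
        "average",
        new AggregateFunction<Long, Tuple2<Long, Long>, Double>() {
            @Override
            public Tuple2<Long, Long> createAccumulator() {
                return Tuple2.of(0L, 0L);
            }
            @Override
            public Tuple2<Long, Long> add(Long value, Tuple2<Long, Long> acc) {
                return Tuple2.of(acc.f0 + value, acc.f1 + 1);
            }
            @Override
            public Double getResult(Tuple2<Long, Long> acc) {
                return acc.f1 == 0 ? 0.0 : ((double) acc.f0) / acc.f1;
            }
            @Override
            public Tuple2<Long, Long> merge(Tuple2<Long, Long> a, Tuple2<Long, Long> b) {
                return Tuple2.of(a.f0 + b.f0, a.f1 + b.f1);
            }
        },
        TypeInformation.of(new TypeHint<Tuple2<Long, Long>>() {}));

// inside a RichFunction:
AggregatingState<Long, Double> average =
    getRuntimeContext().getAggregatingState(descriptor);
```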
14 changes: 7 additions & 7 deletions docs/dev/table/connect.md
@@ -34,7 +34,7 @@ This page describes how to declare built-in table sources and/or table sinks and
Dependencies
------------

The following table list all available connectors and formats. Their mutual compatibility is tagged in the corresponding sections for [table connectors](connect.html#table-connectors) and [table formats](connect.html#table-formats). The following table provides dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
The following tables list all available connectors and formats. Their mutual compatibility is tagged in the corresponding sections for [table connectors](connect.html#table-connectors) and [table formats](connect.html#table-formats). The following tables provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.

{% if site.is_stable %}

@@ -43,7 +43,7 @@ The following table list all available connectors and formats. Their mutual comp
| Name | Version | Maven dependency | SQL Client JAR |
| :---------------- | :------------------ | :--------------------------- | :----------------------|
| Filesystem | | Built-in | Built-in |
| Elasticsearch | 6 | `flink-connector-elasticsearch6` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-connector-elasticsearch6{{site.scala_version_suffix}}/{{site.version}}/flink-connector-elasticsearch6{{site.scala_version_suffix}}-{{site.version}}-sql-jar.jar) |
| Elasticsearch | 6 | `flink-connector-elasticsearch6` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch6{{site.scala_version_suffix}}/{{site.version}}/flink-sql-connector-elasticsearch6{{site.scala_version_suffix}}-{{site.version}}.jar) |
| Apache Kafka | 0.8 | `flink-connector-kafka-0.8` | Not available |
| Apache Kafka | 0.9 | `flink-connector-kafka-0.9` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-connector-kafka-0.9{{site.scala_version_suffix}}/{{site.version}}/flink-connector-kafka-0.9{{site.scala_version_suffix}}-{{site.version}}-sql-jar.jar) |
| Apache Kafka | 0.10 | `flink-connector-kafka-0.10` | [Download](http://central.maven.org/maven2/org/apache/flink/flink-connector-kafka-0.10{{site.scala_version_suffix}}/{{site.version}}/flink-connector-kafka-0.10{{site.scala_version_suffix}}-{{site.version}}-sql-jar.jar) |
@@ -60,7 +60,7 @@ The following table list all available connectors and formats. Their mutual comp

{% else %}

This table is only available for stable releases.
These tables are only available for stable releases.

{% endif %}

@@ -145,7 +145,7 @@ tableEnvironment
" {\"name\": \"user\", \"type\": \"long\"}," +
" {\"name\": \"message\", \"type\": [\"string\", \"null\"]}" +
" ]" +
"}" +
"}"
)
)

@@ -154,7 +154,7 @@ tableEnvironment
new Schema()
.field("rowtime", Types.SQL_TIMESTAMP)
.rowtime(new Rowtime()
.timestampsFromField("ts")
.timestampsFromField("timestamp")
.watermarksPeriodicBounded(60000)
)
.field("user", Types.LONG)
@@ -1166,7 +1166,7 @@ ClusterBuilder builder = ... // configure Cassandra cluster connection
CassandraAppendTableSink sink = new CassandraAppendTableSink(
builder,
// the query must match the schema of the table
INSERT INTO flink.myTable (id, name, value) VALUES (?, ?, ?));
"INSERT INTO flink.myTable (id, name, value) VALUES (?, ?, ?)");

tableEnv.registerTableSink(
"cassandraOutputTable",
@@ -1187,7 +1187,7 @@ val builder: ClusterBuilder = ... // configure Cassandra cluster connection
val sink: CassandraAppendTableSink = new CassandraAppendTableSink(
builder,
// the query must match the schema of the table
INSERT INTO flink.myTable (id, name, value) VALUES (?, ?, ?))
"INSERT INTO flink.myTable (id, name, value) VALUES (?, ?, ?)")

tableEnv.registerTableSink(
"cassandraOutputTable",
8 changes: 3 additions & 5 deletions docs/dev/table/functions.md
@@ -3487,7 +3487,7 @@ DATE_FORMAT(timestamp, string)
{% endhighlight %}
</td>
<td>
<p>Returns a string that formats <i>timestamp</i> with a specified format <i>string</i>. The format specification is given in the <a href="#date-format-specifiers">Date Format Specifier table</a>.</p>
<p><span class="label label-danger">Attention</span> This function has serious bugs and should not be used for now. Please implement a custom UDF instead or use EXTRACT as a workaround.</p>
</td>
</tr>

@@ -3756,8 +3756,7 @@ dateFormat(TIMESTAMP, STRING)
{% endhighlight %}
</td>
<td>
<p>Returns a string that formats <i>TIMESTAMP</i> with a specified format <i>STRING</i>. The format specification is given in the <a href="#date-format-specifiers">Date Format Specifier table</a>.</p>
<p>E.g., <code>dateFormat(ts, '%Y, %d %M')</code> results in strings formatted as "2017, 05 May".</p>
<p><span class="label label-danger">Attention</span> This function has serious bugs and should not be used for now. Please implement a custom UDF instead or use extract() as a workaround.</p>
</td>
</tr>

@@ -4014,8 +4013,7 @@ dateFormat(TIMESTAMP, STRING)
{% endhighlight %}
</td>
<td>
<p>Returns a string that formats <i>TIMESTAMP</i> with a specified format <i>STRING</i>. The format specification is given in the <a href="#date-format-specifiers">Date Format Specifier table</a>.</p>
<p>E.g., <code>dateFormat('ts, "%Y, %d %M")</code> results in strings formatted as "2017, 05 May".</p>
<p><span class="label label-danger">Attention</span> This function has serious bugs and should not be used for now. Please implement a custom UDF instead or use extract() as a workaround.</p>
</td>
</tr>

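
A hedged sketch of the "custom UDF" workaround mentioned in the new notes; the class name, the java.time-based formatting, and the registered function name are assumptions:

```java
import java.sql.Timestamp;
import java.time.format.DateTimeFormatter;

import org.apache.flink.table.functions.ScalarFunction;

// Formats a SQL timestamp with a java.time pattern, side-stepping DATE_FORMAT.
public class FormatTimestamp extends ScalarFunction {
    public String eval(Timestamp ts, String pattern) {
        if (ts == null || pattern == null) {
            return null;
        }
        return ts.toLocalDateTime().format(DateTimeFormatter.ofPattern(pattern));
    }
}

// usage sketch:
//   tableEnv.registerFunction("FORMAT_TS", new FormatTimestamp());
//   SELECT FORMAT_TS(ts, 'yyyy-MM-dd') FROM MyTable
```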
2 changes: 1 addition & 1 deletion docs/dev/table/sql.md
@@ -22,7 +22,7 @@ specific language governing permissions and limitations
under the License.
-->

SQL queries are specified with the `sqlQuery()` method of the `TableEnvironment`. The method returns the result of the SQL query as a `Table`. A `Table` can be used in [subsequent SQL and Table API queries](common.html#mixing-table-api-and-sql), be [converted into a DataSet or DataStream](common.html#integration-with-datastream-and-dataset-api), or [written to a TableSink](common.html#emit-a-table)). SQL and Table API queries can seamlessly mixed and are holistically optimized and translated into a single program.
SQL queries are specified with the `sqlQuery()` method of the `TableEnvironment`. The method returns the result of the SQL query as a `Table`. A `Table` can be used in [subsequent SQL and Table API queries](common.html#mixing-table-api-and-sql), be [converted into a DataSet or DataStream](common.html#integration-with-datastream-and-dataset-api), or [written to a TableSink](common.html#emit-a-table)). SQL and Table API queries can be seamlessly mixed and are holistically optimized and translated into a single program.

In order to access a table in a SQL query, it must be [registered in the TableEnvironment](common.html#register-tables-in-the-catalog). A table can be registered from a [TableSource](common.html#register-a-tablesource), [Table](common.html#register-a-table), [DataStream, or DataSet](common.html#register-a-datastream-or-dataset-as-table). Alternatively, users can also [register external catalogs in a TableEnvironment](common.html#register-an-external-catalog) to specify the location of the data sources.

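
A hedged end-to-end sketch of the flow described in the corrected sentence; the "Orders" table, its fields, and the sink name are assumptions:

```java
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

StreamTableEnvironment tableEnv = ...; // obtained from a StreamExecutionEnvironment

// "Orders" must already be registered in the TableEnvironment's catalog
Table revenue = tableEnv.sqlQuery(
    "SELECT cID, SUM(amount) AS revenue FROM Orders GROUP BY cID");

// seamlessly mix with the Table API, then emit to a registered TableSink
Table large = revenue.filter("revenue > 100");
large.insertInto("RevenueSink");
```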