Commit 06d43cb

a49apan3793
authored and committed
[KYUUBI #3072][DOC] Add a doc of Flink Table Store for Flink SQL engine
[KYUUBI #3072][DOC] Add a doc of Flink Table Store for Flink SQL engine

### _Why are the changes needed?_

### _How was this patch tested?_

- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
- [ ] Add screenshots for manual tests if appropriate
- [ ] [Run test](https://kyuubi.apache.org/docs/latest/develop_tools/testing.html#running-tests) locally before make a pull request

Closes #3234 from deadwind4/fts-flink-doc.

Closes #3072

f389629 [Luning Wang] [KYUUBI #3072][DOC] Add a doc of Flink Table Store for Flink SQL engine

Authored-by: Luning Wang <wang4luning@gmail.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
1 parent 23ad780 commit 06d43cb

2 files changed: +112 −0 lines changed
Lines changed: 111 additions & 0 deletions
.. Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

.. http://www.apache.org/licenses/LICENSE-2.0

.. Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
`Flink Table Store`_
====================
Flink Table Store is a unified storage to build dynamic tables for both streaming and batch processing in Flink,
supporting high-speed data ingestion and timely data query.
.. tip::
   This article assumes that you have mastered the basic knowledge and operation of `Flink Table Store`_.
   For the knowledge about Flink Table Store not mentioned in this article,
   you can obtain it from its `Official Documentation`_.
By using Kyuubi, we can run SQL queries against Flink Table Store in a way that is more
convenient, easier to understand, and easier to extend than manipulating Flink Table Store
with Flink directly.
Flink Table Store Integration
-----------------------------
To enable the integration of the Kyuubi Flink SQL engine and Flink Table Store, you need to:

- Reference the Flink Table Store :ref:`dependencies<flink-table-store-deps>`

.. _flink-table-store-deps:

Dependencies
************
The **classpath** of the Kyuubi Flink SQL engine with Flink Table Store support consists of:

1. kyuubi-flink-sql-engine-|release|_2.12.jar, the engine jar deployed with Kyuubi distributions
2. a copy of the Flink distribution
3. flink-table-store-dist-<version>.jar (for example, flink-table-store-dist-0.2.jar), which can be found in `Maven Central`_
In order to make the Flink Table Store packages visible to the runtime classpath of engines, we can use one of these methods:

1. Put the Flink Table Store packages into ``$FLINK_HOME/lib`` directly
2. Set the ``HADOOP_CLASSPATH`` environment variable, or copy the `Pre-bundled Hadoop Jar`_ into ``$FLINK_HOME/lib``
.. warning::
   Please mind the compatibility of different Flink Table Store and Flink versions, which can be confirmed on the page of `Flink Table Store multi engine support`_.
Flink Table Store Operations
----------------------------
Taking ``CREATE CATALOG`` as an example:

.. code-block:: sql

   CREATE CATALOG my_catalog WITH (
       'type' = 'table-store',
       'warehouse' = 'hdfs://nn:8020/warehouse/path' -- or 'file:///tmp/foo/bar'
   );

   USE CATALOG my_catalog;
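
A Flink Table Store catalog can hold multiple databases, managed with standard Flink SQL
statements. As a small illustrative sketch (``my_db`` is a hypothetical database name; new
sessions start in the ``default`` database):

.. code-block:: sql

   -- create a database inside the Table Store catalog and switch to it
   CREATE DATABASE IF NOT EXISTS my_db;
   USE my_db;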
Taking ``CREATE TABLE`` as an example:

.. code-block:: sql

   CREATE TABLE MyTable (
       user_id BIGINT,
       item_id BIGINT,
       behavior STRING,
       dt STRING,
       PRIMARY KEY (dt, user_id) NOT ENFORCED
   ) PARTITIONED BY (dt) WITH (
       'bucket' = '4'
   );
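
Once the table exists, rows can be written with a regular ``INSERT`` statement; the values
below are purely illustrative:

.. code-block:: sql

   -- write two example rows into the partition dt='2022-01-01'
   INSERT INTO MyTable VALUES
       (1, 1001, 'buy', '2022-01-01'),
       (2, 1002, 'click', '2022-01-01');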
Taking ``Query Table`` as an example:

.. code-block:: sql

   SET 'execution.runtime-mode' = 'batch';
   SELECT * FROM orders WHERE catalog_id = 1025;
Taking ``Streaming Query`` as an example:

.. code-block:: sql

   SET 'execution.runtime-mode' = 'streaming';
   SELECT * FROM MyTable /*+ OPTIONS ('log.scan'='latest') */;
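
Besides ``latest``, the Flink Table Store documentation also describes ``full`` and
``from-timestamp`` scan modes. If your Table Store version supports them, replaying changes
from a given point in time looks roughly like this (the epoch-millisecond timestamp is an
arbitrary example):

.. code-block:: sql

   -- hypothetical replay from a specific timestamp; check your version's option names
   SELECT * FROM MyTable
   /*+ OPTIONS ('log.scan'='from-timestamp', 'log.scan.timestamp-millis'='1651453200000') */;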
Taking ``Rescale Bucket`` as an example:

.. code-block:: sql

   ALTER TABLE my_table SET ('bucket' = '4');
   -- rewrite the partition so its data is redistributed into the new bucket number
   INSERT OVERWRITE my_table PARTITION (dt = '2022-01-01')
       SELECT user_id, item_id, behavior FROM my_table WHERE dt = '2022-01-01';
.. _Flink Table Store: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
.. _Official Documentation: https://nightlies.apache.org/flink/flink-table-store-docs-stable/
.. _Maven Central: https://mvnrepository.com/artifact/org.apache.flink/flink-table-store-dist
.. _Pre-bundled Hadoop Jar: https://flink.apache.org/downloads.html
.. _Flink Table Store multi engine support: https://nightlies.apache.org/flink/flink-table-store-docs-stable/docs/engines/overview/

docs/connector/flink/index.rst

Lines changed: 1 addition & 0 deletions
@@ -19,5 +19,6 @@ Connectors For Flink SQL Query Engine

 .. toctree::
    :maxdepth: 2

+   flink_table_store
    hudi
    iceberg
