Commit
Feature k8s (#283)
* [optimize] streamx-packer rename to streamx-flink-packer #249

* [Feature] streamx-console/streamx-console-webapp provide additional GUI adaptations for flink k8s mode #259
wolfboys committed Aug 8, 2021
1 parent aecafa6 commit 87b18fa
Showing 60 changed files with 587 additions and 516 deletions.
13 changes: 8 additions & 5 deletions .github/ISSUE_TEMPLATE/bug-report.md
@@ -1,21 +1,23 @@
---
name: "\U0001F41B Bug Report"
-about: As a User, I want to report a Bug.
-title: "[Bug] Bug title "
+about: As a User, I want to report a Bug. title: "[Bug] Bug title "
labels: type/bug
---

*Please answer these questions before submitting your issue. Thanks!*

### Environment Description
+
* **StreamX Version**:
* **JVM version** (`java -version`):
* **OS version** (`uname -a` if on a Unix-like system):

### Bug Description
+
A clear and concise description of what the bug is.

### How to Reproduce
+
Steps to reproduce the behavior, for example:

1. Go to '...'
@@ -24,6 +26,7 @@ Steps to reproduce the behavior, for example:
4. See error

### Additional context
+
Add any other context about the problem here such as JVM track stack log.

### Requirement or improvement
3 changes: 1 addition & 2 deletions .github/ISSUE_TEMPLATE/enhancement.md
@@ -1,7 +1,6 @@
---
name: "\U0001F680 Enhancement"
-about: As a StreamX developer, I want to make an enhancement.
-title: "[Enhancement] Enhancement title "
+about: As a StreamX developer, I want to make an enhancement. title: "[Enhancement] Enhancement title "
labels: type/enhancement
---

3 changes: 1 addition & 2 deletions .github/ISSUE_TEMPLATE/feature-request.md
@@ -1,7 +1,6 @@
---
name: "\U0001F680 Feature Request"
-about: As a user, I want to request a New Feature on the product.
-title: "[Feature] Feature request title "
+about: As a user, I want to request a New Feature on the product. title: "[Feature] Feature request title "
labels: type/feature-request
---

3 changes: 1 addition & 2 deletions .github/ISSUE_TEMPLATE/general-question.md
@@ -1,7 +1,6 @@
---
name: "\U0001F914 Ask a Question"
-about: I want to ask a question.
-title: "[Question] Question title "
+about: I want to ask a question. title: "[Question] Question title "
labels: type/question
---

13 changes: 9 additions & 4 deletions .github/ISSUE_TEMPLATE/zh-bug-report.md
@@ -1,29 +1,34 @@
---
name: "\U0001F41B Bug Report"
-about: I want to submit a bug report for StreamX
-title: "[Bug] Title "
+about: I want to submit a bug report for StreamX title: "[Bug] Title "
labels: type/bug
---

*Before submitting your issue, please answer the following questions; this helps the community locate the problem quickly. Thanks! 🙏*

### Environment Description
+
* **StreamX Version**:
* **JVM version** (`java -version`):
* **OS version** (`uname -a` if on a Unix-like system):

### Bug Description
+
Briefly and clearly describe the bug you encountered.

### How to Reproduce
+
Steps to reproduce the bug, for example:

1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

### Additional context
+
Add any other context that helps describe the problem, such as JVM stack logs.

### Requirement or improvement
+
- Describe your requirement, or your suggestions for improvement.
3 changes: 1 addition & 2 deletions .github/ISSUE_TEMPLATE/zh-enhancement.md
@@ -1,7 +1,6 @@
---
name: "\U0001F680 Enhancement"
-about: I want to offer some optimization/enhancement suggestions for StreamX
-title: "[Enhancement] Title "
+about: I want to offer some optimization/enhancement suggestions for StreamX title: "[Enhancement] Title "
labels: type/enhancement
---

3 changes: 1 addition & 2 deletions .github/ISSUE_TEMPLATE/zh-feature-request.md
@@ -1,7 +1,6 @@
---
name: "\U0001F680 Feature Request"
-about: I have a brilliant idea and hope StreamX can support it!
-title: "[feature] Title "
+about: I have a brilliant idea and hope StreamX can support it! title: "[feature] Title "
labels: type/feature-request
---

3 changes: 1 addition & 2 deletions .github/ISSUE_TEMPLATE/zh-general-question.md
@@ -1,7 +1,6 @@
---
name: "\U0001F914 Ask a Question"
-about: I want to ask a question
-title: "[Question] Title "
+about: I want to ask a question title: "[Question] Title "
labels: type/question
---

61 changes: 29 additions & 32 deletions README.md
@@ -38,9 +38,9 @@ Make Flink|Spark easier
## 🚀 Introduction

-The original intention of `StreamX` is to make the development of `Flink` easier. `StreamX` focuses on the management of
-development phases and tasks. Our ultimate goal is to build a one-stop big data solution integrating stream processing,
-batch processing, data warehouse and data laker.
+The original intention of `StreamX` is to make the development of `Flink` easier. `StreamX` focuses on the management of development phases
+and tasks. Our ultimate goal is to build a one-stop big data solution integrating stream processing, batch processing, data warehouse and
+data laker.

[![StreamX video](http://assets.streamxhub.com/streamx_player.png)](http://assets.streamxhub.com/streamx.mp4)

@@ -75,26 +75,24 @@

### 1️⃣ streamx-core

-`streamx-core` is a framework that focuses on coding, standardizes configuration, and develops in a way that is better
-than configuration by convention. Also it provides a development-time `RunTime Content` and a series of `Connector` out
-of the box. At the same time, it extends `DataStream` some methods, and integrates `DataStream` and `Flink sql` api to
-simplify tedious operations, focus on the business itself, and improve development efficiency and development
-experience.
+`streamx-core` is a framework that focuses on coding, standardizes configuration, and develops in a way that is better than configuration by
+convention. Also it provides a development-time `RunTime Content` and a series of `Connector` out of the box. At the same time, it
+extends `DataStream` some methods, and integrates `DataStream` and `Flink sql` api to simplify tedious operations, focus on the business
+itself, and improve development efficiency and development experience.

### 2️⃣ streamx-pump

-`streamx-pump` is a planned data extraction component, similar to `flinkx`. Based on the various `connector` provided
-in `streamx-core`, the purpose is to create a convenient, fast, out-of-the-box real-time data extraction and migration
-component for big data, and it will be integrated into the `streamx-console`.
+`streamx-pump` is a planned data extraction component, similar to `flinkx`. Based on the various `connector` provided in `streamx-core`, the
+purpose is to create a convenient, fast, out-of-the-box real-time data extraction and migration component for big data, and it will be
+integrated into the `streamx-console`.

### 3️⃣ streamx-console

-`streamx-console` is a stream processing and `Low Code` platform, capable of managing `Flink` tasks, integrating project
-compilation, deploy, configuration, startup, `savepoint`, `flame graph`, `Flink SQL`, monitoring and many other
-features. Simplify the daily operation and maintenance of the `Flink` task.
+`streamx-console` is a stream processing and `Low Code` platform, capable of managing `Flink` tasks, integrating project compilation,
+deploy, configuration, startup, `savepoint`, `flame graph`, `Flink SQL`, monitoring and many other features. Simplify the daily operation
+and maintenance of the `Flink` task.

-Our ultimate goal is to build a one-stop big data solution integrating stream processing, batch processing, data
-warehouse and data laker.
+Our ultimate goal is to build a one-stop big data solution integrating stream processing, batch processing, data warehouse and data laker.

* [Apache Flink](http://flink.apache.org)
* [Apache YARN](http://hadoop.apache.org)
@@ -111,10 +109,10 @@ warehouse and data laker.
* [Monaco Editor](https://microsoft.github.io/monaco-editor/)
* ...

-Thanks to the above excellent open source projects and many outstanding open source projects that are not mentioned, for
-giving the greatest respect, special thanks to [Apache Zeppelin](http://zeppelin.apache.org)
-, [IntelliJ IDEA](https://www.jetbrains.com/idea/), Thanks to
-the [fire-spark](https://github.com/GuoNingNing/fire-spark) project for the early inspiration and help.
+Thanks to the above excellent open source projects and many outstanding open source projects that are not mentioned, for giving the greatest
+respect, special thanks to [Apache Zeppelin](http://zeppelin.apache.org)
+, [IntelliJ IDEA](https://www.jetbrains.com/idea/), Thanks to the [fire-spark](https://github.com/GuoNingNing/fire-spark) project for the
+early inspiration and help.

### 🚀 Quick Start

@@ -130,26 +128,26 @@ click [Document](http://www.streamxhub.com/zh/doc/) for more information

### Apache Zeppelin

-[Apache Zeppelin](https://zeppelin.apache.org) is a Web-based notebook that enables data-driven, interactive data
-analytics and collaborative documents with SQL, Java, Scala and more.
+[Apache Zeppelin](https://zeppelin.apache.org) is a Web-based notebook that enables data-driven, interactive data analytics and
+collaborative documents with SQL, Java, Scala and more.

At the same time we also need a one-stop tool that can cover `development`, `test`, `package`, `deploy`, and `start`.
-`streamx-console` solves these pain points very well, positioning is a one-stop stream processing platform, and has
-developed more exciting features (such as `Flink SQL WebIDE`, `dependency isolation`, `task rollback `, `flame diagram`
+`streamx-console` solves these pain points very well, positioning is a one-stop stream processing platform, and has developed more exciting
+features (such as `Flink SQL WebIDE`, `dependency isolation`, `task rollback `, `flame diagram`
etc.)

### FlinkX

-[FlinkX](http://github.com/DTStack/flinkx) is a distributed offline and real-time data synchronization framework based
-on flink widely used in DTStack, which realizes efficient data migration between multiple heterogeneous data sources.
+[FlinkX](http://github.com/DTStack/flinkx) is a distributed offline and real-time data synchronization framework based on flink widely used
+in DTStack, which realizes efficient data migration between multiple heterogeneous data sources.

-`StreamX` focuses on the management of development phases and tasks. The `streamx-pump` module is also under planning,
-dedicated to solving data source migration, and will eventually be integrated into the `streamx-console`.
+`StreamX` focuses on the management of development phases and tasks. The `streamx-pump` module is also under planning, dedicated to solving
+data source migration, and will eventually be integrated into the `streamx-console`.

## 🍼 Feedback

-You can quickly submit an issue. Before submitting, please check the problem and try to use the following contact
-information! Maybe your question has already been asked by others, or it has already been answered. Thank you!
+You can quickly submit an issue. Before submitting, please check the problem and try to use the following contact information! Maybe your
+question has already been asked by others, or it has already been answered. Thank you!

You can contact us or ask questions via:

@@ -160,8 +158,7 @@ You can contact us or ask questions via:

Are you **enjoying this project** ? 👋

-If you like this framework, and appreciate the work done for it to exist, you can still support the developers by
-donating ☀️ 👊
+If you like this framework, and appreciate the work done for it to exist, you can still support the developers by donating ☀️ 👊

| WeChat Pay | Alipay |
|:----------|:----------|
4 changes: 2 additions & 2 deletions README_CN.md
@@ -39,8 +39,8 @@ Make Flink|Spark easier!!!
## 🚀 What is StreamX

Big data technology is developing in full swing and flourishing; in the real-time processing domain, `Apache Spark` and `Apache Flink`
-are a great step forward, and `Apache Flink` in particular is widely regarded as the next-generation big data stream computing engine. While using `Flink`, we found that much can be abstracted and shared, from the programming model and startup configuration to operations management. We consolidated good experience and combined it with industry best practices,
-and through continuous effort today's framework was finally born: `StreamX`. The original intention of the project is to make `Flink` development easier. Developing with `StreamX` can greatly reduce the learning cost and development barrier, letting developers care only about the core business. `StreamX`
+are a great step forward, and `Apache Flink` in particular is widely regarded as the next-generation big data stream computing engine. While using `Flink`, we found that much can be abstracted and shared, from the programming model and startup configuration to operations management. We consolidated good experience and combined it with industry best practices, and through continuous effort today's framework was finally born:
+`StreamX`. The original intention of the project is to make `Flink` development easier. Developing with `StreamX` can greatly reduce the learning cost and development barrier, letting developers care only about the core business. `StreamX`
standardizes project configuration, encourages functional programming, defines the best programming practices, provides a series of out-of-the-box `Connectors`, and standardizes the whole process of configuration, development, testing, deployment, monitoring and operations, offering both `Scala` and `Java` APIs,
with the ultimate goal of building a one-stop big data platform, a solution unifying stream and batch processing and unifying data lake and data warehouse.

7 changes: 3 additions & 4 deletions SECURITY.md
@@ -2,8 +2,7 @@

## Supported Versions

-Use this section to tell people about which versions of your project are currently being supported with security
-updates.
+Use this section to tell people about which versions of your project are currently being supported with security updates.

| Version | Supported |
| ------- | ------------------ |
@@ -16,6 +15,6 @@ updates.

Use this section to tell people how to report a vulnerability.

-Tell them where to go, how often they can expect to get an update on a reported vulnerability, what to expect if the
-vulnerability is accepted or declined, etc.
+Tell them where to go, how often they can expect to get an update on a reported vulnerability, what to expect if the vulnerability is
+accepted or declined, etc.

@@ -56,7 +56,7 @@ public enum ExecutionMode implements Serializable {
/**
* kubernetes application
*/
KUBERNETES_NATIVE_APPLICATION(6,"kubernetes-application");
KUBERNETES_NATIVE_APPLICATION(6, "kubernetes-application");

private Integer mode;
private String name;
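
A quick, hedged smoke test for the constant touched above (the new Kubernetes deploy target). Only the enum constants and the private `mode`/`name` fields are visible in this diff, so the sketch below sticks to what a Java enum guarantees; it is illustrative, not project code:

```scala
import com.streamxhub.streamx.common.enums.ExecutionMode

object ExecutionModeExample extends App {
  // A Java enum constant is a plain static member from Scala's point of view.
  val k8sMode = ExecutionMode.KUBERNETES_NATIVE_APPLICATION
  // toString yields the constant's name; accessors for the id 6 and the
  // "kubernetes-application" label are not shown in this hunk, so none are assumed.
  println(k8sMode)
}
```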
@@ -21,9 +21,7 @@
package com.streamxhub.streamx.common.enums;

/**
- *
* @author benjobs
- *
*/
public enum SqlErrorType {
/**
@@ -25,6 +25,7 @@ import com.streamxhub.streamx.common.enums.StorageType

/**
* just for java.
+ *
* @author benjobs
*/
object FsOperatorGetter {
@@ -42,7 +43,7 @@ object FsOperator {
def of(storageType: StorageType): FsOperator = {
storageType match {
case StorageType.HDFS => HdfsOperator
-case StorageType.LFS => LfsOperator
+case StorageType.LFS => LFsOperator
case _ => throw new UnsupportedOperationException(s"Unsupported storageType:$storageType")
}
}
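
For orientation, a minimal usage sketch of the factory above: `of` now resolves `StorageType.LFS` to the renamed `LFsOperator`, and any other storage type falls through to `UnsupportedOperationException`. The `com.streamxhub.streamx.common.fs` package for `FsOperator` is an assumption (this file shows only the `StorageType` import), and `exists` is taken from the `LFsOperator` hunk further down:

```scala
import com.streamxhub.streamx.common.enums.StorageType
import com.streamxhub.streamx.common.fs.FsOperator // package assumed, not shown in the diff

object FsOperatorExample extends App {
  // Resolve the local-filesystem operator via the pattern match shown above.
  val fs = FsOperator.of(StorageType.LFS)
  // exists(path) is one of the FsOperator methods visible in this commit.
  println(fs.exists("/tmp/streamx"))
}
```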
@@ -37,7 +37,7 @@ object HdfsOperator extends FsOperator with Logger {
override def move(srcPath: String, dstPath: String): Unit = HdfsUtils.move(toHdfsPath(srcPath), toHdfsPath(dstPath))

override def upload(srcPath: String, dstPath: String, delSrc: Boolean, overwrite: Boolean): Unit =
-HdfsUtils.upload(toHdfsPath(srcPath), toHdfsPath(dstPath), delSrc = delSrc, overwrite = overwrite)
+HdfsUtils.upload(srcPath, toHdfsPath(dstPath), delSrc = delSrc, overwrite = overwrite)

override def copy(srcPath: String, dstPath: String, delSrc: Boolean, overwrite: Boolean): Unit =
HdfsUtils.copyHdfs(toHdfsPath(srcPath), toHdfsPath(dstPath), delSrc = delSrc, overwrite = overwrite)
@@ -33,7 +33,7 @@ import java.io.{File, FileInputStream}
* Local File System (aka LFS) Operator
*/
//noinspection DuplicatedCode
-object LfsOperator extends FsOperator with Logger {
+object LFsOperator extends FsOperator with Logger {

override def exists(path: String): Boolean = {
StringUtils.isNotBlank(path) && new File(path).exists()
@@ -24,7 +24,7 @@ import java.text.{ParseException, SimpleDateFormat}
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
import java.util.concurrent.TimeUnit
-import java.util.{Calendar, TimeZone, _}
+import java.util._
import scala.util._


@@ -348,7 +348,7 @@ object HadoopUtils extends Logger {
val tmpDir = FileUtils.createTempDir()
val fs = FileSystem.get(new Configuration)
val sourcePath = fs.makeQualified(new Path(jarOnHdfs))
-if (!fs.exists(sourcePath)) throw new IOException("jar file: " + jarOnHdfs + " doesn't exist.")
+if (!fs.exists(sourcePath)) throw new IOException(s"jar file: $jarOnHdfs doesn't exist.")
val destPath = new Path(tmpDir.getAbsolutePath + "/" + sourcePath.getName)
fs.copyToLocalFile(sourcePath, destPath)
new File(destPath.toString).getAbsolutePath
@@ -50,8 +50,8 @@ object HdfsUtils extends Logger {
def upload(src: String, dst: String, delSrc: Boolean = false, overwrite: Boolean = true): Unit =
HadoopUtils.hdfs.copyFromLocalFile(delSrc, overwrite, getPath(src), getPath(dst))

-def upload2(srcs: Array[String], dst: String, delSrc: Boolean = false, overwrite: Boolean = true): Unit =
-HadoopUtils.hdfs.copyFromLocalFile(delSrc, overwrite, srcs.map(getPath), getPath(dst))
+def uploadMulti(src: Array[String], dst: String, delSrc: Boolean = false, overwrite: Boolean = true): Unit =
+HadoopUtils.hdfs.copyFromLocalFile(delSrc, overwrite, src.map(getPath), getPath(dst))

def download(src: String, dst: String, delSrc: Boolean = false, useRawLocalFileSystem: Boolean = false): Unit =
HadoopUtils.hdfs.copyToLocalFile(delSrc, getPath(src), getPath(dst), useRawLocalFileSystem)
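
A hedged usage sketch of the renamed batch upload (`upload2` becomes `uploadMulti`): several local files copied under one HDFS directory in a single call. The paths are placeholders, and the `com.streamxhub.streamx.common.util` package is assumed from the neighbouring `HadoopUtils`:

```scala
import com.streamxhub.streamx.common.util.HdfsUtils // package assumed from HadoopUtils

object UploadExample extends App {
  // Two local jars, one HDFS target directory; the source files are kept
  // (delSrc = false) and any existing copy on HDFS is replaced (overwrite = true).
  val localJars = Array("/opt/app/a.jar", "/opt/app/b.jar")
  HdfsUtils.uploadMulti(localJars, "/streamx/jars", delSrc = false, overwrite = true)
}
```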
@@ -26,7 +26,6 @@ import java.io._
import java.util.concurrent.atomic.AtomicInteger
import java.util.{Properties, Scanner, HashMap => JavaMap, LinkedHashMap => JavaLinkedMap}
import scala.collection.JavaConversions._
-import scala.collection.JavaConverters._
import scala.collection.mutable.{Map => MutableMap}

/**
@@ -22,7 +22,7 @@
package com.streamxhub.streamx.common.util

import redis.clients.jedis.exceptions.JedisConnectionException
-import redis.clients.jedis.{Jedis, _}
+import redis.clients.jedis._

import java.util.concurrent.ConcurrentHashMap
import scala.annotation.meta.getter
@@ -24,8 +24,8 @@ import redis.clients.jedis.{Jedis, JedisCluster, Pipeline, ScanParams}

import java.lang.{Integer => JInt}
import java.util.Set
-import scala.collection.JavaConverters._
import scala.collection.JavaConversions._
+import scala.collection.JavaConverters._
import scala.collection.immutable
import scala.util.{Failure, Success, Try}

@@ -143,7 +143,7 @@ object SqlConvertUtils extends Logger {
}
}

-val body = sql.substring(sql.indexOf("("),sql.lastIndexOf(")") + 1)
+val body = sql.substring(sql.indexOf("("), sql.lastIndexOf(")") + 1)
.replaceAll("\r\n", "")
.replaceFirst("\\(", "(\n")
.replaceFirst("\\)$", "\n)")
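
The touched line only adds a space after the comma, but the expression deserves unpacking. Below is a standalone reproduction of what it does, using nothing beyond the string operations visible above: extract the parenthesised column body of a DDL statement and put the outer parentheses on their own lines:

```scala
object SqlBodyExample extends App {
  val sql = "CREATE TABLE user_log(user_id STRING, action STRING)"
  val body = sql.substring(sql.indexOf("("), sql.lastIndexOf(")") + 1) // "(user_id ...)"
    .replaceAll("\r\n", "")      // drop Windows line breaks inside the body
    .replaceFirst("\\(", "(\n")  // newline after the opening parenthesis
    .replaceFirst("\\)$", "\n)") // newline before the closing parenthesis
  println(body)
  // (
  // user_id STRING, action STRING
  // )
}
```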
