[Feature] Data Quality Design #4283
Comments
Good feature
LGTM
Essential functions of a big data ETL system. Looking forward to it going online soon.
+1
Development plan: Version 1.0 (local development is more than 95% complete; a PR has not been submitted yet)
Version 2.0 (timing to be determined)
+1
+1
+1
@zixi0825 Could you please advise how to get the feature you have completed up and running?
It is not completed yet
I have also implemented a data quality management function at my company. The difference is that my work is based on Spark + Parquet/Avro to perform the data quality calculation, so I can provide some reference for the design of the calculation rules or the implementation of the specific code. I am very interested in participating in the development of this feature. How can I help?
Once you have reviewed the current solution, please give some suggestions and we can discuss them
Squashed merge commit; the main milestones among the many fix, checkstyle, and refactor commits:
* [Feature][DataQuality] Add data quality task ui (#5054)
* [Feature][DataQuality] Add data quality module (#4830)
* [Feature][DataQuality] Add data quality task backend (#4883)
* refactor data quality module; add data quality task error handling; optimize data quality result alert; update sql scripts; fix dq jar path and task enum description
Co-authored-by: sunchaohe <sunzhaohe@linklogis.com>
@zixi0825 Thanks for this amazing job. Do we only output the check result to a CSV file at present?
At present, only CSV is supported. I will add other formats.
Do we have a plan to support output to a database? I want to show these results in a UI or BI tools.
I think this issue can be discussed on the dev@dolphinscheduler.apache.org mailing list,
or you can submit a new issue to describe what you want.
I have opened a new issue: #8586
Exception in thread "main" org.apache.spark.sql.AnalysisException: Table or view not found: |
Did you solve this problem? I ran into the same issue. I modified the hive-site.xml under Hive/Spark, but I still get this error.
You need to load hive-site.xml into the conf. I added it like this (see the code quoted in the reply below).
Nice 👍🏻. For me, changing just the line above was enough, then repackaging and replacing the corresponding jar (dolphinscheduler-data-quality-xxx.jar).
In CDH, adding hive-site.xml into /opt/cloudera/parcels/SPARK2/lib/spark2/conf can solve the problem.
On Fri, May 20, 2022 at 10:39 AM a092cc wrote:

> You need to load hive-site.xml into the conf. I added it like this, in SparkRuntimeEnvironment (the class responsible for creating the SparkSession and the Spark execution):

```java
public void prepare() {
    sparkSession = SparkSession.builder().config(createSparkConf())
            .enableHiveSupport()
            .getOrCreate();
}

private SparkConf createSparkConf() {
    SparkConf conf = new SparkConf();
    this.config.entrySet()
            .forEach(entry -> conf.set(entry.getKey(), String.valueOf(entry.getValue())));
    conf.set("spark.sql.crossJoin.enabled", "true");
    // Load the Hadoop/Hive client configuration (org.apache.hadoop.conf.Configuration)
    // so Spark can resolve the Hive metastore and HDFS.
    Configuration cf = new Configuration();
    cf.addResource("hive-site.xml");
    cf.addResource("hdfs-site.xml");
    cf.addResource("core-site.xml");
    for (Map.Entry<String, String> next : cf) {
        String key = next.getKey();
        String value = next.getValue();
        conf.set(key, value);
    }
    return conf;
}
```
1 Summary
Data quality inspection is an important part of the data processing pipeline. After data synchronization or data processing, it is usually necessary to check the accuracy of the data, for example by comparing the row-count difference between the source table and the target table, or by applying a rule that calculates a value over a column and compares the calculated value against a standard value. At present, DolphinScheduler's task types include no such data quality check, so a new data quality task type is needed. Data quality check tasks can then be added directly when defining a workflow, making the whole data processing pipeline more complete.
2 Requirements Analysis
For data quality inspection tasks, the core functions are rule management, task execution, and alerting on execution results. A lightweight data quality module must provide the following functions:
2.1 Rule Manager
2.1.1 RuleType
2.1.2 Rule Implementation
2.1.3 Rule Definition and Parser
2.1.3.1 Rule Definition
A complete rule should include the connector information, the SQL statements to execute, the type of comparison value, the type of check, and so on; in other words, all parameters needed to define a data quality task can be derived from the rule.
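As a rough illustration of what such a rule might carry, here is a minimal sketch; every field name below is an assumption for illustration, not the actual DolphinScheduler schema:

```java
// A minimal sketch of a rule definition; all names are illustrative
// assumptions, not the real dolphinscheduler entities.
public class RuleDefinitionSketch {
    private String ruleName;           // e.g. "multi_table_accuracy"
    private int ruleType;              // single table / multi-table accuracy / value comparison
    private String connectorInfo;      // datasource type, url, database, table, credentials
    private String executeSql;         // SQL template with ${placeholders} computing the statistics value
    private String comparisonType;     // fixed value, or a value computed by another SQL statement
    private String checkType;          // how the statistics value and comparison value are compared
    private String threshold;          // the bound that decides whether the check fails
}
```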
2.1.3.2 Rule Parser
The main responsibility of the rule parser is to produce the input parameters required to run a data quality task by parsing the parameter values entered by the user together with the rule definition.
2.2 Task Execution Mode
Given DolphinScheduler's existing task execution mechanism, the most appropriate approach is to use Spark as the execution engine for data quality tasks: the SQL to execute is passed to a Spark job through configuration, and the execution results are written to the specified storage engine.
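To make this concrete, here is a minimal sketch of such a Spark job, assuming the SQL and the output target arrive through configuration; the table names, the JDBC sink, and all option values are illustrative assumptions, not the actual dolphinscheduler-data-quality implementation:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class DataQualityJobSketch {
    public static void main(String[] args) {
        // In the real task the configuration would be parsed from args[0],
        // the JSON parameter produced by the rule parser.
        SparkSession spark = SparkSession.builder()
                .appName("data-quality-task")
                .enableHiveSupport() // required when the rule reads Hive tables
                .getOrCreate();

        // Execute the rule SQL that computes the statistics value.
        String statisticsSql = "SELECT COUNT(*) AS statistics_value FROM src_table";
        Dataset<Row> result = spark.sql(statisticsSql);

        // Write the check result to the configured storage engine (a JDBC sink here).
        result.write()
                .format("jdbc")
                .option("url", "jdbc:mysql://host:3306/dolphinscheduler") // illustrative
                .option("dbtable", "dq_execute_result")                   // illustrative
                .option("user", "user")
                .option("password", "password")
                .mode("append")
                .save();

        spark.stop();
    }
}
```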
2.3 Check Result Alert
Each rule is configured with alert rules; when a check result is abnormal, an alert is raised through DolphinScheduler's alert module.
3 High-Level Design
3.1 Rule Manager Design
3.1.1 Rule Component Design
3.1.1.1 Single Rule
3.1.1.2 MultiTableAccuracyRule
3.1.1.3 MultiTableValueComparisonRule
The cross-table value comparison rule computes one value per side and compares them with a SQL statement of the form:
select ${statistics_name} as statistics_value, ${comparsion_name} as comparsion_value from ${statistics_execute_sql} full join ${comparsion_execute_sql}
3.1.2 Custom Rule
3.2 Task Execute Process Design
3.2.1 Execution Engine
3.2.2 Task Execution Process
3.3 Task Manager Design
Data quality tasks do not support standalone definition or independent scheduling; they are defined and scheduled within a workflow.
3.4 Data Quality Task Definition UI Design
3.4.1 UI Generation Method
The data quality task definition UI is automatically generated by the front-end component from a JSON string.
3.4.2 Task Definition UI Prototype Diagram
3.4.3 Custom Rule UI Prototype Diagram
4 Detailed Design
4.1 Database Design
4.1.1 RuleInfo
4.1.2 CheckResultInfo
4.1.3 CheckResultStatisticsInfo
4.2 Class Design
4.2.1 Rule Design
4.2.1.1 Rule-Related Model
4.2.1.2 RuleParser
1) Connector parameter parsing: obtain the datasource information (url, database, table, username, password) according to the datasource_id and construct the connector configuration.
2) Replace the placeholders in the execute SQL to construct the execute-SQL list.
3) Construct the writer configuration, including the writer connector configuration and the save SQL.
Finally, everything is assembled into a JSON string parameter and passed to the Spark application, as sketched below.
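A rough sketch of that assembled parameter, under the assumption that it is a single JSON string naming the connector, the execute-SQL list, and the writer; every field name here is illustrative rather than the module's actual schema:

```java
// Hypothetical JSON parameter handed to the Spark application; field names
// are assumptions for illustration only.
String taskParameter = "{"
        + "\"name\":\"table_count_check\","
        + "\"connector\":{\"type\":\"JDBC\",\"url\":\"jdbc:mysql://host:3306/db\","
        + "\"table\":\"src_table\",\"username\":\"user\",\"password\":\"***\"},"
        + "\"executeSql\":[\"SELECT COUNT(*) AS statistics_value FROM src_table\"],"
        + "\"writer\":{\"type\":\"JDBC\",\"table\":\"dq_execute_result\",\"saveSql\":\"...\"}"
        + "}";
```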
4.2.2 Task Design
4.2.2.1 DolphinScheduler Task Design
DataQualityParameter
DataQualityTask
4.2.2.2 Spark Data Quality Task Design
1) The data quality task is actually a Spark task. The main responsibilities of this task are as follows:
2) The execution mode has the following options (a command-assembly sketch follows this list):
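For illustration, a hedged sketch of how the worker side might assemble the spark-submit command for this task; the master, deploy mode, jar path, and entry class are assumptions (the jar name follows the dolphinscheduler-data-quality-xxx.jar mentioned earlier in this thread):

```java
import java.util.ArrayList;
import java.util.List;

public class SparkSubmitCommandSketch {
    public static String build(String taskParameterJson) {
        // Assemble a spark-submit invocation for the data quality job; every
        // value below is an illustrative assumption, not the real worker logic.
        List<String> args = new ArrayList<>();
        args.add("spark-submit");
        args.add("--master");
        args.add("yarn");
        args.add("--deploy-mode");
        args.add("cluster");
        args.add("--class");
        args.add("org.apache.dolphinscheduler.data.quality.DataQualityApplication"); // assumed entry class
        args.add("/path/to/dolphinscheduler-data-quality-xxx.jar"); // assumed jar path
        args.add(taskParameterJson); // the JSON built by the rule parser
        return String.join(" ", args);
    }
}
```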
5 Todo List
6 Related Issues and PRs
issue: DataQuality Application
pr: DataQuality Common Entity