Merged
30 changes: 14 additions & 16 deletions src/UserGuide/Master/User-Manual/Database-Programming.md
@@ -23,7 +23,7 @@

## TRIGGER

### 1. Instructions
### Instructions

The trigger provides a mechanism for listening to changes in time series data. With user-defined logic, tasks such as alerting and data forwarding can be carried out.

@@ -49,7 +49,7 @@ There are currently two trigger events for the trigger, and other trigger events
- BEFORE INSERT: Fires before the data is persisted. **Please note that the trigger currently does not support data cleaning and cannot modify the data that is about to be persisted.**
- AFTER INSERT: Fires after the data is persisted.

### 2. How to Implement a Trigger
### How to Implement a Trigger

You need to implement the trigger by writing a Java class, which requires the dependency shown below. If you use [Maven](http://search.maven.org/), you can find it directly in the [Maven repository](http://search.maven.org/).

@@ -320,7 +320,7 @@ public class ClusterAlertingExample implements Trigger {
}
```

### 3. Trigger Management
### Trigger Management

You can create, drop, and list all registered triggers through SQL statements.
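As a sketch of the general shape of these statements (the trigger name, path pattern, class name, and JAR URI below are illustrative placeholders, not the full syntax):

```sql
-- Register a stateless trigger that fires before insertions under a path pattern
-- (trigger name, path, class, and URI are illustrative)
CREATE STATELESS TRIGGER alert_trigger
BEFORE INSERT
ON root.sg1.**
AS 'org.apache.iotdb.trigger.example.AlertListener'
USING URI 'http://example.com/alert-listener.jar'

-- List all registered triggers and their states
SHOW TRIGGERS

-- Remove the trigger
DROP TRIGGER alert_trigger
```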

@@ -450,7 +450,7 @@ During the process of creating and dropping triggers in the cluster, we maintain
| DROPPING | Intermediate state of executing `DROP TRIGGER`, the cluster is in the process of dropping the trigger. | NO |
| TRANSFERRING | The cluster is migrating the location of this trigger instance. | NO |

### 4. Notes
### Notes

- The trigger takes effect from the time of registration and does not process existing historical data. **That is, only insertion requests that occur after the trigger is successfully registered will be listened to by the trigger.**
- The trigger firing process is currently synchronous, so you need to ensure that triggers are efficient; otherwise write performance may be greatly affected. **You need to guarantee the concurrency safety of triggers yourself**.
@@ -460,7 +460,7 @@ During the process of creating and dropping triggers in the cluster, we maintain
- The trigger JAR package has a size limit: it must be smaller than min(`config_node_ratis_log_appender_buffer_size_max`, 2G), where `config_node_ratis_log_appender_buffer_size_max` is a configuration item. For its specific meaning, please refer to the IoTDB configuration item description.
- **It is better not to have classes with the same full class name but different function implementations in different JAR packages.** For example, trigger1 and trigger2 correspond to resources trigger1.jar and trigger2.jar respectively. If both JAR packages contain an `org.apache.iotdb.trigger.example.AlertListener` class, `CREATE TRIGGER` will randomly load the class from one of the JAR packages, which will eventually lead to inconsistent trigger behavior and other issues.

### 5. Configuration Parameters
### Configuration Parameters

| Parameter | Meaning |
| ------------------------------------------------- | ------------------------------------------------------------ |
@@ -469,13 +469,13 @@ During the process of creating and dropping triggers in the cluster, we maintain

## CONTINUOUS QUERY (CQ)

### 1. Introduction
### Introduction

Continuous queries (CQ) are queries that run automatically and periodically on real-time data and store the query results in other specified time series.

Users can implement sliding-window streaming computation through continuous queries, such as calculating the hourly average temperature of a series and writing it into a new series. Users can customize the `RESAMPLE` clause to create different sliding windows, which provides a certain degree of tolerance for out-of-order data.

### 2. Syntax
### Syntax

```sql
CREATE (CONTINUOUS QUERY | CQ) <cq_id>
@@ -540,15 +540,15 @@ END

##### `<every_interval>` is not zero

![4](https://alioss.timecho.com/docs/img/UserGuide/Process-Data/Continuous-Query/pic4.png?raw=true)
![](https://alioss.timecho.com/docs/img/UserGuide/Process-Data/Continuous-Query/pic4.png?raw=true)


- `TIMEOUT POLICY` specifies how to handle a CQ task whose previous execution has not finished when the next execution time arrives. The default value is `BLOCKED`.
- `BLOCKED` means the current CQ execution blocks and waits until the CQ task for the previous time interval finishes. Under the `BLOCKED` policy, all time intervals will be executed, but execution may lag behind the latest time interval.
- `DISCARD` means the current CQ execution is discarded and the task for the next time interval runs at its scheduled time. Under the `DISCARD` policy, some time intervals are skipped when a CQ task takes longer than `<every_interval>`; however, each executed task uses the latest time interval, so the CQ can catch up at the cost of some discarded intervals.
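As a sketch of how `TIMEOUT POLICY` attaches to a CQ definition (the CQ id, paths, and intervals below are illustrative):

```sql
-- A CQ that skips a window rather than queueing up when a run overruns 30m
CREATE CONTINUOUS QUERY cq_count_s1
RESAMPLE EVERY 30m
TIMEOUT POLICY DISCARD
BEGIN
  SELECT count(s1)
    INTO root.sg_count.d(count_s1)
    FROM root.sg.d
    GROUP BY(30m)
END
```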


### 3. Examples of CQ
### Examples of CQ

The examples below use the following sample data. It is a real-time data stream, and we can assume the data arrives on time.

@@ -931,7 +931,7 @@ At **2021-05-11T22:19:00.000+08:00**, `cq5` executes a query within the time ran
+-----------------------------+-------------------------------+-----------+
````

### 4. CQ Management
### CQ Management

#### Listing continuous queries

@@ -979,7 +979,7 @@ DROP CONTINUOUS QUERY s1_count_cq;
CQs can't be altered once they're created. To change a CQ, you must `DROP` it and `CREATE` it again with the updated settings.
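For instance, widening the aggregation window of the `s1_count_cq` above means dropping it and re-creating it (the 60-minute window here is illustrative):

```sql
-- Drop the existing CQ, then re-create it with the new settings
DROP CONTINUOUS QUERY s1_count_cq;

CREATE CONTINUOUS QUERY s1_count_cq
BEGIN
  SELECT count(s1)
    INTO root.sg_count.d(count_s1)
    FROM root.sg.d
    GROUP BY(60m)
END
```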


### 5. CQ Use Cases
### CQ Use Cases

#### Downsampling and Data Retention

@@ -1005,7 +1005,7 @@ SELECT avg(count_s1) from (select count(s1) as count_s1 from root.sg.d group by(

To get the same results:

**1. Create a CQ**
**Create a CQ**

This step performs the nested subquery in the `FROM` clause of the query above. The following CQ automatically calculates the number of non-null values of `s1` at 30-minute intervals and writes those counts into the new `root.sg_count.d.count_s1` time series.

@@ -1019,7 +1019,7 @@ BEGIN
END
```

**2. Query the CQ results**
**Query the CQ results**

The next step performs the `avg(count_s1)` part of the outer query above.

@@ -1030,7 +1030,7 @@ SELECT avg(count_s1) from root.sg_count.d;
```


### 6. System Parameter Configuration
### System Parameter Configuration

| Name | Description | Data Type | Default Value |
| :------------------------------------------ | ------------------------------------------------------------ | --------- | ------------- |
@@ -1662,8 +1662,6 @@ This method is called by the framework. For a UDF instance, `beforeDestroy` will





### Maven Project Example

If you use Maven, you can build your own UDF project referring to our **udf-example** module. You can find the project [here](https://github.com/apache/iotdb/tree/master/example/udf).
28 changes: 14 additions & 14 deletions src/UserGuide/latest/User-Manual/Database-Programming.md