change sensor to measurement
qiaojialin committed Apr 3, 2020
1 parent 8592180 commit 93e63ed
Showing 14 changed files with 33 additions and 33 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -43,7 +43,7 @@ Main features of IoTDB are as follows:
2. Low hardware cost. IoTDB can reach a high compression ratio of disk storage.
3. Efficient directory structure. IoTDB supports efficient organization of complex time series data structures from intelligent networking devices, organization of time series data from devices of the same type, and fuzzy search over massive and complex directories of time series data.
4. High-throughput read and write. IoTDB supports strongly connected data access for millions of low-power devices, and high-speed read and write for intelligent networking devices and the mixed devices mentioned above.
5. Rich query semantics. IoTDB supports time alignment for time series data across devices and sensors, computation in the time series field (frequency domain transformation), and rich aggregation function support in the time dimension.
5. Rich query semantics. IoTDB supports time alignment for time series data across devices and measurements, computation in the time series field (frequency domain transformation), and rich aggregation function support in the time dimension.
6. Easy to get started. IoTDB supports a SQL-like language, the standard JDBC API, and import/export tools, all of which are easy to use (see the sketch after this list).
7. Seamless integration with the state-of-the-practice open source ecosystem. IoTDB supports analysis ecosystems such as Hadoop and Spark, and visualization tools such as Grafana.
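As a minimal sketch of point 6 (illustrative only: the server address, the `root`/`root` credentials, and the sample path `root.sg1.d1` are assumptions, not taken from this commit), a JDBC query might look like:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcQuerySketch {
  public static void main(String[] args) throws Exception {
    // Register the IoTDB JDBC driver and connect to a local server.
    Class.forName("org.apache.iotdb.jdbc.IoTDBDriver");
    try (Connection conn =
            DriverManager.getConnection("jdbc:iotdb://127.0.0.1:6667/", "root", "root");
        Statement stmt = conn.createStatement();
        // SQL-like query over two measurements of one device.
        ResultSet rs = stmt.executeQuery("SELECT s1, s2 FROM root.sg1.d1")) {
      while (rs.next()) {
        // Column 1 is the timestamp; the selected measurements follow.
        System.out.println(rs.getLong(1) + "," + rs.getString(2) + "," + rs.getString(3));
      }
    }
  }
}
```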

2 changes: 1 addition & 1 deletion example/kafka/readme.md
@@ -57,7 +57,7 @@ Kafka: 0.8.2.0
This class sends data from localhost to Kafka clusters.
 First, change the TOPIC parameter in Constant.java to the topic you created (for example: "Kafka-Test"):
 > public final static String TOPIC = "Kafka-Test";
 The default format of data is "device,timestamp,value". (for example: "sensor1,2017/10/24 19:30:00,60")
 The default format of data is "device,timestamp,value". (for example: "measurement1,2017/10/24 19:30:00,60")
Then create the data in Constant.ALL_DATA
 Finally, run KafkaProducer.java
```
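A minimal producer sketch in the same spirit (an illustration, not the repository's KafkaProducer.java: the broker address `localhost:9092` is assumed, while the topic and sample record are copied from the readme above):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    // Send one record in the "device,timestamp,value" format described above.
    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      producer.send(new ProducerRecord<>("Kafka-Test", "measurement1,2017/10/24 19:30:00,60"));
    }
  }
}
```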
8 changes: 4 additions & 4 deletions example/rocketmq/readme.md
@@ -26,15 +26,15 @@ The following basic concepts are involved in IoTDB:

* Device

A device is an installation equipped with sensors in real scenarios. In IoTDB, all sensors should have their corresponding devices.
A device is an installation equipped with measurements in real scenarios. In IoTDB, all measurements should have their corresponding devices.

* Sensor
* Measurement

A sensor is a piece of detection equipment in an actual scene that can sense the information to be measured, transform it into an electrical signal or another desired form of output, and send it to IoTDB. In IoTDB, all stored data and paths are organized in units of sensors.
A measurement is a piece of detection equipment in an actual scene that can sense the information to be measured, transform it into an electrical signal or another desired form of output, and send it to IoTDB. In IoTDB, all stored data and paths are organized in units of measurements.

* Storage Group

Storage groups are used to let users define how to organize and isolate different time series data on disk. Time series belonging to the same storage group will be continuously written to the same file in the corresponding folder. The file may be closed due to user commands or system policies, and hence the data coming next from these sensors will be stored in a new file in the same folder. Time series belonging to different storage groups are stored in different folders.
Storage groups are used to let users define how to organize and isolate different time series data on disk. Time series belonging to the same storage group will be continuously written to the same file in the corresponding folder. The file may be closed due to user commands or system policies, and hence the data coming next from these measurements will be stored in a new file in the same folder. Time series belonging to different storage groups are stored in different folders.
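To make the device / measurement / storage group hierarchy concrete, here is a minimal sketch using the IoTDB session API (illustrative only: host, port, credentials, and the path `root.sg1.d1.s1` are assumptions):

```java
import org.apache.iotdb.session.Session;
import org.apache.iotdb.tsfile.file.metadata.enums.CompressionType;
import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;

public class ConceptSketch {
  public static void main(String[] args) throws Exception {
    Session session = new Session("127.0.0.1", 6667, "root", "root");
    session.open();
    // Storage group: decides how series are isolated into folders on disk.
    session.setStorageGroup("root.sg1");
    // One time series: device root.sg1.d1, measurement s1.
    session.createTimeseries("root.sg1.d1.s1",
        TSDataType.FLOAT, TSEncoding.RLE, CompressionType.SNAPPY);
    session.close();
  }
}
```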
## Connector
> note: In this sample program, there are update operations on historical data, so data must be transmitted and consumed in order via RocketMQ. If there are no update operations, the order of data need not be guaranteed; IoTDB will process data that arrives out of order.
2 changes: 1 addition & 1 deletion hadoop/README.md
@@ -66,7 +66,7 @@ With this connector, you can

TSFInputFormat extracts data from TsFile and formats it into records of `MapWritable`.

Suppose that we want to extract data of the device named `d1`, which has three sensors named `s1`, `s2`, `s3`.
Suppose that we want to extract data of the device named `d1`, which has three measurements named `s1`, `s2`, `s3`.

`s1`'s type is `BOOLEAN`, `s2`'s type is `DOUBLE`, `s3`'s type is `TEXT`.
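For orientation, a hedged Mapper sketch over such records (the `NullWritable` input key and the `"s2"` record key reflect the usual TSFInputFormat layout as we understand it, and should be treated as assumptions):

```java
import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Emits every s2 (DOUBLE) value of device d1, keyed by the series name.
public class TsFileMapperSketch
    extends Mapper<NullWritable, MapWritable, Text, DoubleWritable> {
  @Override
  protected void map(NullWritable key, MapWritable record, Context context)
      throws IOException, InterruptedException {
    DoubleWritable s2 = (DoubleWritable) record.get(new Text("s2"));
    if (s2 != null) {
      context.write(new Text("d1.s2"), s2);
    }
  }
}
```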

@@ -668,7 +668,7 @@ public String getStorageGroupName() {
* ChunkMetadata of data on disk.
*
* @param deviceId device id
* @param measurementId sensor id
* @param measurementId measurement id
* @param dataType data type
* @param encoding encoding
* @return left: the chunk data in memory; right: the chunkMetadatas of data on disk
@@ -29,7 +29,7 @@ public class RawDataQueryPlan extends QueryPlan {
private List<Path> deduplicatedPaths = new ArrayList<>();
private List<TSDataType> deduplicatedDataTypes = new ArrayList<>();
private IExpression expression = null;
private Map<String, Set<String>> deviceToSensors = new HashMap<>();
private Map<String, Set<String>> deviceToMeasurements = new HashMap<>();

public RawDataQueryPlan() {
super();
@@ -52,7 +52,7 @@ public List<Path> getDeduplicatedPaths() {
}

public void addDeduplicatedPaths(Path path) {
deviceToSensors.computeIfAbsent(path.getDevice(), key -> new HashSet<>()).add(path.getMeasurement());
deviceToMeasurements.computeIfAbsent(path.getDevice(), key -> new HashSet<>()).add(path.getMeasurement());
this.deduplicatedPaths.add(path);
}

@@ -61,9 +61,9 @@ public void addDeduplicatedPaths(Path path) {
* measurements of current device.
*/
public void setDeduplicatedPaths(List<Path> deduplicatedPaths) {
deviceToSensors.clear();
deviceToMeasurements.clear();
deduplicatedPaths.forEach(
path -> deviceToSensors.computeIfAbsent(path.getDevice(), key -> new HashSet<>())
path -> deviceToMeasurements.computeIfAbsent(path.getDevice(), key -> new HashSet<>())
.add(path.getMeasurement()));
this.deduplicatedPaths = deduplicatedPaths;
}
@@ -81,8 +81,8 @@ public void setDeduplicatedDataTypes(
this.deduplicatedDataTypes = deduplicatedDataTypes;
}

public Set<String> getAllSensorsInDevice(String device) {
return deviceToSensors.getOrDefault(device, Collections.emptySet());
public Set<String> getAllMeasurementsInDevice(String device) {
return deviceToMeasurements.getOrDefault(device, Collections.emptySet());
}

}
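A small usage sketch of the map built above (illustrative only: the package imports and the `Path(device, measurement)` constructor are assumptions about the surrounding codebase):

```java
import java.util.Set;
import org.apache.iotdb.db.qp.physical.crud.RawDataQueryPlan;
import org.apache.iotdb.tsfile.read.common.Path;

public class DeviceToMeasurementsSketch {
  public static void main(String[] args) {
    RawDataQueryPlan plan = new RawDataQueryPlan();
    plan.addDeduplicatedPaths(new Path("root.sg1.d1", "s1"));
    plan.addDeduplicatedPaths(new Path("root.sg1.d1", "s2"));
    plan.addDeduplicatedPaths(new Path("root.sg1.d2", "s1"));

    // All measurements registered for d1: [s1, s2].
    Set<String> d1 = plan.getAllMeasurementsInDevice("root.sg1.d1");
    // A device with no deduplicated paths yields an empty set, not null.
    Set<String> d9 = plan.getAllMeasurementsInDevice("root.sg1.d9");
    System.out.println(d1 + " / " + d9);
  }
}
```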
@@ -102,7 +102,7 @@ protected TimeGenerator getTimeGenerator(IExpression expression, QueryContext co
protected IReaderByTimestamp getReaderByTime(Path path, RawDataQueryPlan queryPlan,
TSDataType dataType, QueryContext context, TsFileFilter fileFilter)
throws StorageEngineException, QueryProcessException {
return new SeriesReaderByTimestamp(path, queryPlan.getAllSensorsInDevice(path.getDevice()), dataType, context,
return new SeriesReaderByTimestamp(path, queryPlan.getAllMeasurementsInDevice(path.getDevice()), dataType, context,
QueryResourceManager.getInstance().getQueryDataSource(path, context, null), fileFilter);
}

@@ -87,7 +87,7 @@ protected void initGroupBy(QueryContext context, GroupByPlan groupByPlan)
if (!pathExecutors.containsKey(path)) {
//init GroupByExecutor
pathExecutors.put(path,
getGroupByExecutor(path, groupByPlan.getAllSensorsInDevice(path.getDevice()), dataTypes.get(i), context, timeFilter, null));
getGroupByExecutor(path, groupByPlan.getAllMeasurementsInDevice(path.getDevice()), dataTypes.get(i), context, timeFilter, null));
resultIndexes.put(path, new ArrayList<>());
}
resultIndexes.get(path).add(i);
@@ -88,7 +88,7 @@ public QueryDataSet executeWithoutValueFilter(QueryContext context, AggregationP
Map<Path, List<Integer>> pathToAggrIndexesMap = groupAggregationsBySeries(selectedSeries);
AggregateResult[] aggregateResultList = new AggregateResult[selectedSeries.size()];
for (Map.Entry<Path, List<Integer>> entry : pathToAggrIndexesMap.entrySet()) {
List<AggregateResult> aggregateResults = aggregateOneSeries(entry, aggregationPlan.getAllSensorsInDevice(entry.getKey().getDevice()), timeFilter, context);
List<AggregateResult> aggregateResults = aggregateOneSeries(entry, aggregationPlan.getAllMeasurementsInDevice(entry.getKey().getDevice()), timeFilter, context);
int index = 0;
for (int i : entry.getValue()) {
aggregateResultList[i] = aggregateResults.get(index);
@@ -109,7 +109,7 @@ public QueryDataSet executeWithoutValueFilter(QueryContext context, AggregationP
*/
protected List<AggregateResult> aggregateOneSeries(
Map.Entry<Path, List<Integer>> pathToAggrIndexes,
Set<String> sensors,
Set<String> measurements,
Filter timeFilter, QueryContext context)
throws IOException, QueryProcessException, StorageEngineException {
List<AggregateResult> aggregateResultList = new ArrayList<>();
@@ -123,11 +123,11 @@ protected List<AggregateResult> aggregateOneSeries(
.getAggrResultByName(aggregations.get(i), tsDataType);
aggregateResultList.add(aggregateResult);
}
aggregateOneSeries(seriesPath, sensors, context, timeFilter, tsDataType, aggregateResultList, null);
aggregateOneSeries(seriesPath, measurements, context, timeFilter, tsDataType, aggregateResultList, null);
return aggregateResultList;
}

public static void aggregateOneSeries(Path seriesPath, Set<String> sensors, QueryContext context, Filter timeFilter,
public static void aggregateOneSeries(Path seriesPath, Set<String> measurements, QueryContext context, Filter timeFilter,
TSDataType tsDataType, List<AggregateResult> aggregateResultList, TsFileFilter fileFilter)
throws StorageEngineException, IOException, QueryProcessException {

@@ -140,7 +140,7 @@ public static void aggregateOneSeries(Path seriesPath, Set<String> sensors, Quer
// update filter by TTL
timeFilter = queryDataSource.updateFilterUsingTTL(timeFilter);

IAggregateReader seriesReader = new SeriesAggregateReader(seriesPath, sensors,
IAggregateReader seriesReader = new SeriesAggregateReader(seriesPath, measurements,
tsDataType, context, queryDataSource, timeFilter, null, null);
aggregateFromReader(seriesReader, aggregateResultList);
}
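To see why aggregations are grouped by series first: for a query like `select count(s1), max_value(s2), count(s2) from root.sg1.d1` (a hypothetical example), `groupAggregationsBySeries` would map `d1.s1` to the index list `[0]` and `d1.s2` to `[1, 2]`, so each series is scanned once and its `AggregateResult`s are written back to the right slots of `aggregateResultList`.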
@@ -269,7 +269,7 @@ protected TimeGenerator getTimeGenerator(QueryContext context, RawDataQueryPlan

protected IReaderByTimestamp getReaderByTime(Path path, RawDataQueryPlan queryPlan, TSDataType dataType,
QueryContext context) throws StorageEngineException, QueryProcessException {
return new SeriesReaderByTimestamp(path, queryPlan.getAllSensorsInDevice(path.getDevice()), dataType, context,
return new SeriesReaderByTimestamp(path, queryPlan.getAllMeasurementsInDevice(path.getDevice()), dataType, context,
QueryResourceManager.getInstance().getQueryDataSource(path, context, null), null);
}

@@ -89,7 +89,7 @@ public QueryDataSet execute(QueryContext context, FillQueryPlan fillQueryPlan)
} else {
fill = typeIFillMap.get(dataType).copy();
}
configureFill(fill, dataType, path, fillQueryPlan.getAllSensorsInDevice(path.getDevice()), context, queryTime);
configureFill(fill, dataType, path, fillQueryPlan.getAllMeasurementsInDevice(path.getDevice()), context, queryTime);

TimeValuePair timeValuePair = fill.getFillResult();
if (timeValuePair == null || timeValuePair.getValue() == null) {
@@ -100,7 +100,7 @@ protected List<ManagedSeriesReader> initManagedSeriesReader(QueryContext context
.getQueryDataSource(path, context, timeFilter);
timeFilter = queryDataSource.updateFilterUsingTTL(timeFilter);

ManagedSeriesReader reader = new SeriesRawDataBatchReader(path, queryPlan.getAllSensorsInDevice(path.getDevice()), dataType, context,
ManagedSeriesReader reader = new SeriesRawDataBatchReader(path, queryPlan.getAllMeasurementsInDevice(path.getDevice()), dataType, context,
queryDataSource, timeFilter, null, null);
readersOfSelectedSeries.add(reader);
}
@@ -122,7 +122,7 @@ public QueryDataSet executeWithValueFilter(QueryContext context, RawDataQueryPla
List<IReaderByTimestamp> readersOfSelectedSeries = new ArrayList<>();
for (int i = 0; i < deduplicatedPaths.size(); i++) {
Path path = deduplicatedPaths.get(i);
IReaderByTimestamp seriesReaderByTimestamp = getReaderByTimestamp(path, queryPlan.getAllSensorsInDevice(path.getDevice()),
IReaderByTimestamp seriesReaderByTimestamp = getReaderByTimestamp(path, queryPlan.getAllMeasurementsInDevice(path.getDevice()),
deduplicatedDataTypes.get(i), context);
readersOfSelectedSeries.add(seriesReaderByTimestamp);
}
@@ -78,6 +78,6 @@ protected IBatchReader generateNewBatchReader(SingleSeriesExpression expression)
throw new IOException(e);
}

return new SeriesRawDataBatchReader(path, queryPlan.getAllSensorsInDevice(path.getDevice()), dataType, context, queryDataSource, null, filter, null);
return new SeriesRawDataBatchReader(path, queryPlan.getAllMeasurementsInDevice(path.getDevice()), dataType, context, queryDataSource, null, filter, null);
}
}
@@ -35,14 +35,14 @@ public class ActiveTimeSeriesCounterTest {
private static final String TEST_SG_PREFIX = "root.sg_";
private static int testStorageGroupNum = 10;
private static String[] storageGroups = new String[testStorageGroupNum];
private static int[] sensorNum = new int[testStorageGroupNum];
private static int[] measurementNum = new int[testStorageGroupNum];
private static double totalSeriesNum = 0;

static {
for (int i = 0; i < testStorageGroupNum; i++) {
storageGroups[i] = TEST_SG_PREFIX + i;
sensorNum[i] = i + 1;
totalSeriesNum += sensorNum[i];
measurementNum[i] = i + 1;
totalSeriesNum += measurementNum[i];
}
}

@@ -79,7 +79,7 @@ public void testUpdateActiveRatio() throws Exception {
ExecutorService service = Executors.newFixedThreadPool(storageGroups.length);
CountDownLatch finished = new CountDownLatch(storageGroups.length);
for (int i = 0; i < storageGroups.length; i++) {
service.submit(new OfferThreads(storageGroups[i], sensorNum[i], finished));
service.submit(new OfferThreads(storageGroups[i], measurementNum[i], finished));
}
finished.await();
for (String storageGroup : storageGroups) {
@@ -92,7 +92,7 @@ }
}
for (int i = 0; i < storageGroups.length; i++) {
double r = ActiveTimeSeriesCounter.getInstance().getActiveRatio(storageGroups[i]);
assertEquals(sensorNum[i] / totalSeriesNum, r, 0.001);
assertEquals(measurementNum[i] / totalSeriesNum, r, 0.001);
}
}
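As a quick check of the expected ratios: with `testStorageGroupNum = 10` and `measurementNum[i] = i + 1`, `totalSeriesNum` is 1 + 2 + … + 10 = 55, so for example `root.sg_3` should report an active ratio of 4 / 55 ≈ 0.0727.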

@@ -161,7 +161,7 @@ public Map<Path, MeasurementSchema> getKnownSchema() {
* get chunks' metadata from memory.
*
* @param deviceId the device id
* @param measurementId the sensor id
* @param measurementId the measurement id
* @param dataType the value type
* @return chunks' metadata
*/
@@ -171,7 +171,7 @@ public List<ChunkMetadata> getVisibleMetadataList(String deviceId, String measur
List<ChunkMetadata> chunkMetadataList = new ArrayList<>();
if (metadatasForQuery.containsKey(deviceId) && metadatasForQuery.get(deviceId).containsKey(measurementId)) {
for (ChunkMetadata chunkMetaData : metadatasForQuery.get(deviceId).get(measurementId)) {
// filter: if a device's sensor is defined as float type, and data has been persisted.
// filter: if a device's measurement is defined as float type, and data has been persisted.
// Then someone deletes the timeseries and recreates it with Int type. We have to ignore
// all the stale data.
if (dataType == null || dataType.equals(chunkMetaData.getDataType())) {
