[IOTDB-749] Handle select * from root OOM #1367
Conversation
@@ -226,6 +226,10 @@ write_read_free_memory_proportion=6:3:1
# primitive array size (length of each array) in array pool
primitive_array_size=128

# the allowed maximum number of deduplicated paths in one query
# this is only an advised value; the real limit is the smaller of this and the one we calculate
max_deduplicated_path_num=10000
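The comment on this setting means the configured value is only an upper bound: the effective limit is the minimum of this advisory setting and a value calculated from available read memory. A minimal sketch of that rule (the class, method, and per-path byte constant are hypothetical, not IoTDB's actual API):

```java
public class DeduplicatedPathLimit {

  // Hypothetical: assumed memory budget for one deduplicated path, in bytes.
  private static final long ESTIMATED_BYTES_PER_PATH = 16L * 1024;

  /**
   * The effective limit is the smaller of the configured advisory value
   * and the number of paths the free read memory can hold.
   */
  public static int effectiveLimit(int configuredMax, long freeReadMemoryBytes) {
    long calculated = freeReadMemoryBytes / ESTIMATED_BYTES_PER_PATH;
    return (int) Math.min(configuredMax, calculated);
  }
}
```

With 16 MB of free read memory, the calculated bound (1024 paths) wins over the configured 10000.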
reduce to 1000?
(resolved review thread on server/src/main/java/org/apache/iotdb/db/conf/IoTDBDescriptor.java)
Kudos, SonarCloud Quality Gate passed! 0 Bugs. No Coverage information. The version of Java (1.8.0_252) you have used to run this analysis is deprecated and we will stop accepting it from October 2020. Please update to at least Java 11.
```java
if (totalFreeMemoryForRead.addAndGet(-estimatedMemoryUsage) >= 0) {
  queryIdEstimatedMemoryMap.put(queryId, estimatedMemoryUsage);
} else {
  totalFreeMemoryForRead.addAndGet(estimatedMemoryUsage);
```
Why add the memory back here?
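The quoted code is an optimistic reserve-then-rollback on an AtomicLong: subtract the estimate first, and if the counter went negative the reservation failed, so the same amount is added back — that is what the "add" the reviewer asks about does. A self-contained sketch of the pattern, with hypothetical class and method names around the quoted fields:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class ReadMemoryPool {

  private final AtomicLong totalFreeMemoryForRead;
  private final Map<Long, Long> queryIdEstimatedMemoryMap = new ConcurrentHashMap<>();

  public ReadMemoryPool(long capacityBytes) {
    this.totalFreeMemoryForRead = new AtomicLong(capacityBytes);
  }

  /** Try to reserve memory for a query; on failure, roll the tentative subtraction back. */
  public boolean tryReserve(long queryId, long estimatedMemoryUsage) {
    if (totalFreeMemoryForRead.addAndGet(-estimatedMemoryUsage) >= 0) {
      queryIdEstimatedMemoryMap.put(queryId, estimatedMemoryUsage);
      return true;
    } else {
      // Rollback: the subtraction overdrew the pool, so return the amount.
      totalFreeMemoryForRead.addAndGet(estimatedMemoryUsage);
      return false;
    }
  }

  /** Release a query's reservation back into the pool. */
  public void release(long queryId) {
    Long reserved = queryIdEstimatedMemoryMap.remove(queryId);
    if (reserved != null) {
      totalFreeMemoryForRead.addAndGet(reserved);
    }
  }

  public long free() {
    return totalFreeMemoryForRead.get();
  }
}
```

The subtract-check-rollback sequence is lock-free: a competing query can briefly observe the counter negative, but every failed reservation restores exactly what it took.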
```java
private final AtomicLong totalFreeMemoryForRead;

// estimated memory size of one point, in bytes
private static final long POINT_ESTIMATED_SIZE = 16L;
```
This only works for non-string data types..
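The 16-byte estimate is roughly an 8-byte timestamp plus an 8-byte fixed-width value, so the estimated footprint of a query is paths × points-per-path × 16; as the reviewer notes, this underestimates TEXT series, whose values are variable-length. A hedged sketch of that arithmetic (class and method names are illustrative only):

```java
public class QueryMemoryEstimator {

  // Estimated size of one (timestamp, value) point for fixed-width types, in bytes.
  private static final long POINT_ESTIMATED_SIZE = 16L;

  /**
   * Rough per-query estimate. Invalid for TEXT series, whose
   * values are variable-length and can be far larger than 8 bytes.
   */
  public static long estimateBytes(int deduplicatedPathNum, long pointsPerPath) {
    return deduplicatedPathNum * pointsPerPath * POINT_ESTIMATED_SIZE;
  }
}
```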
I registered 10,000,000 timeseries with:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

import org.apache.iotdb.jdbc.IoTDBSQLException;

public class RegisterTimeseries {

  public static void main(String[] args) throws ClassNotFoundException, SQLException {
    Class.forName("org.apache.iotdb.jdbc.IoTDBDriver");
    try (Connection connection = DriverManager
        .getConnection("jdbc:iotdb://127.0.0.1:6667/", "root", "root");
        Statement statement = connection.createStatement()) {
      int total = 10_000_000;
      try {
        statement.execute("SET STORAGE GROUP TO root.sg1");
        for (int i = 0; i < total; i++) {
          statement.execute(
              "CREATE TIMESERIES root.sg1.d1.s" + i
                  + " WITH DATATYPE=INT64, ENCODING=RLE, COMPRESSOR=SNAPPY");
          // Print progress every 10,000 series.
          if (i % 10_000 == 0) {
            System.out.println((double) i / total * 100 + "%");
          }
        }
      } catch (IoTDBSQLException e) {
        System.out.println(e.getMessage());
      }
    } catch (IoTDBSQLException e) {
      System.out.println(e.getMessage());
    }
  }
}
```

And then call
Looks good to me~
@@ -233,6 +233,10 @@ write_read_free_memory_proportion=6:3:1
# primitive array size (length of each array) in array pool
primitive_array_size=128

# the allowed maximum number of deduplicated paths in one query
# this is only an advised value; the real limit is the smaller of this and the one we calculate
max_deduplicated_path_num=10000
From my perspective, 1000 is better. XD
@@ -323,7 +327,7 @@ merge_throughput_mb_per_sec=16
meta_data_cache_enable=true
# Read memory Allocation Ratio: ChunkMetadataCache, ChunkCache, TimeSeriesMetadataCache and Free Memory Used in Query.
# The parameter form is a:b:c:d, where a, b, c and d are integers. for example: 1:1:1:1 , 6:10:5:15
chunkmeta_chunk_timeseriesmeta_free_memory_proportion=1:1:1:7
chunkmeta_chunk_timeseriesmeta_free_memory_proportion=1:1:1:3:4
The value now has 5 fields; maybe the comment and the attribute name need to change too?
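The mismatch this reviewer points out is that `1:1:1:3:4` has five fields while the comment still describes four (a:b:c:d). A small sketch of how such a ratio string can be split into per-region byte budgets — this is an illustration, not IoTDB's actual parser:

```java
public class MemoryProportion {

  /** Split totalBytes according to an "a:b:c:..." ratio string, any number of fields. */
  public static long[] allocate(String proportion, long totalBytes) {
    String[] parts = proportion.split(":");
    long sum = 0;
    for (String p : parts) {
      sum += Long.parseLong(p.trim());
    }
    long[] result = new long[parts.length];
    for (int i = 0; i < parts.length; i++) {
      // Integer division: each share is rounded down to a whole multiple.
      result[i] = totalBytes / sum * Long.parseLong(parts[i].trim());
    }
    return result;
  }
}
```

For example, `allocate("1:1:1:3:4", 1000)` yields budgets of 100, 100, 100, 300, and 400 bytes.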
I agree with that. The comments need to be modified.
There are conflicts now, please fix them.
LGTM. Please fix some small issues.
@@ -323,7 +327,7 @@ merge_throughput_mb_per_sec=16
meta_data_cache_enable=true
# Read memory Allocation Ratio: ChunkMetadataCache, ChunkCache, TimeSeriesMetadataCache and Free Memory Used in Query.
# The parameter form is a:b:c:d, where a, b, c and d are integers. for example: 1:1:1:1 , 6:10:5:15
chunkmeta_chunk_timeseriesmeta_free_memory_proportion=1:1:1:7
chunkmeta_chunk_timeseriesmeta_free_memory_proportion=1:1:1:3:4
I agree with that. The comments need to be modified.
```java
/**
 * Make 'SLIMIT&SOFFSET' take effect by trimming the suffixList and aggregations of the
 * selectOperator.
 *
 * @param seriesLimit is ensured to be positive integer
 * @param seriesOffset is ensured to be non-negative integer
 */
private void slimitTrim(SelectOperator select, int seriesLimit, int seriesOffset)
private void slimitTrim(SelectOperator select, int seriesOffset)
    throws LogicalOptimizeException {
```
From my perspective, the limit is applied in concatSelect(), while this method only trims the offset of the aggregations.
If I'm right, please rename the method and update its Javadoc.
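If the reviewer is right that SLIMIT is already applied during path concatenation, this method only has to drop the first seriesOffset entries from the select lists. A minimal sketch of that offset-only trim on a plain list (the class and generic helper are hypothetical, not the PR's actual code):

```java
import java.util.ArrayList;
import java.util.List;

public class SOffsetTrim {

  /**
   * Drop the first seriesOffset entries; reject an offset that
   * skips past every selected series.
   */
  public static <T> List<T> trimOffset(List<T> suffixList, int seriesOffset) {
    if (seriesOffset >= suffixList.size()) {
      throw new IllegalArgumentException(
          "SOFFSET " + seriesOffset + " exceeds the number of selected series");
    }
    // Copy the sublist so the result does not alias the original list.
    return new ArrayList<>(suffixList.subList(seriesOffset, suffixList.size()));
  }
}
```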
Fixed.
```java
/**
 * Whether the dynamic parameter adapter is enabled.
 */
private boolean enableParameterAdapter = true;
private boolean enableParameterAdapter = false;
```
Why set it to false?
```java
@@ -777,18 +777,26 @@ boolean checkStorageGroupByPath(PartialPath path) {
 *
 * @param prefixPath a prefix path or a full path, may contain '*'.
 */
List<PartialPath> getAllTimeseriesPathWithAlias(PartialPath prefixPath) throws MetadataException {
Pair<List<PartialPath>, Integer> getAllTimeseriesPathWithAlias(PartialPath prefixPath, int limit, int offset) throws MetadataException {
```
Please update the Javadoc...
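The new signature pushes limit and offset down into the metadata scan and returns a Pair of the page of paths plus an integer count, so that `select * from root` never has to materialize every path at once. A hedged sketch of that pagination contract over a plain list (Pair is approximated with a small nested class; the real IoTDB types and the meaning of the returned Integer may differ):

```java
import java.util.ArrayList;
import java.util.List;

public class PathPagination {

  /** Minimal stand-in for IoTDB's Pair utility class. */
  public static final class Pair<L, R> {
    public final L left;
    public final R right;

    public Pair(L left, R right) {
      this.left = left;
      this.right = right;
    }
  }

  /**
   * Return at most `limit` paths starting at `offset`, plus the number
   * of matching paths consumed (skipped or returned) before stopping.
   */
  public static Pair<List<String>, Integer> page(List<String> allPaths, int limit, int offset) {
    List<String> pageList = new ArrayList<>();
    int seen = 0;
    for (String p : allPaths) {
      if (seen >= offset && pageList.size() < limit) {
        pageList.add(p);
      }
      seen++;
      // Stop scanning as soon as the page is full: this early exit is
      // what keeps a huge path tree from being walked in full.
      if (pageList.size() >= limit) {
        break;
      }
    }
    return new Pair<>(pageList, seen);
  }
}
```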