Group CQL same token range keys into a single query [cql-tests] [tp-tests]

Fixes #3863

Signed-off-by: Oleksandr Porunov <alexandr.porunov@gmail.com>
porunov committed Jul 14, 2023
1 parent 666a81c commit f40ad5b
Showing 28 changed files with 1,639 additions and 193 deletions.
22 changes: 22 additions & 0 deletions docs/changelog.md
@@ -328,6 +328,28 @@ and optimize this method execution to execute all the slice queries for all the
(for example, by using asynchronous programming, slice queries grouping, multi-threaded execution, or any other technique which
is efficient for the respective storage adapter).

##### Added possibility to group multiple slice queries together via CQL storage backend

Starting from JanusGraph 1.0.0, the CQL storage implementation groups queries which fetch properties with `Cardinality.SINGLE` together
into the same CQL query. This behaviour can be disabled by setting `storage.cql.grouping.slice-allowed = false`.

The CQL storage implementation is also able to group queries for different partition keys together when the keys belong to the same
token range. This behaviour is disabled by default, but can be enabled via `storage.cql.grouping.keys-allowed = true`.
Notice that enabling the keys grouping feature increases the result size of such queries because each CQL query which groups
multiple keys together needs to return the key with every row. Moreover, it could potentially lead to a less balanced load
on the storage cluster. However, it reduces the number of CQL queries sent, which may positively influence throughput in some cases
as well as the pricing of some serverless deployments. We recommend benchmarking each use case before enabling keys grouping (`storage.cql.grouping.keys-allowed`).

Notice that the ScyllaDB storage backend has a limit on distinct clustering key restrictions per query, which is set to `20` by default.
It is required to ensure that `storage.cql.grouping.keys-limit` and `storage.cql.grouping.slice-limit` are less than or equal to the
number of distinct clustering key restrictions allowed by the storage backend. On ScyllaDB it is possible
to configure this restriction using the [max-partition-key-restrictions-per-query](https://enterprise.docs.scylladb.com/branch-2022.2/faq.html#how-can-i-change-the-maximum-number-of-in-restrictions)
configuration option (default `100`).
On AstraDB these thresholds have to be changed by AstraDB support via the `partition_keys_in_select_failure_threshold` and `in_select_cartesian_product_failure_threshold`
configurations (https://docs.datastax.com/en/astra-serverless/docs/plan/planning.html#_cassandra_yaml).

See the additional properties which control grouping under the configuration namespace `storage.cql.grouping`.
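
For illustration, a minimal sketch of opening a CQL-backed graph with these options set programmatically; the hostname and the decision to enable keys grouping are assumptions for the example rather than recommendations:

```java
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class GroupingConfigExample {

    public static void main(String[] args) {
        // Sketch only: slice grouping is already enabled by default and is set
        // here explicitly; keys grouping is disabled by default and opted into.
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "cql")
                .set("storage.hostname", "127.0.0.1") // assumption: a locally reachable CQL node
                .set("storage.cql.grouping.slice-allowed", true)
                .set("storage.cql.grouping.keys-allowed", true) // benchmark before enabling in production
                .open();
        try {
            // ... run traversals; eligible slice and key queries are grouped into IN queries ...
        } finally {
            graph.close();
        }
    }
}
```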

##### Removal of deprecated classes/methods/functionalities

###### Methods
36 changes: 32 additions & 4 deletions docs/configs/janusgraph-cfg.md
@@ -461,10 +461,6 @@ CQL storage backend options
| storage.cql.request-timeout | Timeout for CQL requests in milliseconds. See DataStax Java Driver option `basic.request.timeout` for more information. | Long | 12000 | MASKABLE |
| storage.cql.session-leak-threshold | The maximum number of live sessions that are allowed to coexist in a given VM until the warning starts to log for every new session. If the value is less than or equal to 0, the feature is disabled: no warning will be issued. See DataStax Java Driver option `advanced.session-leak.threshold` for more information. | Integer | (no default value) | MASKABLE |
| storage.cql.session-name | Default name for the Cassandra session | String | JanusGraph Session | MASKABLE |
| storage.cql.slice-grouping-allowed | If `true` this allows multiple Slice queries which are allowed to be performed as non-range queries (i.e. direct equality operation) to be grouped together into a single CQL query via `IN` operator. Notice, currently only operations to fetch properties with Cardinality.SINGLE are allowed to be performed as non-range queries (edges fetching or properties with Cardinality SET or LIST won't be grouped together).
If this option is `false` then each Slice query will be executed in a separate asynchronous CQL query even when grouping is allowed. | Boolean | true | MASKABLE |
| storage.cql.slice-grouping-limit | Maximum number of slice queries grouped together into a single CQL query.
This option is used only when `storage.cql.slice-grouping-allowed` is `true`. | Integer | 100 | MASKABLE |
| storage.cql.speculative-retry | The speculative retry policy. One of: NONE, ALWAYS, <X>percentile, <N>ms. | String | (no default value) | FIXED |
| storage.cql.ttl-enabled | Whether TTL should be enabled or not. Must be turned off if the storage does not support TTL. Amazon Keyspace, for example, does not support TTL by default unless otherwise enabled. | Boolean | true | LOCAL |
| storage.cql.use-external-locking | True to prevent JanusGraph from using its own locking mechanism. Setting this to true eliminates redundant checks when using an external locking mechanism outside of JanusGraph. Be aware that when use-external-locking is set to true, that failure to employ a locking algorithm which locks all columns that participate in a transaction upfront and unlocks them when the transaction ends, will result in a 'read uncommitted' transaction isolation level guarantee. If set to true without an appropriate external locking mechanism in place side effects such as dirty/non-repeatable/phantom reads should be expected. | Boolean | false | MASKABLE |
@@ -482,6 +478,38 @@ Configuration options for CQL executor service which is used to process deserial
| storage.cql.executor-service.max-pool-size | Maximum pool size for executor service. Ignored for `fixed` and `cached` executor services. May be ignored if custom executor service is used (depending on the implementation of the executor service). | Integer | 2147483647 | LOCAL |
| storage.cql.executor-service.max-shutdown-wait-time | Max shutdown wait time in milliseconds for executor service threads to be finished during shutdown. After this time threads will be interrupted (signalled with interrupt) without any additional wait time. | Long | 60000 | LOCAL |

### storage.cql.grouping
Configuration options for controlling CQL queries grouping


| Name | Description | Datatype | Default Value | Mutability |
| ---- | ---- | ---- | ---- | ---- |
| storage.cql.grouping.keys-allowed | If `true` this allows multiple partition keys to be grouped together into a single CQL query via the `IN` operator, based on the provided keys grouping strategy (usually grouping is done by same token range, but custom implementations may also involve shard ids).
Notice that any CQL query grouped with more than 1 key will have to return the row key for every fetched column.
This option is useful when fewer CQL queries are desired for read requests at the expense of fetching more data (the partition key is returned with each fetched value).
Notice that different storage backends may execute multi-partition `IN` queries differently (including, but not limited to, how checksum queries are sent for different consistency levels, processing node CPU usage, disk access patterns, etc.). Thus, proper benchmarking is needed to determine whether keys grouping is useful on a case-by-case basis.
This option can be enabled only for storage backends which support `PER PARTITION LIMIT`.
If this option is `false` then each partition key will be executed in a separate asynchronous CQL query, even when multiple keys from the same token range are queried.
Notice that the default grouping strategy does not take shards into account. Thus, it might be inefficient with the ScyllaDB storage backend. A ScyllaDB-specific keys grouping strategy should be implemented after the resolution of [ticket #232](https://github.com/scylladb/java-driver/issues/232). | Boolean | false | MASKABLE |
| storage.cql.grouping.keys-class | Full class path of the keys grouping execution strategy. The class should implement the `org.janusgraph.diskstorage.cql.strategy.GroupedExecutionStrategy` interface and have a public constructor with two arguments, `org.janusgraph.diskstorage.configuration.Configuration` and `com.datastax.oss.driver.api.core.CqlSession`.
Shortcuts available:
- `default` - groups partition keys which belong to the same token range. Notice, this strategy does not take shards into account. Thus, it might be inefficient with the ScyllaDB storage backend.

This option takes effect only when `storage.cql.grouping.keys-allowed` is `true`. | String | default | MASKABLE |
| storage.cql.grouping.keys-limit | Maximum number of keys belonging to the same token range that are grouped together into a single CQL query. If more keys from the same token range are queried, they are grouped into separate CQL queries.
Notice, for ScyllaDB this option should not exceed the maximum number of distinct clustering key restrictions per query, which can be changed via the ScyllaDB configuration option `max-partition-key-restrictions-per-query` (https://enterprise.docs.scylladb.com/branch-2022.2/faq.html#how-can-i-change-the-maximum-number-of-in-restrictions). For AstraDB this limit is set to 20 and is usually fixed. However, you can ask customer support to change the default threshold to your desired configuration via the `partition_keys_in_select_failure_threshold` and `in_select_cartesian_product_failure_threshold` threshold configurations (https://docs.datastax.com/en/astra-serverless/docs/plan/planning.html#_cassandra_yaml).
Ensure that your storage backend allows at least as many `IN` selectors as set via this configuration.
This option takes effect only when `storage.cql.grouping.keys-allowed` is `true`. | Integer | 20 | MASKABLE |
| storage.cql.grouping.keys-min | Minimum number of keys to consider for grouping. Grouping is skipped for any multi-key query which has fewer keys than this (i.e. a separate CQL query is executed for each key in that case).
Usually this configuration should always be set to `2`. It is useful to increase the value only when queries with more keys should not be grouped but executed separately, to increase parallelism at the expense of network overhead.
This option takes effect only when `storage.cql.grouping.keys-allowed` is `true`. | Integer | 2 | MASKABLE |
| storage.cql.grouping.slice-allowed | If `true` this allows multiple slice queries which can be performed as non-range queries (i.e. direct equality operations) to be grouped together into a single CQL query via the `IN` operator. Notice, currently only operations fetching properties with Cardinality.SINGLE can be performed as non-range queries (edge fetching and properties with Cardinality SET or LIST won't be grouped together).
If this option is `false` then each slice query will be executed in a separate asynchronous CQL query, even when grouping is allowed. | Boolean | true | MASKABLE |
| storage.cql.grouping.slice-limit | Maximum number of slice queries that are grouped together into a single CQL query.
Notice, for ScyllaDB this option should not exceed the maximum number of distinct clustering key restrictions per query, which can be changed via the ScyllaDB configuration option `max-partition-key-restrictions-per-query` (https://enterprise.docs.scylladb.com/branch-2022.2/faq.html#how-can-i-change-the-maximum-number-of-in-restrictions). For AstraDB this limit is set to 20 and is usually fixed. However, you can ask customer support to change the default threshold to your desired configuration via the `partition_keys_in_select_failure_threshold` and `in_select_cartesian_product_failure_threshold` threshold configurations (https://docs.datastax.com/en/astra-serverless/docs/plan/planning.html#_cassandra_yaml).
Ensure that your storage backend allows at least as many `IN` selectors as set via this configuration.
This option is used only when `storage.cql.grouping.slice-allowed` is `true`. | Integer | 20 | MASKABLE |
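
As a hypothetical illustration of keeping the limits above aligned with the storage backend, the following sketch assumes a backend that permits up to 100 `IN` restrictions per query (for example, ScyllaDB with a raised `max-partition-key-restrictions-per-query`); the chosen values are examples, not recommendations:

```java
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class GroupingLimitsExample {

    public static void main(String[] args) {
        JanusGraph graph = JanusGraphFactory.build()
                .set("storage.backend", "cql")
                .set("storage.cql.grouping.keys-allowed", true)
                // Both limits are kept below the assumed backend limit of 100 IN restrictions per query.
                .set("storage.cql.grouping.keys-limit", 50)
                .set("storage.cql.grouping.slice-limit", 50)
                // Group only when at least two keys are queried (the usual setting).
                .set("storage.cql.grouping.keys-min", 2)
                .open();
        graph.close();
    }
}
```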

### storage.cql.internal
Advanced configuration of internal DataStax driver. Notice, all available configurations will be composed in the order. Non specified configurations will be skipped. By default only base configuration is enabled (which has the smallest priority. It means that you can overwrite any configuration used in base programmatic configuration by using any other configuration type). The configurations are composed in the next order (sorted by priority in descending order): `file-configuration`, `resource-configuration`, `string-configuration`, `url-configuration`, `base-programmatic-configuration` (which is controlled by `base-programmatic-configuration-enabled` property). Configurations with higher priority always overwrite configurations with lower priority. I.e. if the same configuration parameter is used in both `file-configuration` and `string-configuration` the configuration parameter from `file-configuration` will be used and configuration parameter from `string-configuration` will be ignored. See available configuration options and configurations structure here: https://docs.datastax.com/en/developer/java-driver/4.13/manual/core/configuration/reference/

janusgraph-core/src/main/java/org/janusgraph/diskstorage/EntryMetaData.java
@@ -29,7 +29,8 @@ public enum EntryMetaData {
 
     TTL(Integer.class, false, data -> data instanceof Integer && ((Integer) data) >= 0L),
     VISIBILITY(String.class, true, data -> data instanceof String && StringEncoding.isAsciiString((String) data)),
-    TIMESTAMP(Long.class, false, data -> data instanceof Long);
+    TIMESTAMP(Long.class, false, data -> data instanceof Long),
+    ROW_KEY(StaticBuffer.class, false, data -> data instanceof StaticBuffer);
 
     public static final java.util.Map<EntryMetaData,Object> EMPTY_METADATA = Collections.emptyMap();
 
janusgraph-core/src/main/java/org/janusgraph/diskstorage/util/CompletableFutureUtil.java
@@ -18,7 +18,9 @@
 
 import java.util.Collection;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.Map;
+import java.util.Set;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.CompletionException;
 import java.util.concurrent.ExecutionException;
@@ -63,14 +65,17 @@ public static <T> T get(CompletableFuture<T> future){
     public static <K,V> Map<K,V> unwrap(Map<K,CompletableFuture<V>> futureMap) throws Throwable{
         Map<K, V> resultMap = new HashMap<>(futureMap.size());
         Throwable firstException = null;
+        Set<Throwable> uniqueExceptions = null;
         for(Map.Entry<K, CompletableFuture<V>> entry : futureMap.entrySet()){
             try{
                 resultMap.put(entry.getKey(), entry.getValue().get());
             } catch (Throwable throwable){
+                Throwable rootException = unwrapExecutionException(throwable);
                 if(firstException == null){
-                    firstException = throwable;
-                } else {
-                    firstException.addSuppressed(throwable);
+                    firstException = rootException;
+                    uniqueExceptions = new HashSet<>(1);
+                } else if(firstException != rootException && uniqueExceptions.add(rootException)){
+                    firstException.addSuppressed(rootException);
                 }
             }
         }
@@ -84,14 +89,17 @@ public static <K,V> Map<K,V> unwrap(Map<K,CompletableFuture<V>> futureMap) throws Throwable{
     public static <K,V,R> Map<K,Map<V, R>> unwrapMapOfMaps(Map<K, Map<V, CompletableFuture<R>>> futureMap) throws Throwable{
         Map<K, Map<V, R>> resultMap = new HashMap<>(futureMap.size());
         Throwable firstException = null;
+        Set<Throwable> uniqueExceptions = null;
         for(Map.Entry<K, Map<V, CompletableFuture<R>>> entry : futureMap.entrySet()){
             try{
                 resultMap.put(entry.getKey(), unwrap(entry.getValue()));
             } catch (Throwable throwable){
+                Throwable rootException = unwrapExecutionException(throwable);
                 if(firstException == null){
-                    firstException = throwable;
-                } else {
-                    firstException.addSuppressed(throwable);
+                    firstException = rootException;
+                    uniqueExceptions = new HashSet<>(1);
+                } else if(firstException != rootException && uniqueExceptions.add(rootException)){
+                    firstException.addSuppressed(rootException);
                 }
             }
         }
@@ -104,14 +112,17 @@ public static <K,V,R> Map<K,Map<V, R>> unwrapMapOfMaps(Map<K, Map<V, CompletableFuture<R>>> futureMap) throws Throwable{
 
     public static <V> void awaitAll(Collection<CompletableFuture<V>> futureCollection) throws Throwable{
         Throwable firstException = null;
+        Set<Throwable> uniqueExceptions = null;
         for(CompletableFuture<V> future : futureCollection){
             try{
                 future.get();
             } catch (Throwable throwable){
+                Throwable rootException = unwrapExecutionException(throwable);
                 if(firstException == null){
-                    firstException = throwable;
-                } else {
-                    firstException.addSuppressed(throwable);
+                    firstException = rootException;
+                    uniqueExceptions = new HashSet<>(1);
+                } else if(firstException != rootException && uniqueExceptions.add(rootException)){
+                    firstException.addSuppressed(rootException);
                 }
             }
         }
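
To make the effect of the change concrete, here is a hypothetical caller-side sketch (not part of this commit). It assumes `unwrap` rethrows the first root cause and, with this change, attaches only distinct additional root causes as suppressed exceptions:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

import org.janusgraph.diskstorage.util.CompletableFutureUtil;

public class UnwrapSketch {

    public static void main(String[] args) {
        Map<String, CompletableFuture<Integer>> futures = new HashMap<>();
        futures.put("ok", CompletableFuture.completedFuture(42));

        // Two futures failing with the same root cause: the shared cause is now reported once
        // instead of being duplicated as a suppressed exception.
        RuntimeException sharedCause = new RuntimeException("backend unavailable");
        futures.put("fail-1", failedFuture(sharedCause));
        futures.put("fail-2", failedFuture(sharedCause));

        try {
            CompletableFutureUtil.unwrap(futures);
        } catch (Throwable t) {
            System.out.println(t.getMessage());           // backend unavailable
            System.out.println(t.getSuppressed().length); // 0 - the duplicate root cause is skipped
        }
    }

    private static <T> CompletableFuture<T> failedFuture(Throwable cause) {
        CompletableFuture<T> future = new CompletableFuture<>();
        future.completeExceptionally(cause);
        return future;
    }
}
```
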
janusgraph-core/src/main/java/org/janusgraph/diskstorage/util/StaticArrayEntryList.java
@@ -654,6 +654,8 @@ public static MetaDataSerializer getSerializer(EntryMetaData meta) {
                 return LongSerializer.INSTANCE;
             case VISIBILITY:
                 return ASCIIStringSerializer.INSTANCE;
+            case ROW_KEY:
+                return StaticBufferSerializer.INSTANCE;
             default: throw new AssertionError("Unexpected meta data: " + meta);
         }
     }
@@ -736,4 +738,38 @@ public String read(byte[] data, int startPos) {
         }
     }
 
+    private enum StaticBufferSerializer implements MetaDataSerializer<StaticBuffer> {
+
+        INSTANCE;
+
+        private static final StaticBuffer EMPTY_STATIC_BUFFER = StaticArrayBuffer.of(new byte[0]);
+
+        @Override
+        public int getByteLength(StaticBuffer value) {
+            return value.length() + 4;
+        }
+
+        @Override
+        public void write(byte[] data, int startPos, StaticBuffer value) {
+            int length = value.length();
+            StaticArrayBuffer.putInt(data, startPos, length);
+            if(length > 0){
+                startPos+=4;
+                for(int i=0; i<length; i++){
+                    data[startPos++] = value.getByte(i);
+                }
+            }
+        }
+
+        @Override
+        public StaticBuffer read(byte[] data, int startPos) {
+            int bufSize = StaticArrayBuffer.getInt(data, startPos);
+            if(bufSize == 0){
+                return EMPTY_STATIC_BUFFER;
+            }
+            startPos+=4;
+            return new StaticArrayBuffer(data, startPos, startPos+bufSize);
+        }
+    }
+
 }
janusgraph-cql/src/main/java/org/janusgraph/diskstorage/cql/CQLColValGetter.java
@@ -18,10 +18,15 @@
 import io.vavr.Tuple3;
 import org.janusgraph.diskstorage.EntryMetaData;
 import org.janusgraph.diskstorage.StaticBuffer;
+import org.janusgraph.diskstorage.util.StaticArrayBuffer;
 import org.janusgraph.diskstorage.util.StaticArrayEntry.GetColVal;
 
+import java.nio.ByteBuffer;
+
 public class CQLColValGetter implements GetColVal<Tuple3<StaticBuffer, StaticBuffer, Row>, StaticBuffer> {
 
+    private static final StaticArrayBuffer EMPTY_KEY = StaticArrayBuffer.of(new byte[0]);
+
     private final EntryMetaData[] schema;
 
     CQLColValGetter(final EntryMetaData[] schema) {
@@ -50,6 +55,9 @@ public Object getMetaData(final Tuple3<StaticBuffer, StaticBuffer, Row> tuple, f
                 return tuple._3.getLong(CQLKeyColumnValueStore.WRITETIME_COLUMN_NAME);
             case TTL:
                 return tuple._3.getInt(CQLKeyColumnValueStore.TTL_COLUMN_NAME);
+            case ROW_KEY:
+                ByteBuffer rawKey = tuple._3.getByteBuffer(CQLKeyColumnValueStore.KEY_COLUMN_NAME);
+                return rawKey == null ? EMPTY_KEY : StaticArrayBuffer.of(rawKey);
             default:
                 throw new UnsupportedOperationException("Unsupported meta data: " + metaData);
         }
(Diffs for the remaining changed files are not shown.)
