string terms is very slow when there are millions of buckets #30117

Closed
pakein opened this issue Apr 25, 2018 · 3 comments · Fixed by #30166

pakein commented Apr 25, 2018

Elasticsearch 6.1.3

In GlobalOrdinalsStringTermsAggregator, when the terms aggregation is nested under another bucket aggregation and both the number of parent buckets and valueCount exceed roughly 100 thousand, building the final buckets becomes extremely slow because the number of iterations of this loop explodes:

for (long globalTermOrd = 0; globalTermOrd < valueCount; ++globalTermOrd)

My temporary workaround is to loop over bucketOrds instead:

```
// Reflection access to the private "keys" array of LongHash (bucketOrds),
// which maps a dense bucket ordinal back to its global term ordinal.
private static Field keysField;
static {
    try {
        keysField = LongHash.class.getDeclaredField("keys");
        keysField.setAccessible(true);
    } catch (Exception e) {
        LOGGER.error(e.getMessage(), e);
    }
}

// Only iterate the collected bucket ordinals instead of every global ordinal.
if (bucketOrds != null && bucketOrds.size() < valueCount && bucketCountThresholds.getMinDocCount() > 0) {
    try {
        loop = false;
        LongArray keys = (LongArray) keysField.get(bucketOrds);
        for (long i = 0; i < keys.size(); i++) {
            // i is the dense bucket ordinal assigned by bucketOrds
            int bucketDocCount = bucketDocCount(i);
            if (bucketDocCount == 0) {
                continue;
            }
            // keys.get(i) holds the global term ordinal for this bucket
            long globalTermOrd = keys.get(i);
            if (includeExclude != null && !acceptedGlobalOrdinals.get(globalTermOrd)) {
                continue;
            }
            otherDocCount += bucketDocCount;
            spare.globalOrd = globalTermOrd;
            spare.bucketOrd = i;
            spare.docCount = bucketDocCount;
            if (bucketCountThresholds.getShardMinDocCount() <= spare.docCount) {
                spare = ordered.insertWithOverflow(spare);
                if (spare == null) {
                    spare = new OrdBucket(-1, 0, null, showTermDocCountError, 0);
                }
            }
        }
    } catch (IllegalAccessException e) {
        LOGGER.error(e.getMessage(), e);
        loop = true; // fall back to the original full scan over global ordinals
    }
}
```
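For comparison, the same idea can be sketched without reflection, since LongHash already maps a dense id back to its key. This is a minimal sketch that reuses the surrounding fields from the snippet above and assumes LongHash.get(id) returns the key stored for that dense id; it is an illustration, not the actual patch in #30166:

```
// Sketch: iterate only the dense bucket ordinals that collected documents,
// and recover each bucket's global term ordinal from the LongHash itself.
// Assumes LongHash.get(id) returns the key (global ordinal) stored at that id.
if (bucketOrds != null && bucketOrds.size() < valueCount && bucketCountThresholds.getMinDocCount() > 0) {
    for (long bucketOrd = 0; bucketOrd < bucketOrds.size(); bucketOrd++) {
        long globalTermOrd = bucketOrds.get(bucketOrd); // global ordinal for this bucket
        if (includeExclude != null && !acceptedGlobalOrdinals.get(globalTermOrd)) {
            continue;
        }
        int bucketDocCount = bucketDocCount(bucketOrd);
        if (bucketDocCount == 0) {
            continue;
        }
        otherDocCount += bucketDocCount;
        spare.globalOrd = globalTermOrd;
        spare.bucketOrd = bucketOrd;
        spare.docCount = bucketDocCount;
        if (bucketCountThresholds.getShardMinDocCount() <= spare.docCount) {
            spare = ordered.insertWithOverflow(spare);
            if (spare == null) {
                spare = new OrdBucket(-1, 0, null, showTermDocCountError, 0);
            }
        }
    }
}
```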
elasticmachine (Collaborator) commented:
Pinging @elastic/es-search-aggs

jpountz (Contributor) commented Apr 25, 2018

I agree this is a performance bug and defeats the purpose of using a hash table to collect matching ordinals.

jimczi (Contributor) commented Apr 26, 2018

Great catch @pakein! I modified your change a bit and opened #30166.

@jimczi removed the "help wanted" and "adoptme" labels Apr 26, 2018
@jimczi self-assigned this Apr 26, 2018
jimczi added a commit that referenced this issue Apr 27, 2018
The global ordinals terms aggregator has an option to remap global ordinals to
dense ordinals that match the request. This mode is automatically picked when the terms
aggregator is a child of another bucket aggregator, or when it needs to defer buckets to an
aggregation that is used in the ordering of the terms.
However, when building the final buckets, this aggregator loops over all possible global ordinals
rather than using the hash map that was built to remap the ordinals.
For fields with high cardinality this is highly inefficient and can lead to slow responses even
when the number of terms that match the query is low.
This change fixes the performance issue by using the hash table of matching ordinals to perform
the pruning of the final buckets for the terms and significant_terms aggregations.
I ran a simple benchmark with 1M documents, each containing 0 to 10 keywords randomly selected among 1M unique terms.
This field is used to perform a multi-level terms aggregation, using Rally to collect the response times.
The aggregation below is an example of the two-level terms aggregation that was used in the benchmark:

```
"aggregations":{
   "1":{
      "terms":{
         "field":"keyword"
      },
      "aggregations":{
         "2":{
            "terms":{
               "field":"keyword"
            }
         }
      }
   }
}
```

| Levels of aggregation | 50th percentile, master (ms) | 50th percentile, patch (ms) |
| --- | --- | --- |
| 2 | 640.41 | 577.499 |
| 3 | 2239.66 | 600.154 |
| 4 | 14141.2 | 703.512 |

Closes #30117
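Conceptually, the change moves the bucket-building pass from a scan of every global ordinal to a scan of only the collected ordinals. A simplified before/after, using the names from the snippets earlier in this thread rather than the literal patch:

```
// Before: cost grows with the field's cardinality (valueCount),
// even when only a handful of terms actually matched.
for (long globalTermOrd = 0; globalTermOrd < valueCount; ++globalTermOrd) {
    long bucketOrd = bucketOrds == null ? globalTermOrd : bucketOrds.find(globalTermOrd);
    // ... prune by doc count and insert into the shard-size priority queue ...
}

// After: cost grows only with the number of buckets that collected documents.
for (long bucketOrd = 0; bucketOrd < bucketOrds.size(); ++bucketOrd) {
    long globalTermOrd = bucketOrds.get(bucketOrd);
    // ... same pruning and priority-queue insertion ...
}
```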
jimczi added a commit that referenced this issue Apr 27, 2018
jimczi added a commit that referenced this issue May 16, 2018