Internal: Filter cache size limit not honored for 32GB or over #6268

Closed · danp60 opened this issue May 21, 2014 · 12 comments

danp60 commented May 21, 2014

Hi,

We are running a 6-node Elasticsearch 1.1.1 cluster with 256GB of RAM and 96GB JVM heaps. I've noticed that when I set the filter cache size to 32GB or over with this command:

curl -XPUT "http://localhost:9200/_cluster/settings" -d'
{
    "transient" : {
       "indices.cache.filter.size" : "50%"
    }
}'

The filter cache keeps growing above and beyond the indicated limit. The relevant node stats show that the filter cache is about 69GB, which is well over the configured limit of 48GB (50% of the 96GB heap):

"filter_cache" : {
    "memory_size_in_bytes" : 74550217274,
    "evictions" : 8665179
},

I've enabled debug logging on the node, and it looks like the cache is being created with the correct values:

[2014-05-21 00:31:57,215][DEBUG][indices.cache.filter     ] [ess02-006] using [node] weighted filter cache with size [50%], actual_size [47.9gb], expire [null], clean_interval [1m]

What's strange is that when I set the limit to 31.9GB, it is enforced, which leads me to believe there is some sort of integer overflow going on.

Thanks,
Daniel

danp60 (Author) commented May 21, 2014

Hi,
I dug a little deeper into the caching logic, and I think I have found the root cause. The class IndicesFilterCache sets concurrencyLevel to a hardcoded 16:

private void buildCache() {
    CacheBuilder<WeightedFilterCache.FilterCacheKey, DocIdSet> cacheBuilder = CacheBuilder.newBuilder()
            .removalListener(this)
            .maximumWeight(sizeInBytes).weigher(new WeightedFilterCache.FilterCacheValueWeigher());

    // defaults to 4, but this is a busy map for all indices, increase it a bit
    cacheBuilder.concurrencyLevel(16);

    if (expire != null) {
        cacheBuilder.expireAfterAccess(expire.millis(), TimeUnit.MILLISECONDS);
    }

    cache = cacheBuilder.build();
}

https://github.com/elasticsearch/elasticsearch/blob/9ed34b5a9e9769b1264bf04d9b9a674794515bc6/src/main/java/org/elasticsearch/indices/cache/filter/IndicesFilterCache.java#L116

In the Guava libraries, the eviction code is as follows:

void evictEntries() {
    if (!map.evictsBySize()) {
        return;
    }

    drainRecencyQueue();
    while (totalWeight > maxSegmentWeight) {
        ReferenceEntry<K, V> e = getNextEvictable();
        if (!removeEntry(e, e.getHash(), RemovalCause.SIZE)) {
            throw new AssertionError();
        }
    }
}

https://code.google.com/p/guava-libraries/source/browse/guava/src/com/google/common/cache/LocalCache.java#2659

Since totalWeight is an int and maxSegmentWeight is a long set to maxWeight / concurrencyLevel, when maxWeight is 32GB or above the value of maxSegmentWeight ends up greater than Integer.MAX_VALUE, so the check

while (totalWeight > maxSegmentWeight) {

can never be true and eviction never runs.
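
For illustration, here is a quick, standalone check of the arithmetic behind the overflow, assuming the hardcoded 16 segments and a weight of 1 unit per byte as above (illustrative only, not Elasticsearch or Guava code):

public class OverflowCheck {
    public static void main(String[] args) {
        long maxWeight = 32L * 1024 * 1024 * 1024;  // 32GB configured cache limit, in bytes
        long maxSegmentWeight = maxWeight / 16;     // per-segment limit: 2,147,483,648 = 2^31
        // Integer.MAX_VALUE is 2,147,483,647, so an int totalWeight can never exceed maxSegmentWeight
        System.out.println(maxSegmentWeight > Integer.MAX_VALUE); // prints: true
    }
}

Anything below 32GB keeps maxWeight / 16 under Integer.MAX_VALUE, which matches the observation that a 31.9GB limit is enforced correctly.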

jpountz (Contributor) commented May 22, 2014

Wow, good catch! I think it would make sense to file a bug with Guava?

jpountz self-assigned this May 22, 2014

nik9000 (Member) commented May 22, 2014

> Wow, good catch! I think it would make sense to file a bug with Guava?

Indeed!

I'd file that with Guava but also clamp the size of the cache in Elasticsearch to 32GB - 1 for the time being.

As an aside, I imagine 96GB heaps cause super long GC pause times on HotSpot.
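
A minimal sketch of the clamp idea mentioned above, assuming the 100MB of headroom described in the eventual commit (hypothetical helper, not the actual Elasticsearch patch):

class CacheSizeClamp {
    // ~31.9GB: 100MB of headroom below 32GB so that maxWeight / 16 segments stays under Integer.MAX_VALUE
    static final long MAX_SAFE_WEIGHT = 32L * 1024 * 1024 * 1024 - 100L * 1024 * 1024;

    // Cap whatever size the user configured at the largest value Guava can enforce correctly
    static long clamp(long requestedSizeInBytes) {
        return Math.min(requestedSizeInBytes, MAX_SAFE_WEIGHT);
    }
}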

jpountz (Contributor) commented May 22, 2014

+1

nik9000 (Member) commented May 22, 2014

I've got the code open and have a few free moments so I can work on it if no one else wants it.

jpountz (Contributor) commented May 22, 2014

That works for me; feel free to ping me when it's ready and you want a review.

kimchy (Member) commented May 22, 2014

> Wow, good catch! I think it would make sense to file a bug with Guava?

Huge ++! @danp60, when you file the bug with Guava, can you link back to it here?

nik9000 (Member) commented May 22, 2014

I imagine you've already realized it, but the workaround is to force the cache size under 32GB.
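
For example, mirroring the settings call from the original report but with an explicit size under the threshold (the exact value is your choice; anything below 32GB avoids the overflow):

curl -XPUT "http://localhost:9200/_cluster/settings" -d'
{
    "transient" : {
       "indices.cache.filter.size" : "31gb"
    }
}'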

jpountz (Contributor) commented May 22, 2014

Indeed. I think that's not too bad a workaround, though, since I would expect such a large filter cache to be quite wasteful compared to leaving the memory to the operating system so that it can do a better job with the filesystem cache.

danp60 (Author) commented May 22, 2014

Filed the Guava bug: https://code.google.com/p/guava-libraries/issues/detail?id=1761

jpountz (Contributor) commented May 22, 2014

@danp60 Thanks!

nik9000 added a commit to nik9000/elasticsearch that referenced this issue May 22, 2014
Guava's caches have overflow issues around 32GB with our default segment
count of 16 and weight of 1 unit per byte.  We give them 100MB of headroom
so 31.9GB.

This limits the sizes of both the field data and filter caches, the two
large guava caches.

Closes elastic#6268
jpountz pushed a commit that referenced this issue May 22, 2014
Guava's caches have overflow issues around 32GB with our default segment
count of 16 and weight of 1 unit per byte.  We give them 100MB of headroom
so 31.9GB.

This limits the sizes of both the field data and filter caches, the two
large guava caches.

Closes #6268
jpountz (Contributor) commented May 28, 2014

The bug has been fixed upstream.

jpountz changed the title from "Filter cache size limit not honored 32GB or over" to "Filter cache: size limit not honored 32GB or over" on May 30, 2014
clintongormley changed the title from "Filter cache: size limit not honored 32GB or over" to "Internal: Filter cache size limit not honored for 32GB or over" on Jul 16, 2014
jpountz added a commit to jpountz/elasticsearch that referenced this issue Sep 4, 2014
17.0 and earlier versions were affected by the following bug
https://code.google.com/p/guava-libraries/issues/detail?id=1761
which caused caches that are configured with weights that are greater than
32GB to actually be unbounded. This is now fixed.

Relates to elastic#6268
jpountz added a commit that referenced this issue Sep 4, 2014
17.0 and earlier versions were affected by the following bug
https://code.google.com/p/guava-libraries/issues/detail?id=1761
which caused caches that are configured with weights that are greater than
32GB to actually be unbounded. This is now fixed.

Relates to #6268
Close #7593
jpountz added a commit that referenced this issue Sep 8, 2014
17.0 and earlier versions were affected by the following bug
https://code.google.com/p/guava-libraries/issues/detail?id=1761
which caused caches that are configured with weights that are greater than
32GB to actually be unbounded. This is now fixed.

Relates to #6268
Close #7593