
OnHeap:MappingCount statistic is reporting more entries than maximum allowed #2414

Closed
mathieucarbou opened this issue Jul 3, 2018 · 3 comments
mathieucarbou commented Jul 3, 2018

In TinyPounder:

1. Configure a cache with 100 heap entries.
2. Pound it hard (max value).
3. Immediately look at the sizing information (OnHeap:MappingCount vs. the maximum number of entries allowed).

For a short period of time, the reported MappingCount is 101 whereas the maximum number of entries is 100.

@henri-tremblay told me that we shouldn't have this behaviour, since eviction is done before adding new values.

CC @anthonydahanne @boyusun-SAG

I do not have a unit test that automatically reproduces this; I only saw it with TinyPounder and the management console.

@chrisdennis

If you could provide a reproducible test case that would be great... otherwise I'm struggling for motivation.

@mathieucarbou

Take this class:

import java.io.IOException;
import java.util.Random;
import java.util.concurrent.CountDownLatch;

import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.units.EntryUnit;
import org.ehcache.config.units.MemoryUnit;
import org.ehcache.management.registry.DefaultManagementRegistryConfiguration;
// Statistics classes; exact packages vary slightly across Ehcache 3.x versions:
import org.ehcache.core.spi.service.StatisticsService;
import org.ehcache.core.internal.statistics.DefaultStatisticsService;

import static org.ehcache.config.builders.CacheConfigurationBuilder.newCacheConfigurationBuilder;
import static org.ehcache.config.builders.CacheManagerBuilder.newCacheManagerBuilder;
import static org.ehcache.config.builders.ResourcePoolsBuilder.newResourcePoolsBuilder;

public class MappingCount {
  public static void main(String[] args) throws IOException {
    StatisticsService statisticsService = new DefaultStatisticsService();

    try (CacheManager cacheManager = newCacheManagerBuilder()
      .using(statisticsService)
      .using(new DefaultManagementRegistryConfiguration().setCacheManagerAlias("my-cache-manager"))
      .withCache("cache", newCacheConfigurationBuilder(Integer.class, Integer.class,
        newResourcePoolsBuilder().heap(100, EntryUnit.ENTRIES).offheap(1, MemoryUnit.MB)).build())
      .build(true)) {

      Cache<Integer, Integer> cache = cacheManager.getCache("cache", Integer.class, Integer.class);
      final int TOTAL_ENTRIES = 1_000_000;
      for (int i = 0; i < TOTAL_ENTRIES; i++) {
        cache.put(i, i);
      }

      // Poll the OnHeap mapping count every 200 ms.
      new Thread() {
        {setDaemon(true);}

        @Override
        public void run() {
          while (!Thread.currentThread().isInterrupted()) {
            System.out.println(statisticsService.getCacheStatistics("cache").getTierStatistics().get("OnHeap").getMappings());
            try {
              sleep(200);
            } catch (InterruptedException e) {
              Thread.currentThread().interrupt();
            }
          }
        }
      }.start();

      // Hammer the cache with random gets, one thread per core.
      Random rnd = new Random();
      CountDownLatch start = new CountDownLatch(1);
      for (int i = 0; i < Runtime.getRuntime().availableProcessors(); i++) {
        new Thread() {
          {setDaemon(true);}

          @Override
          public void run() {
            try {
              start.await();
            } catch (InterruptedException e) {
              Thread.currentThread().interrupt();
            }
            while (!Thread.currentThread().isInterrupted()) {
              cache.get(rnd.nextInt(TOTAL_ENTRIES));
              try {
                sleep(5);
              } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
              }
            }
          }
        }.start();
      }

      start.countDown();
      System.in.read(); // run until ENTER is pressed
    }
  }
}

On my computer it prints something like:

0
5
8
7
10
11
13
17
23
24
25
26
27
27
27
30
32
30
34
37
37
40
43
44
46
52
51
57
62
61
61
63
65
69
70
71
75
78
82
86
85
86
87
90
91
93
96
98
100
100
100
100
101
100
100
100
100
100
100
100
100
100
99
100
100
100

But if I comment out sleep(5); and replace it with yield(), I get:

0
100
101
99
100
101
98
99
99
100
98
99
102
98
99
99
100
100

And if I remove the sleep/yield entirely, so that cache.get() runs in a busy loop, then:

0
103
99
103
100
102
104
101
102
100
103
102
103

It often goes above the maximum number of entries.

@chrisdennis

Code inspection shows that not to be true. Capacity enforcement is performed after the entries are put in the heap. This means the mapping count can rise above the threshold by up to the number of threads concurrently accessing the cache.
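That ordering can be sketched in a few lines. This is an illustrative model only, not Ehcache's actual heap-tier code (the class name and the HashMap-based "tier" are invented for the demonstration): a put installs its mapping first and enforces capacity afterwards, so a statistics probe reading the mapping count between those two steps sees capacity + 1 — and with N writer threads each inside that window, up to capacity + N.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of put-then-enforce ordering (not Ehcache code).
public class PutThenEnforce {
  public static void main(String[] args) {
    final int capacity = 100;
    Map<Integer, Integer> heap = new HashMap<>();
    for (int i = 0; i < capacity; i++) {
      heap.put(i, i); // fill the tier to capacity
    }

    // A new put installs its mapping first...
    heap.put(capacity, capacity);
    System.out.println("mapping count inside the window: " + heap.size()); // 101

    // ...and only then does capacity enforcement evict a victim.
    while (heap.size() > capacity) {
      heap.remove(heap.keySet().iterator().next());
    }
    System.out.println("mapping count after enforcement: " + heap.size()); // 100
  }
}
```

With several writer threads sitting inside that window at the same time, the transient overshoots add up, which matches the 101–104 readings in the busy-loop run above.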
