carbon-cache cpu usage jumps to 150% when setting MAX_CACHE_SIZE != inf #167

Closed
mabrek opened this Issue Oct 21, 2013 · 32 comments

@mabrek

mabrek commented Oct 21, 2013

I've set

MAX_CACHE_SIZE=1000000
USE_FLOW_CONTROL=False

and noticed a jump in carbon-cache CPU usage (to ~150%). When I reverted back to the defaults

MAX_CACHE_SIZE=inf
USE_FLOW_CONTROL=True

CPU usage dropped to 15-30%.

version 0.9.12
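
For reference, both of these settings live in the [cache] section of carbon.conf. A minimal sketch of the configuration that triggers the problem (only the relevant keys are shown; everything else is left at its defaults):

    [cache]
    MAX_CACHE_SIZE = 1000000
    USE_FLOW_CONTROL = False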

@johnseekins

Contributor

johnseekins commented Oct 30, 2013

+1

We've been seeing the same thing.

@johnseekins

Contributor

johnseekins commented Nov 26, 2013

We actually moved to head (as of ~1 month ago) to see if that helped with the issue, and we are still seeing it.
As an additional note: as the CPU usage rises, the memory of the cache process also rises (we've seen cache processes using >40 GB of memory). Running strace against the cache process shows a number of very slow futex calls.
Not sure if that helps, but solving this problem would be wonderful.
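
One quick way to confirm the futex pattern is a syscall summary from strace (generic invocation, not specific to carbon; substitute the PID of your carbon-cache instance):

    # attach to the running cache process and count syscalls for a minute or so
    strace -f -c -p <carbon-cache-pid>
    # then Ctrl-C; a dominant share of wall time spent in futex points at lock contention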

@avishai-ish-shalom

avishai-ish-shalom commented Jan 26, 2014

This patch resolves the issue. BTW, megacarbon seems to be free of this problem.

--- /opt/graphite/lib/carbon/cache.py   2014-01-26 21:53:17.000000000 +0000
+++ /opt/graphite/lib/carbon/cache.py.patched   2014-01-26 21:54:10.060772690 +0000
@@ -23,6 +23,7 @@

 class _MetricCache(defaultdict):
   def __init__(self, defaultfactory=deque, method="sorted"):
+    self.size = 0
     self.method = method
     if self.method == "sorted":
       self.queue = self.gen_queue()
@@ -39,11 +40,8 @@
       while queue:
         yield queue.pop()[0]

-  @property
-  def size(self):
-    return reduce(lambda x, y: x + len(y), self.values(), 0)
-
   def store(self, metric, datapoint):
+    self.size += 1
     self[metric].append(datapoint)
     if self.isFull():
       log.msg("MetricCache is full: self.size=%d" % self.size)
@@ -59,11 +57,13 @@
       raise KeyError(metric)
     elif not metric and self.method == "max":
       metric = max(self.items(), key=lambda x: len(x[1]))[0]
+      datapoints = (metric, super(_MetricCache, self).pop(metric))
     elif not metric and self.method == "naive":
-      return self.popitem()
+      datapoints = self.popitem()
     elif not metric and self.method == "sorted":
       metric = self.queue.next()
-    datapoints = (metric, super(_MetricCache, self).pop(metric))
+      datapoints = (metric, super(_MetricCache, self).pop(metric))
+    self.size -= len(datapoints[1])
     return datapoints

   @property

@SEJeff

Member

SEJeff commented Jan 27, 2014

@avishai-ish-shalom Do you mind sending a pull request with this change so that you can get proper attribution for it?

I quite strongly prefer pull requests over inline patches in issues.

@avishai-ish-shalom

avishai-ish-shalom commented Jan 27, 2014

Pull request #207, 1 commit based on tag 0.9.12. GitHub shows the diff against master by default, so it looks like much more than it is.

@mleinart

Member

mleinart commented Jan 27, 2014

FWIW, this is basically how it was done in 0.9.11. The cache is popped from a thread, so a lock is required to keep the count accurate and prevent it from taking an invalid (negative) value.

@avishai-ish-shalom

avishai-ish-shalom commented Jan 27, 2014

I doubt locking is necessary; it's OK to miss the target temporarily, and a negative value (although incorrect) doesn't actually cause misbehavior. The way it was done in 0.9.11 is much better than iterating (via reduce) over the entire dict on every metric store, so if you prefer to revert to the 0.9.11 approach that's also a great solution.

@drawks

Member

drawks commented Jan 27, 2014

I wrote the ugly code in question. I don't doubt that it's not particularly performant, and honestly I never run any of the throttling code in production. That said, the intention was to implement the size calculation in a way that didn't require a lock. I don't think we should have to choose between a value that is "mostly correct most of the time" and one that is always correct but very slow all of the time. I never intended my solution to have any staying power, but I'd really hope that whatever final solution gets implemented is both fast and accurate.

@avishai-ish-shalom

avishai-ish-shalom commented Jan 27, 2014

  1. The performance of the 'reduce' version is horrible; it is responsible for 70-80% of carbon's CPU usage on two different systems I work with.
  2. We shouldn't be discussing whether the "value" of the size variable is correct; we should be discussing whether the implementation as a whole provides the expected behavior. The expected behavior is "don't store more points than settings.MAX_CACHE_SIZE". Since we are dealing with multithreaded code, we can choose to have the size value err up or err down. Choosing to err up achieves the expected behavior at the cost of reduced efficiency (we store less than we could).
  3. If we insist on "correct" values of size, then locking the size update + dict mutation performs better than the reduce code (see the sketch below).
  4. Another alternative is to signal the service thread to pop the metric into a queue and consume the metric from that queue. This would guarantee thread isolation and "correct" calculations; personally I feel this is a bit of over-engineering.
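
A minimal sketch of what option 3 might look like (hypothetical code, not carbon's actual implementation; the lock guards only the counter and the dict mutation so the size stays exact):

    import threading
    from collections import defaultdict, deque

    class SizedCache(defaultdict):
        """Toy cache that keeps an O(1) datapoint count under a lock."""
        def __init__(self):
            super(SizedCache, self).__init__(deque)
            self.size = 0
            self._lock = threading.Lock()

        def store(self, metric, datapoint):
            with self._lock:
                self[metric].append(datapoint)
                self.size += 1

        def pop(self, metric):
            with self._lock:
                datapoints = super(SizedCache, self).pop(metric)
                self.size -= len(datapoints)
            return (metric, datapoints)

The trade-off is exactly the one discussed above: the counter is always correct, but every store and pop now takes the same lock.
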
@avishai-ish-shalom

avishai-ish-shalom commented Jan 27, 2014

Another point: when working in languages like Python, never assume that locks aren't present just because you don't see an explicit thread lock. A simple test will show that multi-threaded access to a Python dict generates a large number of slow futex calls.
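
For example, a quick throwaway test along these lines (hypothetical script; run it under strace -f -c and watch the futex column):

    import threading

    shared = {}

    def hammer(offset):
        # Each thread inserts and deletes its own keys; the GIL still
        # serializes access, which shows up as futex contention under strace.
        for i in range(offset, offset + 1000000):
            shared[i] = i
            del shared[i]

    threads = [threading.Thread(target=hammer, args=(n * 10000000,)) for n in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()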

@drawks

Member

drawks commented Jan 27, 2014

It's been a while since I've looked at this particular code, but as I said, the point was to factor out the lock, AND the intention of my changes was to prevent the throttle-enabling code from causing a performance impact in the case where the throttle isn't used.

To that, albeit limited, end the code works.

I agree that erring up does make the limit code behave in a semantically correct way, although since the size of the error cannot be anticipated, semantically correct may or may not match what the user expects. I.e., if I set a MAX I don't expect the actual high-water mark to be held artificially below that maximum by an arbitrary margin governed by some non-obvious relationship between the number of metrics and the frequency of receipt of those metrics from clients.

It's also worth noting that I think this particular throttle is bogus, because using the number of metric datapoints as a gauge of struct size is probably not really what most users want. "SIZE" != aggregate number of datapoints, in my mind. Typically I would expect size to represent the actual amount of memory the cache is using, and MAX to mean that the growth of the struct is bounded at that size... As I said before, I don't use any of the throttling features personally.

@drawks

Member

drawks commented Jan 27, 2014

At any rate, I make no claim to my code in this case being better than reality has shown. I still think a solution here should have obvious and expected behavior. Not just semantically correct behavior.

@avishai-ish-shalom

avishai-ish-shalom commented Jan 27, 2014

In that case, let's start by defining the obvious and expected behavior we want. After that we can nail down the implementation.

@drawks drawks closed this Jan 27, 2014

@drawks drawks reopened this Jan 27, 2014

@drawks

Member

drawks commented Jan 27, 2014

whoops... "close" != "cancel edit window"

@andyfeller

andyfeller commented Jun 19, 2014

Is there an ETA on when this regression is going to be reverted or a patch release will be created?
People have been negatively impacted for nearly a year with no official fix.

@obfuscurity

Member

obfuscurity commented Jul 23, 2014

I would love to see this performance hit resolved in some manner in the next 0.9.x release. Anyone think reverting to the 0.9.11 behavior (at least for 0.9.x) makes the most sense? /cc @esc

@drawks

Member

drawks commented Jul 23, 2014

Honestly, at this point I really don't care how this is resolved as long as the solution doesn't require calculating the size in configurations that don't actually use it; i.e., if the tunable is set to inf there should be no perf impact.

I still think my above comment about this being sort of a bogus throttle stands.
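
One way to honor that constraint is to short-circuit the fullness check before any size bookkeeping is consulted. A hypothetical sketch (not the actual carbon code):

    class ThrottledCache(dict):
        """Illustrative cache; max_size=float('inf') disables the limit entirely."""
        def __init__(self, max_size=float('inf')):
            super(ThrottledCache, self).__init__()
            self.max_size = max_size
            self.size = 0

        @property
        def is_full(self):
            # With the default (inf) this is a single comparison and never
            # depends on the accuracy of the size counter.
            return self.max_size != float('inf') and self.size >= self.max_size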

@pcn

pcn commented Jul 23, 2014

Anyone think reverting to the 0.9.11 behavior (at least for 0.9.x) makes the most sense?

👍

@andyfeller

andyfeller commented Jul 23, 2014

Agreed.

@drawks

Member

drawks commented Jul 23, 2014

I'm curious what everyone who wants this feature to "work" hopes to accomplish by using it. Not a troll, just honestly curious what is to be gained from limiting the number of data points that can be in the cache.

@pcn

pcn commented Jul 23, 2014

In a cloud environment where you can have hundreds of new instances per day, with hundreds of metrics per instance, inf will lead to large amounts of cached garbage. The memory usage grows and the cache stalls out. This is the only mechanism I'm aware of for putting a limit on that growth. It would be much better if it were governed by memory usage rather than by the number of data points, though.

@avishai-ish-shalom

avishai-ish-shalom commented Jul 23, 2014

Another possible solution is to use deque(maxlen=max_size) and dispense with checking the size of the cache. This will silently discard data points but will give good performance and accurate size control.
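
For illustration, this is how a bounded deque behaves (the maxlen value here is arbitrary): once full, each append silently discards the item at the opposite end.

    from collections import deque

    points = deque(maxlen=3)
    for datapoint in [(1, 1.0), (2, 2.0), (3, 3.0), (4, 4.0)]:
        points.append(datapoint)

    print(list(points))  # [(2, 2.0), (3, 3.0), (4, 4.0)] -- the oldest point is gone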

@pcn

pcn commented Jul 23, 2014

This will silently discard data points but will give good performance and accurate size control

Correct me if I'm thinking of something else, but I think there's a tradeoff: a deque gives good performance for push or pop on the left and right, but part of what the cache does is seek and remove from the middle, where a deque is O(n) for mutations.

Also, currently the cache should block rather than lose data points - this gives a relay, aggregator, or data source the opportunity to back off - respond to backpressure.

@avishai-ish-shalom

avishai-ish-shalom commented Jul 23, 2014

Not quite. The MetricCache class is a subclass of defaultdict that defaults to using deque for the values. When datapoints are popped from the cache, the entire deque for a metric is popped and drained into the backend (whisper, network), so there is no meddling in the middle of the queue.
You raise a good point regarding responding to back-pressure; perhaps we can implement both methods and let the user configure which one. Losing metrics may be more sensible in the context of carbon-cache, whereas blocking makes sense in the context of a relay or aggregator.
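
Roughly, the access pattern being described is this (simplified sketch, not the actual MetricCache code; the metric name and the whisper call in the comment are illustrative):

    from collections import defaultdict, deque

    cache = defaultdict(deque)

    # Receiving side: append datapoints per metric (O(1) at the right end).
    cache['servers.web01.load'].append((1406100000, 0.42))
    cache['servers.web01.load'].append((1406100060, 0.57))

    # Writer side: pop the whole deque for one metric and drain it in order,
    # so the middle of the queue is never touched.
    metric = 'servers.web01.load'
    datapoints = cache.pop(metric)
    for timestamp, value in datapoints:
        pass  # e.g. whisper.update(path_for(metric), value, timestamp)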

@drawks

Member

drawks commented Jul 23, 2014

@avishai-ish-shalom That [deque(maxlen=max_size)] would give you a maximum number of points per metric name, though, which doesn't account for things like disparate reporting rates or large numbers of new metric names.

Also, I'm not super familiar with the code path, but I /think/ that if you enable USE_FLOW_CONTROL the daemon will stop accepting metrics instead of accepting and dropping them.

https://github.com/graphite-project/carbon/blob/0.9.x/lib/carbon/service.py#L131

@avishai-ish-shalom

avishai-ish-shalom commented Jul 23, 2014

A maxlen per metric could work by pattern-matching metric names. However, in light of the points made it seems this would give little real-world benefit, so forget I said anything ;)

@obfuscurity

Member

obfuscurity commented Aug 14, 2014

Adding this to the 0.9.13 milestone. It seems that we haven't decided on a proper fix yet but I want to make sure we find some sort of solution for the next release.

@pkittenis

pkittenis commented Sep 9, 2014

Hi,

I have applied and tested the above patch on 0.9.12, and it seems to work as intended with flow control on, meaning clients are blocked while the queue is full until it no longer is.

As others like @pcn have found, carbon-cache's memory usage can blow up with MAX_CACHE_SIZE set to inf and lots of metrics going through the caches.

In our setup, with ~150k metrics/minute passing through each carbon-cache instance, the queue size is ~30k and memory usage goes over 2 GB in less than 24 hours.

I can't seem to make a pull request against the 0.9.12 tag, which makes sense as it's read-only. Here's the compare against 0.9.12:

https://github.com/thomsonreuters/carbon/compare/graphite-project:0.9.12...queue_size_fix

@jssjr

Member

jssjr commented Sep 25, 2014

@pkittenis - Can you open a PR with that changeset against the 0.9.x branch? Thanks.

@jssjr

Member

jssjr commented Sep 25, 2014

@pkittenis - Scratch that request, @avishai-ish-shalom opened #266 which has those changes rebased against 0.9.x.

jssjr added a commit that referenced this issue Sep 26, 2014

Merge pull request #266 from avishai-ish-shalom/issues/167
issue #167: Don't iterate on the cache for every call to size

@jssjr

Member

jssjr commented Sep 26, 2014

This should be addressed with #266. Now that that is merged, I'm going to close this out. Feel free to drop a comment and we'll re-open if needed.

Going into 0.10.x I don't want this to be a thing. I'd rather handle max cache size differently. We can handle the discussion around how in a new issue.

@jssjr jssjr closed this Sep 26, 2014
