
Expose a health check endpoint? #69

Closed · duanshiqiang opened this issue Apr 17, 2019 · 27 comments

@duanshiqiang

Currently we are facing an issue where the collector process stops working after running for some time (a few days), and the /metrics response shows "ruby_collector_working 0".

All of our workloads run in Kubernetes, and we run the collector process in a sidecar container. If the collector web server (the WEBrick process) exposed a health check endpoint like '/healthz', we could use it as the container's livenessProbe so Kubernetes could restart the container when the collector stops working.
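A possible stopgap in the meantime (a sketch only, assuming the exporter listens on the default localhost:9394): point the livenessProbe at a small exec script that fails whenever the ruby_collector_working gauge reports 0.

# healthcheck.rb - exits non-zero when the collector reports itself as not working.
# Assumes the exporter listens on localhost:9394 (the default port) and that
# /metrics includes the ruby_collector_working gauge mentioned above.
require "net/http"

begin
  body = Net::HTTP.get(URI("http://localhost:9394/metrics"))
  exit(body.match?(/^ruby_collector_working 1\b/) ? 0 : 1)
rescue StandardError
  exit 1
end

The container's livenessProbe would then just exec ruby healthcheck.rb in the sidecar.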

@SamSaffron
Member

This is a bit odd; we have not seen collectors die across our infrastructure at all.

This metric is there to inform the central server that the collector has no metrics. My guess is that this would happen if you restarted the collector container and somehow your old containers were no longer able to talk to it.

I think the remediation here is actually to restart the entire pod.

A quick question first: do you have IPv6 stability across container destroy/create? Are you communicating via IPv4? If so, do you have port stability?

@duanshiqiang
Author

Hi @SamSaffron, thanks for your reply. We are using IPv4 only in our infrastructure.

We have subclassed PrometheusExporter::Server::WebCollector and are running the collector with a command like this:

bundle exec prometheus_exporter -a lib/prometheus/web_with_tenant_collector.rb
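(For context, a collector file loaded via -a implements the gem's type-collector interface, roughly along the lines of the sketch below. The tenant handling and the custom_labels key are illustrative assumptions, not the actual subclass.)

# web_with_tenant_collector.rb - illustrative sketch only.
# Assumes the standard type-collector interface (type / collect / metrics) and
# that incoming web payloads carry a "custom_labels" hash, as the client API does.
require "prometheus_exporter/server"

class WebWithTenantCollector < PrometheusExporter::Server::WebCollector
  def collect(obj)
    # Hypothetical: tag each observation with a tenant label before handing it
    # to the stock WebCollector.
    obj["custom_labels"] = (obj["custom_labels"] || {}).merge("tenant" => obj["tenant"] || "unknown")
    super
  end
end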

What we are observing is that the memory usage of the prometheus_exporter process can grow to over 1 GB, which is really strange.

Also, are you implying in your reply that you run prometheus_exporter as a central server? We are running it as a sidecar of each puma server (we have over 90 pods), which I guess is not recommended. And I find that since some metrics are of type summary, especially the percentile metrics, we are not able to write PromQL to re-aggregate them across all the server instances, which is quite sad.

One thing we are afraid of with running prometheus_exporter as a central server is that it becomes a single point of failure, and we found that if prometheus_exporter is down, our puma servers get flooded with error logs like:

Prometheus Exporter, failed to send message Cannot assign requested address - connect(2) for "localhost" port 9394

@SamSaffron
Member

SamSaffron commented Apr 17, 2019

1 GB in a collector is a big concern; it would mean some list is ever-growing. Summary metrics do have a ceiling, but I guess if you keep adding metrics, problems can arise. Or if you have a summary with 10,000 different label permutations, things can get strange.

Can you look at one of your collectors, say at 512 MB, and see how many metrics you have reported? Then maybe do a Ruby heap dump via rbtrace to see where the leak is.
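(For anyone following along, the rbtrace route boils down to injecting two small Ruby snippets into the running collector process, roughly as below; the output path is an assumption.)

# Step 1: inject into the running collector (e.g. `rbtrace -p <pid> -e '...'`)
# to start recording the file/line/generation of every new allocation.
Thread.new do
  require "objspace"
  ObjectSpace.trace_object_allocations_start
end.join

# Step 2: once memory has grown, inject this to dump every live object
# (with its allocation info) to a file for offline analysis.
Thread.new do
  require "objspace"
  GC.start
  ObjectSpace.dump_all(output: File.open("/tmp/heap.json", "w"))
end.join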

You can run as many collectors as you need, but there is certainly a memory impact; that said, it should be small in the grand scheme of things.

This is one of our collectors on a very busy server that has been running since March 27 (RSS is merely 36872)

discour+   109  0.7  0.0 317384 36872 ?        Sl   Mar26 233:13 ruby /var/www/discourse/plugins/discourse-prometheus/bin/collector 9405 79

Let's first work out what your memory issue is prior to attempting workarounds here.

@Fivell
Contributor

Fivell commented Jun 13, 2019

@duanshiqiang we also noticed a memory leak in the exporter. Do you have any updates?

@Fivell
Contributor

Fivell commented Jun 27, 2019

@SamSaffron

This is one of our collectors on a very busy server that has been running since March 27 (RSS is merely 36872)

Are you using the latest version, or maybe an older one?

@SamSaffron
Member

SamSaffron commented Jun 27, 2019 via email

@Fivell
Contributor

Fivell commented Jul 1, 2019

@SamSaffron thanks for your article. I started the exporter and ran an ab script against both the metrics endpoint and a Rails app endpoint. Afterwards, RSS had grown:

ps x -o rss,vsz,command | grep prometheus_exporter
 71504  4380208 ruby bin/prometheus_exporter

I then repeated everything from your guide.
I now have a dump and can see the generations:

generation  objects 61981
generation 28 objects 2227
generation 29 objects 3697
generation 30 objects 1044
generation 31 objects 2761
generation 32 objects 2
generation 190 objects 22
generation 428 objects 44
generation 429 objects 22
generation 430 objects 22
generation 431 objects 44
generation 432 objects 22
generation 434 objects 26
generation 435 objects 44
generation 436 objects 22
generation 437 objects 22
generation 438 objects 44
generation 439 objects 22
generation 440 objects 22
generation 441 objects 88
generation 442 objects 374
generation 443 objects 264
generation 515 objects 22
generation 779 objects 22
generation 1041 objects 22
generation 1250 objects 22
generation 1508 objects 22
generation 1751 objects 22
generation 1808 objects 22
generation 1940 objects 22
generation 2186 objects 22
generation 2415 objects 22
generation 2527 objects 22
generation 2528 objects 22
generation 2529 objects 22
generation 2530 objects 44
generation 2531 objects 22
generation 2532 objects 22
generation 2533 objects 22
generation 2534 objects 22
generation 2664 objects 22
generation 2914 objects 22
generation 3171 objects 22
generation 3312 objects 1
generation 3315 objects 395
generation 3316 objects 6
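(For reference, a per-generation count like the one above can be pulled straight from the dump with a few lines of Ruby; this is essentially what heapy automates. It assumes a heap.json written by ObjectSpace.dump_all with allocation tracing enabled.)

# Count live objects per GC generation in an ObjectSpace.dump_all heap dump.
# Objects allocated before tracing was enabled carry no "generation" key.
require "json"

counts = Hash.new(0)
File.foreach("/tmp/heap.json") do |line|
  counts[JSON.parse(line)["generation"]] += 1
end

counts.sort_by { |gen, _| gen.to_i }.each do |gen, n|
  puts "generation #{gen} objects #{n}"
end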

So I picked generation 3315 for detailed analysis.

/Users/*****************/.rvm/gems/ruby-2.5.5@api3/bundler/gems/prometheus_exporter-0df4dcd79d65/lib/prometheus_exporter.rb:15 * 110
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:532 * 33
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httputils.rb:151 * 24
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/server.rb:286 * 21
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httputils.rb:204 * 12
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httputils.rb:149 * 12
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/utils.rb:222 * 12
/Users/*****************/.rvm/gems/ruby-2.5.5@api3/bundler/gems/prometheus_exporter-0df4dcd79d65/bin/prometheus_exporter:86 * 10
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:196 * 9
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:478 * 9
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httpservlet/abstract.rb:103 * 9
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:195 * 9
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/server.rb:290 * 9
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/server.rb:170 * 7
eval:1 * 7
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httputils.rb:144 * 6
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/utils.rb:187 * 6
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:255 * 6
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httpserver.rb:260 * 6
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httpserver.rb:186 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httputils.rb:33 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/uri/generic.rb:335 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/uri/rfc3986_parser.rb:76 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/uri/rfc3986_parser.rb:44 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/uri/rfc3986_parser.rb:41 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:542 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httpresponse.rb:102 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:431 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:430 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:429 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:428 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:426 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/utils.rb:236 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/utils.rb:233 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:176 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:173 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:168 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:166 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httpserver.rb:72 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httpresponse.rb:108 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httpresponse.rb:106 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httpserver.rb:71 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/socket.rb:1313 * 3
/Users/*****************/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/webrick/httprequest.rb:290 * 3
/Users/*****************/.rvm/gems/ruby-2.5.5@api3/bundler/gems/prometheus_exporter-0df4dcd79d65/lib/prometheus_exporter/metric/counter.rb:23 * 3

lib/prometheus_exporter.rb:15 * 110 points to Oj.compat_load(obj)

Can you please suggest what to do next?

@SamSaffron
Member

3315 is not a good one to pick ...

generation 1250 objects 22
generation 1508 objects 22

This looks like a steady object leak of sorts. What are the objects?

Note that RSS there is 71504 KB, which is not enormous but could be better.

@Fivell
Contributor

Fivell commented Jul 2, 2019

@SamSaffron, they all point to prometheus_exporter-0df4dcd79d65/lib/prometheus_exporter.rb:15 * 22

@SamSaffron
Member

SamSaffron commented Jul 2, 2019 via email

@Fivell
Contributor

Fivell commented Jul 2, 2019

@SamSaffron https://gist.github.com/Fivell/224a9cda0fadf1ff323a730c5a0d36dc
As I said, I just called several endpoints thousands of times using ab.

@Fivell
Contributor

Fivell commented Jul 2, 2019

@SamSaffron I also repeated this on a staging environment.
After 1 day the logs are full of:

Jul 02 13:08:11 billing-prometheus-exrpoter-debug bundle[1579]: Generating Prometheus metrics text timed out

This was filed as a separate issue (#77).

Metrics output:

# HELP ruby_collector_working Is the master process collector able to collect metrics
# TYPE ruby_collector_working gauge
ruby_collector_working 0


# HELP ruby_collector_rss total memory used by collector process
# TYPE ruby_collector_rss gauge
ruby_collector_rss 571297792


# HELP ruby_collector_metrics_total Total metrics processed by exporter web.
# TYPE ruby_collector_metrics_total counter
ruby_collector_metrics_total 85085


# HELP ruby_collector_sessions_total Total send_metric sessions processed by exporter web.
# TYPE ruby_collector_sessions_total counter
ruby_collector_sessions_total 13350


# HELP ruby_collector_bad_metrics_total Total mis-handled metrics by collector.
# TYPE ruby_collector_bad_metrics_total counter
ruby_collector_bad_metrics_total 0


and the heap dump analysis also points to

/root/prometheus_exporter/lib/prometheus_exporter.rb:15 * 1339

As you can see from the metrics, ruby_collector_rss is 571297792, so I guess it really is leaking. I think @duanshiqiang will see the same results.

@Fivell
Contributor

Fivell commented Jul 2, 2019

After trying JSON instead of Oj I got the same results, but pointing to /usr/lib/ruby/2.5.0/json/common.rb:156 * 1040.

@SamSaffron I found your heapy gem, very helpful.
Results of the dump analysis: https://gist.github.com/Fivell/8766b65d6e80620ca56edf0702f6ca9f

@SamSaffron
Member

Fundamentally, all the data enters through a single entry point:

https://github.com/discourse/prometheus_exporter/blob/master/lib/prometheus_exporter/server/collector.rb#L26-L28

So what you can do here to create a 100% reproduction of the issue is log the "string" and "timestamp" to a file.

Then, if we have the file, we can replay it ... mock Time.now and Process.clock_gettime(Process::CLOCK_MONOTONIC) and reproduce the issue 100% consistently.
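(The replay harness is then only a few lines; a sketch, assuming the logged payloads are written one JSON message per line and that Collector#process is the entry point linked above, with the time mocking omitted for brevity.)

# replay.rb - rough sketch of the replay idea; messages.log is a hypothetical
# file holding one logged payload per line, exactly as the collector received it.
require "prometheus_exporter/server"

collector = PrometheusExporter::Server::Collector.new

File.foreach("messages.log") do |line|
  collector.process(line)
end

# How big has the rendered metrics text become after the replay?
puts collector.prometheus_metrics_text.bytesize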

Note that the heapy dump is not showing anything terribly bad; yes, there are bits of duplication, but only 20 MB or so are in Ruby heaps.

Also, I wonder whether we are somehow reading VSZ on your server instead of RSS?

@Fivell
Contributor

Fivell commented Jul 3, 2019

@SamSaffron sample attached
prometheus-exrpoter-debug.log

(screenshot attached)

@SamSaffron
Member

SamSaffron commented Jul 4, 2019 via email

@Fivell
Contributor

Fivell commented Jul 4, 2019

@SamSaffron I'm not sure time affects this somehow.
FYI, I launched the exporter locally with a smaller frequency value for PrometheusExporter::Instrumentation::Puma, like this:

after_worker_boot do
  require 'prometheus_exporter/instrumentation'
  PrometheusExporter::Instrumentation::Puma.start(frequency: 1)
end

and very soon I see this picture:

 ps x -o rss,vsz,command | grep prometheus_exporter
196764  4646520 ruby /Users/****/.rvm/gems/ruby-2.5.5@api3/bin/prometheus_exporter  

log.txt.zip

@Fivell
Contributor

Fivell commented Jul 8, 2019

@SamSaffron here is a sample of this, but without time mocking:
https://github.com/Fivell/exp_memory_test

After 25m of file processing it shows the following:

Total allocated: 267388634 bytes (2373642 objects)
Total retained:  83211730 bytes (864334 objects)

allocated memory by gem
-----------------------------------
 220305928  json
  47082706  other

allocated memory by file
-----------------------------------
 220305928  /Users/igorfedoronchuk/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/json/common.rb
  47082706  test.rb

allocated memory by location
-----------------------------------
 217064088  /Users/igorfedoronchuk/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/json/common.rb:156
  43770282  test.rb:12
   3312344  test.rb:11
   3241840  /Users/igorfedoronchuk/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/json/common.rb:155
        80  test.rb:13

allocated memory by class
-----------------------------------
 111253154  String
  94661728  JSON::Ext::Parser
  58161328  Hash
   3303920  Array
      8424  File
        80  Proc

allocated objects by gem
-----------------------------------
   1959164  json
    414478  other

allocated objects by file
-----------------------------------
   1959164  /Users/igorfedoronchuk/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/json/common.rb
    414478  test.rb

allocated objects by location
-----------------------------------
   1878118  /Users/igorfedoronchuk/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/json/common.rb:156
    331878  test.rb:12
     82599  test.rb:11
     81046  /Users/igorfedoronchuk/.rvm/rubies/ruby-2.5.5/lib/ruby/2.5.0/json/common.rb:155
         1  test.rb:13

allocated objects by class
-----------------------------------
   1964376  String
    245620  Hash
     82598  Array
     81046  JSON::Ext::Parser
         1  File
         1  Proc

retained memory by gem
-----------------------------------
  83211730  json

Any suggestions?
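(For reference, a report in this format comes from the memory_profiler gem; the harness boils down to something like the sketch below, reusing the hypothetical messages.log replay from earlier.)

# Minimal memory_profiler harness around the replay; it produces the
# allocated/retained breakdown shown above. The file name is an assumption.
require "memory_profiler"
require "prometheus_exporter/server"

collector = PrometheusExporter::Server::Collector.new

report = MemoryProfiler.report do
  File.foreach("messages.log") { |line| collector.process(line) }
end

report.pretty_print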

@SamSaffron
Member

SamSaffron commented Jul 8, 2019 via email

@SamSaffron
Member

@eviltrout I cannot fix this... but it is a very big mess that impacts every consumer of this gem that uses Puma. (It does not impact Discourse, but it is still very bad.)

Can you get someone to fix this and push out a new gem? I just gave you push access on the gem.

The fix is pretty straightforward: follow the same pattern the process collector uses so the array does not grow forever.
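(For anyone reading along, that pattern is roughly: instead of appending every reported sample to an array forever, expire entries older than some age each time a new one arrives. A sketch with illustrative names and cutoff, not the exact code that landed:)

# Bounded-collection pattern: keep only recent samples so the backing array
# cannot grow without limit. MAX_METRIC_AGE and the variable names are illustrative.
MAX_METRIC_AGE = 30

def collect(obj)
  now = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC)
  obj["created_at"] = now

  @metrics ||= []
  @metrics.delete_if { |m| m["created_at"] + MAX_METRIC_AGE < now }
  @metrics << obj
end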

@Fivell
Contributor

Fivell commented Jul 9, 2019

@SamSaffron I started testing on staging with this: didww@809b382

@eviltrout
Contributor

@SamSaffron I'll handle it, no worries.

@Fivell
Contributor

Fivell commented Jul 9, 2019

@eviltrout 👍 I am testing the attached PR today; everything seems good now.

@eviltrout
Contributor

I reviewed @Fivell's PR and also noticed the unicorn collector has the same problem. It's been fixed in master and I'm releasing a new version right now.

@Fivell
Contributor

Fivell commented Jul 9, 2019

@eviltrout thanks, I will finally test tomorrow; we have all of them, unicorns and pumas. One thing I can't figure out: if Discourse runs on unicorn and the unicorn collector has the same problem, how can it be that, according to @SamSaffron:

This is one of our collectors on a very busy server that has been running since March 27 (RSS is merely 36872)

@eviltrout
Contributor

@Fivell I reached out to operations when I found the unicorn problem because I had the same suspicion. It turns out the unicorn monitoring as part of this gem is not used by the discourse-prometheus plugin, so that one did not affect us either.

@Fivell
Contributor

Fivell commented Jul 9, 2019

@eviltrout thanks for the clarifications! Good luck.
