Hi,
I was playing with this lib yesterday, mostly to provide metrics about certain aspects of the application, and I tried the Collector as well. The Collector provides the same metrics I can already get with Nginx-Lua, so I'll stay with the Nginx-Lua approach, since it is a generic solution for every HTTP app and I think collecting with Lua on the Nginx side adds a bit less overhead.
However, my question is about how you are dealing with the Collector tracking URIs, since it collects so much data that a scrape takes many seconds to complete. For example, in Staging, with ~3 people accessing the application, the scrape time is about 20s; I can't imagine what it would be in Production. The easy fix is to not track URIs, but then you lose the ability to identify slow endpoints. Of course it's possible to use logs or implement something else to find the slowest requests, but it would be really nice to keep that data in Prometheus.
Can you please share your thoughts and experiences with the Collector?
For setups using paths which include unbounded dimensions (e.g. a user ID in a path like /users/5432/followers), the default collector configuration can indeed be harmful. For these cases it's possible to provide a custom label builder. At SoundCloud we use this to replace such dynamic parts of a path with a placeholder (e.g. /users/:id/followers).
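The placeholder substitution described above can be sketched as a small path-normalization function. This is a hypothetical helper, not part of the library's API (the function name and regexes are illustrative); how you plug it in depends on the label-builder hook your client version's Collector middleware exposes, so check the library's README for the exact option.

```ruby
# Hypothetical sketch: collapse unbounded path segments into placeholders
# so that /users/5432/followers and /users/99/followers produce one
# time series instead of one per user ID.
def normalize_path(path)
  path
    .gsub(%r{/\d+(?=/|$)}, '/:id')  # numeric IDs anywhere in the path
    .gsub(%r{/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}(?=/|$)}i,
          '/:uuid')                 # UUID segments
end

normalize_path('/users/5432/followers') # => "/users/:id/followers"
```

Used as the path label inside a custom label builder, this keeps per-endpoint latency data in Prometheus while bounding label cardinality, which is what drives the long scrape times described above.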