How can one monitor all Verk hosts using a single VerkWeb instance? #26

Closed
keyan opened this issue Oct 25, 2016 · 4 comments

keyan (Collaborator) commented Oct 25, 2016

I have several instances running a Verk-backed application, and I’d like to see the stats for the machines in aggregate. I’m having a couple of issues.

  1. Is there a way to run a single VerkWeb node that gets stats for all the processors? When I try running VerkWeb on its own, I get an argument error:

    18:36:07.483 [error] #PID<0.337.0> running VerkWeb.Endpoint terminated
    Server: localhost:4000 (http)
    Request: GET /
    ** (exit) an exception was raised:
    ** (ArgumentError) argument error
        (stdlib) :ets.select(:queue_stats, [{{:"$1", :"$2", :"$3", :"$4", :_, :_}, [], [{{:"$1", :"$2", :"$3", :"$4"}}]}])
        (verk) lib/verk/queue_stats_counters.ex:23: Verk.QueueStatsCounters.all/0
        (verk) lib/verk/queue_stats.ex:22: Verk.QueueStats.all/0
        (verk_web) web/controllers/page_controller.ex:6: VerkWeb.PageController.index/2
        (verk_web) web/controllers/page_controller.ex:1: VerkWeb.PageController.action/2
        (verk_web) web/controllers/page_controller.ex:1: VerkWeb.PageController.phoenix_controller_pipeline/2
        (verk_web) lib/phoenix/router.ex:261: VerkWeb.Router.dispatch/2
        (verk_web) web/router.ex:1: VerkWeb.Router.do_call/2
    
  2. Unfortunately it looks like VerkWeb relies on the :queue_stats ETS table to get queue-specific stats, rather than using the data in Redis. The “All-time stats” numbers come from Redis and are reporting correctly, but the queue-specific stats only show jobs processed by the node on which that VerkWeb instance is running. Is this expected behavior?
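
For anyone hitting the same crash: :ets.select/2 raises an ArgumentError whenever the named table does not exist, and (judging by the stack trace above) the :queue_stats table is created by Verk's own stats process, so a node that only starts VerkWeb never has it. A minimal sketch of that failure mode, reusing the match spec from the trace; this is not Verk's actual code:

    # Rough sketch, not Verk's implementation. The match spec is copied from
    # the stack trace above; the :ets.info/1 guard just shows why the call
    # blows up when Verk itself was never started on the node.
    spec = [{{:"$1", :"$2", :"$3", :"$4", :_, :_}, [], [{{:"$1", :"$2", :"$3", :"$4"}}]}]

    case :ets.info(:queue_stats) do
      :undefined ->
        # Table was never created, so :ets.select/2 would raise ArgumentError,
        # which is exactly the crash shown in VerkWeb.PageController.index/2.
        {:error, :verk_not_started_on_this_node}

      _info ->
        {:ok, :ets.select(:queue_stats, spec)}
    end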

edgurgel (Owner) commented

Yes, it's expected behaviour!

We don't store data about all the processes inside Redis, so it's not shared between VerkWeb instances. After benchmarking multiple instances with 10000+ processes each, we noticed that updating the process state in Redis was hurting performance, so we decided to keep it in memory. It also wouldn't be possible to inspect the current state/stacktrace of a process if its pid is not running on the same node (as we have no connection between Verk instances).

Only the all-time stats (as they are flushed from time to time, with predictable performance) and the jobs (retries, dead jobs, queue) are stored inside Redis.

If we had connections between Verk nodes, we could gather all of this information and show it to the user.
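
For reference, the all-time totals described above can be read straight out of Redis with any client. A minimal sketch, assuming Sidekiq-style counter keys (stat:processed / stat:failed) and the Redix client; the exact key names should be checked against Verk's source:

    # Minimal sketch, assuming Sidekiq-style "stat:processed"/"stat:failed"
    # keys and the Redix client; verify the key names against Verk itself.
    {:ok, redis} = Redix.start_link("redis://localhost:6379")

    {:ok, [processed, failed]} =
      Redix.command(redis, ["MGET", "stat:processed", "stat:failed"])

    # Missing keys come back as nil; treat them as zero.
    IO.puts("all-time processed: #{processed || "0"}, failed: #{failed || "0"}")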

keyan (Collaborator, Author) commented Oct 25, 2016

Hi Eduardo, thanks for your response. That makes sense. But I'm still unclear about this:

[...] the jobs (retries, dead jobs, queue) are stored inside Redis

Then why don't we use the values from Redis, rather than :ets, for processed/failed for each queue? For example, I am seeing this:

[screenshot taken 2016-10-24 at 6:57 PM]

It would be nice to have the persistent values from Redis for the queue-specific failed/processed stats. Would you be okay with a change like that? Did I explain my case well?
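
Roughly, the direction proposed here would look something like the sketch below, assuming hypothetical per-queue counter keys such as stat:processed:<queue> and stat:failed:<queue> (the real key scheme would be decided in the actual change):

    # Rough sketch of the proposed direction, not Verk's implementation:
    # per-queue counters kept in Redis under hypothetical keys
    # "stat:processed:<queue>" and "stat:failed:<queue>", read with Redix.
    defmodule QueueStatsFromRedis do
      def stats(redis, queue) do
        {:ok, [processed, failed]} =
          Redix.command(redis, ["MGET", "stat:processed:#{queue}", "stat:failed:#{queue}"])

        %{processed: to_int(processed), failed: to_int(failed)}
      end

      # Missing keys come back as nil from MGET; treat them as zero.
      defp to_int(nil), do: 0
      defp to_int(value), do: String.to_integer(value)
    end

    # Usage (hypothetical):
    # {:ok, redis} = Redix.start_link("redis://localhost:6379")
    # QueueStatsFromRedis.stats(redis, "default")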

edgurgel (Owner) commented

Oh I see what you mean!

Yeah, I just noticed we didn't update VerkWeb to get the processed and failed stats from Redis.

I'm not sure about the actual implementation, but this change would be awesome to have! 👍

keyan (Collaborator, Author) commented Oct 25, 2016

Okay, I'll take a stab! Thanks again.
