
unable to view named process in grafana #9

Closed
lakshmanvvs opened this issue May 10, 2017 · 12 comments

Comments

@lakshmanvvs

I'm using the Docker image for this, but I'm unable to see any data points for named_process.
What could be missing?

@ncabatoff
Owner

I can speculate, but I'm more likely to give you something useful if you provide the command line you use to launch the container and the output of your GET /metrics call.

@lakshmanvvs
Author

This is the relevant part of my docker-compose file for the process exporter:

processexporter:
    image: lakshmanvvs/process-exporter
    container_name: processexporter
    volumes:
      - /proc:/host/proc
    command:
      - '-procfs=/host/proc'
      - '-procnames=chromium-browse,bash,prometheus,gvim,upstart:-user'
      - '-namemapping=upstart,(-user)'
    restart: unless-stopped
    expose:
      - 9256
    ports:
      - 9256:9256    
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"
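[Editor's note, not part of the original thread: a quick way to tell whether the exporter matched any processes is to look for per-group series in the scrape output. A minimal self-contained sketch, using the `namedprocess_namegroup_*` series names from current process-exporter documentation (older releases may differ):]

```shell
# Sample of what a healthy scrape should include (values are illustrative).
sample='namedprocess_namegroup_num_procs{groupname="bash"} 2
namedprocess_scrape_errors 0'

# Count the per-group series; a count of 0 means no process matched any
# configured name, which is exactly the symptom in this issue.
echo "$sample" | grep -c '^namedprocess_namegroup_'
```

Against a live exporter the equivalent check would be `curl -s localhost:9256/metrics | grep -c '^namedprocess_namegroup_'`; note that the /metrics output pasted below contains no such series at all.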

@lakshmanvvs
Author

Output from localhost:9256/metrics:

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 11
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 1.410072e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.410072e+06
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.443405e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 3393
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 169984
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 1.410072e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 925696
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 1.92512e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 10364
# HELP go_memstats_heap_released_bytes_total Total number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes_total counter
go_memstats_heap_released_bytes_total 0
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 2.850816e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 0
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 302
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 13757
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 2400
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 28880
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 32768
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.473924e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 535211
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 294912
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 294912
# HELP go_memstats_sys_bytes Number of bytes obtained by system. Sum of all system allocations.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 5.34348e+06
# HELP http_request_duration_microseconds The HTTP request latencies in microseconds.
# TYPE http_request_duration_microseconds summary
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} NaN
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} NaN
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} NaN
http_request_duration_microseconds_sum{handler="prometheus"} 0
http_request_duration_microseconds_count{handler="prometheus"} 0
# HELP http_request_size_bytes The HTTP request sizes in bytes.
# TYPE http_request_size_bytes summary
http_request_size_bytes{handler="prometheus",quantile="0.5"} NaN
http_request_size_bytes{handler="prometheus",quantile="0.9"} NaN
http_request_size_bytes{handler="prometheus",quantile="0.99"} NaN
http_request_size_bytes_sum{handler="prometheus"} 0
http_request_size_bytes_count{handler="prometheus"} 0
# HELP http_response_size_bytes The HTTP response sizes in bytes.
# TYPE http_response_size_bytes summary
http_response_size_bytes{handler="prometheus",quantile="0.5"} NaN
http_response_size_bytes{handler="prometheus",quantile="0.9"} NaN
http_response_size_bytes{handler="prometheus",quantile="0.99"} NaN
http_response_size_bytes_sum{handler="prometheus"} 0
http_response_size_bytes_count{handler="prometheus"} 0
# HELP namedprocess_scrape_errors non-permission scrape errors
# TYPE namedprocess_scrape_errors counter
namedprocess_scrape_errors 0
# HELP namedprocess_scrape_permission_errors permission scrape errors (unreadable files under /proc)
# TYPE namedprocess_scrape_permission_errors counter
namedprocess_scrape_permission_errors 0
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.09
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 9
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 6.889472e+06
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.49451208815e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.247232e+07

@ncabatoff
Owner

I tried running it with a docker-compose based on what you provided and wasn't able to reproduce the problem.

I don't know what the problem is, but I suggest getting it working without Docker first, for simplicity. You might also try using a config file that just captures everything, e.g.

process_names:
  - cmdline: 
    - .+

I don't recommend running it that way long-term, but it could help identify where the problem lies.
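[Editor's note: for comparison, a slightly more structured variant of that catch-all config, taken from the process-exporter README's example format (the `{{.Comm}}` name template groups each process by its comm name rather than lumping everything into one group):]

```yaml
process_names:
  - name: "{{.Comm}}"
    cmdline:
    - '.+'
```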

@flixr
Contributor

flixr commented Jun 26, 2017

I'm actually having a similar problem on an NVIDIA Jetson TK1 board (ARMv7) with kernel 3.10.40, running it natively, i.e. without Docker:
I don't see any processes, whether I specify just bash or use the "catch all" cmdline posted above.
No errors either...
I tried the released 0.1.0 binary as well as the latest master...

Any ideas?

@ncabatoff
Owner

@flixr have you tried running the tests? I assume so, since you say you ran latest master and you probably built it with 'make', which runs the tests, but I just wanted to be sure. Assuming you have run make on the ARM machine, the tests in read_test.go passing indicates the problem likely isn't in actually collecting metrics from /proc.

Have you tried querying the exporter without going through Grafana? Just e.g. curl -s localhost:9256/metrics ? Is the metric namedprocess_scrape_errors == 0?

If those ideas don't yield a solution, please open a new issue. I don't assume your problem and @lakshmanvvs's are the same, and I think it would be less confusing if we kept them separate until such time as we determine otherwise.

@flixr
Contributor

flixr commented Jun 27, 2017

Yes, the tests passed and there were no scrape_errors.
Opened new issue #12

@ncabatoff
Owner

Closing due to inactivity and because I couldn't reproduce the problem.

@ouadi

ouadi commented Jan 1, 2018

I ran into the same issue. In my case, namedprocess_scrape_permission_errors was greater than 0, and the cause was indeed a permission issue: process-exporter was started as a user that doesn't have the rights to access other processes' stats.

I resolved the issue by running the process as root.
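[Editor's note, a sketch not from the thread: some files under /proc/&lt;pid&gt;, such as /proc/&lt;pid&gt;/io, are readable only by the process owner or root, which is what drives namedprocess_scrape_permission_errors. A quick way to see whether a given user would hit such errors:]

```shell
# Try to read the io stats of a root-owned process (PID 1). For a non-root
# user this typically fails with EACCES, mirroring the permission errors
# that the exporter counts.
if cat /proc/1/io >/dev/null 2>&1; then
  echo "readable: full metrics available"
else
  echo "not readable: expect namedprocess_scrape_permission_errors > 0"
fi
```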

@ncabatoff
Owner

@ouadi That's the expected behaviour.

@underguiz

@ncabatoff I'm trying to figure out whether process-exporter should run as root, since I couldn't find this information anywhere. Given the answer you gave @ouadi, I assume it needs to run as root.

@ncabatoff
Owner

It's best to run it as root for general-purpose monitoring, but if you only need to monitor a particular user's processes, you can run it as that user.
