This repository has been archived by the owner on Jan 25, 2024. It is now read-only.

No caddy_ metrics exposed when using proxies #17

Closed
metalmatze opened this issue Sep 8, 2016 · 20 comments

@metalmatze

Hey,

I'd really like to use this plugin! Exactly what I need.

I download Caddy from https://caddyserver.com/download/build?os=linux&arch=amd64&features=prometheus and build a Docker image with it.

The Caddyfile includes only hosts like this:

foo.example.com {
    tls mail@example.com
    prometheus 0.0.0.0:9180
    proxy / foo.example.rancher.internal:1234 {
        transparent
    }
}

The metrics are exposed and also scraped.
Nevertheless, there are no metrics with a caddy_ prefix.

Let me know if you need more info.

Thanks.

@miekg
Owner

miekg commented Sep 8, 2016

So without the proxy it does work?


@metalmatze
Author

Actually, I don't know. All my applications are inside containers and are only proxied to.
I'd have to put an HTML file into the Caddy container and test it.

@miekg
Owner

miekg commented Sep 9, 2016

I can't really tell what the problem is here (or whether there is one). Can you curl the metrics and show that they don't contain what you expect?


@metalmatze
Author

OK, so I've added a test host like this:

test.example.com:80 {
    tls off
    prometheus 0.0.0.0:9180
    root /root/test
}

After letting Caddy run for 5 minutes and hitting the site a few times (and others as well), I have the following metrics.

# HELP caddy_http_request_count_total Counter of HTTP(S) requests made.
# TYPE caddy_http_request_count_total counter
caddy_http_request_count_total{family="1",host="test.example.com",proto="1.1"} 41
# HELP caddy_http_request_duration_seconds Histogram of the time (in seconds) each request took.
# TYPE caddy_http_request_duration_seconds histogram
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="0.005"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="0.01"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="0.025"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="0.05"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="0.1"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="0.25"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="0.5"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="1"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="2.5"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="5"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="10"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="15"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="20"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="30"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="60"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="120"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="180"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="240"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="480"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="960"} 41
caddy_http_request_duration_seconds_bucket{family="1",host="test.example.com",proto="1.1",le="+Inf"} 41
caddy_http_request_duration_seconds_sum{family="1",host="test.example.com",proto="1.1"} 0.006996537999999998
caddy_http_request_duration_seconds_count{family="1",host="test.example.com",proto="1.1"} 41
# HELP caddy_http_response_size_bytes Size of the returns response in bytes.
# TYPE caddy_http_response_size_bytes histogram
caddy_http_response_size_bytes_bucket{host="test.example.com",le="0"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="500"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="1000"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="2000"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="3000"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="4000"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="5000"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="10000"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="20000"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="30000"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="50000"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="100000"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="500000"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="1e+06"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="2e+06"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="3e+06"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="4e+06"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="5e+06"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="1e+07"} 41
caddy_http_response_size_bytes_bucket{host="test.example.com",le="+Inf"} 41
caddy_http_response_size_bytes_sum{host="test.example.com"} 0
caddy_http_response_size_bytes_count{host="test.example.com"} 41
# HELP caddy_http_response_status_count_total Counter of response status codes.
# TYPE caddy_http_response_status_count_total counter
caddy_http_response_status_count_total{host="test.example.com",status="200"} 41
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 2.9376e-05
go_gc_duration_seconds{quantile="0.25"} 3.5323e-05
go_gc_duration_seconds{quantile="0.5"} 4.1581e-05
go_gc_duration_seconds{quantile="0.75"} 7.5259e-05
go_gc_duration_seconds{quantile="1"} 0.000118981
go_gc_duration_seconds_sum 0.000525148
go_gc_duration_seconds_count 9
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 25
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 4.124304e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.45262e+07
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.445159e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 77422
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 503808
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 4.124304e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 3.555328e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 4.931584e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 13824
# HELP go_memstats_heap_released_bytes_total Total number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes_total counter
go_memstats_heap_released_bytes_total 0
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 8.486912e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.4734118823908885e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 313
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 91246
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 1200
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 43680
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 65536
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.194304e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 502737
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 950272
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 950272
# HELP go_memstats_sys_bytes Number of bytes obtained by system. Sum of all system allocations.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.1970808e+07
# HELP http_request_duration_microseconds The HTTP request latencies in microseconds.
# TYPE http_request_duration_microseconds summary
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} 6451.506
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} 9795.398
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} 10574.086
http_request_duration_microseconds_sum{handler="prometheus"} 157535.27499999997
http_request_duration_microseconds_count{handler="prometheus"} 22
# HELP http_request_size_bytes The HTTP request sizes in bytes.
# TYPE http_request_size_bytes summary
http_request_size_bytes{handler="prometheus",quantile="0.5"} 479
http_request_size_bytes{handler="prometheus",quantile="0.9"} 479
http_request_size_bytes{handler="prometheus",quantile="0.99"} 479
http_request_size_bytes_sum{handler="prometheus"} 8988
http_request_size_bytes_count{handler="prometheus"} 22
# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
http_requests_total{code="200",handler="prometheus",method="get"} 22
# HELP http_response_size_bytes The HTTP response sizes in bytes.
# TYPE http_response_size_bytes summary
http_response_size_bytes{handler="prometheus",quantile="0.5"} 1744
http_response_size_bytes{handler="prometheus",quantile="0.9"} 1792
http_response_size_bytes{handler="prometheus",quantile="0.99"} 1801
http_response_size_bytes_sum{handler="prometheus"} 37995
http_response_size_bytes_count{handler="prometheus"} 22
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.47
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 524288
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 12
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.6515072e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.47341165227e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 2.2294528e+07

What's wrong with the metrics:

  1. There's only one host in the metrics, and it's the one that is not a proxy but serves static files.
  2. caddy_http_response_status_count_total{host="test.example.com",status="200"} 41: in reality the status should be 404, not 200. Caddy returned a 404 to me.

I know that we tried this at work too and had the same issues.

@miekg
Owner

miekg commented Sep 9, 2016

> OK, so I've added a test host like this:
>
> test.example.com:80 {
>   tls off
>   prometheus 0.0.0.0:9180
>   root /root/test
> }
>
> 1. There's only one host in the metrics, and it's the one that is not a proxy but serves static files.

Which, looking at the above config file, makes sense.

> 2. caddy_http_response_status_count_total{host="test.example.com",status="200"} 41: in reality the status should be 404, not 200. Caddy returned a 404 to me.

There is no magic rewriting going on. Hmm, just looking at handler.go: I use rw.Status() in the metrics; I think I should use the returned status instead...

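For readers following along, here is a minimal sketch of what that change could look like. It assumes the Caddy 0.9-era httpserver.Handler interface, the httpserver.ResponseRecorder mentioned above, and the standard Prometheus Go client; the handler type and wiring are illustrative, not the plugin's actual source.

package metrics

import (
    "net/http"
    "strconv"

    "github.com/mholt/caddy/caddyhttp/httpserver"
    "github.com/prometheus/client_golang/prometheus"
)

// requestStatus mirrors the caddy_http_response_status_count_total counter
// seen in the scrape above; the label set here is illustrative.
var requestStatus = prometheus.NewCounterVec(
    prometheus.CounterOpts{
        Namespace: "caddy",
        Subsystem: "http",
        Name:      "response_status_count_total",
        Help:      "Counter of response status codes.",
    },
    []string{"host", "status"},
)

func init() {
    prometheus.MustRegister(requestStatus)
}

// metricsHandler is a sketch of a Caddy 0.9-style middleware, not the plugin's actual type.
type metricsHandler struct {
    next httpserver.Handler
}

func (h metricsHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) (int, error) {
    rec := httpserver.NewResponseRecorder(w)
    status, err := h.next.ServeHTTP(rec, r)

    // Prefer the status the downstream handler returned (e.g. the 404 the
    // static file handler hands back for Caddy's error handling to render);
    // fall back to what was actually written only when the handler returned 0.
    observed := status
    if observed == 0 {
        observed = rec.Status()
    }
    requestStatus.WithLabelValues(r.Host, strconv.Itoa(observed)).Inc()

    return status, err
}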

@metalmatze
Author

Referring to 1: This is just an excerpt from my Caddyfile. There are 15-20 more hosts overall, and they're up and running because I can use the applications behind them 😊

@miekg
Owner

miekg commented Sep 9, 2016

> Referring to 1: This is just an excerpt from my Caddyfile. There are 15-20 more hosts overall, and they're up and running because I can use the applications behind them 😊

I still have no good idea what your problem is. Can you boil it down to (small) Caddyfiles and say exactly what you see and what you expect to see?


@metalmatze
Author

OK, sorry for that.
The problem for me is that I have 15-20 hosts (just like the one in my first post of this issue) inside Caddy that are proxied.
None of those hosts are visible in the metrics; no metrics are shown for them. They should appear in the metrics as well, but they don't.

That's about it. 😊

@ulrichSchreiner

Hi,

I have the same problem. My Caddyfile looks like this:

public.server1:443 {
  prometheus
  log stdout
  gzip
  tls {
        max_certs 10
  }
  proxy / http://localhost:8080 {
    transparent
  }
}
public.server2:443 {
  prometheus
  log stdout
  gzip
  tls {
        max_certs 10
  }
  proxy / http://localhost:8081 {
    transparent
  }
}

Caddy works fine: I can access my public servers by their names, and they proxy the requests to the locally running processes.

But I do not get any metrics with a caddy_ prefix.

@miekg
Owner

miekg commented Sep 17, 2016

Can this be a plugin ordering problem? I.e., does proxy come after prometheus?

@ulrichSchreiner

Hi,

That's right. When I move the plugin to this position:

    "expvar",
    "prometheus", // github.com/miekg/caddy-prometheus
    "proxy",

I get metrics. But I really do not know which is the right position; this was only a test.

And this is my Caddyfile for testing:

usc-xps:8080 {
  tls off
  prometheus
  proxy / https://github.com
}

where usc-xps is the name of my notebook :-)

@miekg
Owner

miekg commented Sep 18, 2016

> Hi, that's right. When I move the plugin to this position:
>
>   "expvar",
>   "prometheus", // github.com/miekg/caddy-prometheus
>   "proxy",
>
> I get metrics. But I really do not know which is the right position; this was only a test.

It needs to be before anything you want to measure; I've put it way up at the beginning, I think even at spot 0. I wonder why the default Caddy download does...
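For reference, the ordering being edited here is a plain Go slice in the Caddy source; in the 0.9-era tree it sits in caddyhttp/httpserver/plugin.go. The exact file name and the surrounding entries below are stated from memory and should be treated as an assumption, not a quote of the real list. Handlers for earlier directives wrap the later ones, which is why prometheus has to sit above proxy, roughly like this:

package httpserver

// Excerpt of the directive ordering (Caddy 0.9-era); most entries are elided.
// Directives listed earlier run earlier in the middleware chain, so
// "prometheus" only sees a request if it appears before a terminating
// directive such as "proxy" that handles the request.
var directives = []string{
    "root",
    "tls",
    // ...
    "prometheus", // github.com/miekg/caddy-prometheus: before anything it should measure
    // ...
    "expvar",
    "proxy",
    // ...
}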

@metalmatze
Author

So who's in charge of the official downloads? Probably @mholt, right? 😊

@miekg
Owner

miekg commented Sep 18, 2016

Yes @mholt

@ulrichSchreiner

Anything I can do to get the reordering?

@miekg
Owner

miekg commented Sep 20, 2016

> Anything I can do to get the reordering?

Use the source, Luke... Edit the file where the ordering is defined, put this one before proxy, recompile, done.

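A bit of context on why this needs a recompile at all: the plugin only registers its directive name with Caddy, while the directive's position relative to proxy is decided entirely by the central ordering list shown above. A rough sketch of the registration side, assuming the Caddy 0.9-era plugin API (not the plugin's exact source):

package metrics

import "github.com/mholt/caddy"

func init() {
    // Registration only declares the "prometheus" directive for the HTTP
    // server type; where it sits in the middleware chain comes from the
    // directives list in the Caddy source, so changing the order means
    // editing that list and rebuilding Caddy.
    caddy.RegisterPlugin("prometheus", caddy.Plugin{
        ServerType: "http",
        Action:     setup,
    })
}

func setup(c *caddy.Controller) error {
    // The real plugin parses the prometheus block from the Caddyfile here
    // and installs its metrics middleware; omitted in this sketch.
    return nil
}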

@ulrichSchreiner

Yes master :-)

But it would be great to have the official Caddy build with another ordering as well, because my playbooks for my servers download the official binaries. So I wonder if I should file an issue/PR against Caddy?

@miekg
Owner

miekg commented Sep 20, 2016

> Yes master :-)

:-)

> But it would be great to have the official Caddy build with another ordering as well, because my playbooks for my servers download the official binaries. So I wonder if I should file an issue/PR against Caddy?

Yes, please do.


@metalmatze
Author

This issue was fixed with the new Caddy version 0.9.2.

@BasixKOR

BasixKOR commented Aug 5, 2018

Hello, I think I have the same issue. Here is my Caddyfile:

https://git.basix.tech, :2003 {
	prometheus
	proxy / 172.19.0.2:3000 {
		insecure_skip_verify
		transparent
	}
	errors stdout
}

I followed the official Docker custom plugin guide and got this problem. Should I report this to abiosoft/caddy-docker?
