
Exposed metrics are incorrect #32

Open · Alinda3 opened this issue Jul 23, 2021 · 18 comments

@Alinda3

Alinda3 commented Jul 23, 2021

Hi,
I am trying to use this exporter to get the Cloudflare metrics as shown in the repo. I used the docker image lablabs/cloudflare_exporter and set the env variables CF_API_EMAIL, CF_API_KEY, and CF_ZONES, but I do not see any metrics related to Cloudflare. I only see metrics like these:

go_memstats_lookups_total 0
go_memstats_mallocs_total 144710
go_memstats_mcache_inuse_bytes 9600
go_memstats_mcache_sys_bytes 16384
I am not sure what I am doing wrong. Any suggestions would be appreciated.
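
For reference, I'm starting the container roughly like this (a minimal sketch; the email, key, and zone IDs are placeholders, not my real values):

docker run --rm -p 8080:8080 \
  -e CF_API_EMAIL="user@example.com" \
  -e CF_API_KEY="XXXXXX" \
  -e CF_ZONES="AAAAA,BBBBB" \
  lablabs/cloudflare_exporter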

Thank you.

@martinhaus
Member

@Alinda3 Hi, can you post the log output from the exporter?

@Alinda3
Author

Alinda3 commented Jul 23, 2021

Hi, the logs look like this:

time="2021-07-23 18:48:16" level=info msg="Beginning to serve on port:8080 metrics path /metrics"
time="2021-07-23 18:48:18" level=info msg="Filtering zone: defbad1911a87cadfcedda61db090 mark.cloud"
time="2021-07-23 18:49:16" level=info msg="Filtering zone: defbad1911a87cadfcedda61db090 mark.cloud"
time="2021-07-23 18:50:16" level=info msg="Filtering zone: defbad1911a87cadfcedda61db090 mark.cloud"

@aksenk

aksenk commented Oct 12, 2021

Hello. I have a similar problem.

I tried running with the CF_API_KEY and CF_API_EMAIL environment variables, and with only CF_API_TOKEN.
I tried running both with and without the CF_ZONES environment variable.

But I never see any Cloudflare metrics.

I only see the following default metrics:
go_* (like go_memstats_alloc_bytes_total)
process_* (like process_cpu_seconds_total)
promhttp_metric_handler_requests_in_flight
promhttp_metric_handler_requests_total

In the logs I see only:
time="2021-10-12 12:14:05" level=info msg="Beginning to serve on port:8080, metrics path /metrics"

The Cloudflare account has both free and pro plan domains.

@garretcoffman

I'm having a similar issue, but what is weird is that when I curl localhost:port/metrics I get all the metrics, yet Prometheus isn't seeing them.
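
For reference, the minimal static scrape config I would expect to pick this up looks like the sketch below (the target is a placeholder for wherever the exporter listens):

scrape_configs:
  - job_name: cloudflare
    static_configs:
      - targets: ["localhost:8080"]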

@alexkorotysh

alexkorotysh commented Nov 29, 2021

I have the same problem; I only get Go metrics on the metrics path "/metrics"

/metrics
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.8679e-05
go_gc_duration_seconds{quantile="0.25"} 7.4606e-05
go_gc_duration_seconds{quantile="0.5"} 0.000106982
go_gc_duration_seconds{quantile="0.75"} 0.000119734
go_gc_duration_seconds{quantile="1"} 0.000253345
go_gc_duration_seconds_sum 0.001464768
go_gc_duration_seconds_count 14
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 11
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.17.1"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 3.486344e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.387528e+07
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 4227
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 83190
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 3.2851146168064272e-06
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 5.14756e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 3.486344e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 2.90816e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 4.988928e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 9405
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 2.62144e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 7.897088e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.638185740803824e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 92595
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 4800
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 81192
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 114688
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 6.100384e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 962277
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 491520
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 491520
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.4633744e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 6
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.27
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 12
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.8010112e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.63818442004e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 7.2945664e+08
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes -1
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 23
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0

Pod output:
time="2021-11-29 11:13:40" level=info msg="Beginning to serve on port:8080, metrics path /metrics"
time="2021-11-29 11:13:41" level=info msg="Filtering zone: XXXXXXXXXXXXXXXXXXXX domain.com"

@slyshovsv

Hi @martinhaus, I've got the same here. I tested tags 0.0.6–0.0.9 and also built the exporter locally; no cloudflare_* metrics either. I used the CF_API_TOKEN and CF_ZONES envs. If you had it working correctly, please specify the code version you used (or maybe share your build?).

@tete2soja

tete2soja commented Jan 26, 2022

Hello.

I have the same issue using the Docker image. Any idea for a possible fix? I'm using a free plan with FREE_TIER set to true.
I tested the Docker image from the first version up to 0.0.9, as well as the latest image.

Thanks

@szandala

Same here.
Does anyone know what is wrong?

@slyshovsv

Unfortunately, I am not familiar enough with Go to create a PR that will work for everyone, but I'll still write it down here, as it might be useful. What I figured out is that Cloudflare changed the names of some of the metrics, as well as which metrics are available for the different tiers.

I managed to make the exporter somewhat usable with a free plan by updating the GraphQL queries in the cloudflare.go file to the ones that I need and that are available to me (check here). Then, do not forget to rename/update the references in prometheus.go, especially the addHTTPGroups function. You can comment out calls to things like fetchZoneColocationAnalytics, since colocation data is not available on free plans anyway.

Hence, I am now using this chart with my own custom image.

@marcioa6

marcioa6 commented Feb 10, 2022

I was having the same issue. I think it breaks when, for some reason, it cannot gather info on a zone.
Manually setting all the zones I want to scan fixed it for me:

cloudflare_exporter -cf_api_token="XXXXXX" -listen=:8080 -cf_zones="AAAAA,BBBBB,CCCCC,DDDDD"

@tete2soja

Do you have a paid account?
Even with all zones listed, I have no data and get a time="2022-02-10 19:46:12" level=error msg="graphql: not authorized for that account" error.

@marcioa6

Do you have a paid account? Even with all zones listed, I have no data and get a time="2022-02-10 19:46:12" level=error msg="graphql: not authorized for that account" error.

Yes, I have mixed free and paid accounts; I realized that the ones that failed are the free ones.

Add -free_tier=true to your startup:

cloudflare_exporter -cf_api_token="XXXXXX" -listen=:8080 -cf_zones="AAAAA,BBBBB,CCCCC,DDDDD" -free_tier=true

@tete2soja

I tried that, and I get the same error:

$ docker run --rm -p 8888:8081 -e CF_API_TOKEN="" -e FREE_TIER=true -e CF_ZONES="AAAA,BBBB" -e LISTEN=:8081 ghcr.io/lablabs/cloudflare_exporter
time="2022-02-10 19:46:09" level=info msg="Beginning to serve on port:8081, metrics path /metrics"
time="2022-02-10 19:46:12" level=info msg="Filtering zone: AAAA"
time="2022-02-10 19:46:12" level=info msg="Filtering zone: BBBB"
time="2022-02-10 19:46:12" level=error msg="graphql: not authorized for that account"

@marcioa6

That's weird. At this point I would check the API token and see if it has the proper access:
curl -X GET "https://api.cloudflare.com/client/v4/user/tokens/verify" -H "Authorization: Bearer TOKEN" -H "Content-Type:application/json"

{
  "result": {
    "id": "xxxx",
    "status": "active",
    "not_before": "2022-02-10T00:00:00Z",
    "expires_on": "2022-02-12T23:59:59Z"
  },
  "success": true,
  "errors": [],
  "messages": [
    {
      "code": 10000,
      "message": "This API Token is valid and active",
      "type": null
    }
  ]
}

@DjoleLepi

DjoleLepi commented May 19, 2023

EDIT: I see now that recent issues state that metrics are only available for the CF pro plan.

I am also facing this issue.

Running cloudflare-exporter 0.0.14 on Kubernetes with the following config:

/ # env
KUBERNETES_PORT=tcp://10.43.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=cloudflare-exporter-6cf88d946c-z5nfv
SHLVL=1
HOME=/root
CF_API_TOKEN=***
CF_API_KEY=
CLOUDFLARE_EXPORTER_SERVICE_PORT_HTTP=8080
TERM=xterm
FREE_TIER=true
KUBERNETES_PORT_443_TCP_ADDR=10.43.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
CLOUDFLARE_EXPORTER_SERVICE_HOST=10.43.110.134
CLOUDFLARE_EXPORTER_PORT_8080_TCP_ADDR=10.43.110.134
CF_ZONES=zoneid
CF_API_EMAIL=
CLOUDFLARE_EXPORTER_PORT_8080_TCP_PORT=8080
CLOUDFLARE_EXPORTER_PORT_8080_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.43.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
CLOUDFLARE_EXPORTER_SERVICE_PORT=8080
CLOUDFLARE_EXPORTER_PORT=tcp://10.43.110.134:8080
KUBERNETES_SERVICE_HOST=10.43.0.1
PWD=/
CLOUDFLARE_EXPORTER_PORT_8080_TCP=tcp://10.43.110.134:8080

When I do a wget localhost:8080/metrics I don't get any cloudflare_* metrics (only go_*, process_*, and promhttp_* ones).
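
A quick way to confirm there are no exporter-specific series (assuming the container's BusyBox wget):

wget -qO- localhost:8080/metrics | grep -c '^cloudflare_'

This prints 0 here.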

The logs seem to be fine:

time="2023-05-19 10:06:53" level=info msg="Beginning to serve on port:8080, metrics path /metrics"
time="2023-05-19 10:06:56" level=info msg="Filtering zone: zoneid domain.net"
time="2023-05-19 10:07:55" level=info msg="Filtering zone: zoneid domain.net"
time="2023-05-19 10:08:55" level=info msg="Filtering zone: zoneid domain.net"
time="2023-05-19 10:09:55" level=info msg="Filtering zone: zoneid domain.net"
time="2023-05-19 10:10:55" level=info msg="Filtering zone: zoneid domain.net"

@tomich

tomich commented Aug 25, 2023

Same issue here. I tried with my work account, and it seems this exporter does not work when there are one or more free tier zones, even with FREE_TIER=true. My personal account has only free tier zones, so this exporter is of no use to me.

I can see the metrics if I use GraphQL and API access directly with Cloudflare, and the documentation claims metrics and logs are available to free tier accounts (only limited in retention). So I don't know what gives; I think it may be a code error in this exporter.

If anyone needs to add CF to Grafana in the meantime, you can use GraphQL to query CF directly (with CF's retention, obviously), but you have to build your GraphQL queries by hand; see the sketch below.
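
For example, a minimal hand-built query looks something like this (the zone tag and date are placeholders; httpRequests1dGroups is one of the datasets the GraphQL API exposes):

curl -s https://api.cloudflare.com/client/v4/graphql \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"query":"{ viewer { zones(filter: {zoneTag: \"ZONE_ID\"}) { httpRequests1dGroups(limit: 1, filter: {date: \"2023-08-24\"}) { sum { requests bytes } } } } }"}'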

So, the TL;DR version:

If you have only free tier zones --> this exporter does not work.
If you have SOME free tier zones and you have issues --> you can declare your NON free tier zones in CF_ZONES and get metrics for your Pro/Enterprise zones.
If you have only Pro/Enterprise zones --> the exporter works.

Most of the open issues in the issue tracker are related to this. I will reference this issue (as it is the oldest) so the devs can close the others as duplicates if they see fit. Sorry if it generates a bit of noise with notifications, but I think it can help reduce the open issues a lot.

@sachasmart

What a lunch bag let down.

@danielbjornadal

We have paid/Enterprise and free zones, and still could not see anything other than the Go metrics. Both CF_API_TOKEN and CF_ZONES were set, and CF_ZONES only contained our paid domains.

We had created our API token before converting to an Enterprise license. When we recreated the token and enabled only the scopes defined here (https://github.com/lablabs/cloudflare-exporter?tab=readme-ov-file#api-token), it started to work.
