
Prometheus cannot access haproxy site which has no port declaration #3169

Closed
WillCup opened this Issue Sep 14, 2017 · 3 comments

WillCup commented Sep 14, 2017

I use marathon-lb as a proxy for my exporter running in Marathon, and the marathon-lb host is reachable from the Prometheus server host, but Prometheus cannot scrape metrics from it.

docker run --name my-prometheus     -v /tmp/prometheus:/etc/prometheus     -p 9090:9090 prom/prometheus -alertmanager.url=http://10.2.19.112:9093 -config.file=/etc/prometheus/prometheus.yml -storage.local.path=/prometheus -web.console.libraries=/usr/share/prometheus/console_libraries -web.console.templates=/usr/share/prometheus/consoles

The target JSON file referenced by file_sd_config:

[ {"targets": [ "dnode2"] }]
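For context, a minimal scrape config that wires this file in (a sketch; the job name and file path are assumptions, not taken from the setup above). Since the target has no port, Prometheus defaults to :80 for HTTP scrapes, which is why the instance shows up as dnode2:80:

```yaml
scrape_configs:
  - job_name: 'marathon-lb'                    # hypothetical job name
    file_sd_configs:
      - files:
          - '/etc/prometheus/targets.json'     # hypothetical path to the JSON above
```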

On the Prometheus server's /targets page, the target shows:


http://dnode2:80/metrics | DOWN | instance="dnode2:80" | 229ms ago | server returned HTTP status 503 Service Unavailable



[root@etl02 ~]# docker exec -it my-prometheus sh

/prometheus # wget dnode2
Connecting to dnode2 (10.1.5.190:80)
index.html           100% |************************************************************************************|  1016   0:00:00 ETA
/prometheus # ll
sh: ll: not found
/prometheus # ls
DIRTY                              archived_fingerprint_to_timerange  labelname_to_labelvalues
VERSION                            heads.db                           labelpair_to_fingerprints
archived_fingerprint_to_metric     index.html
/prometheus # cat index.html 
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 88866816.0
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 16347136.0
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1505366315.95
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.31
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 7.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 65536.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="2",minor="7",patchlevel="13",version="2.7.13"} 1.0
/prometheus # rm -vf index.html 
removed 'index.html'
/prometheus # wget http://dnode2:80/metrics
Connecting to dnode2:80 (10.1.5.190:80)
wget: server returned error: HTTP/1.0 503 Service Unavailable
/prometheus # wget http://dnode2:80/metrics
Connecting to dnode2:80 (10.1.5.190:80)
wget: server returned error: HTTP/1.0 503 Service Unavailable
/prometheus # wget http://dnode2:80
Connecting to dnode2:80 (10.1.5.190:80)
wget: server returned error: HTTP/1.0 503 Service Unavailable
/prometheus # wget http://dnode2
Connecting to dnode2 (10.1.5.190:80)
index.html           100% |************************************************************************************|  1016   0:00:00 ETA

As shown above, wget dnode2 succeeds inside the container, but wget dnode2:80 fails. Why?

On the physical host, wget dnode2:80 also succeeds:

[root@etl02 ~]# wget dnode2:80
--2017-09-14 02:35:23--  http://dnode2/
Resolving dnode2 (dnode2)... 10.1.5.190
Connecting to dnode2 (dnode2)|10.1.5.190|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: ‘index.html’

    [ <=>                                                                                        ] 1,016       --.-K/s   in 0s      

2017-09-14 02:35:24 (69.1 MB/s) - ‘index.html’ saved [1016]

[root@etl02 ~]# cat index.html 
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 88875008.0
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 16429056.0
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1505366315.95
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 4.26
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 7.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 65536.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="2",minor="7",patchlevel="13",version="2.7.13"} 1.0
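One detail differs between the two transcripts: GNU wget on the physical host rewrote dnode2:80 to http://dnode2/ (dropping the default port, so the Host header it sends is just dnode2), while busybox wget inside the container kept :80 in the URL. If marathon-lb/HAProxy routes requests by an exact Host header match, a Host of dnode2:80 would miss the virtual-host ACL and get a 503 — that routing behavior is an assumption here, not confirmed by the transcripts. A small sketch of how the Host value a client derives from the URL keeps an explicit :80, using Python's urllib (Prometheus' Go HTTP client is assumed to behave similarly):

```python
from urllib.request import Request

# urllib derives the Host it will send from the URL's netloc,
# so an explicit ":80" survives into the request.
with_port = Request("http://dnode2:80/metrics")
without_port = Request("http://dnode2/metrics")

print(with_port.host)     # dnode2:80
print(without_port.host)  # dnode2
```

This matches the transcripts: the container's busybox wget effectively requested with Host dnode2:80 and got 503, while requests with Host dnode2 reached the backend.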

Can anybody help? I need the Prometheus server to scrape the exporter's metrics successfully.

brian-brazil (Member) commented Sep 14, 2017

It makes more sense to ask questions like this on the prometheus-users mailing list rather than in a GitHub issue. On the mailing list, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.

modeyang commented Feb 26, 2018

Has the issue been solved, @WillCup? I'm facing the same issue with an HAProxy backend.

junneyang commented Oct 8, 2018

+1

lock bot locked and limited the conversation to collaborators Apr 6, 2019
