Connection refused while scraping kube-scheduler metrics #35959
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Is the metrics port exposed on the scheduler pod? You shouldn't need a service if the scheduler is running in-cluster.
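For background: on kubeadm-provisioned clusters, the scheduler's static pod manifest usually pins the secure port to loopback, so the port exists on the pod but is only reachable from the node itself. A trimmed sketch of such a manifest, assuming the default kubeadm layout (the path and image tag here are illustrative):

# /etc/kubernetes/manifests/kube-scheduler.yaml (default kubeadm path)
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-scheduler
      image: registry.k8s.io/kube-scheduler:v1.31.0  # tag varies per cluster
      command:
        - kube-scheduler
        - --bind-address=127.0.0.1  # kubeadm default: port 10259 listens on loopback only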
In our environment, this is reproducible with build 0.111.0 and not reproducible with 0.110.0.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments if you do not have permissions to add labels yourself.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Component(s)
receiver/prometheus
Describe the issue you're reporting
I have a 3-node k8s cluster.
I am running the otel collector as a daemonset with the following config:
extensions:
  # The health_check extension is mandatory for this chart.
  # Without the health_check extension the collector will fail the readiness and liveness probes.
  # The health_check extension can be modified, but should never be removed.
  health_check: {}
  memory_ballast: {}
  bearertokenauth:
    token: "XXXXXX"
processors:
receivers:
exporters:
  logging: {}
  prometheusremotewrite:
    endpoint: "xxxxxxx"
    resource_to_telemetry_conversion:
      enabled: true
    tls:
      insecure: true
    auth:
      authenticator: bearertokenauth
service:
  telemetry:
    metrics:
      address: ${env:MY_POD_IP}:8888
    logs:
      level: debug
  extensions:
    - health_check
    - bearertokenauth
  pipelines:
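(The processors, receivers, and pipelines sections are elided in the config above.) For reference, a minimal sketch of what a kube-scheduler scrape with the prometheus receiver might look like; the job name, TLS, authorization, and relabeling settings below are illustrative assumptions, not the reporter's actual configuration:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: kube-scheduler  # hypothetical job name
          scheme: https
          tls_config:
            insecure_skip_verify: true  # the scheduler serves a self-signed certificate
          authorization:
            credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          kubernetes_sd_configs:
            - role: pod
              namespaces:
                names: [kube-system]
          relabel_configs:
            # keep only the scheduler pods and aim the scrape at the secure port
            - source_labels: [__meta_kubernetes_pod_name]
              regex: kube-scheduler-.*
              action: keep
            - source_labels: [__meta_kubernetes_pod_ip]
              regex: (.+)
              target_label: __address__
              replacement: $$1:10259  # $$ escapes a literal $ in collector configs
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]

Even with a scrape config along these lines, the scrape only succeeds if the scheduler actually listens on the pod IP, which is the question raised below.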
I get the below error:
2024-10-23T12:45:56.402Z debug scrape/scrape.go:1331 Scrape failed {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_pool": "kube-scheduler", "target": "https://100.xx.xx.xx:10259/metrics", "error": "Get \"https://100.xx.xx.xx:10259/metrics\": dial tcp 100.xx.xx.xx:10259: connect: connection refused"}
I have kube-scheduler running as three pods, one per node of the 3-node cluster, in the kube-system namespace.
Do I need a k8s Service of type NodePort to get this to work?
I logged in to the node and ran curl -kvv https://100.xx.xx.xx:10259/metrics, which also gets connection refused, but it does work with
curl -kvv https://localhost:10259/metrics
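That curl result (loopback answers, the pod IP refuses) matches the scheduler binding only to 127.0.0.1; a NodePort Service would not help, because nothing listens on the pod IP in the first place. One possible fix, sketched under the assumption of a kubeadm static pod manifest, is to widen the bind address; note that this also exposes the port beyond the node, so weigh the security trade-off:

# edit /etc/kubernetes/manifests/kube-scheduler.yaml on each control-plane node
spec:
  containers:
    - name: kube-scheduler
      command:
        - kube-scheduler
        - --bind-address=0.0.0.0  # was 127.0.0.1; the kubelet restarts the static pod on save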