
Azure discovery panics since #4202 (#4447)

Closed · sylr opened this issue Aug 1, 2018 · 6 comments · 3 participants

sylr (Contributor) commented Aug 1, 2018

I ran prometheus:master in my Azure-based Kubernetes cluster to test #4202.

Here is the error I got:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x895413]

goroutine 126 [running]:
github.com/prometheus/prometheus/discovery/azure.mapFromVM(...)
        /go/src/github.com/prometheus/prometheus/discovery/azure/azure.go:436
github.com/prometheus/prometheus/discovery/azure.(*azureClient).getVMs(0xc428bc2000, 0xc42788b6b0, 0x24, 0xc42788b6e0, 0x24, 0xc42788b650)
        /go/src/github.com/prometheus/prometheus/discovery/azure/azure.go:357 +0x2b3
github.com/prometheus/prometheus/discovery/azure.(*Discovery).refresh(0xc426ac2cc0, 0xc427392570, 0x0, 0x0)
        /go/src/github.com/prometheus/prometheus/discovery/azure/azure.go:240 +0x31e
github.com/prometheus/prometheus/discovery/azure.(*Discovery).Run(0xc426ac2cc0, 0x1ebcfa0, 0xc420082fc0, 0xc42368dbc0)
        /go/src/github.com/prometheus/prometheus/discovery/azure/azure.go:134 +0xbb
created by github.com/prometheus/prometheus/discovery.(*Manager).startProvider
        /go/src/github.com/prometheus/prometheus/discovery/manager.go:133 +0x116
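
For context, the panic at azure.go:436 (mapFromVM) has the classic shape of dereferencing a pointer that is nil because the VM carries no tags. Below is a minimal, self-contained sketch of that failure class; the type and field are hypothetical illustrations, not the actual Prometheus or Azure SDK definitions:

package main

import "fmt"

// virtualMachine stands in for an SDK VM type whose Tags field is a pointer
// to a map and stays nil when the VM has no tags (hypothetical shape).
type virtualMachine struct {
	Tags *map[string]*string
}

func main() {
	vm := virtualMachine{} // a VM with no tags: Tags is nil
	// Dereferencing the nil pointer panics with the same class of error as
	// the trace above: "invalid memory address or nil pointer dereference".
	for key, value := range *vm.Tags {
		fmt.Println(key, value)
	}
}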
brian-brazil (Member) commented Aug 1, 2018

@johscheuer Could you take a look, please?

Can you share the relevant bit of your config, please?

sylr (Contributor, Author) commented Aug 1, 2018

- job_name: azure
  scrape_interval: 30s
  scrape_timeout: 20s
  metrics_path: /metrics
  scheme: http
  azure_sd_configs:
  - port: 9100
    subscription_id: XXXXXXXXXXXXXXXXXXXXXXXX
    tenant_id: XXXXXXXXXXXXXXXXXXXXXXXX
    client_id: XXXXXXXXXXXXXXXXXXXXXXXX
    client_secret: <secret>
    refresh_interval: 5m
  relabel_configs:
  - source_labels: [__meta_azure_machine_tag_prometheus_io_scrape]
    separator: ;
    regex: '[Tt]rue'
    replacement: $1
    action: keep
  - separator: ;
    regex: __meta_azure_(machine_(name|location|resource_group))
    replacement: ${1}
    action: labelmap
johscheuer (Contributor) commented Aug 1, 2018

I found the error; I will create a PR with the fix tomorrow (this was introduced when we removed the pointers).

We should probably add some unit tests for the Azure discovery.

johscheuer (Contributor) commented Aug 2, 2018

I created a PR with the corresponding fix: #4450.

Sorry for introducing this bug! In my tests all VMs had tags, so I didn't encounter it.
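
For readers landing here from the panic: the usual remedy for this failure class is to guard the dereference before copying the tags. Here is a hedged sketch of that pattern under the same hypothetical type as above; the names (virtualMachine, azureTags) are illustrative only, and the actual change is in #4450:

package main

import "fmt"

// virtualMachine again stands in for an SDK VM type with a nilable Tags
// pointer (hypothetical shape, as in the earlier sketch).
type virtualMachine struct {
	Tags *map[string]*string
}

// azureTags copies the VM's tags, returning an empty map when the VM has
// none; the nil check is what prevents the SIGSEGV above.
func azureTags(vm virtualMachine) map[string]*string {
	tags := map[string]*string{}
	if vm.Tags != nil {
		for key, value := range *vm.Tags {
			tags[key] = value
		}
	}
	return tags
}

func main() {
	fmt.Println(len(azureTags(virtualMachine{}))) // 0, no panic
}

A unit test that feeds in a VM literal with Tags left nil would cover exactly the untagged-VM case reported here, in line with the earlier suggestion to add tests for the Azure discovery.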

sylr (Contributor, Author) commented Aug 2, 2018

So far so good.

lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 22, 2019
