Append failed, no token found #3710

Closed
njbartlett opened this Issue Jan 19, 2018 · 5 comments

njbartlett commented Jan 19, 2018

What did you do?

Connected Prometheus to a scrape target

What did you expect to see?

Some data

What did you see instead? Under which circumstances?

level=warn ts=2018-01-19T14:07:13.805719Z caller=scrape.go:673 component="target manager" scrape_pool=atlas_osr2_m1 target=http://10.196.102.19:8089/ msg="append failed" err="no token found"

... on every scrape.

Environment

  • System information:

Darwin 17.3.0 x86_64

  • Prometheus version:
prometheus, version 2.0.0 (branch: HEAD, revision: 0a74f98628a0463dddc90528220c94de5032d1a0)
  build user:       root@615b82cb36b6
  build date:       20171108-07:15:39
  go version:       go1.9.2
  • Alertmanager version:

N/A

  • Prometheus configuration file:
# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'atlas_osr2_m1'
    metrics_path: '/'
    static_configs:
      - targets: ['10.196.102.19:8089']
  • Alertmanager configuration file:

N/A

  • Logs:
$ ./prometheus
level=info ts=2018-01-19T13:59:07.846139Z caller=main.go:215 msg="Starting Prometheus" version="(version=2.0.0, branch=HEAD, revision=0a74f98628a0463dddc90528220c94de5032d1a0)"
level=info ts=2018-01-19T13:59:07.846724Z caller=main.go:216 build_context="(go=go1.9.2, user=root@615b82cb36b6, date=20171108-07:15:39)"
level=info ts=2018-01-19T13:59:07.846741Z caller=main.go:217 host_details=(darwin)
level=info ts=2018-01-19T13:59:07.849558Z caller=web.go:380 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2018-01-19T13:59:07.849316Z caller=targetmanager.go:71 component="target manager" msg="Starting target manager..."
level=info ts=2018-01-19T13:59:07.849257Z caller=main.go:314 msg="Starting TSDB"
level=info ts=2018-01-19T13:59:07.901968Z caller=main.go:326 msg="TSDB started"
level=info ts=2018-01-19T13:59:07.902035Z caller=main.go:394 msg="Loading configuration file" filename=prometheus.yml
level=info ts=2018-01-19T13:59:07.904385Z caller=main.go:371 msg="Server is ready to receive requests."
level=warn ts=2018-01-19T13:59:13.815807Z caller=scrape.go:673 component="target manager" scrape_pool=atlas_osr2_m1 target=http://10.196.102.19:8089/ msg="append failed" err="no token found"
level=warn ts=2018-01-19T13:59:28.804872Z caller=scrape.go:673 component="target manager" scrape_pool=atlas_osr2_m1 target=http://10.196.102.19:8089/ msg="append failed" err="no token found"
level=warn ts=2018-01-19T13:59:43.802955Z caller=scrape.go:673 component="target manager" scrape_pool=atlas_osr2_m1 target=http://10.196.102.19:8089/ msg="append failed" err="no token found"
level=warn ts=2018-01-19T13:59:58.802595Z caller=scrape.go:673 component="target manager" scrape_pool=atlas_osr2_m1 target=http://10.196.102.19:8089/ msg="append failed" err="no token found"
level=warn ts=2018-01-19T14:00:13.804051Z caller=scrape.go:673 component="target manager" scrape_pool=atlas_osr2_m1 target=http://10.196.102.19:8089/ msg="append failed" err="no token found"
level=warn ts=2018-01-19T14:00:28.804156Z caller=scrape.go:673 component="target manager" scrape_pool=atlas_osr2_m1 target=http://10.196.102.19:8089/ msg="append failed" err="no token found"
...

njbartlett commented Jan 19, 2018

Metrics output attached. Note that cat metrics.txt | ./promtool check metrics produces no output and exits without error.

metrics.txt

marsty commented Jan 26, 2018

I have the same problem. Have you solved it?

denyo commented Jan 26, 2018

I had something similar two days ago where the data source was an ASP.NET API. What I did to make it work was

  • add App.Metrics.Formatters.Prometheus to the .NET project
  • configure the WebHost with endpointsOptions.MetricsTextEndpointOutputFormatter = new MetricsPrometheusTextOutputFormatter();
  • change the Prometheus scrape config to metrics_path: '/metrics-text' (see the config sketch below)
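
For reference, a sketch of the corresponding Prometheus scrape config, reusing the job name and target from this report and changing only metrics_path (adjust to your own job):

scrape_configs:
  - job_name: 'atlas_osr2_m1'
    metrics_path: '/metrics-text'
    static_configs:
      - targets: ['10.196.102.19:8089']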

njbartlett commented Jan 26, 2018

I found that the source of the problem was my custom HTTP connector. In my project, the HTTPServer connector provided by the Prometheus Java client does not work because of strange classloading issues with Java's built-in com.sun.net.httpserver.HttpServer class: it interacts badly with the classloader installed by PowerMock in our unit test suite, resulting in java.lang.VerifyError. That is a PowerMock or JVM issue, nothing to do with Prometheus. Because of this I wrote my own connector using NanoHTTPD.

I found that the connector should NOT use keep-alive on the HTTP response, even though the client requests keep-alive. With that change the reported problem is fixed.
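
For anyone hitting the same symptom, here is a minimal sketch of that kind of connector (not the exact code from my project), assuming NanoHTTPD 2.x plus the Prometheus Java simpleclient and simpleclient_common libraries; the port 8089 matches the target in this report, and sending a Connection: close header is one way to avoid keep-alive on the response:

import fi.iki.elonen.NanoHTTPD;
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.exporter.common.TextFormat;

import java.io.IOException;
import java.io.StringWriter;

// Sketch of a NanoHTTPD-based metrics endpoint that renders the Prometheus
// text exposition format and closes the connection after every scrape.
public class MetricsServer extends NanoHTTPD {

    public MetricsServer(int port) {
        super(port);
    }

    @Override
    public Response serve(IHTTPSession session) {
        try {
            // Render all registered collectors in the text exposition format.
            StringWriter writer = new StringWriter();
            TextFormat.write004(writer, CollectorRegistry.defaultRegistry.metricFamilySamples());

            Response response = newFixedLengthResponse(
                    Response.Status.OK, TextFormat.CONTENT_TYPE_004, writer.toString());
            // Ask the scraper not to reuse the connection, even though it
            // requests keep-alive.
            response.addHeader("Connection", "close");
            return response;
        } catch (IOException e) {
            return newFixedLengthResponse(
                    Response.Status.INTERNAL_ERROR, "text/plain", e.getMessage());
        }
    }

    public static void main(String[] args) throws IOException {
        // Port 8089 matches the scrape target in this report.
        new MetricsServer(8089).start(NanoHTTPD.SOCKET_READ_TIMEOUT, false);
    }
}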

njbartlett closed this Jan 26, 2018

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
