
Strange behaviour of Counter #3860

Closed
sigma420 opened this Issue Feb 19, 2018 · 6 comments

sigma420 commented Feb 19, 2018

What did you do?
Used client_java to model a Counter.

What did you expect to see?
The Counter value should always increase over time

What did you see instead? Under which circumstances?
The Counter value flip-flopped, increasing and decreasing continuously. When viewed over a day, the overall counter value increased.

Environment
Linux, Prometheus 2.0

  • System information:

    [ec2-user@xxxxxx ~]$ uname -srm
    Linux 3.10.0-514.6.1.el7.x86_64 x86_64

  • Prometheus version:

    prometheus, version 2.0.0 (branch: HEAD, revision: 0a74f98)
    build user: root@615b82cb36b6
    build date: 20171108-07:11:59

    go version: go1.9.2

brian-brazil commented Feb 19, 2018

Are you scraping through any form of load balancer? This is usually due to scraping multiple instances as one target, which doesn't work.
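
A minimal sketch of the situation Brian describes, using hypothetical names; each CollectorRegistry below stands in for a separate JVM sitting behind the same load-balanced scrape target:

```java
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.Counter;

// Two registries stand in for two JVMs behind one load-balanced target.
public class FlipFlopSketch {
    public static void main(String[] args) {
        CollectorRegistry jvmA = new CollectorRegistry();
        CollectorRegistry jvmB = new CollectorRegistry();

        Counter requestsA = Counter.build()
                .name("requests_total").help("Requests served.").register(jvmA);
        Counter requestsB = Counter.build()
                .name("requests_total").help("Requests served.").register(jvmB);

        requestsA.inc(100); // instance A has handled 100 requests
        requestsB.inc(3);   // instance B has handled only 3

        // A scrape routed through the load balancer hits only one instance at a
        // time, so successive scrapes of the "same" series report 100, 3, 100, 3, ...
        System.out.println(jvmA.getSampleValue("requests_total")); // 100.0
        System.out.println(jvmB.getSampleValue("requests_total")); // 3.0
    }
}
```

Each JVM keeps its own in-process count, so the series appears to rise and fall depending on which instance answered the scrape, while the sum across instances still increases over time.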

sigma420 commented Feb 19, 2018

Hi Brian,

Thanks for the quick response. The targets are individual hosts where the Java app is running. In the graph attached to the issue, I am trying to plot the metric from a single host, xxxx:80, which is listed as a separate target in the config file. Please see the list of targets below.

brian-brazil commented Feb 19, 2018

If there is indeed only one JVM backing that target, then the issue is likely incorrect usage of the client library. What does this metric look like in your code?

sigma420 commented Feb 20, 2018

Hi Brian,

It is defined as follows:

```java
import io.prometheus.client.Counter;
import io.prometheus.client.hotspot.DefaultExports;

public final class Metrics {
    // Note: the static initializer that registers the default JVM exports is commented out.
    /**
    static {
        DefaultExports.initialize();
    } **/

    public static final Counter executeAutomatedWorkflowAsyncTotal = Counter
            .build()
            .name("tnd_nbi_executeAutomatedWorkflowAsync_total")
            .help("Total number of nbi executeAutomatedWorkflow requests")
            .labelNames("workflow", "testname", "usergroup", "userid", "channel")
            .register();
}
```

And it is used as follows:

```java
if (null != params.get("testIdentifier")) {
    Metrics.executeAutomatedWorkflowAsyncFailureTotal
            .labels(wfname, params.get("testIdentifier"), params.get("userDescriptor"),
                    params.get("requestorID"), params.get("channelID"))
            .inc();
}
```

brian-brazil commented Feb 20, 2018

Aside from the discouraged practice of abstracting all your metrics away into a separate class (keep them in the same file where they are used), that looks okay. You've got a load balancer or something similar in there somewhere.
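
A minimal sketch of the layout Brian is suggesting, with hypothetical class and metric names; the counter is declared in the same class that increments it:

```java
import io.prometheus.client.Counter;

// Hypothetical handler: the counter lives in the file where it is incremented.
public class WorkflowHandler {
    private static final Counter workflowRequests = Counter.build()
            .name("workflow_requests_total")
            .help("Total number of workflow requests.")
            .labelNames("workflow", "channel")
            .register();

    public void handle(String workflow, String channel) {
        // ... perform the actual work ...
        workflowRequests.labels(workflow, channel).inc();
    }
}
```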

sigma420 commented Feb 22, 2018

Hi Brian,

Thanks so much for the feedback. It was a load balancing issue.

sigma420 closed this Feb 22, 2018
