"If you see this your monitoring system is scraping the wrong fields" #56

Closed
Kaeltis opened this Issue Dec 4, 2017 · 12 comments

Comments

Kaeltis commented Dec 4, 2017

Since upgrading my monitors from luminous 12.2.1 to 12.2.2, I'm seeing the following warning in ceph-dash:

'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'

k0ste (Contributor) commented Dec 9, 2017

ceph status --format=json-pretty output before disabling the compat warning:

{
    "fsid": "5532c4fd-60db-43ff-af9a-c4eb8523382b",
    "health": {
        "checks": {},
        "status": "HEALTH_OK",
        "summary": [
            {
                "severity": "HEALTH_WARN",
                "summary": "'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'"
            }
        ],
        "overall_status": "HEALTH_WARN"
    },
    "election_epoch": 118,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "ceph-mon0",
        "ceph-mon1",
        "ceph-mon2"
    ],
    "monmap": {
        "epoch": 3,
        "fsid": "5532c4fd-60db-43ff-af9a-c4eb8523382b",
        "modified": "2017-12-09 21:37:05.345612",
        "created": "2017-04-15 17:27:02.430465",
        "features": {
            "persistent": [
                "kraken",
                "luminous"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "ceph-mon0",
                "addr": "172.16.16.2:6789/0",
                "public_addr": "172.16.16.2:6789/0"
            },
            {
                "rank": 1,
                "name": "ceph-mon1",
                "addr": "172.16.16.3:6789/0",
                "public_addr": "172.16.16.3:6789/0"
            },
            {
                "rank": 2,
                "name": "ceph-mon2",
                "addr": "172.16.16.4:6789/0",
                "public_addr": "172.16.16.4:6789/0"
            }
        ]
    },
    "osdmap": {
        "osdmap": {
            "epoch": 7847,
            "num_osds": 30,
            "num_up_osds": 30,
            "num_in_osds": 30,
            "full": false,
            "nearfull": false,
            "num_remapped_pgs": 0
        }
    },
    "pgmap": {
        "pgs_by_state": [
            {
                "state_name": "active+clean",
                "count": 784
            }
        ],
        "num_pgs": 784,
        "num_pools": 4,
        "num_objects": 2313880,
        "data_bytes": 9652735833645,
        "bytes_used": 21349082923008,
        "bytes_avail": 88637178560512,
        "bytes_total": 109986261483520,
        "read_bytes_sec": 476802,
        "write_bytes_sec": 3934857,
        "read_op_per_sec": 154,
        "write_op_per_sec": 543,
        "promote_op_per_sec": 0
    },
    "fsmap": {
        "epoch": 1,
        "by_rank": []
    },
    "mgrmap": {
        "epoch": 276,
        "active_gid": 99035003,
        "active_name": "ceph-mon1",
        "active_addr": "172.16.16.3:6800/6875",
        "available": true,
        "standbys": [
            {
                "gid": 99015591,
                "name": "ceph-mon2",
                "available_modules": [
                    "balancer",
                    "dashboard",
                    "influx",
                    "localpool",
                    "prometheus",
                    "restful",
                    "selftest",
                    "status",
                    "zabbix"
                ]
            },
            {
                "gid": 99035006,
                "name": "ceph-mon0",
                "available_modules": [
                    "balancer",
                    "dashboard",
                    "influx",
                    "localpool",
                    "prometheus",
                    "restful",
                    "selftest",
                    "status",
                    "zabbix"
                ]
            }
        ],
        "modules": [
            "balancer",
            "dashboard",
            "restful",
            "status"
        ],
        "available_modules": [
            "balancer",
            "dashboard",
            "influx",
            "localpool",
            "prometheus",
            "restful",
            "selftest",
            "status",
            "zabbix"
        ],
        "services": {
            "dashboard": "http://localhost:7000/"
        }
    },
    "servicemap": {
        "epoch": 0,
        "modified": "0.000000",
        "services": {}
    }
}

exec ceph tell mon.* injectargs "--mon_health_preluminous_compat_warning=false"

Now ceph-dash is happy (see the note after the JSON below for making this setting persistent):

{
    "fsid": "5532c4fd-60db-43ff-af9a-c4eb8523382b",
    "health": {
        "checks": {},
        "status": "HEALTH_OK"
    },
    "election_epoch": 118,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "ceph-mon0",
        "ceph-mon1",
        "ceph-mon2"
    ],
    "monmap": {
        "epoch": 3,
        "fsid": "5532c4fd-60db-43ff-af9a-c4eb8523382b",
        "modified": "2017-12-09 21:37:05.345612",
        "created": "2017-04-15 17:27:02.430465",
        "features": {
            "persistent": [
                "kraken",
                "luminous"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "ceph-mon0",
                "addr": "172.16.16.2:6789/0",
                "public_addr": "172.16.16.2:6789/0"
            },
            {
                "rank": 1,
                "name": "ceph-mon1",
                "addr": "172.16.16.3:6789/0",
                "public_addr": "172.16.16.3:6789/0"
            },
            {
                "rank": 2,
                "name": "ceph-mon2",
                "addr": "172.16.16.4:6789/0",
                "public_addr": "172.16.16.4:6789/0"
            }
        ]
    },
    "osdmap": {
        "osdmap": {
            "epoch": 7847,
            "num_osds": 30,
            "num_up_osds": 30,
            "num_in_osds": 30,
            "full": false,
            "nearfull": false,
            "num_remapped_pgs": 0
        }
    },
    "pgmap": {
        "pgs_by_state": [
            {
                "state_name": "active+clean",
                "count": 784
            }
        ],
        "num_pgs": 784,
        "num_pools": 4,
        "num_objects": 2313916,
        "data_bytes": 9652889047085,
        "bytes_used": 21349614301184,
        "bytes_avail": 88636647182336,
        "bytes_total": 109986261483520,
        "read_bytes_sec": 499828,
        "write_bytes_sec": 21563512,
        "read_op_per_sec": 49,
        "write_op_per_sec": 457
    },
    "fsmap": {
        "epoch": 1,
        "by_rank": []
    },
    "mgrmap": {
        "epoch": 276,
        "active_gid": 99035003,
        "active_name": "ceph-mon1",
        "active_addr": "172.16.16.3:6800/6875",
        "available": true,
        "standbys": [
            {
                "gid": 99015591,
                "name": "ceph-mon2",
                "available_modules": [
                    "balancer",
                    "dashboard",
                    "influx",
                    "localpool",
                    "prometheus",
                    "restful",
                    "selftest",
                    "status",
                    "zabbix"
                ]
            },
            {
                "gid": 99035006,
                "name": "ceph-mon0",
                "available_modules": [
                    "balancer",
                    "dashboard",
                    "influx",
                    "localpool",
                    "prometheus",
                    "restful",
                    "selftest",
                    "status",
                    "zabbix"
                ]
            }
        ],
        "modules": [
            "balancer",
            "dashboard",
            "restful",
            "status"
        ],
        "available_modules": [
            "balancer",
            "dashboard",
            "influx",
            "localpool",
            "prometheus",
            "restful",
            "selftest",
            "status",
            "zabbix"
        ],
        "services": {
            "dashboard": "http://localhost:7000/"
        }
    },
    "servicemap": {
        "epoch": 0,
        "modified": "0.000000",
        "services": {}
    }
}
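
Note: injectargs only changes the option on the running monitors and does not survive a daemon restart. To make it persistent, the same option can be set in ceph.conf on the monitor hosts, for example (a minimal sketch; putting it under [mon] rather than [global] is just one reasonable choice):

[mon]
mon health preluminous compat warning = false
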
Kaeltis (Author) commented Dec 9, 2017

Well, that only removes the warning. But I'm guessing the underlying issue, that the old fields are still being scraped, is still there.

k0ste (Contributor) commented Dec 9, 2017

@Kaeltis you are absolutely right. I was just showing "what happens with ceph-dash if...", together with the JSON output.

k0ste (Contributor) commented Dec 9, 2017

Well, check_ceph_dash is not working properly with 12.2.2 either. I patched it in Crapworks/check_ceph_dash#5 and it works for me now.
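
The fix presumably boils down to reading the new health fields. A minimal sketch of a luminous-aware health lookup with a pre-luminous fallback might look like the following Python (illustrative only, not the actual ceph-dash or check_ceph_dash code; the per-check summary.message layout is an assumption):

# Minimal sketch (not the actual ceph-dash/check_ceph_dash code): read cluster
# health from 'ceph status --format=json' output, preferring the luminous
# fields and falling back to the pre-luminous ones.
import json
import subprocess


def cluster_health(status):
    """Return (health_status, messages) from a parsed 'ceph status' dict."""
    health = status.get("health", {})

    if "status" in health:
        # Luminous and later: 'status' plus a 'checks' dict. The
        # summary.message layout per check is an assumption here.
        messages = [c.get("summary", {}).get("message", "")
                    for c in health.get("checks", {}).values()]
        return health["status"], messages

    # Pre-luminous: 'overall_status' plus a 'summary' list, as in the
    # JSON posted above.
    messages = [s.get("summary", "") for s in health.get("summary", [])]
    return health.get("overall_status", "HEALTH_UNKNOWN"), messages


if __name__ == "__main__":
    raw = subprocess.check_output(["ceph", "status", "--format=json"])
    print(cluster_health(json.loads(raw)))

With a lookup like this in place, the compat warning can be disabled safely, since nothing reads overall_status or summary anymore.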

k0ste (Contributor) commented Dec 9, 2017

@Kaeltis test #57 please.

Kaeltis (Author) commented Dec 9, 2017

It's working, thanks! :)

Crapworks closed this in #57 on Dec 9, 2017

Crapworks (Owner) commented Dec 9, 2017

@k0ste I've merged the PR, thanks for your work!

benlu36 commented Apr 25, 2018

My openATTIC version is 2.0.22.
ceph version 12.2.4 luminous (stable), cluster running on Ubuntu 16.04.4 LTS.

My Ceph cluster's health status looks fine from "ceph -s" on the command line, but the openATTIC dashboard shows the following messages for the Ceph cluster:

The Ceph cluster is not operating correctly

'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'

Crapworks (Owner) commented Apr 25, 2018

Hi @benlu36,

you should probably contact the openATTIC developers then. This is ceph-dash, and I don't think the two have anything to do with each other.

Cheers,
Christian

dazhi509 commented Nov 28, 2018

@k0ste Hi, how can I get cluster information as detailed as what you posted? Is there a convenient way? I only found commands like ceph -s, ceph osd stat, and mon_status.

k0ste (Contributor) commented Nov 28, 2018

@dazhi509

ceph status --format="json-pretty"
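
Most ceph subcommands accept the same --format flag, so other views can be dumped in the same detail, e.g. ceph health detail --format=json-pretty, ceph df --format=json-pretty, or ceph osd df --format=json-pretty (standard Ceph CLI commands, not specific to this thread).
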
dazhi509 commented Nov 28, 2018

@k0ste thanks
