
Percona MySQL Server Exporter

Prometheus exporter for MySQL server metrics. Supported MySQL versions: 5.1 and up. NOTE: Not all collection methods are supported on MySQL < 5.6.

Building and running

Required Grants

CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'XXXXXXXX' WITH MAX_USER_CONNECTIONS 10;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';

NOTE: It is recommended to set a max connection limit for the user to avoid overloading the server with monitoring scrapes under heavy load.
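After creating the account, you can confirm it has the expected privileges (a quick check; the account name matches the example above):

```sql
-- Verify the monitoring account's privileges.
SHOW GRANTS FOR 'exporter'@'localhost';
-- The output should include PROCESS, REPLICATION CLIENT, and SELECT ON *.*.
```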

Build

make

Running

Running using an environment variable:

export DATA_SOURCE_NAME='login:password@(hostname:port)/'
./mysqld_exporter <flags>

Running using ~/.my.cnf:

./mysqld_exporter <flags>
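A minimal ~/.my.cnf for the exporter might look like the following (a sketch; adjust host, port, or socket settings for your environment):

```ini
[client]
user=exporter
password=XXXXXXXX
host=localhost
port=3306
```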

Collector Flags

| Name | MySQL Version | Description |
| ---- | ------------- | ----------- |
| collect.auto_increment.columns | 5.1 | Collect auto_increment columns and max values from information_schema. |
| collect.binlog_size | 5.1 | Collect the current size of all registered binlog files. |
| collect.engine_innodb_status | 5.1 | Collect from SHOW ENGINE INNODB STATUS. |
| collect.engine_tokudb_status | 5.6 | Collect from SHOW ENGINE TOKUDB STATUS. |
| collect.global_status | 5.1 | Collect from SHOW GLOBAL STATUS. (Enabled by default) |
| collect.global_variables | 5.1 | Collect from SHOW GLOBAL VARIABLES. (Enabled by default) |
| collect.info_schema.clientstats | 5.5 | If running with userstat=1, set to true to collect client statistics. |
| collect.info_schema.innodb_metrics | 5.6 | Collect metrics from information_schema.innodb_metrics. |
| collect.info_schema.innodb_tablespaces | 5.7 | Collect metrics from information_schema.innodb_sys_tablespaces. |
| collect.info_schema.processlist | 5.1 | Collect thread state counts from information_schema.processlist. |
| collect.info_schema.processlist.min_time | 5.1 | Minimum time a thread must be in each state to be counted. (default: 0) |
| collect.info_schema.query_response_time | 5.5 | Collect query response time distribution if query_response_time_stats is ON. |
| collect.info_schema.tables | 5.1 | Collect metrics from information_schema.tables. (Enabled by default) |
| collect.info_schema.tables.databases | 5.1 | The list of databases to collect table stats for, or '*' for all. |
| collect.info_schema.tablestats | 5.1 | If running with userstat=1, set to true to collect table statistics. |
| collect.info_schema.userstats | 5.1 | If running with userstat=1, set to true to collect user statistics. |
| collect.perf_schema.eventsstatements | 5.6 | Collect metrics from performance_schema.events_statements_summary_by_digest. |
| collect.perf_schema.eventsstatements.digest_text_limit | 5.6 | Maximum length of the normalized statement text. (default: 120) |
| collect.perf_schema.eventsstatements.limit | 5.6 | Limit the number of events statements digests by response time. (default: 250) |
| collect.perf_schema.eventsstatements.timelimit | 5.6 | Limit how old the 'last_seen' events statements can be, in seconds. (default: 86400) |
| collect.perf_schema.eventswaits | 5.5 | Collect metrics from performance_schema.events_waits_summary_global_by_event_name. |
| collect.perf_schema.file_events | 5.6 | Collect metrics from performance_schema.file_summary_by_event_name. |
| collect.perf_schema.file_instances | 5.5 | Collect metrics from performance_schema.file_summary_by_instance. |
| collect.perf_schema.indexiowaits | 5.6 | Collect metrics from performance_schema.table_io_waits_summary_by_index_usage. |
| collect.perf_schema.tableiowaits | 5.6 | Collect metrics from performance_schema.table_io_waits_summary_by_table. |
| collect.perf_schema.tablelocks | 5.6 | Collect metrics from performance_schema.table_lock_waits_summary_by_table. |
| collect.slave_status | 5.1 | Collect from SHOW SLAVE STATUS. (Enabled by default) |
| collect.heartbeat | 5.1 | Collect from heartbeat. |
| collect.heartbeat.database | 5.1 | Database from where to collect heartbeat data. (default: heartbeat) |
| collect.heartbeat.table | 5.1 | Table from where to collect heartbeat data. (default: heartbeat) |
| collect.all | - | Collect all metrics. |
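Collectors are toggled on the command line. For example, non-default collectors can be enabled like this (a sketch; the exact flag syntax, single or double dash, can vary between releases of the flag-parsing library):

```
./mysqld_exporter \
  -collect.info_schema.processlist \
  -collect.perf_schema.tableiowaits=true \
  -collect.auto_increment.columns=true
```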

General Flags

| Name | Description |
| ---- | ----------- |
| config.my-cnf | Path to .my.cnf file to read MySQL credentials from. (default: ~/.my.cnf) |
| log.level | Logging verbosity. (default: info) |
| exporter.lock_wait_timeout | Set a lock_wait_timeout on the connection to avoid long metadata locking. (default: 2 seconds) |
| exporter.log_slow_filter | Add a log_slow_filter to avoid slow query logging of scrapes. NOTE: Not supported by Oracle MySQL. |
| exporter.global-conn-pool | Use a global connection pool instead of creating a new pool for each HTTP request. |
| exporter.max-open-conns | Maximum number of open connections to the database. See https://golang.org/pkg/database/sql/#DB.SetMaxOpenConns |
| exporter.max-idle-conns | Maximum number of connections in the idle connection pool. See https://golang.org/pkg/database/sql/#DB.SetMaxIdleConns |
| exporter.conn-max-lifetime | Maximum amount of time a connection may be reused. See https://golang.org/pkg/database/sql/#DB.SetConnMaxLifetime |
| web.listen-address | Address to listen on for web interface and telemetry. |
| web.telemetry-path | Path under which to expose metrics. |
| version | Print the version information. |

Setting the MySQL server's data source name

The MySQL server's data source name must be set via the DATA_SOURCE_NAME environment variable. The format of this variable is described at https://github.com/go-sql-driver/mysql#dsn-data-source-name.
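For reference, common DSN shapes accepted by the driver look like the following (examples only; substitute your own credentials and endpoints):

```
# TCP connection (a bare (host:port) address, as above, also defaults to TCP)
DATA_SOURCE_NAME='user:password@tcp(hostname:3306)/'

# Unix socket connection
DATA_SOURCE_NAME='user:password@unix(/var/run/mysqld/mysqld.sock)/'
```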

Using Docker

You can deploy this exporter using the prom/mysqld-exporter Docker image.

For example:

docker pull prom/mysqld-exporter

docker run -d -p 9104:9104 --link=my_mysql_container:bdd  \
        -e DATA_SOURCE_NAME="user:password@(bdd:3306)/database" prom/mysqld-exporter

heartbeat

With collect.heartbeat enabled, mysqld_exporter will scrape replication delay measured by heartbeat mechanisms. pt-heartbeat is the reference heartbeat implementation supported.
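The exporter expects a table in the pt-heartbeat format. A minimal sketch, assuming the default database and table names (heartbeat.heartbeat), could look like this; in practice, pt-heartbeat's --create-table option creates the full table for you:

```sql
-- Minimal pt-heartbeat-style table; the exporter reads ts and server_id.
CREATE TABLE heartbeat.heartbeat (
  ts        varchar(26)  NOT NULL,
  server_id int unsigned NOT NULL PRIMARY KEY
);
```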

Prometheus Configuration

The mysqld exporter will expose all metrics from enabled collectors by default, but it can be passed an optional list of collectors to filter metrics. The collect[] parameter accepts values matching Collector Flags names (without the collect. prefix).

This can be useful for specifying different scrape intervals for different collectors.

scrape_configs:
  - job_name: 'mysql global status'
    scrape_interval: 15s
    static_configs:
      - targets:
        - '192.168.1.2:9104'
    params:
      collect[]:
        - global_status

  - job_name: 'mysql performance'
    scrape_interval: 1m
    static_configs:
      - targets:
        - '192.168.1.2:9104'
    params:
      collect[]:
        - perf_schema.tableiowaits
        - perf_schema.indexiowaits
        - perf_schema.tablelocks
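The scrape URLs that result from the collect[] parameters above can be sketched as follows (illustrative only; the host and collector names match the example config, and Prometheus builds these URLs for you):

```python
from urllib.parse import urlencode

# Repeated collect[] parameters select individual collectors for one scrape.
params = [
    ("collect[]", "perf_schema.tableiowaits"),
    ("collect[]", "perf_schema.indexiowaits"),
    ("collect[]", "perf_schema.tablelocks"),
]
url = "http://192.168.1.2:9104/metrics?" + urlencode(params)
print(url)
```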

Example Rules

There are some sample rules available in example.rules.

Visualize

There is a Grafana dashboard for MySQL available as part of the PMM project; a live demo is available on the PMM demo site.

Submit Bug Report

If you find a bug in Percona MySQL Exporter or one of the related projects, you should submit a report to that project's JIRA issue tracker.

Your first step should be to search the existing set of open tickets for a similar report. If you find that someone else has already reported your problem, then you can upvote that report to increase its visibility.

If there is no existing report, submit a report following these steps:

  1. Sign in to Percona JIRA. You will need to create an account if you do not have one.
  2. Go to the Create Issue screen and select the relevant project.
  3. Fill in the Summary, Description, Steps To Reproduce, and Affects Version fields as best you can. If the bug corresponds to a crash, attach the stack trace from the logs.

An excellent resource is Elika Etemad's article on filing good bug reports.

As a general rule of thumb, please try to create bug reports that are:

  • Reproducible. Include steps to reproduce the problem.
  • Specific. Include as much detail as possible: which version, what environment, etc.
  • Unique. Do not duplicate existing tickets.
  • Scoped to a Single Bug. One bug per report.