Merged
54 commits
44baa14
Create troubleshoot-high-disk-io.md
King-Dylan Jun 28, 2020
e5a5a57
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
ff2ee92
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
00898de
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
1d2f95e
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
33dc1cc
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
795827a
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
0dec9a0
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
db0d36e
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
98c5132
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
000e623
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
81e9bd3
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
a19112d
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
55a145c
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
6f7f2b8
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
684446c
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
ce9c7c8
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
df83f53
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
9041f37
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
01035dc
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
1bf17f6
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
be98f31
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
1d238ef
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
2d1e3e3
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
a0acb80
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
12dff5a
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
d792ad3
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
52da550
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
f6b1df5
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
f1f2633
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
3bdc193
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
8b92873
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
0038773
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
2271242
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
c0c5517
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
34b9e22
Update troubleshoot-high-disk-io.md
King-Dylan Jul 2, 2020
c5b1370
Update troubleshoot-high-disk-io.md
King-Dylan Jul 6, 2020
2dc60a3
Update troubleshoot-high-disk-io.md
King-Dylan Jul 6, 2020
5f7329d
Update troubleshoot-high-disk-io.md
King-Dylan Jul 6, 2020
2cddcf3
Update troubleshoot-high-disk-io.md
King-Dylan Jul 6, 2020
ce902b1
Update troubleshoot-high-disk-io.md
King-Dylan Jul 6, 2020
f2ad947
Update troubleshoot-high-disk-io.md
King-Dylan Jul 6, 2020
bbc3154
Update troubleshoot-high-disk-io.md
King-Dylan Jul 6, 2020
ba3a37e
Update troubleshoot-high-disk-io.md
King-Dylan Jul 6, 2020
84d239c
Update troubleshoot-high-disk-io.md
King-Dylan Jul 6, 2020
d90fbea
Update troubleshoot-high-disk-io.md
King-Dylan Jul 6, 2020
482319a
Update troubleshoot-high-disk-io.md
TomShawn Jul 7, 2020
736366d
Apply suggestions from code review
TomShawn Jul 7, 2020
133fffe
Update TOC.md
TomShawn Jul 7, 2020
1db21cb
Update troubleshoot-high-disk-io.md
TomShawn Jul 7, 2020
c3c6529
Merge branch 'master' into King-Dylan-patch-1
TomShawn Jul 16, 2020
8a5f2df
Update troubleshoot-high-disk-io.md
TomShawn Jul 16, 2020
16d020d
Apply suggestions from code review
TomShawn Jul 17, 2020
53eccc3
Merge branch 'master' into King-Dylan-patch-1
lilin90 Jul 17, 2020
1 change: 1 addition & 0 deletions TOC.md
@@ -86,6 +86,7 @@
+ [Statement Summary Tables](/statement-summary-tables.md)
+ [Troubleshoot Hotspot Issues](/troubleshoot-hot-spot-issues.md)
+ [Troubleshoot Cluster Setup](/troubleshoot-tidb-cluster.md)
+ [Troubleshoot High Disk I/O Usage](/troubleshoot-high-disk-io.md)
+ [Troubleshoot TiCDC](/ticdc/troubleshoot-ticdc.md)
+ [Troubleshoot TiFlash](/tiflash/troubleshoot-tiflash.md)
+ [Troubleshoot Write Conflicts in Optimistic Transactions](/troubleshoot-write-conflicts.md)
94 changes: 94 additions & 0 deletions troubleshoot-high-disk-io.md
@@ -0,0 +1,94 @@
---
title: Troubleshoot High Disk I/O Usage in TiDB
summary: Learn how to locate and address the issue of high TiDB storage I/O usage.
---

# Troubleshoot High Disk I/O Usage in TiDB

This document introduces how to locate and address the issue of high disk I/O usage in TiDB.

## Check the current I/O metrics

If TiDB still responds slowly after you have ruled out CPU bottlenecks and bottlenecks caused by transaction conflicts, check the I/O metrics to help determine the current system bottleneck.

### Locate I/O issues from monitoring

The quickest way to locate I/O issues is to view the overall I/O status on the Grafana dashboards that TiDB Ansible and TiUP deploy by default. The dashboards related to I/O include **Overview**, **Node_exporter**, and **Disk-Performance**.

#### The first type of monitoring panels

In **Overview** > **System Info** > **IO Util**, you can see the I/O status of each machine in the cluster. This metric is similar to `util` in the Linux `iostat` output. A higher percentage indicates heavier disk I/O usage:

- If only one machine shows high I/O usage, there might be a read or write hotspot on that machine.
- If most machines show high I/O usage, the cluster is under high I/O load.

For the first situation above (only one machine with high I/O usage), you can further check I/O metrics such as `Disk Latency` and `Disk Load` on the **Disk-Performance** dashboard to determine whether any anomaly exists. If necessary, use the fio tool to check the disk.
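
To cross-check the `IO Util` panel on a specific machine, you can look at the `%util` column of `iostat -x` directly. The following is a minimal sketch that filters a captured sample (the device names and numbers are made up) so the logic is self-contained; on a live host you would pipe `iostat -x 1 3` into the same `awk` filter:

```shell
# Captured, simplified `iostat -x` output (last column is %util)
sample='Device  r/s  w/s  rkB/s  wkB/s  %util
sda     10.0  20.0  512.0  1024.0  12.5
nvme0n1 90.0 300.0 4096.0 16384.0  96.3'

# Print devices whose I/O utilization exceeds 80%
echo "$sample" | awk 'NR > 1 && $NF > 80 { print $1, "is saturated:", $NF "%" }'
# -> nvme0n1 is saturated: 96.3%
```

A device that stays near 100% `%util` for sustained periods is a likely hotspot or an undersized disk.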

#### The second type of monitoring panels

The main storage component of the TiDB cluster is TiKV. One TiKV instance contains two RocksDB instances: one for storing Raft logs, located in `data/raft`, and the other for storing real data, located in `data/db`.

In **TiKV-Details** > **Raft IO**, you can see the metrics related to disk writes of these two instances:

- `Append log duration`: This metric indicates the response time of writes into the RocksDB instance that stores Raft logs. The `.99` response time should be within 50 ms.
- `Apply log duration`: This metric indicates the response time of writes into the RocksDB instance that stores real data. The `.99` response time should be within 100 ms.

These two metrics also have corresponding **.. per server** monitoring panels to help you locate write hotspots.

#### The third type of monitoring panels

In **TiKV-Details** > **Storage**, there are monitoring metrics related to storage:

- `Storage command total`: Indicates the number of different commands received.
- `Storage async write duration`: Includes monitoring metrics such as `disk sync duration`, which might be related to Raft I/O. If you encounter an abnormal situation, check the working statuses of related components by checking logs.

#### Other panels

In addition, some other panel metrics can help you confirm whether the bottleneck is I/O, and you can try adjusting the corresponding parameters. By checking the prewrite/commit/raw-put durations (raw-put is for raw key-value clusters only) in TiKV gRPC duration, you can confirm that the bottleneck is indeed slow TiKV writes. Common causes of slow TiKV writes are as follows:

- `append log` is slow. The `Raft I/O`/`append log duration` metric in TiKV Grafana is relatively high, which is often caused by slow disk writes. Check the value of `WAL Sync Duration max` in **RocksDB-raft** to locate the cause of the slow `append log`. If that value is normal, you might need to report a bug.
- The `raftstore` thread is busy. In TiKV Grafana, `Raft Propose`/`propose wait duration` is significantly higher than `append log duration`. Check the following for troubleshooting:

    - Whether the value of `store-pool-size` in `[raftstore]` is too small. It is recommended to keep this value within `[1, 5]` and not too large.
    - Whether the machine's CPU resources are insufficient.

- `apply log` is slow. The `Raft I/O`/`apply log duration` metric in TiKV Grafana is relatively high, which usually comes with a relatively high `Raft Propose`/`apply wait duration`. The possible causes are as follows:

    - The value of `apply-pool-size` in `[raftstore]` is too small. It is recommended to keep this value within `[1, 5]` and not too large. In this case, the `Thread CPU`/`apply cpu` value is also relatively high.
    - Insufficient CPU resources on the machine.
    - Write hotspot in a single Region (a solution to this issue is still in development). The CPU usage of a single `apply` thread is high, which you can view by appending `by (instance, name)` to the Grafana expression.
    - Slow writes into RocksDB, with a high `RocksDB kv`/`max write duration`. A single Raft log might contain multiple key-value pairs (KVs), and KVs are written to RocksDB in batches of 128, so one `apply` log might involve multiple RocksDB writes.
- For other causes, report them as bugs.

- `raft commit log` is slow. In TiKV Grafana, the `Raft I/O`/`commit log duration` metric (only available in Grafana 4.x) is relatively high. Each Region corresponds to an independent Raft group, and Raft has a flow-control mechanism similar to TCP's sliding window. To control the window size, adjust the `[raftstore] raft-max-inflight-msgs` parameter. If there is a write hotspot and `commit log duration` is high, you can set this parameter to a larger value, such as `1024`.
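
The `[raftstore]` parameters discussed above live in the TiKV configuration file. A minimal sketch follows; the numbers are illustrative assumptions, not recommendations, and should be tuned per workload:

```toml
# TiKV configuration file (illustrative values only)
[raftstore]
# Thread pool that writes Raft logs (append log); keep within [1, 5]
store-pool-size = 2
# Thread pool that applies committed logs to RocksDB; keep within [1, 5]
apply-pool-size = 2
# Raft sliding-window size; raise it (for example, to 1024) when a write
# hotspot makes `commit log duration` high
raft-max-inflight-msgs = 1024
```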

### Locate I/O issues from log

- If the client reports errors such as `server is busy`, or especially `raftstore is busy`, the errors might be related to I/O issues.

    You can check the monitoring panel (**Grafana** -> **TiKV** -> **errors**) to confirm the specific cause of the `busy` error. `server is busy` is part of TiKV's flow-control mechanism: TiKV informs `tidb/ti-client` that its current pressure is too high and that the client should retry later.

- `Write stall` appears in TiKV RocksDB logs.

It might be that too many level-0 SST files cause the write stall. To address the issue, you can add the `[rocksdb] max-sub-compactions = 2 (or 3)` parameter to speed up the compaction of level-0 SST files. This parameter means that the compaction tasks of level-0 to level-1 can be divided into `max-sub-compactions` subtasks for multi-threaded concurrent execution.

If the disk's I/O capability cannot keep up with writes, it is recommended to scale up the disk. If the disk's throughput has reached its upper limit (for example, the throughput of SATA SSD is much lower than that of NVMe SSD) and causes write stall while CPU resources are relatively sufficient, try a compression algorithm with a higher compression ratio to relieve the pressure on the disk; that is, trade CPU resources for disk resources.

For example, when the pressure of `default cf compaction` is relatively high, you can change the parameter `[rocksdb.defaultcf] compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]` to `compression-per-level = ["no", "no", "zstd", "zstd", "zstd", "zstd", "zstd"]`.
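
The two mitigations above can be combined in the TiKV configuration file; a hedged sketch:

```toml
# TiKV configuration file (illustrative sketch)
[rocksdb]
# Split level-0 to level-1 compaction into concurrent subtasks
max-sub-compactions = 2

[rocksdb.defaultcf]
# Trade CPU for disk: use zstd (higher compression ratio) from level 2 upward
compression-per-level = ["no", "no", "zstd", "zstd", "zstd", "zstd", "zstd"]
```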

### I/O issues found in alerts

The cluster deployment tools (TiDB Ansible and TiUP) deploy alerting components along with the cluster by default, with built-in alert items and thresholds. The following alert items are related to I/O:

- `TiKV_write_stall`
- `TiKV_raft_log_lag`
- `TiKV_async_request_snapshot_duration_seconds`
- `TiKV_async_request_write_duration_seconds`
- `TiKV_raft_append_log_duration_secs`
- `TiKV_raft_apply_log_duration_secs`

## Handle I/O issues

+ When an I/O hotspot issue is confirmed, refer to [Troubleshoot Hotspot Issues](/troubleshoot-hot-spot-issues.md) to eliminate the I/O hotspot.
+ When it is confirmed that the overall I/O performance has become the bottleneck and the application's I/O demand will keep growing, take advantage of the distributed database's scaling capability and scale out the number of TiKV nodes to get greater overall I/O throughput.
+ Adjust some of the parameters as described above, and use computing/memory resources to make up for disk storage resources.
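
For the scale-out option, the usual TiUP pattern is to declare the new TiKV nodes in a topology file and apply it. A minimal sketch, where the file name and host IP are hypothetical:

```yaml
# scale-out.yaml -- hypothetical topology fragment adding one TiKV node
tikv_servers:
  - host: 10.0.1.5
```

Then run `tiup cluster scale-out <cluster-name> scale-out.yaml` to add the node to the cluster.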