diff --git a/check-before-deployment.md b/check-before-deployment.md
new file mode 100644
index 0000000000000..7cfe54ef610c5
--- /dev/null
+++ b/check-before-deployment.md
@@ -0,0 +1,342 @@
+---
+title: TiDB Environment and System Configuration Check
+summary: Learn the environment check operations before deploying TiDB.
+category: how-to
+---
+
+# TiDB Environment and System Configuration Check
+
+This document describes the environment check operations before deploying TiDB. The following steps are ordered by priority.
+
+## Mount the data disk ext4 filesystem with options on the target machines that deploy TiKV
+
+For production deployments, it is recommended to use NVMe SSDs with the ext4 filesystem to store TiKV data. This configuration is the best practice, and its reliability, security, and stability have been proven in a large number of online scenarios.
+
+Log in to the target machines using the `root` user account.
+
+Format your data disks to the ext4 filesystem and add the `nodelalloc` and `noatime` mount options to the filesystem. The `nodelalloc` option is required; otherwise, the TiUP deployment cannot pass the precheck. The `noatime` option is optional.
+
+> **Note:**
+>
+> If your data disks have already been formatted to ext4 and the mount options have been added, you can unmount the disks by running the `umount /dev/nvme0n1p1` command, skip directly to the fifth step below to edit the `/etc/fstab` file, and add the options again to the filesystem.
+
+Take the `/dev/nvme0n1` data disk as an example:
+
+1. View the data disk.
+
+    {{< copyable "shell-root" >}}
+
+    ```bash
+    fdisk -l
+    ```
+
+    ```
+    Disk /dev/nvme0n1: 1000 GB
+    ```
+
+2. Create the partition.
+
+    {{< copyable "shell-root" >}}
+
+    ```bash
+    parted -s -a optimal /dev/nvme0n1 mklabel gpt -- mkpart primary ext4 1 -1
+    ```
+
+    > **Note:**
+    >
+    > Use the `lsblk` command to view the device number of the partition: for an NVMe disk, the generated device number is usually `nvme0n1p1`; for a regular disk (for example, `/dev/sdb`), the generated device number is usually `sdb1`.
+
+3. Format the data disk to the ext4 filesystem.
+
+    {{< copyable "shell-root" >}}
+
+    ```bash
+    mkfs.ext4 /dev/nvme0n1p1
+    ```
+
+4. View the partition UUID of the data disk.
+
+    In this example, the UUID of `nvme0n1p1` is `c51eb23b-195c-4061-92a9-3fad812cc12f`.
+
+    {{< copyable "shell-root" >}}
+
+    ```bash
+    lsblk -f
+    ```
+
+    ```
+    NAME        FSTYPE LABEL UUID                                 MOUNTPOINT
+    sda
+    ├─sda1      ext4         237b634b-a565-477b-8371-6dff0c41f5ab /boot
+    ├─sda2      swap         f414c5c0-f823-4bb1-8fdf-e531173a72ed
+    └─sda3      ext4         547909c1-398d-4696-94c6-03e43e317b60 /
+    sr0
+    nvme0n1
+    └─nvme0n1p1 ext4         c51eb23b-195c-4061-92a9-3fad812cc12f
+    ```
+
+5. Edit the `/etc/fstab` file and add the `nodelalloc` mount options.
+
+    {{< copyable "shell-root" >}}
+
+    ```bash
+    vi /etc/fstab
+    ```
+
+    ```
+    UUID=c51eb23b-195c-4061-92a9-3fad812cc12f /data1 ext4 defaults,nodelalloc,noatime 0 2
+    ```
+
+6. Mount the data disk.
+
+    {{< copyable "shell-root" >}}
+
+    ```bash
+    mkdir /data1 && \
+    mount -a
+    ```
+
+7. Check the mount result using the following command.
+
+    {{< copyable "shell-root" >}}
+
+    ```bash
+    mount -t ext4
+    ```
+
+    ```
+    /dev/nvme0n1p1 on /data1 type ext4 (rw,noatime,nodelalloc,data=ordered)
+    ```
+
+    If the filesystem is ext4 and `nodelalloc` is included in the mount options, you have successfully mounted the data disk ext4 filesystem with options on the target machines.
+
+## Check and disable system swap
+
+This section describes how to disable swap.
+
+TiDB requires sufficient memory space for operation. Using swap as a buffer for insufficient memory is not recommended, because it might degrade performance. Therefore, it is recommended to disable the system swap permanently.
+
+Do not disable the system swap only by executing `swapoff -a`; otherwise, this setting becomes invalid after the machine is restarted.
+
+To disable the system swap, execute the following command:
+
+{{< copyable "shell-regular" >}}
+
+```bash
+echo "vm.swappiness = 0" >> /etc/sysctl.conf
+swapoff -a && swapon -a
+sysctl -p
+```
+
+## Check and stop the firewall service of target machines
+
+In TiDB clusters, the access ports between nodes must be open to ensure the transmission of information such as read and write requests and data heartbeats. In common online scenarios, the data interaction between the database and the application service, and between the database nodes, is performed within a secure network. Therefore, if there are no special security requirements, it is recommended to stop the firewall of the target machine. Otherwise, refer to [the port usage](/hardware-and-software-requirements.md#network-requirements) and add the needed port information to the whitelist of the firewall service.
+
+The rest of this section describes how to stop the firewall service of a target machine.
+
+1. Check the firewall status. Take CentOS Linux release 7.7.1908 (Core) as an example.
+
+    {{< copyable "shell-regular" >}}
+
+    ```shell
+    sudo firewall-cmd --state
+    sudo systemctl status firewalld.service
+    ```
+
+2. Stop the firewall service.
+
+    {{< copyable "shell-regular" >}}
+
+    ```bash
+    sudo systemctl stop firewalld.service
+    ```
+
+3. Disable automatic start of the firewall service.
+
+    {{< copyable "shell-regular" >}}
+
+    ```bash
+    sudo systemctl disable firewalld.service
+    ```
+
+4. Check the firewall status.
+
+    {{< copyable "shell-regular" >}}
+
+    ```bash
+    sudo systemctl status firewalld.service
+    ```
+
+## Check and install the NTP service
+
+TiDB is a distributed database system that requires clock synchronization between nodes to guarantee linear consistency of transactions in the ACID model.
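The clock-synchronization requirement above can be sanity-checked before setting up NTP by comparing epoch timestamps across nodes. The following sketch is an editor's illustration, not part of this PR; in a real check, each timestamp would be collected over SSH, while the sample values below are placeholders:

```shell
# Epoch seconds collected from each node. In a real check, populate this with
# something like:
#   for h in 10.0.1.1 10.0.1.2 10.0.1.3; do ssh "$h" date +%s; done
timestamps="1600000000 1600000002 1600000001"

# The maximum skew is the difference between the newest and oldest clock.
min=$(printf '%s\n' $timestamps | sort -n | head -n1)
max=$(printf '%s\n' $timestamps | sort -n | tail -n1)
echo "max clock skew: $((max - min))s"
```

A skew of more than a few hundred milliseconds between nodes is a sign that the NTP setup described below is needed before deployment.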
+
+At present, the common solution to clock synchronization is to use the Network Time Protocol (NTP) service. You can use the `pool.ntp.org` timing service on the Internet, or build your own NTP service in an offline environment.
+
+To check whether the NTP service is installed and whether it synchronizes with the NTP server normally, take the following steps:
+
+1. Run the following command. If it returns `running`, then the NTP service is running.
+
+    {{< copyable "shell-regular" >}}
+
+    ```bash
+    sudo systemctl status ntpd.service
+    ```
+
+    ```
+    ntpd.service - Network Time Service
+    Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
+    Active: active (running) since Mon 2017-12-18 13:13:19 CST; 3s ago
+    ```
+
+2. Run the `ntpstat` command to check whether the NTP service synchronizes with the NTP server.
+
+    > **Note:**
+    >
+    > For the Ubuntu system, you need to install the `ntpstat` package.
+
+    {{< copyable "shell-regular" >}}
+
+    ```bash
+    ntpstat
+    ```
+
+    - If it returns `synchronised to NTP server` (synchronizing with the NTP server), then the synchronization process is normal.
+
+        ```
+        synchronised to NTP server (85.199.214.101) at stratum 2
+        time correct to within 91 ms
+        polling server every 1024 s
+        ```
+
+    - The following output indicates that the NTP service is not synchronizing normally:
+
+        ```
+        unsynchronised
+        ```
+
+    - The following output indicates that the NTP service is not running normally:
+
+        ```
+        Unable to talk to NTP daemon. Is it running?
+        ```
+
+To make the NTP service start synchronizing as soon as possible, run the following command. Replace `pool.ntp.org` with your NTP server.
+
+{{< copyable "shell-regular" >}}
+
+```bash
+sudo systemctl stop ntpd.service && \
+sudo ntpdate pool.ntp.org && \
+sudo systemctl start ntpd.service
+```
+
+To install the NTP service manually on the CentOS 7 system, run the following command:
+
+{{< copyable "shell-regular" >}}
+
+```bash
+sudo yum install ntp ntpdate && \
+sudo systemctl start ntpd.service && \
+sudo systemctl enable ntpd.service
+```
+
+## Manually configure the SSH mutual trust and sudo without password
+
+This section describes how to manually configure the SSH mutual trust and sudo without password. It is recommended to use TiUP for deployment, which automatically configures SSH mutual trust and login without password. If you deploy TiDB clusters using TiUP, ignore this section.
+
+1. Log in to each target machine using the `root` user account, create the `tidb` user, and set the login password.
+
+    {{< copyable "shell-root" >}}
+
+    ```bash
+    useradd tidb && \
+    passwd tidb
+    ```
+
+2. To configure sudo without password, run the following command, and add `tidb ALL=(ALL) NOPASSWD: ALL` to the end of the file:
+
+    {{< copyable "shell-root" >}}
+
+    ```bash
+    visudo
+    ```
+
+    ```
+    tidb ALL=(ALL) NOPASSWD: ALL
+    ```
+
+3. Use the `tidb` user to log in to the control machine, and run the following command. Replace `10.0.1.1` with the IP of your target machine, and enter the `tidb` user password of the target machine as prompted. After the command is executed, SSH mutual trust is created. This applies to other machines as well.
+
+    {{< copyable "shell-regular" >}}
+
+    ```bash
+    ssh-copy-id -i ~/.ssh/id_rsa.pub 10.0.1.1
+    ```
+
+4. Log in to the control machine using the `tidb` user account, and log in to the IP of the target machine using `ssh`. If you do not need to enter the password and can successfully log in, then the SSH mutual trust is successfully configured.
+
+    {{< copyable "shell-regular" >}}
+
+    ```bash
+    ssh 10.0.1.1
+    ```
+
+    ```
+    [tidb@10.0.1.1 ~]$
+    ```
+
+5. After you log in to the target machine using the `tidb` user, run the following command. If you do not need to enter the password and can switch to the `root` user, then sudo without password of the `tidb` user is successfully configured.
+
+    {{< copyable "shell-regular" >}}
+
+    ```bash
+    sudo -su root
+    ```
+
+    ```
+    [root@10.0.1.1 tidb]#
+    ```
+
+## Install the `numactl` tool
+
+This section describes how to install the NUMA tool. In online environments, because the hardware configuration is usually higher than required, multiple instances of TiDB or TiKV can be deployed on a single machine to better plan the hardware resources. In such scenarios, you can use NUMA tools to avoid the competition for CPU resources, which might reduce performance.
+
+> **Note:**
+>
+> - Binding cores using NUMA is a method to isolate CPU resources and is suitable for deploying multiple instances on highly configured physical machines.
+> - After completing deployment using `tiup cluster deploy`, you can use the `exec` command to perform cluster-level management operations.
+
+1. Log in to the target node and install the tool. Take CentOS Linux release 7.7.1908 (Core) as an example.
+
+    {{< copyable "shell-regular" >}}
+
+    ```bash
+    sudo yum -y install numactl
+    ```
+
+2. Run the `exec` command using `tiup cluster` to install the tool on all target machines in batches.
+
+    {{< copyable "shell-regular" >}}
+
+    ```bash
+    tiup cluster exec --help
+    ```
+
+    ```
+    Run shell command on host in the tidb cluster
+    Usage:
+      cluster exec [flags]
+    Flags:
+          --command string   the command run on cluster host (default "ls")
+      -h, --help             help for exec
+          --sudo             use root permissions (default false)
+    ```
+
+    To use the sudo privilege to execute the installation command for all the target machines in the `tidb-test` cluster, run the following command:
+
+    {{< copyable "shell-regular" >}}
+
+    ```bash
+    tiup cluster exec tidb-test --sudo --command "yum -y install numactl"
+    ```
diff --git a/faq/tidb-faq.md b/faq/tidb-faq.md
index 562a95d858d3c..b8829babb58e8 100644
--- a/faq/tidb-faq.md
+++ b/faq/tidb-faq.md
@@ -248,7 +248,7 @@ Check the time difference between the machine time of the monitor and the time w
 
 #### Deploy TiDB offline using TiDB Ansible
 
-It is not recommended to deploy TiDB offline using TiDB Ansible. If the Control Machine has no access to external network, you can deploy TiDB offline using TiDB Ansible. For details, see [Offline Deployment Using TiDB Ansible](/offline-deployment-using-ansible.md).
+It is not recommended to deploy TiDB offline using TiDB Ansible. If the control machine has no access to the external network, you can deploy TiDB offline using TiDB Ansible. For details, see [Offline Deployment Using TiDB Ansible](/offline-deployment-using-ansible.md).
 
 #### How to deploy TiDB quickly using Docker Compose on a single machine?
diff --git a/hardware-and-software-requirements.md b/hardware-and-software-requirements.md
index 7d98de088e2bd..47360da11f0eb 100644
--- a/hardware-and-software-requirements.md
+++ b/hardware-and-software-requirements.md
@@ -26,7 +26,7 @@ As an open source distributed NewSQL database with high performance, TiDB can be
 
 ## Software recommendations
 
-### Control Machine
+### Control machine
 
 | Software | Version |
 | :--- | :--- |
@@ -35,7 +35,7 @@ As an open source distributed NewSQL database with high performance, TiDB can be
 
 > **Note:**
 >
-> It is required that you deploy TiUP on the Control Machine to operate and manage TiDB clusters.
+> It is required that you deploy TiUP on the control machine to operate and manage TiDB clusters.
 
 ### Target machines
 
diff --git a/offline-deployment-using-ansible.md b/offline-deployment-using-ansible.md
index 141e67aa0dff6..10646c64f27ef 100644
--- a/offline-deployment-using-ansible.md
+++ b/offline-deployment-using-ansible.md
@@ -18,22 +18,22 @@ Before you start, make sure that you have:
 
     - The machine must have access to the Internet in order to download TiDB Ansible, TiDB and related packages.
     - For Linux operating system, it is recommended to install CentOS 7.3 or later.
 
-2. Several target machines and one Control Machine
+2. Several target machines and one control machine
 
     - For system requirements and configuration, see [Prepare the environment](/online-deployment-using-ansible.md#prepare).
     - It is acceptable without access to the Internet.
 
-## Step 1: Install system dependencies on the Control Machine
+## Step 1: Install system dependencies on the control machine
 
-Take the following steps to install system dependencies on the Control Machine installed with the CentOS 7 system.
+Take the following steps to install system dependencies on the control machine installed with the CentOS 7 system.
 
-1. Download the [`pip`](https://download.pingcap.org/ansible-system-rpms.el7.tar.gz) offline installation package on the download machine and then upload it to the Control Machine.
+1. Download the [`pip`](https://download.pingcap.org/ansible-system-rpms.el7.tar.gz) offline installation package on the download machine and then upload it to the control machine.
 
     > **Note:**
    >
    > This offline installation package includes `pip` and `sshpass`, and only supports the CentOS 7 system.
 
-2. Install system dependencies on the Control Machine.
+2. Install system dependencies on the control machine.
 
     {{< copyable "shell-root" >}}
 
@@ -60,15 +60,15 @@ Take the following steps to install system dependencies on the Control Machine i
 
    >
    > If `pip` is already installed to your system, make sure that the version is 8.1.2 or later. Otherwise, compatibility error occurs when you install TiDB Ansible and its dependencies offline.
 
-## Step 2: Create the `tidb` user on the Control Machine and generate the SSH key
+## Step 2: Create the `tidb` user on the control machine and generate the SSH key
 
-See [Create the `tidb` user on the Control Machine and generate the SSH key](/online-deployment-using-ansible.md#step-2-create-the-tidb-user-on-the-control-machine-and-generate-the-ssh-key).
+See [Create the `tidb` user on the control machine and generate the SSH key](/online-deployment-using-ansible.md#step-2-create-the-tidb-user-on-the-control-machine-and-generate-the-ssh-key).
 
-## Step 3: Install TiDB Ansible and its dependencies offline on the Control Machine
+## Step 3: Install TiDB Ansible and its dependencies offline on the control machine
 
 Currently, all the versions of TiDB Ansible from 2.4 to 2.7.11 are supported. The versions of TiDB Ansible and the related dependencies are listed in the `tidb-ansible/requirements.txt` file. The following installation steps take Ansible 2.5 as an example.
 
-1. Download [Ansible 2.5 offline installation package](https://download.pingcap.org/ansible-2.5.0-pip.tar.gz) on the download machine and then upload it to the Control Machine.
+1. Download [Ansible 2.5 offline installation package](https://download.pingcap.org/ansible-2.5.0-pip.tar.gz) on the download machine and then upload it to the control machine.
 
 2. Install TiDB Ansible and its dependencies offline.
 
@@ -140,11 +140,11 @@ Currently, all the versions of TiDB Ansible from 2.4 to 2.7.11 are supported. Th
 
    ansible-playbook local_prepare.yml
    ```
 
-4. After running the above command, copy the `tidb-ansible` folder to the `/home/tidb` directory of the Control Machine. The ownership authority of the file must be the `tidb` user.
+4. After running the above command, copy the `tidb-ansible` folder to the `/home/tidb` directory of the control machine. The ownership authority of the file must be the `tidb` user.
 
-## Step 5: Configure the SSH mutual trust and sudo rules on the Control Machine
+## Step 5: Configure the SSH mutual trust and sudo rules on the control machine
 
-See [Configure the SSH mutual trust and sudo rules on the Control Machine](/online-deployment-using-ansible.md#step-5-configure-the-ssh-mutual-trust-and-sudo-rules-on-the-control-machine).
+See [Configure the SSH mutual trust and sudo rules on the control machine](/online-deployment-using-ansible.md#step-5-configure-the-ssh-mutual-trust-and-sudo-rules-on-the-control-machine).
 
 ## Step 6: Install the NTP service on the target machines
 
diff --git a/online-deployment-using-ansible.md b/online-deployment-using-ansible.md
index 325e0a44dd046..929f23ab9fbc1 100644
--- a/online-deployment-using-ansible.md
+++ b/online-deployment-using-ansible.md
@@ -49,20 +49,20 @@ Before you start, make sure you have:
 
    >
    > When you deploy TiDB using TiDB Ansible, **use SSD disks for the data directory of TiKV and PD nodes**. Otherwise, it cannot pass the check. If you only want to try TiDB out and explore the features, it is recommended to [deploy TiDB using Docker Compose](/deploy-test-cluster-using-docker-compose.md) on a single machine.
 
-2. A Control Machine that meets the following requirements:
+2. A control machine that meets the following requirements:
 
    > **Note:**
    >
-   > The Control Machine can be one of the target machines.
+   > The control machine can be one of the target machines.
 
    - CentOS 7.3 (64 bit) or later with Python 2.7 installed
    - Access to the Internet
 
-## Step 1: Install system dependencies on the Control Machine
+## Step 1: Install system dependencies on the control machine
 
-Log in to the Control Machine using the `root` user account, and run the corresponding command according to your operating system.
+Log in to the control machine using the `root` user account, and run the corresponding command according to your operating system.
 
-- If you use a Control Machine installed with CentOS 7, run the following command:
+- If you use a control machine installed with CentOS 7, run the following command:
 
    {{< copyable "shell-root" >}}
 
@@ -71,7 +71,7 @@ Log in to the Control Machine using the `root` user account, and run the corresp
 
   yum -y install python2-pip
   ```
 
-- If you use a Control Machine installed with Ubuntu, run the following command:
+- If you use a control machine installed with Ubuntu, run the following command:
 
   {{< copyable "shell-root" >}}
 
@@ -79,9 +79,9 @@ Log in to the Control Machine using the `root` user account, and run the corresp
 
   apt-get -y install git curl sshpass python-pip
   ```
 
-## Step 2: Create the `tidb` user on the Control Machine and generate the SSH key
+## Step 2: Create the `tidb` user on the control machine and generate the SSH key
 
-Make sure you have logged in to the Control Machine using the `root` user account, and then run the following command.
+Make sure you have logged in to the control machine using the `root` user account, and then run the following command.
 
 1. Create the `tidb` user.
 
@@ -153,9 +153,9 @@ Make sure you have logged in to the Control Machine using the `root` user accoun
 
   +----[SHA256]-----+
   ```
 
-## Step 3: Download TiDB Ansible to the Control Machine
+## Step 3: Download TiDB Ansible to the control machine
 
-Log in to the Control Machine using the `tidb` user account and enter the `/home/tidb` directory. Run the following command to download the [TAG version](https://github.com/pingcap/tidb-ansible/tags) corresponding to TiDB Ansible 4.0 from the [TiDB Ansible project](https://github.com/pingcap/tidb-ansible). The default folder name is `tidb-ansible`.
+Log in to the control machine using the `tidb` user account and enter the `/home/tidb` directory. Run the following command to download the [TAG version](https://github.com/pingcap/tidb-ansible/tags) corresponding to TiDB Ansible 4.0 from the [TiDB Ansible project](https://github.com/pingcap/tidb-ansible). The default folder name is `tidb-ansible`.
 
 {{< copyable "shell-regular" >}}
 
@@ -171,13 +171,13 @@ git clone -b $tag https://github.com/pingcap/tidb-ansible.git
 
 If you have questions regarding which version to use, email to info@pingcap.com for more information or [file an issue](https://github.com/pingcap/tidb-ansible/issues/new).
 
-## Step 4: Install TiDB Ansible and its dependencies on the Control Machine
+## Step 4: Install TiDB Ansible and its dependencies on the control machine
 
-Make sure you have logged in to the Control Machine using the `tidb` user account.
+Make sure you have logged in to the control machine using the `tidb` user account.
 
 It is required to use `pip` to install Ansible and its dependencies, otherwise a compatibility issue occurs. Currently, the release-4.0 branch of TiDB Ansible is compatible with Ansible 2.5 ~ 2.7.11 (2.5 ≤ Ansible ≤ 2.7.11).
 
-1. Install TiDB Ansible and the dependencies on the Control Machine:
+1. Install TiDB Ansible and the dependencies on the control machine:
 
   {{< copyable "shell-regular" >}}
 
@@ -200,9 +200,9 @@ It is required to use `pip` to install Ansible and its dependencies, otherwise a
 
   ansible 2.7.11
   ```
 
-## Step 5: Configure the SSH mutual trust and sudo rules on the Control Machine
+## Step 5: Configure the SSH mutual trust and sudo rules on the control machine
 
-Make sure you have logged in to the Control Machine using the `tidb` user account.
+Make sure you have logged in to the control machine using the `tidb` user account.
 
 1. Add the IPs of your target machines to the `[servers]` section of the `hosts.ini` file.
 
@@ -235,7 +235,7 @@ Make sure you have logged in to the Control Machine using the `tidb` user accoun
 
   ansible-playbook -i hosts.ini create_users.yml -u root -k
   ```
 
-   This step creates the `tidb` user account on the target machines, configures the sudo rules and the SSH mutual trust between the Control Machine and the target machines.
+   This step creates the `tidb` user account on the target machines, configures the sudo rules and the SSH mutual trust between the control machine and the target machines.
 
 To configure the SSH mutual trust and sudo without password manually, see [How to manually configure the SSH mutual trust and sudo without password](#how-to-manually-configure-the-ssh-mutual-trust-and-sudo-without-password).
 
@@ -245,7 +245,7 @@ To configure the SSH mutual trust and sudo without password manually, see [How t
 
 >
 > If the time and time zone of all your target machines are same, the NTP service is on and is normally synchronizing time, you can ignore this step. See [How to check whether the NTP service is normal](#how-to-check-whether-the-ntp-service-is-normal).
-Make sure you have logged in to the Control Machine using the `tidb` user account, run the following command: +Make sure you have logged in to the control machine using the `tidb` user account, run the following command: {{< copyable "shell-regular" >}} @@ -437,7 +437,7 @@ Take the `/dev/nvme0n1` data disk as an example: ## Step 9: Edit the `inventory.ini` file to orchestrate the TiDB cluster -Log in to the Control Machine using the `tidb` user account, and edit the `tidb-ansible/inventory.ini` file to orchestrate the TiDB cluster. The standard TiDB cluster contains 6 machines: 2 TiDB instances, 3 PD instances, and 3 TiKV instances. +Log in to the control machine using the `tidb` user account, and edit the `tidb-ansible/inventory.ini` file to orchestrate the TiDB cluster. The standard TiDB cluster contains 6 machines: 2 TiDB instances, 3 PD instances, and 3 TiKV instances. - Deploy at least 3 instances for TiKV. - Do not deploy TiKV together with TiDB or PD on the same machine. @@ -638,13 +638,13 @@ To enable the following control variables, use the capitalized `True`. 
To disabl | grafana_admin_user | the username of Grafana administrator; default `admin` | | grafana_admin_password | the password of Grafana administrator account; default `admin`; used to import Dashboard and create the API key using TiDB Ansible; update this variable if you have modified it through Grafana web | | collect_log_recent_hours | to collect the log of recent hours; default the recent 2 hours | -| enable_bandwidth_limit | to set a bandwidth limit when pulling the diagnostic data from the target machines to the Control Machine; used together with the `collect_bandwidth_limit` variable | -| collect_bandwidth_limit | the limited bandwidth when pulling the diagnostic data from the target machines to the Control Machine; unit: Kbit/s; default 10000, indicating 10Mb/s; for the cluster topology of multiple TiKV instances on each TiKV node, you need to divide the number of the TiKV instances on each TiKV node | +| enable_bandwidth_limit | to set a bandwidth limit when pulling the diagnostic data from the target machines to the control machine; used together with the `collect_bandwidth_limit` variable | +| collect_bandwidth_limit | the limited bandwidth when pulling the diagnostic data from the target machines to the control machine; unit: Kbit/s; default 10000, indicating 10Mb/s; for the cluster topology of multiple TiKV instances on each TiKV node, you need to divide the number of the TiKV instances on each TiKV node | | prometheus_storage_retention | the retention time of the monitoring data of Prometheus (30 days by default); this is a new configuration in the `group_vars/monitoring_servers.yml` file in 2.1.7, 3.0 and the later tidb-ansible versions | ## Step 11: Deploy the TiDB cluster -When `ansible-playbook` runs Playbook, the default concurrent number is 5. If many deployment target machines are deployed, you can add the `-f` parameter to specify the concurrency, such as `ansible-playbook deploy.yml -f 10`. 
+When `ansible-playbook` runs Playbook, the default concurrent number is 5. If many target machines are deployed, you can add the `-f` parameter to specify the concurrency, such as `ansible-playbook deploy.yml -f 10`. The following example uses `tidb` as the user who runs the service. @@ -676,7 +676,7 @@ The following example uses `tidb` as the user who runs the service. ansible -i inventory.ini all -m shell -a 'whoami' -b ``` -2. Run the `local_prepare.yml` playbook and download TiDB binary to the Control Machine. +2. Run the `local_prepare.yml` playbook and download TiDB binary to the control machine. {{< copyable "shell-regular" >}} @@ -889,7 +889,7 @@ ansible-playbook start.yml ### How to manually configure the SSH mutual trust and sudo without password? -1. Log in to the deployment target machine respectively using the `root` user account, create the `tidb` user and set the login password. +1. Log in to the target machine respectively using the `root` user account, create the `tidb` user and set the login password. {{< copyable "shell-root" >}} @@ -910,7 +910,7 @@ ansible-playbook start.yml tidb ALL=(ALL) NOPASSWD: ALL ``` -3. Use the `tidb` user to log in to the Control Machine, and run the following command. Replace `172.16.10.61` with the IP of your deployment target machine, and enter the `tidb` user password of the deployment target machine as prompted. Successful execution indicates that SSH mutual trust is already created. This applies to other machines as well. +3. Use the `tidb` user to log in to the control machine, and run the following command. Replace `172.16.10.61` with the IP of your target machine, and enter the `tidb` user password of the target machine as prompted. Successful execution indicates that SSH mutual trust is already created. This applies to other machines as well. {{< copyable "shell-regular" >}} @@ -918,7 +918,7 @@ ansible-playbook start.yml ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.10.61 ``` -4. 
Log in to the Control Machine using the `tidb` user account, and log in to the IP of the target machine using SSH. If you do not need to enter the password and can successfully log in, then the SSH mutual trust is successfully configured. +4. Log in to the control machine using the `tidb` user account, and log in to the IP of the target machine using SSH. If you do not need to enter the password and can successfully log in, then the SSH mutual trust is successfully configured. {{< copyable "shell-regular" >}} @@ -930,7 +930,7 @@ ansible-playbook start.yml [tidb@172.16.10.61 ~]$ ``` -5. After you login to the deployment target machine using the `tidb` user, run the following command. If you do not need to enter the password and can switch to the `root` user, then sudo without password of the `tidb` user is successfully configured. +5. After you login to the target machine using the `tidb` user, run the following command. If you do not need to enter the password and can switch to the `root` user, then sudo without password of the `tidb` user is successfully configured. {{< copyable "shell-regular" >}} @@ -944,7 +944,7 @@ ansible-playbook start.yml ### Error: You need to install jmespath prior to running json_query filter -1. See [Install TiDB Ansible and its dependencies on the Control Machine](#step-4-install-tidb-ansible-and-its-dependencies-on-the-control-machine) and use `pip` to install TiDB Ansible and the corresponding dependencies in the Control Machine. The `jmespath` dependent package is installed by default. +1. See [Install TiDB Ansible and its dependencies on the control machine](#step-4-install-tidb-ansible-and-its-dependencies-on-the-control-machine) and use `pip` to install TiDB Ansible and the corresponding dependencies in the control machine. The `jmespath` dependent package is installed by default. 2. 
Run the following command to check whether `jmespath` is successfully installed: @@ -954,7 +954,7 @@ ansible-playbook start.yml pip show jmespath ``` -3. Enter `import jmespath` in the Python interactive window of the Control Machine. +3. Enter `import jmespath` in the Python interactive window of the control machine. - If no error displays, the dependency is successfully installed. - If the `ImportError: No module named jmespath` error displays, the Python `jmespath` module is not successfully installed. diff --git a/production-deployment-using-tiup.md b/production-deployment-using-tiup.md index 0e789ed9524d6..33834cd92e505 100644 --- a/production-deployment-using-tiup.md +++ b/production-deployment-using-tiup.md @@ -12,7 +12,7 @@ aliases: ['/docs/stable/how-to/deploy/orchestrated/tiup/'] This document introduces how to use TiUP to deploy a TiDB cluster. The steps are as follows: - [Step 1: Prepare the right machines for deployment](#step-1-prepare-the-right-machines-for-deployment) -- [Step 2: Install TiUP on the Control Machine](#step-2-install-tiup-on-the-control-machine) +- [Step 2: Install TiUP on the control machine](#step-2-install-tiup-on-the-control-machine) - [Step 3: Mount the data disk ext4 filesystem with options on the target machines that deploy TiKV](#step-3-mount-the-data-disk-ext4-filesystem-with-options-on-the-target-machines-that-deploy-tikv) - [Step 4: Edit the initialization configuration file `topology.yaml`](#step-4-edit-the-initialization-configuration-file-topologyyaml) - [Step 5: Execute the deployment command](#step-5-execute-the-deployment-command) @@ -35,12 +35,12 @@ Here are the steps of preparing your deployment environment. ### Step 1: Prepare the right machines for deployment -The software and hardware recommendations for the **Control Machine** are as follows: +The software and hardware recommendations for the **control machine** are as follows: -- The Control Machine can be one of the target machines. 
-- For the Control Machine' operating system, it is recommended to install CentOS 7.3 or above. -- The Control Machine needs to access the external Internet to download TiDB and related software installation packages. -- You need to install TiUP on the Control Machine. Refer to [Step 2](#step-2-install-tiup-on-the-control-machine) for installation steps. +- The control machine can be one of the target machines. +- For the control machine's operating system, it is recommended to install CentOS 7.3 or above. +- The control machine needs to access the external Internet to download TiDB and related software installation packages. +- You need to install TiUP on the control machine. Refer to [Step 2](#step-2-install-tiup-on-the-control-machine) for installation steps. The software and hardware recommendations for the **target machines** are as follows: @@ -50,14 +50,14 @@ The software and hardware recommendations for the **target machines** are as fol - Under ARM architecture, it is recommended to use CentOS 7.6 1810 as the operating system. - For the file system of TiKV data files, it is recommended to use EXT4 format. (refer to [Step 3](#step-3-mount-the-data-disk-ext4-filesystem-with-options-on-the-target-machines-that-deploy-tikv)) You can also use CentOS default XFS format. - The target machines can communicate with each other on the Intranet. (It is recommended to [disable the firewall `firewalld`](#how-to-stop-the-firewall-service-of-deployment-machines), or enable the required ports between the nodes of the TiDB cluster.) -- [Disable the system swap](#how-to-disable-system-swap) on all the deployment machines. +- [Disable the system swap](#how-to-disable-system-swap) on all the target machines. - If you need to bind CPU cores, [install the `numactl` tool](#how-to-install-the-numactl-tool). For other software and hardware recommendations, refer to [TiDB Software and Hardware Recommendations](/hardware-and-software-requirements.md). 
-### Step 2: Install TiUP on the Control Machine +### Step 2: Install TiUP on the control machine -Log in to the Control Machine using a regular user account (take the `tidb` user as an example). All the following TiUP installation and cluster management operations can be performed by the `tidb` user. +Log in to the control machine using a regular user account (take the `tidb` user as an example). All the following TiUP installation and cluster management operations can be performed by the `tidb` user. 1. Install TiUP by executing the following command: @@ -286,7 +286,7 @@ Take the `/dev/nvme0n1` data disk as an example: You need to manually create and edit the cluster initialization configuration file. For the full configuration template, refer to [Github TiUP Project](https://github.com/pingcap-incubator/tiup-cluster/blob/master/examples/topology.example.yaml). -You need to create a YAML configuration file on the Control Machine, such as `topology.yaml`. +You need to create a YAML configuration file on the control machine, such as `topology.yaml`. The following sections provide a cluster configuration template for each of the following common scenarios: @@ -317,7 +317,7 @@ The following sections provide a cluster configuration template for each of the > **Note:** > -> You do not need to manually create the `tidb` user, because the TiUP cluster component will automatically create the `tidb` user on the target machines. You can customize the user or keep it the same as the user of the Control Machine. +> You do not need to manually create the `tidb` user, because the TiUP cluster component will automatically create the `tidb` user on the target machines. You can customize the user or keep it the same as the user of the control machine. 
> **Note:** > @@ -654,7 +654,7 @@ You need to fill in the result in the configuration file (as described in the St > **Note:** > -> - You do not need to manually create the `tidb` user, because the TiUP cluster component will automatically create the `tidb` user on the target machines. You can customize the user or keep it the same as the user of the Control Machine. +> - You do not need to manually create the `tidb` user, because the TiUP cluster component will automatically create the `tidb` user on the target machines. You can customize the user or keep it the same as the user of the control machine. > - By default, `deploy_dir` of each component uses `/-` of the global configuration. For example, if you specify the `tidb` port as `4001`, then the TiDB component's default `deploy_dir` is `tidb-deploy/tidb-4001`. Therefore, when you specify non-default ports in multi-instance scenarios, you do not need to specify `deploy_dir` again. > **Note:** @@ -941,7 +941,7 @@ Key parameters of TiDB: > **Note:** > -> You do not need to manually create the `tidb` user, because the TiUP cluster component will automatically create the `tidb` user on the target machines. You can customize the user or keep it the same as the user of the Control Machine. +> You do not need to manually create the `tidb` user, because the TiUP cluster component will automatically create the `tidb` user on the target machines. You can customize the user or keep it the same as the user of the control machine. > **Note:** > @@ -1777,7 +1777,7 @@ cdc darwin/amd64,linux/amd64,linux/arm64 ### How to manually configure the SSH mutual trust and sudo without password -1. Log in to the deployment target machine respectively using the `root` user account, create the `tidb` user and set the login password. +1. Log in to each target machine using the `root` user account, create the `tidb` user, and set the login password. 
{{< copyable "shell-root" >}} @@ -1798,7 +1798,7 @@ cdc darwin/amd64,linux/amd64,linux/arm64 tidb ALL=(ALL) NOPASSWD: ALL ``` -3. Use the `tidb` user to log in to the Control Machine, and run the following command. Replace `10.0.1.1` with the IP of your deployment target machine, and enter the `tidb` user password of the deployment target machine as prompted. Successful execution indicates that SSH mutual trust is already created. This applies to other machines as well. +3. Use the `tidb` user to log in to the control machine, and run the following command. Replace `10.0.1.1` with the IP of your target machine, and enter the `tidb` user password of the target machine as prompted. Successful execution indicates that SSH mutual trust is already created. This applies to other machines as well. {{< copyable "shell-regular" >}} @@ -1806,7 +1806,7 @@ cdc darwin/amd64,linux/amd64,linux/arm64 ssh-copy-id -i ~/.ssh/id_rsa.pub 10.0.1.1 ``` -4. Log in to the Control Machine using the `tidb` user account, and log in to the IP of the target machine using `ssh`. If you do not need to enter the password and can successfully log in, then the SSH mutual trust is successfully configured. +4. Log in to the control machine using the `tidb` user account, and log in to the IP of the target machine using `ssh`. If you do not need to enter the password and can successfully log in, then the SSH mutual trust is successfully configured. {{< copyable "shell-regular" >}} @@ -1818,7 +1818,7 @@ cdc darwin/amd64,linux/amd64,linux/arm64 [tidb@10.0.1.1 ~]$ ``` -5. After you login to the deployment target machine using the `tidb` user, run the following command. If you do not need to enter the password and can switch to the `root` user, then sudo without password of the `tidb` user is successfully configured. +5. After you log in to the target machine using the `tidb` user, run the following command. 
If you do not need to enter the password and can switch to the `root` user, then sudo without password of the `tidb` user is successfully configured. {{< copyable "shell-regular" >}} @@ -1830,7 +1830,7 @@ cdc darwin/amd64,linux/amd64,linux/arm64 [root@10.0.1.1 tidb]# ``` -### How to stop the firewall service of deployment machines +### How to stop the firewall service of target machines 1. Check the firewall status. Take CentOS Linux release 7.7.1908 (Core) as an example. diff --git a/quick-start-with-tidb.md b/quick-start-with-tidb.md index 15986b265f830..7a523f09cfff3 100644 --- a/quick-start-with-tidb.md +++ b/quick-start-with-tidb.md @@ -118,7 +118,7 @@ This section describes how to deploy a TiDB cluster using a YAML file of the sma ### Prepare -Prepare a deployment machine that meets the following requirements: +Prepare a target machine that meets the following requirements: - CentOS 7.3 or a later version is installed - The Linux OS has access to the Internet, which is required to download TiDB and related software installation packages @@ -133,10 +133,10 @@ The smallest TiDB cluster topology is as follows: | TiFlash | 1 | 10.0.1.1 | The default port
Global directory configuration | | Monitor | 1 | 10.0.1.1 | The default port
Global directory configuration | -Other requirements for the deployment machine: +Other requirements for the target machine: - The `root` user and its password is required -- [Stop the firewall service of the deployment machine](/production-deployment-using-tiup.md#how-to-stop-the-firewall-service-of-deployment-machines), or open the port needed by the TiDB cluster nodes +- [Stop the firewall service of the target machine](/production-deployment-using-tiup.md#how-to-stop-the-firewall-service-of-deployment-machines), or open the port needed by the TiDB cluster nodes - Currently, TiUP only supports deploying TiDB on the x86_64 (AMD64) architecture (the ARM architecture will be supported in TiDB 4.0 GA): - It is recommended to use CentOS 7.3 or later versions on AMD64 @@ -146,7 +146,7 @@ Other requirements for the deployment machine: > **Note:** > -> You can log in to the deployment machine as a regular user or the `root` user. The following steps use the `root` user as an example. +> You can log in to the target machine as a regular user or the `root` user. The following steps use the `root` user as an example. 1. Download and install TiUP: @@ -245,7 +245,7 @@ Other requirements for the deployment machine: - `user: "tidb"`: Use the `tidb` system user (automatically created during deployment) to perform the internal management of the cluster. By default, use port 22 to log in to the target machine via SSH. - `replication.enable-placement-rules`: This PD parameter is set to ensure that TiFlash runs normally. - - `host`: The IP of the deployment machine. + - `host`: The IP of the target machine. 6. 
Execute the cluster deployment command: diff --git a/releases/release-3.0.11.md b/releases/release-3.0.11.md index 7fe4925b45367..b8a89f329f90f 100644 --- a/releases/release-3.0.11.md +++ b/releases/release-3.0.11.md @@ -33,7 +33,7 @@ TiDB Ansible version: 3.0.11 + Support the TLS configuration [#44](https://github.com/tikv/importer/pull/44) [#270](https://github.com/pingcap/tidb-lightning/pull/270) * TiDB Ansible - + Modify the logic of `create_users.yml` so that users of the central control machine do not have to be consistent with `ansible_user` [#1184](https://github.com/pingcap/tidb-ansible/pull/1184) + + Modify the logic of `create_users.yml` so that users of the control machine do not have to be consistent with `ansible_user` [#1184](https://github.com/pingcap/tidb-ansible/pull/1184) ## Bug Fixes diff --git a/scale-tidb-using-ansible.md b/scale-tidb-using-ansible.md index 04769b264a5ef..ad795e8d5ca5c 100644 --- a/scale-tidb-using-ansible.md +++ b/scale-tidb-using-ansible.md @@ -111,7 +111,7 @@ For example, if you want to add two TiDB nodes (node101, node102) with the IP ad 2. Initialize the newly added node. - 1. Configure the SSH mutual trust and sudo rules of the deployment machine on the central control machine: + 1. Configure the SSH mutual trust and sudo rules of the target machine on the control machine: {{< copyable "shell-regular" >}} @@ -119,7 +119,7 @@ For example, if you want to add two TiDB nodes (node101, node102) with the IP ad ansible-playbook -i hosts.ini create_users.yml -l 172.16.10.101,172.16.10.102 -u root -k ``` - 2. Install the NTP service on the deployment target machine: + 2. Install the NTP service on the target machine: {{< copyable "shell-regular" >}} @@ -127,7 +127,7 @@ For example, if you want to add two TiDB nodes (node101, node102) with the IP ad ansible-playbook -i hosts.ini deploy_ntp.yml -u tidb -b ``` - 3. Initialize the node on the deployment target machine: + 3. 
Initialize the node on the target machine: {{< copyable "shell-regular" >}} diff --git a/scale-tidb-using-tiup.md b/scale-tidb-using-tiup.md index 4204eb9cf56f9..94384ffccd2a6 100644 --- a/scale-tidb-using-tiup.md +++ b/scale-tidb-using-tiup.md @@ -9,7 +9,7 @@ aliases: ['/docs/stable/how-to/scale/with-tiup/'] The capacity of a TiDB cluster can be increased or decreased without affecting the online services. -This document describes how to scale the TiDB, TiKV, PD, TiCDC, or TiFlash nodes using TiUP. If you have not installed TiUP, refer to the steps in [Install TiUP on the Control Machine](/upgrade-tidb-using-tiup.md#install-tiup-on-the-control-machine) and import the cluster into TiUP before you scale the TiDB cluster. +This document describes how to scale the TiDB, TiKV, PD, TiCDC, or TiFlash nodes using TiUP. If you have not installed TiUP, refer to the steps in [Install TiUP on the control machine](/upgrade-tidb-using-tiup.md#install-tiup-on-the-control-machine) and import the cluster into TiUP before you scale the TiDB cluster. To view the current cluster name list, run `tiup cluster list`. diff --git a/tidb-binlog/deploy-tidb-binlog.md b/tidb-binlog/deploy-tidb-binlog.md index a941a0affa1c4..10e7ac8f49918 100644 --- a/tidb-binlog/deploy-tidb-binlog.md +++ b/tidb-binlog/deploy-tidb-binlog.md @@ -29,7 +29,7 @@ In environments of development, testing and production, the requirements on serv ### Step 1: Download TiDB Ansible -1. Use the TiDB user account to log in to the central control machine and go to the `/home/tidb` directory. The information about the branch of TiDB Ansible and the corresponding TiDB version is as follows. If you have questions regarding which version to use, email to [info@pingcap.com](mailto:info@pingcap.com) for more information or [file an issue](https://github.com/pingcap/tidb-ansible/issues/new). +1. Use the TiDB user account to log in to the control machine and go to the `/home/tidb` directory. 
The information about the branch of TiDB Ansible and the corresponding TiDB version is as follows. If you have questions regarding which version to use, email to [info@pingcap.com](mailto:info@pingcap.com) for more information or [file an issue](https://github.com/pingcap/tidb-ansible/issues/new). | tidb-ansible branch | TiDB version | Note | | :------------------- | :------------ | :---- | @@ -56,7 +56,7 @@ In environments of development, testing and production, the requirements on serv enable_binlog = True ``` - 2. Add the deployment machine IPs for `pump_servers`. + 2. Add the target machine IPs for `pump_servers`. ```ini ## Binlog Part @@ -173,7 +173,7 @@ In environments of development, testing and production, the requirements on serv 2. Modify the `tidb-ansible/inventory.ini` file. - Add the deployment machine IPs for `drainer_servers`. Set `initial_commit_ts` to the value you have obtained, which is only used for the initial start of Drainer. + Add the target machine IPs for `drainer_servers`. Set `initial_commit_ts` to the value you have obtained, which is only used for the initial start of Drainer. - Assume that the downstream is MySQL with the alias `drainer_mysql`: diff --git a/tiflash/deploy-tiflash.md b/tiflash/deploy-tiflash.md index 9799be0ca838a..f76900ec161cf 100644 --- a/tiflash/deploy-tiflash.md +++ b/tiflash/deploy-tiflash.md @@ -22,7 +22,7 @@ This section provides hardware configuration recommendations based on different * Minimum configuration: 32 VCore, 64 GB RAM, 1 SSD + n HDD * Recommended configuration: 48 VCore, 128 GB RAM, 1 NVMe SSD + n SSD -There is no limit to the number of deployment machines (one at least). A single machine can use multiple disks, but deploying multiple instances on a single machine is not recommended. +There is no limit to the number of target machines (at least one). A single machine can use multiple disks, but deploying multiple instances on a single machine is not recommended. 
It is recommended to use an SSD disk to buffer the real-time data being replicated and written to TiFlash. The performance of this disk need to be not lower than the hard disk used by TiKV. It is recommended that you use a better performance NVMe SSD and the SSD's capacity is not less than 10% of the total capacity. Otherwise, it might become the bottleneck of the amount of data that this node can handle. diff --git a/tiup/tiup-mirrors.md b/tiup/tiup-mirrors.md index 9f674dae7cbf9..f5b2e46b8f443 100644 --- a/tiup/tiup-mirrors.md +++ b/tiup/tiup-mirrors.md @@ -117,7 +117,7 @@ If you want to install a TiDB cluster of the v4.0.0-rc version in an isolated en Then a `package` directory is created in the current directory. This `package` directory contains necessary components packages to start a cluster. -2. Use the `tar` command to pack the components package and send this package to the central control machine that is in the isolated environment: +2. Use the `tar` command to pack the components package and send this package to the control machine that is in the isolated environment: {{< copyable "shell-regular" >}} @@ -127,7 +127,7 @@ If you want to install a TiDB cluster of the v4.0.0-rc version in an isolated en Then `package.tar.gz` is an isolated, offline environment. -3. After sending the package to the central control machine of the target cluster, execute the following command to install TiUP: +3. After sending the package to the control machine of the target cluster, execute the following command to install TiUP: {{< copyable "shell-regular" >}} diff --git a/upgrade-tidb-using-ansible.md b/upgrade-tidb-using-ansible.md index 63e4e0ecb1e39..2a658c41c05f7 100644 --- a/upgrade-tidb-using-ansible.md +++ b/upgrade-tidb-using-ansible.md @@ -22,7 +22,7 @@ This document is targeted for users who want to upgrade from TiDB 2.0, 2.1, 3.0, > > Do not execute any DDL statements during the upgrading process, otherwise the undefined behavior error might occur. 
-## Step 1: Install Ansible and dependencies on the Control Machine +## Step 1: Install Ansible and dependencies on the control machine > **Note:** > @@ -30,7 +30,7 @@ This document is targeted for users who want to upgrade from TiDB 2.0, 2.1, 3.0, The latest development version of TiDB Ansible depends on Ansible 2.5.0 ~ 2.7.11 (`2.5.0 ≦ ansible ≦ 2.7.11`, Ansible 2.7.11 recommended) and the Python modules of `jinja2 ≧ 2.9.6` and `jmespath ≧ 0.9.0`. -To make it easy to manage dependencies, use `pip` to install Ansible and its dependencies. For details, see [Install Ansible and its dependencies on the Control Machine](/online-deployment-using-ansible.md#step-4-install-tidb-ansible-and-its-dependencies-on-the-control-machine). For offline environment, see [Install Ansible and its dependencies offline on the Control Machine](/offline-deployment-using-ansible.md#step-3-install-tidb-ansible-and-its-dependencies-offline-on-the-control-machine). +To make it easy to manage dependencies, use `pip` to install Ansible and its dependencies. For details, see [Install Ansible and its dependencies on the control machine](/online-deployment-using-ansible.md#step-4-install-tidb-ansible-and-its-dependencies-on-the-control-machine). For an offline environment, see [Install Ansible and its dependencies offline on the control machine](/offline-deployment-using-ansible.md#step-3-install-tidb-ansible-and-its-dependencies-offline-on-the-control-machine). After the installation is finished, you can view the version information using the following command: @@ -72,9 +72,9 @@ Version: 0.9.0 > - Make sure that the Jinja2 version is correct, otherwise an error occurs when you start Grafana. > - Make sure that the jmespath version is correct, otherwise an error occurs when you perform a rolling update to TiKV. -## Step 2: Download TiDB Ansible to the Control Machine +## Step 2: Download TiDB Ansible to the control machine -1. 
Log in to the Control Machine using the `tidb` user account and enter the `/home/tidb` directory. +1. Log in to the control machine using the `tidb` user account and enter the `/home/tidb` directory. 2. Back up the `tidb-ansible` folders of TiDB 2.0, 2.1, 3.0, 3.1, or an earlier `latest` version using the following command: {{< copyable "shell-regular" >}} mv tidb-ansible tidb-ansible-bak ``` -3. Download the tidb-ansible with the tag corresponding to the TiDB 4.0 version. For more details, See [Download TiDB-Ansible to the Control Machine](/online-deployment-using-ansible.md#step-3-download-tidb-ansible-to-the-control-machine). The default folder name is `tidb-ansible`. Replace `$tag` with the value of the chosen TAG version. For example, `v4.0.0-rc`. +3. Download the tidb-ansible with the tag corresponding to the TiDB 4.0 version. For more details, see [Download TiDB-Ansible to the control machine](/online-deployment-using-ansible.md#step-3-download-tidb-ansible-to-the-control-machine). The default folder name is `tidb-ansible`. Replace `$tag` with the value of the chosen TAG version. For example, `v4.0.0-rc`. {{< copyable "shell-regular" >}} @@ -94,7 +94,7 @@ Version: 0.9.0 ## Step 3: Edit the `inventory.ini` file and the configuration file -Log in to the Control Machine using the `tidb` user account and enter the `/home/tidb/tidb-ansible` directory. +Log in to the control machine using the `tidb` user account and enter the `/home/tidb/tidb-ansible` directory. ### Edit the `inventory.ini` file Edit the `inventory.ini` file. For IP information, see the `/home/tidb/tidb-ansi ansible_user = tidb ``` - You can refer to [How to configure SSH mutual trust and sudo rules on the Control Machine](/online-deployment-using-ansible.md#step-5-configure-the-ssh-mutual-trust-and-sudo-rules-on-the-control-machine) to automatically configure the mutual trust among hosts. 
+ You can refer to [How to configure SSH mutual trust and sudo rules on the control machine](/online-deployment-using-ansible.md#step-5-configure-the-ssh-mutual-trust-and-sudo-rules-on-the-control-machine) to automatically configure the mutual trust among hosts. 2. Keep the `process_supervision` variable consistent with that in the previous version. It is recommended to use `systemd` by default. @@ -173,9 +173,9 @@ If you have previously customized the configuration file of TiDB cluster compone TiKV3-2 ansible_host=172.16.10.6 deploy_dir=/data2/deploy tikv_port=20172 tikv_status_port=20182 labels="host=tikv3" ``` -## Step 4: Download TiDB latest binary to the Control Machine +## Step 4: Download TiDB latest binary to the control machine -Make sure that `tidb_version = v4.0.x` in the `tidb-ansible/inventory.ini` file, and then run the following command to download TiDB 4.0 binary to the Control Machine: +Make sure that `tidb_version = v4.0.x` is in the `tidb-ansible/inventory.ini` file, and then run the following command to download TiDB 4.0 binary to the control machine: {{< copyable "shell-regular" >}} diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 51212d800ee8c..0acaa1d2a69a5 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -25,9 +25,9 @@ If you have deployed the TiDB cluster using TiDB Ansible, you can use TiUP to im - You still use the `'push'` method to collect monitoring metrics (since v3.0, `pull` is the default mode, which is supported if you have not modified this mode). - In the `inventory.ini` configuration file, the `node_exporter` or `blackbox_exporter` item of the machine is set to non-default ports through `node_exporter_port` or `blackbox_exporter_port`, which is compatible if you have unified the configuration in the `group_vars` directory. -## Install TiUP on the Control Machine +## Install TiUP on the control machine -1. 
Execute the following command on the Control Machine to install TiUP: +1. Execute the following command on the control machine to install TiUP: {{< copyable "shell-regular" >}}