how-to/deploy: add doc for offline deployment using tiup #2648
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Merged
Merged
Commits (7):

- `7e3a224` tiup: add doc for offline deployment (ran-huang)
- `b967e7e` update wording (ran-huang)
- `ac29d04` update new contents (ran-huang)
- `849b689` Merge branch 'master' into tiup-offline-deploy (ran-huang)
- `dcfcfcf` Update production-offline-deployment-using-tiup.md (ran-huang)
- `c175882` fix some issues
- `f7f009b` Merge branch 'master' into tiup-offline-deploy (yikeke)
---
title: Deploy a TiDB Cluster Offline Using TiUP
summary: Introduce how to deploy a TiDB cluster offline using TiUP.
category: how-to
---

# Deploy a TiDB Cluster Offline Using TiUP

This document describes how to deploy a TiDB cluster offline using TiUP.

## Step 1: Prepare the TiUP offline component package

You can either download the official package or manually pack a component package.

### Download the official TiUP offline component package

Download the prepared offline mirror package from <http://download.pingcap.org> by running the following commands:

{{< copyable "shell-regular" >}}

```shell
wget http://download.pingcap.org/tidb-community-server-${version}-linux-amd64.tar.gz
mv tidb-community-server-${version}-linux-amd64.tar.gz package.tar.gz
```

In the commands above, replace `${version}` with the offline mirror version you want to download, such as `v4.0.0`.

`package.tar.gz` is a self-contained offline environment package.
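As a small sketch, the package name and download URL for a chosen version can be assembled like this (the `v4.0.0` value is only an example):

```shell
# Sketch: build the package name and URL for a chosen mirror version.
version="v4.0.0"   # replace with the version you need
pkg="tidb-community-server-${version}-linux-amd64.tar.gz"
url="http://download.pingcap.org/${pkg}"
echo "${url}"
```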
### Manually pack an offline component package using `tiup mirror clone`

Take the following steps.

#### Deploy the online TiUP component

Log in to a machine that has access to the Internet using a regular user account and perform the following steps:

1. Install the TiUP tool:

    {{< copyable "shell-regular" >}}

    ```shell
    curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
    ```

2. Redeclare the global environment variables:

    {{< copyable "shell-regular" >}}

    ```shell
    source .bash_profile
    ```

3. Confirm whether TiUP is installed:

    {{< copyable "shell-regular" >}}

    ```shell
    which tiup
    ```
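If you prefer a scripted check, a small helper (hypothetical, not part of TiUP) can verify that the binary is on `PATH` before you continue:

```shell
# Hypothetical helper: return 0 if the given command is on PATH.
check_cmd() {
    command -v "$1" >/dev/null 2>&1
}

# Warn if tiup is still missing after sourcing .bash_profile.
check_cmd tiup || echo "tiup not found; re-run install.sh and source .bash_profile"
```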
#### Pull the mirror using TiUP

Assume that you want to install a v4.0.0 TiDB cluster using the `tidb` user account in an isolated environment. Take the following steps:

1. Pull the needed components on a machine that has access to the Internet:

    {{< copyable "shell-regular" >}}

    ```shell
    tiup mirror clone package v4.0.0 --os=linux --arch=amd64
    ```

    The command above creates a directory named `package` in the current directory, which contains the component packages necessary for starting a cluster.

2. Pack the component package by using the `tar` command and send the package to the control machine in the isolated environment:

    {{< copyable "shell-regular" >}}

    ```bash
    tar czvf package.tar.gz package
    ```

    `package.tar.gz` is an independent offline environment package.
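Before transferring the archive, you can sanity-check it locally. A throwaway sketch, using a stub directory in place of the real `package` mirror directory:

```shell
# Sketch with a stub directory standing in for the real "package" mirror.
mkdir -p demo_package/bin
echo "stub" > demo_package/bin/tiup
tar czf demo_package.tar.gz demo_package

# Verify the archive lists the expected entry before shipping it.
tar tzf demo_package.tar.gz | grep -q "demo_package/bin/tiup" && echo "archive OK"

rm -rf demo_package demo_package.tar.gz
```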
## Step 2: Deploy the offline TiUP component

After sending the package to the control machine of the target cluster, install the TiUP component by running the following command:

{{< copyable "shell-regular" >}}

```shell
tar xzvf package.tar.gz &&
cd package &&
sh local_install.sh &&
source /home/tidb/.bash_profile
```
## Step 3: Mount the TiKV data disk

> **Note:**
>
> It is recommended to use the ext4 file system for the data directory of the target machines that deploy TiKV. Compared with the XFS file system, the ext4 file system has been used in more TiDB cluster deployments. For the production environment, use the ext4 file system.

Log in to the target machines using the `root` user account.

Format your data disks to the ext4 filesystem and add the `nodelalloc` and `noatime` mount options to the filesystem. The `nodelalloc` option is required; otherwise, the TiUP deployment check fails. The `noatime` option is optional.

> **Note:**
>
> If your data disks have already been formatted to ext4 and mounted, but without the required mount options, unmount the disk by running the `umount /dev/nvme0n1p1` command, then follow the steps below starting from editing the `/etc/fstab` file to add the options and mount the disk again.

Take the `/dev/nvme0n1` data disk as an example:

1. View the data disk:

    {{< copyable "shell-root" >}}

    ```bash
    fdisk -l
    ```

    ```
    Disk /dev/nvme0n1: 1000 GB
    ```

2. Create the partition table:

    {{< copyable "shell-root" >}}

    ```bash
    parted -s -a optimal /dev/nvme0n1 mklabel gpt -- mkpart primary ext4 1 -1
    ```

    > **Note:**
    >
    > Use the `lsblk` command to view the device number of the partition: for an NVMe disk, the generated device number is usually `nvme0n1p1`; for a regular disk (for example, `/dev/sdb`), the generated device number is usually `sdb1`.

3. Format the data disk to the ext4 filesystem:

    {{< copyable "shell-root" >}}

    ```bash
    mkfs.ext4 /dev/nvme0n1p1
    ```

4. View the partition UUID of the data disk:

    In this example, the UUID of `nvme0n1p1` is `c51eb23b-195c-4061-92a9-3fad812cc12f`.

    {{< copyable "shell-root" >}}

    ```bash
    lsblk -f
    ```

    ```
    NAME        FSTYPE LABEL UUID                                 MOUNTPOINT
    sda
    ├─sda1      ext4         237b634b-a565-477b-8371-6dff0c41f5ab /boot
    ├─sda2      swap         f414c5c0-f823-4bb1-8fdf-e531173a72ed
    └─sda3      ext4         547909c1-398d-4696-94c6-03e43e317b60 /
    sr0
    nvme0n1
    └─nvme0n1p1 ext4         c51eb23b-195c-4061-92a9-3fad812cc12f
    ```

5. Edit the `/etc/fstab` file and add the mount options:

    {{< copyable "shell-root" >}}

    ```bash
    vi /etc/fstab
    ```

    ```
    UUID=c51eb23b-195c-4061-92a9-3fad812cc12f /data1 ext4 defaults,nodelalloc,noatime 0 2
    ```

6. Mount the data disk:

    {{< copyable "shell-root" >}}

    ```bash
    mkdir /data1 && \
    mount -a
    ```

7. Check whether the steps above take effect by using the following command:

    {{< copyable "shell-root" >}}

    ```bash
    mount -t ext4
    ```

    If the filesystem is ext4 and `nodelalloc` is included in the mount options, you have successfully mounted the data disk with the ext4 filesystem and the required options on the target machines.

    ```
    /dev/nvme0n1p1 on /data1 type ext4 (rw,noatime,nodelalloc,data=ordered)
    ```
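The device-naming rule mentioned in the note under step 2 can be sketched as a small helper (hypothetical, for illustration only):

```shell
# Hypothetical helper: derive the first-partition device name from a disk name.
# NVMe disks insert a "p" separator (nvme0n1 -> nvme0n1p1); others do not (sdb -> sdb1).
first_partition() {
    case "$1" in
        nvme*) echo "${1}p1" ;;
        *)     echo "${1}1" ;;
    esac
}

first_partition nvme0n1   # nvme0n1p1
first_partition sdb       # sdb1
```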
## Step 4: Edit the initialization configuration file `topology.yaml`

You need to manually create and edit the cluster initialization configuration file. For the full configuration template, refer to the [TiUP configuration parameter template](https://github.com/pingcap/tiup/blob/master/examples/topology.example.yaml).

Create a YAML configuration file on the control machine, such as `topology.yaml`:

{{< copyable "shell-regular" >}}

```shell
cat topology.yaml
```

```yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

server_configs:
  pd:
    replication.enable-placement-rules: true

pd_servers:
  - host: 10.0.1.4
  - host: 10.0.1.5
  - host: 10.0.1.6
tidb_servers:
  - host: 10.0.1.7
  - host: 10.0.1.8
  - host: 10.0.1.9
tikv_servers:
  - host: 10.0.1.1
  - host: 10.0.1.2
  - host: 10.0.1.3
tiflash_servers:
  - host: 10.0.1.10
    data_dir: /data1/tiflash/data,/data2/tiflash/data
cdc_servers:
  - host: 10.0.1.6
  - host: 10.0.1.7
  - host: 10.0.1.8
monitoring_servers:
  - host: 10.0.1.4
grafana_servers:
  - host: 10.0.1.4
alertmanager_servers:
  - host: 10.0.1.4
```
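As a rough sanity sketch (not a TiUP feature), you can confirm the file lists the top-level sections you expect before deploying:

```shell
# Rough sanity check: every expected top-level section appears in topology.yaml.
# -s suppresses the error message if the file does not exist yet.
for section in global pd_servers tidb_servers tikv_servers; do
    grep -qs "^${section}:" topology.yaml || echo "missing section: ${section}"
done
```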
## Step 5: Deploy the TiDB cluster

`/path/to/mirror` is the location of the offline mirror package, which is output by the `local_install.sh` script:

{{< copyable "shell-regular" >}}

```shell
export TIUP_MIRRORS=/path/to/mirror &&
tiup cluster deploy tidb-test v4.0.0 topology.yaml --user tidb [-p] [-i /home/root/.ssh/gcp_rsa] &&
tiup cluster start tidb-test
```

> **Parameter description:**
>
> - The name of the cluster deployed by TiUP cluster is `tidb-test`.
> - The deployment version is `v4.0.0`. To see other supported versions, run `tiup list tidb`.
> - The initialization configuration file is `topology.yaml`.
> - `--user tidb`: log in to the target machines using the `tidb` user account to complete the cluster deployment. The `tidb` user needs to have `ssh` and `sudo` privileges on the target machines. You can also use other users with `ssh` and `sudo` privileges to complete the deployment.
> - `[-i]` and `[-p]`: optional. If you have configured passwordless login to the target machines, these parameters are not required. If not, choose one of the two. `[-i]` specifies the private key of the `root` user (or another user specified by `--user`) that has access to the target machines. `[-p]` is used to input the user password interactively.

If you see ``Deployed cluster `tidb-test` successfully`` at the end of the log output, the deployment is successful.
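If you capture the deploy output to a file, the success marker can be checked in a script. This is only a sketch; the `deploy.log` file name is hypothetical, and the `echo` line stands in for real captured output:

```shell
# Sketch: look for the success marker in captured deploy output.
# "deploy.log" is a hypothetical capture, e.g. from:
#   tiup cluster deploy ... 2>&1 | tee deploy.log
echo 'Deployed cluster `tidb-test` successfully' > deploy.log   # stand-in content
if grep -q 'Deployed cluster `tidb-test` successfully' deploy.log; then
    echo "deploy OK"
fi
rm -f deploy.log
```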
After the deployment, see [Deploy and Maintain TiDB Using TiUP](/tiup/tiup-cluster.md) for the cluster operations.