
update tiup ctl:<version> to tiup ctl:v<version>
Oreoxmt committed Feb 14, 2023
1 parent 9ffd24a commit c3b5b98
Showing 19 changed files with 54 additions and 54 deletions.
2 changes: 1 addition & 1 deletion best-practices/three-nodes-hybrid-deployment.md
@@ -109,7 +109,7 @@ In addition to setting this parameter value in the configuration file, you can a
{{< copyable "shell-regular" >}}

```shell
tiup ctl:<cluster-version> tikv --host=${ip:port} modify-tikv-config -n gc.max_write_bytes_per_sec -v ${limit}
tiup ctl:v<CLUSTER_VERSION> tikv --host=${ip:port} modify-tikv-config -n gc.max_write_bytes_per_sec -v ${limit}
```

> **Note:**
4 changes: 2 additions & 2 deletions clinic/clinic-data-instruction-for-tiup.md
@@ -52,8 +52,8 @@ This section lists the types of diagnostic data that can be collected by Diag fr
| Error log | `pd_stderr.log` | `--include=log` |
| Configuration file | `pd.toml` | `--include=config` |
| Real-time configuration | `config.json` | `--include=config` |
| Outputs of the command `tiup ctl:<cluster-version> pd -u http://${pd IP}:${PORT} store` | `store.json` | `--include=config` |
| Outputs of the command `tiup ctl:<cluster-version> pd -u http://${pd IP}:${PORT} config placement-rules show` | `placement-rule.json` | `--include=config` |
| Outputs of the command `tiup ctl:v<CLUSTER_VERSION> pd -u http://${pd IP}:${PORT} store` | `store.json` | `--include=config` |
| Outputs of the command `tiup ctl:v<CLUSTER_VERSION> pd -u http://${pd IP}:${PORT} config placement-rules show` | `placement-rule.json` | `--include=config` |

### TiFlash diagnostic data

12 changes: 6 additions & 6 deletions dashboard/dashboard-ops-deploy.md
@@ -68,12 +68,12 @@ http://192.168.0.123:2379/dashboard/
### Switch to another PD instance to serve TiDB Dashboard

For a running cluster deployed using TiUP, you can use the `tiup ctl:<cluster-version> pd` command to change the PD instance that serves TiDB Dashboard, or re-specify a PD instance to serve TiDB Dashboard when it is disabled:
For a running cluster deployed using TiUP, you can use the `tiup ctl:v<CLUSTER_VERSION> pd` command to change the PD instance that serves TiDB Dashboard, or re-specify a PD instance to serve TiDB Dashboard when it is disabled:

{{< copyable "shell-regular" >}}

```bash
tiup ctl:<cluster-version> pd -u http://127.0.0.1:2379 config set dashboard-address http://9.9.9.9:2379
tiup ctl:v<CLUSTER_VERSION> pd -u http://127.0.0.1:2379 config set dashboard-address http://9.9.9.9:2379
```

In the command above:
@@ -95,12 +95,12 @@ tiup cluster display CLUSTER_NAME --dashboard
## Disable TiDB Dashboard

For a running cluster deployed using TiUP, use the `tiup ctl:<cluster-version> pd` command to disable TiDB Dashboard on all PD instances (replace `127.0.0.1:2379` with the IP and port of any PD instance):
For a running cluster deployed using TiUP, use the `tiup ctl:v<CLUSTER_VERSION> pd` command to disable TiDB Dashboard on all PD instances (replace `127.0.0.1:2379` with the IP and port of any PD instance):

{{< copyable "shell-regular" >}}

```bash
tiup ctl:<cluster-version> pd -u http://127.0.0.1:2379 config set dashboard-address none
tiup ctl:v<CLUSTER_VERSION> pd -u http://127.0.0.1:2379 config set dashboard-address none
```

After disabling TiDB Dashboard, checking which PD instance provides the TiDB Dashboard service will fail:
@@ -117,12 +117,12 @@ Dashboard is not started.

## Re-enable TiDB Dashboard

For a running cluster deployed using TiUP, use the `tiup ctl:<cluster-version> pd` command to request PD to renegotiate an instance to run TiDB Dashboard (replace `127.0.0.1:2379` with the IP and port of any PD instance):
For a running cluster deployed using TiUP, use the `tiup ctl:v<CLUSTER_VERSION> pd` command to request PD to renegotiate an instance to run TiDB Dashboard (replace `127.0.0.1:2379` with the IP and port of any PD instance):

{{< copyable "shell-regular" >}}

```bash
tiup ctl:<cluster-version> pd -u http://127.0.0.1:2379 config set dashboard-address auto
tiup ctl:v<CLUSTER_VERSION> pd -u http://127.0.0.1:2379 config set dashboard-address auto
```

After executing the command above, you can use the `tiup cluster display` command to view the TiDB Dashboard instance address automatically negotiated by PD (replace `CLUSTER_NAME` with the cluster name):
2 changes: 1 addition & 1 deletion enable-tls-between-components.md
@@ -144,7 +144,7 @@ Currently, it is not supported to only enable encrypted transmission of some spe
{{< copyable "shell-regular" >}}

```bash
tiup ctl:<cluster-version> pd -u https://127.0.0.1:2379 --cacert /path/to/ca.pem --cert /path/to/client.pem --key /path/to/client-key.pem
tiup ctl:v<CLUSTER_VERSION> pd -u https://127.0.0.1:2379 --cacert /path/to/ca.pem --cert /path/to/client.pem --key /path/to/client-key.pem
```

{{< copyable "shell-regular" >}}
2 changes: 1 addition & 1 deletion migrate-from-tidb-to-mysql.md
@@ -158,7 +158,7 @@ After setting up the environment, you can use [Dumpling](/dumpling-overview.md)
In the upstream cluster, run the following command to create a changefeed from the upstream to the downstream clusters:

```shell
tiup ctl:<cluster-version> cdc changefeed create --server=http://127.0.0.1:8300 --sink-uri="mysql://root:@127.0.0.1:3306" --changefeed-id="upstream-to-downstream" --start-ts="434217889191428107"
tiup ctl:v<CLUSTER_VERSION> cdc changefeed create --server=http://127.0.0.1:8300 --sink-uri="mysql://root:@127.0.0.1:3306" --changefeed-id="upstream-to-downstream" --start-ts="434217889191428107"
```

In this command, the parameters are as follows:
10 changes: 5 additions & 5 deletions pd-control.md
@@ -16,7 +16,7 @@ As a command line tool of PD, PD Control obtains the state information of the cl
### Use TiUP command

To use PD Control, execute the `tiup ctl:<cluster-version> pd -u http://<pd_ip>:<pd_port> [-i]` command.
To use PD Control, execute the `tiup ctl:v<CLUSTER_VERSION> pd -u http://<pd_ip>:<pd_port> [-i]` command.
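
For example, a minimal sketch assuming a v6.5.0 cluster and a PD node at `127.0.0.1:2379` (both values are illustrative):

```bash
tiup ctl:v6.5.0 pd -u http://127.0.0.1:2379 -i
```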

### Download the installation package

@@ -41,26 +41,26 @@ To obtain `pd-ctl` of the latest version, download the TiDB server installation
Single-command mode:

```bash
tiup ctl:<cluster-version> pd store -u http://127.0.0.1:2379
tiup ctl:v<CLUSTER_VERSION> pd store -u http://127.0.0.1:2379
```

Interactive mode:

```bash
tiup ctl:<cluster-version> pd -i -u http://127.0.0.1:2379
tiup ctl:v<CLUSTER_VERSION> pd -i -u http://127.0.0.1:2379
```

Use environment variables:

```bash
export PD_ADDR=http://127.0.0.1:2379
tiup ctl:<cluster-version> pd
tiup ctl:v<CLUSTER_VERSION> pd
```

Use TLS to encrypt:

```bash
tiup ctl:<cluster-version> pd -u https://127.0.0.1:2379 --cacert="path/to/ca" --cert="path/to/cert" --key="path/to/key"
tiup ctl:v<CLUSTER_VERSION> pd -u https://127.0.0.1:2379 --cacert="path/to/ca" --cert="path/to/cert" --key="path/to/key"
```

## Command line flags
6 changes: 3 additions & 3 deletions replicate-data-to-kafka.md
@@ -57,7 +57,7 @@ The preceding steps are performed in a lab environment. You can also deploy a cl
2. Create a changefeed to replicate incremental data to Kafka:

```shell
tiup ctl:<cluster-version> cdc changefeed create --server="http://127.0.0.1:8300" --sink-uri="kafka://127.0.0.1:9092/kafka-topic-name?protocol=canal-json" --changefeed-id="kafka-changefeed" --config="changefeed.conf"
tiup ctl:v<CLUSTER_VERSION> cdc changefeed create --server="http://127.0.0.1:8300" --sink-uri="kafka://127.0.0.1:9092/kafka-topic-name?protocol=canal-json" --changefeed-id="kafka-changefeed" --config="changefeed.conf"
```

- If the changefeed is successfully created, changefeed information, such as changefeed ID, is displayed, as shown below:
@@ -73,13 +73,13 @@ The preceding steps are performed in a lab environment. You can also deploy a cl
In a production environment, a Kafka cluster has multiple broker nodes. Therefore, you can add the addresses of multiple brokers to the sink URI. This ensures stable access to the Kafka cluster: even if some broker nodes are down, the changefeed still works. Suppose that a Kafka cluster has three broker nodes, with IP addresses being 127.0.0.1:9092, 127.0.0.2:9092, and 127.0.0.3:9092, respectively. You can create a changefeed with the following sink URI.

```shell
tiup ctl:<cluster-version> cdc changefeed create --server="http://127.0.0.1:8300" --sink-uri="kafka://127.0.0.1:9092,127.0.0.2:9092,127.0.0.3:9092/kafka-topic-name?protocol=canal-json&partition-num=3&replication-factor=1&max-message-bytes=1048576" --config="changefeed.conf"
tiup ctl:v<CLUSTER_VERSION> cdc changefeed create --server="http://127.0.0.1:8300" --sink-uri="kafka://127.0.0.1:9092,127.0.0.2:9092,127.0.0.3:9092/kafka-topic-name?protocol=canal-json&partition-num=3&replication-factor=1&max-message-bytes=1048576" --config="changefeed.conf"
```

3. After creating the changefeed, run the following command to check the changefeed status:

```shell
tiup ctl:<cluster-version> cdc changefeed list --server="http://127.0.0.1:8300"
tiup ctl:v<CLUSTER_VERSION> cdc changefeed list --server="http://127.0.0.1:8300"
```

You can refer to [Manage TiCDC Changefeeds](/ticdc/ticdc-manage-changefeed.md) to manage the changefeed.
10 changes: 5 additions & 5 deletions scale-tidb-using-tiup.md
@@ -156,7 +156,7 @@ This section exemplifies how to add a TiFlash node to the `10.0.1.4` host.
> When adding a TiFlash node to an existing TiDB cluster, note the following:
>
> - Confirm that the current TiDB version supports using TiFlash. Otherwise, upgrade your TiDB cluster to v5.0 or later versions.
> - Run the `tiup ctl:<cluster-version> pd -u http://<pd_ip>:<pd_port> config set enable-placement-rules true` command to enable the Placement Rules feature. Or run the corresponding command in [pd-ctl](/pd-control.md).
> - Run the `tiup ctl:v<CLUSTER_VERSION> pd -u http://<pd_ip>:<pd_port> config set enable-placement-rules true` command to enable the Placement Rules feature. Or run the corresponding command in [pd-ctl](/pd-control.md).
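>
> For example, a minimal sketch assuming a v6.5.0 cluster and a PD node at `192.168.0.1:2379` (both values are illustrative):
>
> ```shell
> tiup ctl:v6.5.0 pd -u http://192.168.0.1:2379 config set enable-placement-rules true
> ```
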
1. Add the node information to the `scale-out.yaml` file:

@@ -381,12 +381,12 @@ In special cases (such as when a node needs to be forcibly taken down), or if th

* Enter the store command in [pd-ctl](/pd-control.md) (the binary file is under `resources/bin` in the tidb-ansible directory).

* If you use TiUP deployment, replace `pd-ctl` with `tiup ctl:<cluster-version> pd`:
* If you use TiUP deployment, replace `pd-ctl` with `tiup ctl:v<CLUSTER_VERSION> pd`:

{{< copyable "shell-regular" >}}

```shell
tiup ctl:<cluster-version> pd -u http://<pd_ip>:<pd_port> store
tiup ctl:v<CLUSTER_VERSION> pd -u http://<pd_ip>:<pd_port> store
```

> **Note:**
@@ -397,12 +397,12 @@ In special cases (such as when a node needs to be forcibly taken down), or if th

* Enter `store delete <store_id>` in pd-ctl (`<store_id>` is the store ID of the TiFlash node found in the previous step).

* If you use TiUP deployment, replace `pd-ctl` with `tiup ctl:<cluster-version> pd`:
* If you use TiUP deployment, replace `pd-ctl` with `tiup ctl:v<CLUSTER_VERSION> pd`:

{{< copyable "shell-regular" >}}

```shell
tiup ctl:<cluster-version> pd -u http://<pd_ip>:<pd_port> store delete <store_id>
tiup ctl:v<CLUSTER_VERSION> pd -u http://<pd_ip>:<pd_port> store delete <store_id>
```

> **Note:**
2 changes: 1 addition & 1 deletion three-data-centers-in-two-cities-deployment.md
@@ -174,7 +174,7 @@ In the deployment of three DCs in two cities, to optimize performance, you need
raftstore.raft-max-election-timeout-ticks: 1200
```

- Configure scheduling. After the cluster is enabled, use the `tiup ctl:<cluster-version> pd` tool to modify the scheduling policy. Modify the number of TiKV Raft replicas. Configure this number as planned. In this example, the number of replicas is five.
- Configure scheduling. After the cluster is enabled, use the `tiup ctl:v<CLUSTER_VERSION> pd` tool to modify the scheduling policy. Modify the number of TiKV Raft replicas. Configure this number as planned. In this example, the number of replicas is five.

```yaml
config set max-replicas 5
4 changes: 2 additions & 2 deletions ticdc/deploy-ticdc.md
@@ -152,10 +152,10 @@ See [Enable TLS Between TiDB Components](/enable-tls-between-components.md).

## View TiCDC status using the command-line tool

Run the following command to view the TiCDC cluster status. Note that you need to replace `<version>` with the TiCDC cluster version:
Run the following command to view the TiCDC cluster status. Note that you need to replace `v<CLUSTER_VERSION>` with the TiCDC cluster version:

```shell
tiup ctl:<version> cdc capture list --server=http://10.0.10.25:8300
tiup ctl:v<CLUSTER_VERSION> cdc capture list --server=http://10.0.10.25:8300
```

```shell
6 changes: 3 additions & 3 deletions ticdc/integrate-confluent-using-ticdc.md
@@ -99,7 +99,7 @@ The preceding steps are performed in a lab environment. You can also deploy a cl
2. Create a changefeed to replicate incremental data to Confluent Cloud:

```shell
tiup ctl:<cluster-version> cdc changefeed create --server="http://127.0.0.1:8300" --sink-uri="kafka://<broker_endpoint>/ticdc-meta?protocol=avro&replication-factor=3&enable-tls=true&auto-create-topic=true&sasl-mechanism=plain&sasl-user=<broker_api_key>&sasl-password=<broker_api_secret>" --schema-registry="https://<schema_registry_api_key>:<schema_registry_api_secret>@<schema_registry_endpoint>" --changefeed-id="confluent-changefeed" --config changefeed.conf
tiup ctl:v<CLUSTER_VERSION> cdc changefeed create --server="http://127.0.0.1:8300" --sink-uri="kafka://<broker_endpoint>/ticdc-meta?protocol=avro&replication-factor=3&enable-tls=true&auto-create-topic=true&sasl-mechanism=plain&sasl-user=<broker_api_key>&sasl-password=<broker_api_secret>" --schema-registry="https://<schema_registry_api_key>:<schema_registry_api_secret>@<schema_registry_endpoint>" --changefeed-id="confluent-changefeed" --config changefeed.conf
```

You need to replace the values of the following fields with those created or recorded in [Step 2. Create an access key pair](#step-2-create-an-access-key-pair):
@@ -114,7 +114,7 @@ The preceding steps are performed in a lab environment. You can also deploy a cl
Note that you should encode `<schema_registry_api_secret>` based on [HTML URL Encoding Reference](https://www.w3schools.com/tags/ref_urlencode.asp) before replacing its value. After you replace all the preceding fields, the configuration file is as follows:

```shell
tiup ctl:<cluster-version> cdc changefeed create --server="http://127.0.0.1:8300" --sink-uri="kafka://xxx-xxxxx.ap-east-1.aws.confluent.cloud:9092/ticdc-meta?protocol=avro&replication-factor=3&enable-tls=true&auto-create-topic=true&sasl-mechanism=plain&sasl-user=L5WWA4GK4NAT2EQV&sasl-password=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" --schema-registry="https://7NBH2CAFM2LMGTH7:xxxxxxxxxxxxxxxxxx@yyy-yyyyy.us-east-2.aws.confluent.cloud" --changefeed-id="confluent-changefeed" --config changefeed.conf
tiup ctl:v<CLUSTER_VERSION> cdc changefeed create --server="http://127.0.0.1:8300" --sink-uri="kafka://xxx-xxxxx.ap-east-1.aws.confluent.cloud:9092/ticdc-meta?protocol=avro&replication-factor=3&enable-tls=true&auto-create-topic=true&sasl-mechanism=plain&sasl-user=L5WWA4GK4NAT2EQV&sasl-password=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" --schema-registry="https://7NBH2CAFM2LMGTH7:xxxxxxxxxxxxxxxxxx@yyy-yyyyy.us-east-2.aws.confluent.cloud" --changefeed-id="confluent-changefeed" --config changefeed.conf
```

- Run the command to create a changefeed.
@@ -132,7 +132,7 @@ The preceding steps are performed in a lab environment. You can also deploy a cl
3. After creating the changefeed, run the following command to check the changefeed status:

```shell
tiup ctl:<cluster-version> cdc changefeed list --server="http://127.0.0.1:8300"
tiup ctl:v<CLUSTER_VERSION> cdc changefeed list --server="http://127.0.0.1:8300"
```

You can refer to [Manage TiCDC Changefeeds](/ticdc/ticdc-manage-changefeed.md) to manage the changefeed.
2 changes: 1 addition & 1 deletion ticdc/ticdc-changefeed-overview.md
@@ -38,4 +38,4 @@ You can manage a TiCDC cluster and its replication tasks using the command-line

You can also use the HTTP interface (the TiCDC OpenAPI feature) to manage a TiCDC cluster and its replication tasks. For details, see [TiCDC OpenAPI](/ticdc/ticdc-open-api.md).

If your TiCDC is deployed using TiUP, you can start `cdc cli` by running the `tiup ctl:<version> cdc` command. Replace `<version>` with the TiCDC cluster version. You can also run `cdc cli` directly.
If your TiCDC is deployed using TiUP, you can start `cdc cli` by running the `tiup ctl:v<CLUSTER_VERSION> cdc` command. Replace `v<CLUSTER_VERSION>` with the TiCDC cluster version, such as `v6.5.0`. You can also run `cdc cli` directly.
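
For example, a minimal sketch assuming a v6.5.0 cluster and a TiCDC server at `127.0.0.1:8300` (both values are illustrative):

```shell
tiup ctl:v6.5.0 cdc changefeed list --server=http://127.0.0.1:8300
```
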
2 changes: 1 addition & 1 deletion tidb-control.md
@@ -18,7 +18,7 @@ You can get TiDB Control by installing it using TiUP or by compiling it from sou
### Install TiDB Control using TiUP

After installing TiUP, you can use `tiup ctl:<cluster-version> tidb` command to get and execute TiDB Control.
After installing TiUP, you can use `tiup ctl:v<CLUSTER_VERSION> tidb` command to get and execute TiDB Control.
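
For example, a minimal sketch assuming a v6.5.0 cluster (the version is illustrative); `--help` prints the subcommands that TiDB Control supports:

```shell
tiup ctl:v6.5.0 tidb --help
```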

### Compile from source code

28 changes: 14 additions & 14 deletions tiflash/create-tiflash-replicas.md
@@ -155,25 +155,25 @@ Before TiFlash replicas are added, each TiKV instance performs a full table scan

2. Use [PD Control](https://docs.pingcap.com/tidb/stable/pd-control) to progressively ease the new replica speed limit.

The default new replica speed limit is 30, which means, approximately 30 Regions add TiFlash replicas every minute. Executing the following command will adjust the limit to 60 for all TiFlash instances, which doubles the original speed:
The default new replica speed limit is 30, which means, approximately 30 Regions add TiFlash replicas every minute. Executing the following command will adjust the limit to 60 for all TiFlash instances, which doubles the original speed:

```shell
tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 60 add-peer
```
```shell
tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 60 add-peer
```

> In the preceding command, you need to replace `<CLUSTER_VERSION>` with the actual cluster version and `<PD_ADDRESS>:2379` with the address of any PD node. For example:
>
> ```shell
> tiup ctl:v6.1.1 pd -u http://192.168.1.4:2379 store limit all engine tiflash 60 add-peer
> ```
> In the preceding command, you need to replace `v<CLUSTER_VERSION>` with the actual cluster version, such as `v6.5.0`, and `<PD_ADDRESS>:2379` with the address of any PD node. For example:
>
> ```shell
> tiup ctl:v6.1.1 pd -u http://192.168.1.4:2379 store limit all engine tiflash 60 add-peer
> ```
Within a few minutes, you will observe a significant increase in CPU and disk IO resource usage of the TiFlash nodes, and TiFlash should create replicas faster. At the same time, the TiKV nodes' CPU and disk IO resource usage increases as well.
Within a few minutes, you will observe a significant increase in CPU and disk IO resource usage of the TiFlash nodes, and TiFlash should create replicas faster. At the same time, the TiKV nodes' CPU and disk IO resource usage increases as well.

If the TiKV and TiFlash nodes still have spare resources at this point and the latency of your online service does not increase significantly, you can further ease the limit, for example, triple the original speed:
If the TiKV and TiFlash nodes still have spare resources at this point and the latency of your online service does not increase significantly, you can further ease the limit, for example, triple the original speed:

```shell
tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 90 add-peer
```
```shell
tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store limit all engine tiflash 90 add-peer
```

3. After the TiFlash replication is complete, revert to the default configuration to reduce the impact on online services.

2 changes: 1 addition & 1 deletion tiflash/tiflash-configuration.md
@@ -10,7 +10,7 @@ This document introduces the configuration parameters related to the deployment

## PD scheduling parameters

You can adjust the PD scheduling parameters using [pd-ctl](/pd-control.md). Note that you can use `tiup ctl:<cluster-version> pd` to replace `pd-ctl -u <pd_ip:pd_port>` when using tiup to deploy and manage your cluster.
You can adjust the PD scheduling parameters using [pd-ctl](/pd-control.md). Note that you can use `tiup ctl:v<CLUSTER_VERSION> pd` to replace `pd-ctl -u <pd_ip:pd_port>` when using tiup to deploy and manage your cluster.
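
For example, a minimal sketch assuming a v6.5.0 cluster and a PD node at `127.0.0.1:2379` (both values are illustrative), which first shows the current scheduling configuration and then adjusts one parameter:

```shell
tiup ctl:v6.5.0 pd -u http://127.0.0.1:2379 config show
tiup ctl:v6.5.0 pd -u http://127.0.0.1:2379 config set replica-schedule-limit 32
```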

- [`replica-schedule-limit`](/pd-configuration-file.md#replica-schedule-limit): determines the rate at which the replica-related operator is generated. The parameter affects operations such as making nodes offline and add replicas.
