
Commit 1a08f9e

Author: tp-nan
Commit message: gitlab -> github
1 parent: aab44b5

File tree

25 files changed: +71 −71 lines changed

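The 25 files below change only repository URLs: every `https://g.hz.netease.com/deploy/torchpipe` prefix becomes `https://github.com/torchpipe/torchpipe`. A mechanical rewrite like this is usually scripted; the following is a minimal sketch of such a helper (a hypothetical reconstruction, not the tooling actually used for this commit):

```python
from pathlib import Path

# Prefix mapping taken from the diff hunks below: the GitLab project
# URL prefix is swapped for the GitHub one; the path after the prefix
# is left untouched.
OLD_PREFIX = "https://g.hz.netease.com/deploy/torchpipe"
NEW_PREFIX = "https://github.com/torchpipe/torchpipe"

def migrate_text(text: str) -> str:
    """Rewrite every occurrence of the old project URL prefix."""
    return text.replace(OLD_PREFIX, NEW_PREFIX)

def migrate_tree(root: str, suffixes=(".md", ".mdx")) -> int:
    """Rewrite all matching files under root; return how many changed."""
    changed = 0
    for path in Path(root).rglob("*"):
        if path.suffix in suffixes:
            old = path.read_text(encoding="utf-8")
            new = migrate_text(old)
            if new != old:
                path.write_text(new, encoding="utf-8")
                changed += 1
    return changed
```

Because this is a plain prefix substitution, whatever follows the prefix is preserved as-is — which is why GitLab-style `/-/tree/` and `/-/blob/` path segments survive into the rewritten GitHub URLs in several hunks below.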

README.md

Lines changed: 3 additions & 3 deletions

@@ -1,7 +1,7 @@
 <p align="center">
-<h1 align="center">Documentation for torchpipe</h1>
+<h1 align="center">Documentation for [torchpipe](https://github.com/torchpipe/torchpipe)</h1>
 <h6 align="center">Accelerated <a href="https://pytorch.org/">Pytorch</a> Serving with Multithreading</h6>
 </p>
 <p align="center">

@@ -23,9 +23,9 @@ Torchpipe is a multi-instance pipeline parallel library that acts as a bridge be
-torchpipe代码正在开源准备中。这里是其文档站点
+这里是其文档站点
-The torchpipe code is being prepared for open sourcing. Here is its documentation site.
+Here is its documentation site.

docs/backend-reference/torch.mdx

Lines changed: 1 addition & 1 deletion

@@ -138,7 +138,7 @@ For quantizing onnx models using tensorrt, the following parameters are availabl
-See [example](https://g.hz.netease.com/deploy/torchpipe/-/tree/master/examples/int8).
+See [example](https://github.com/torchpipe/torchpipe/-/tree/master/examples/int8).
 ### Forward Computation

docs/benchmark.mdx

Lines changed: 1 addition & 1 deletion

@@ -67,7 +67,7 @@ Client 10/Compute Backend Instance 1/Timeout 0/Max Batch 1
 | triton-cli | QPS: 15039 <br /> | - |
-[import]: https://g.hz.netease.com/deploy/torchpipe/blob/main/libs/commands/import/README.md
+[import]: https://github.com/torchpipe/torchpipe/blob/main/libs/commands/import/README.md

docs/contribution_guide/communicate.md

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ title: Communication and Questions
 type: reference
 ---
-Please submit an [issue](https://g.hz.netease.com/deploy/torchpipe/-/issues) here.
+Please submit an [issue](https://github.com/torchpipe/torchpipe/-/issues) here.
 # Communication
 POPO communication group: Group ID: 4101019

docs/contribution_guide/modify_the_code.md

Lines changed: 2 additions & 2 deletions

@@ -25,8 +25,8 @@ pip install -r requirements.txt
 pytest .
 ```
-If necessary, please consider supplementing with [Python tests](https://g.hz.netease.com/deploy/torchpipe/-/tree/develop/test).
+If necessary, please consider supplementing with [Python tests](https://github.com/torchpipe/torchpipe//test).
 :::note Code Formatting (optional)
-Please configure a formatting plugin to enable [.clang-format](https://g.hz.netease.com/deploy/torchpipe/-/blob/develop/.clang-format).
+Please configure a formatting plugin to enable [.clang-format](https://github.com/torchpipe/torchpipe/-/blob/develop/.clang-format).
 :::

docs/current_state.mdx

Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@
 | 编译 | NGC docker/定制docker | A | - |
 | | c++扩展 | A | - |
 | | pypi/manylinux | B | - |
-| | 文档CI管道 | A(gitlab CI走通,开源后自动更新到github.io) | - |
+| | 文档CI管道 | A(github.io) | - |
 | 规范 | 第三方代码的引用 | C | - |
 | | license | C | - |
 | | 代码规范 | B | - |

docs/installation.mdx

Lines changed: 2 additions & 2 deletions

@@ -33,7 +33,7 @@ First, clone the code:
 ```bash
 git clone -b master ssh://git@g.hz.netease.com:22222/deploy/torchpipe.git
-# git clone -b master https://g.hz.netease.com/deploy/torchpipe.git
+# git clone -b master https://github.com/torchpipe/torchpipe.git
 cd torchpipe/ && git submodule update --init --recursive
 ```

@@ -144,7 +144,7 @@ For more examples, see [Showcase](./showcase/showcase.mdx).
 ## Customizing Dockerfile {#selfdocker}
-Refer to the [example Dockerfile](https://g.hz.netease.com/deploy/torchpipe/-/blob/master/docker/torchpipe.base). After downloading TensorRT and OpenCV in advance, you can compile the corresponding base image.
+Refer to the [example Dockerfile](https://github.com/torchpipe/torchpipe/-/blob/master/docker/torchpipe.base). After downloading TensorRT and OpenCV in advance, you can compile the corresponding base image.
 ```
 # put TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/
 wget https://codeload.github.com/opencv/opencv/zip/refs/tags/4.5.4 -O thirdparty/opencv-4.5.4.zip

docs/python/test.mdx

Lines changed: 1 addition & 1 deletion

@@ -392,7 +392,7 @@ def test_all_files(file_dir:str, num_clients=10, batch_size = 1,
 ### Clients with Different Batch Sizes
-In the example provided [here](https://g.hz.netease.com/deploy/torchpipe/-/blob/master/examples/yolox/yolox_multithreads_test.py), we use ten clients, each requesting different amounts of data per request, ranging from 1 to 10. We validate the consistency of the results in this case.
+In the example provided [here](https://github.com/torchpipe/torchpipe/-/blob/master/examples/yolox/yolox_multithreads_test.py), we use ten clients, each requesting different amounts of data per request, ranging from 1 to 10. We validate the consistency of the results in this case.
 Typically, users can iterate through all the data in a directory and repeatedly send requests to verify the stability and consistency of the results.

docs/quick_start_new_user.md

Lines changed: 4 additions & 4 deletions

@@ -7,7 +7,7 @@ type: explainer
 # Trial in 30mins(new users)
-TorchPipe is a multi-instance pipeline parallel library that provides a seamless integration between lower-level acceleration libraries (such as TensorRT and OpenCV) and RPC frameworks. It guarantees high service throughput while meeting latency requirements. This document is mainly for new users, that is, users who are in the introductory stage of acceleration-related theoretical knowledge, know some python grammar, and can read simple codes. This content mainly includes the use of torchpipe for accelerating service deployment, complemented by performance and effect comparisons. The complete code of this document can be found at [resnet50_thrift](https://g.hz.netease.com/deploy/torchpipe/-/blob/develop/examples/resnet50_thrift/)
+TorchPipe is a multi-instance pipeline parallel library that provides a seamless integration between lower-level acceleration libraries (such as TensorRT and OpenCV) and RPC frameworks. It guarantees high service throughput while meeting latency requirements. This document is mainly for new users, that is, users who are in the introductory stage of acceleration-related theoretical knowledge, know some python grammar, and can read simple codes. This content mainly includes the use of torchpipe for accelerating service deployment, complemented by performance and effect comparisons. The complete code of this document can be found at [resnet50_thrift](https://github.com/torchpipe/torchpipe/-/blob/develop/examples/resnet50_thrift/)
 ## Catalogue
 * [1. Basic knowledge](#1)

@@ -84,7 +84,7 @@ self.classification_engine = torch2trt(resnet50, [input_shape],
 ```
-The overall online service deployment can be found at [main_trt.py](https://g.hz.netease.com/deploy/torchpipe/-/blob/develop/examples/resnet50_thrift/main_trt.py)
+The overall online service deployment can be found at [main_trt.py](https://github.com/torchpipe/torchpipe/-/blob/develop/examples/resnet50_thrift/main_trt.py)
 :::tip
 Since TensorRT is not thread-safe, when using this method for model acceleration, it is necessary to handle locking (with self.lock:) during the service deployment process.

@@ -104,7 +104,7 @@ From the above process, it's clear that when accelerating a single model, the fo
 ![](images/quick_start_new_user/torchpipe_en.png)
-We've made adjustments to the deployment of our service using TorchPipe.The overall online service deployment can be found at [main_torchpipe.py](https://g.hz.netease.com/deploy/torchpipe/-/blob/develop/examples/resnet50_thrift/main_torchpipe.py).
+We've made adjustments to the deployment of our service using TorchPipe.The overall online service deployment can be found at [main_torchpipe.py](https://github.com/torchpipe/torchpipe/-/blob/develop/examples/resnet50_thrift/main_torchpipe.py).
 The core function modifications as follows:
 ```py

@@ -219,7 +219,7 @@ std="58.395, 57.120, 57.375" # 255*"0.229, 0.224, 0.225"
 `python clien_qps.py --img_dir /your/testimg/path/ --port 8888 --request_client 20 --request_batch 1
 `
-The specific test code can be found at [client_qps.py](https://g.hz.netease.com/deploy/torchpipe/-/blob/develop/examples/resnet50_thrift/client_qps.py)
+The specific test code can be found at [client_qps.py](https://github.com/torchpipe/torchpipe/-/blob/develop/examples/resnet50_thrift/client_qps.py)
 With the same Thrift service interface, testing on a machine with NIDIA-3080 GPU, 36-core CPU, and concurrency of 10, we have the following results:

docs/showcase/showcase.mdx

Lines changed: 4 additions & 4 deletions

@@ -15,10 +15,10 @@ slug: /showcase
 | [PP-OCRv2] | ![](../assets/ppocr.svg) | [MapReduce](../Inter-node/graphtraversal.mdx#mapreduce)<br />[Jump] | |
 | [tensorrt's native int8] | | [TensorrtTensor](../backend-reference/torch.mdx#tensorrttensor) | |
-[resnet18]: https://g.hz.netease.com/deploy/torchpipe/-/tree/master/examples/resnet18
-[yolox]: https://g.hz.netease.com/deploy/torchpipe/-/tree/master/examples/yolox
-[PP-OCRv2]: https://g.hz.netease.com/deploy/torchpipe/-/tree/master/examples/ppocr
-[TensorRT's native INT8]: https://g.hz.netease.com/deploy/torchpipe/-/tree/master/examples/int8
+[resnet18]: https://github.com/torchpipe/torchpipe/-/tree/master/examples/resnet18
+[yolox]: https://github.com/torchpipe/torchpipe/-/tree/master/examples/yolox
+[PP-OCRv2]: https://github.com/torchpipe/torchpipe/-/tree/master/examples/ppocr
+[TensorRT's native INT8]: https://github.com/torchpipe/torchpipe/-/tree/master/examples/int8
 [torchpipe.utils.cpp_extension.load]: ../python/compile.mdx
 [filter]: ../Inter-node/filter.mdx
