docs: add disk usage / memory usage benchmark table (#751)
* docs: add disk usage and peak memory usage to benchmark table

* docs: add disk usage

* docs: benchmark_table

* docs: benchmark_table

* docs: disk usage

* docs: peak memory usage

* docs: peak memory usage

* docs: peak memory usage

* docs: peak memory usage

* docs: benchmark table

* docs: add RAM usage

* docs: add RAM usage

* docs: update RAM usage

* docs: update RAM usage

* docs: update narrative

* docs: use default config in benchmark

* docs: correct link
ZiniuYu committed Jun 15, 2022
1 parent 96923f1 commit 9d872f2
Showing 2 changed files with 14 additions and 14 deletions.
26 changes: 13 additions & 13 deletions docs/user-guides/server.md
@@ -60,19 +60,19 @@ The procedure and UI of ONNX and TensorRT runtime would look the same as PyTorch

## Model support

**Removed:**

OpenAI has released 9 models so far. `ViT-B/32` is used as the default model in all runtimes. Due to the limitations of some runtimes, not every runtime supports all nine models. Please also note that different models produce different output dimensions, which affects your downstream applications: switching from one model to another makes your embeddings incomparable and breaks those applications. Here is a list of the models supported by each runtime and their corresponding output dimensions:

| Model          | PyTorch | ONNX | TensorRT | Output dimension |
|----------------|---------|------|----------|------------------|
| RN50           | ✅      | ✅   | ✅       | 1024             |
| RN101          | ✅      | ✅   | ✅       | 512              |
| RN50x4         | ✅      | ✅   | ✅       | 640              |
| RN50x16        | ✅      | ✅   | ✅       | 768              |
| RN50x64        | ✅      | ✅   | ✅       | 1024             |
| ViT-B/32       | ✅      | ✅   | ✅       | 512              |
| ViT-B/16       | ✅      | ✅   | ✅       | 512              |
| ViT-L/14       | ✅      | ✅   | ✅       | 768              |
| ViT-L/14-336px | ✅      | ✅   | ❌       | 768              |
**Added:**

OpenAI has released 9 models so far. `ViT-B/32` is used as the default model in all runtimes. Due to the limitations of some runtimes, not every runtime supports all nine models. Please also note that different models produce different output dimensions, which affects your downstream applications: switching from one model to another makes your embeddings incomparable and breaks those applications. Below is a list of the models supported by each runtime and their corresponding output dimensions. The table also lists each model's disk usage and its peak RAM and VRAM usage (both measured as deltas) when running on a single Nvidia TITAN RTX GPU (24 GB VRAM) with the default `minibatch_size=32` on the server and the default `batch_size=8` on the client.

| Model          | PyTorch | ONNX | TensorRT | Output Dimension | Disk Usage (MB) | Peak RAM Usage (GB) | Peak VRAM Usage (GB) |
|----------------|---------|------|----------|------------------|-----------------|---------------------|----------------------|
| RN50           | ✅      | ✅   | ✅       | 1024             | 256             | 2.99                | 1.36                 |
| RN101          | ✅      | ✅   | ✅       | 512              | 292             | 3.51                | 1.40                 |
| RN50x4         | ✅      | ✅   | ✅       | 640              | 422             | 3.23                | 1.63                 |
| RN50x16        | ✅      | ✅   | ✅       | 768              | 661             | 3.63                | 2.02                 |
| RN50x64        | ✅      | ✅   | ✅       | 1024             | 1382            | 4.08                | 2.98                 |
| ViT-B/32       | ✅      | ✅   | ✅       | 512              | 351             | 3.20                | 1.40                 |
| ViT-B/16       | ✅      | ✅   | ✅       | 512              | 354             | 3.20                | 1.44                 |
| ViT-L/14       | ✅      | ✅   | ✅       | 768              | 933             | 3.66                | 2.04                 |
| ViT-L/14-336px | ✅      | ✅   | ❌       | 768              | 934             | 3.74                | 2.23                 |


## YAML config
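To make the two batching knobs above concrete, here is a minimal client-side sketch. It is not part of this commit: the server address, port, and example texts are placeholder assumptions, and it presumes a `clip_server` Flow is already running with the default model.

```python
from clip_client import Client

# Connect to a running clip_server instance; the address is an assumption
# and must match the port configured in the server's Flow YAML.
c = Client('grpc://0.0.0.0:51000')

# `batch_size` is the client-side batching knob used in the benchmark
# (default 8); the server-side `minibatch_size` (default 32) is configured
# separately in the server's YAML.
embeddings = c.encode(['a photo of a cat', 'a photo of a dog'], batch_size=8)

# Under the default ViT-B/32 model each embedding has 512 dimensions; after
# switching to e.g. ViT-L/14 (768), old and new embeddings are incomparable.
print(embeddings.shape)  # (2, 512) with ViT-B/32
```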
2 changes: 1 addition & 1 deletion scripts/benchmark.py
@@ -79,7 +79,7 @@ def run(self):
        time_costs = []
        for _ in range(self.num_iter):
            start = time.perf_counter()
-           r = client.encode(batch)
+           r = client.encode(batch, batch_size=self.batch_size)
            time_costs.append(time.perf_counter() - start)
        self.avg_time = np.mean(time_costs[2:])

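Note that the `time_costs[2:]` slice above treats the first two iterations as warm-up and excludes them from the average. The peak RAM/VRAM columns added to the table imply sampling memory against a baseline; the following is a rough sketch of one way to do that, not the repository's actual measurement code (`psutil` and the `nvidia-smi` query are assumptions, and RSS would have to be read from whichever process hosts the model):

```python
import subprocess

import psutil


def rss_gb() -> float:
    """Resident set size of the current process, in GB."""
    return psutil.Process().memory_info().rss / 1024**3


def vram_used_mib(gpu_index: int = 0) -> int:
    """Used VRAM on one GPU in MiB, polled via nvidia-smi."""
    out = subprocess.check_output([
        'nvidia-smi',
        f'--id={gpu_index}',
        '--query-gpu=memory.used',
        '--format=csv,noheader,nounits',
    ])
    return int(out.decode().strip())


# Take baselines before the model is loaded, then sample while the benchmark
# loop runs and keep the maxima; "peak usage (in delta)" in the table would
# be max(sample) - baseline for each resource.
ram_base, vram_base = rss_gb(), vram_used_mib()
ram_peak, vram_peak = ram_base, vram_base
# ... inside the encode loop:
#     ram_peak = max(ram_peak, rss_gb())
#     vram_peak = max(vram_peak, vram_used_mib())
print(f'peak RAM delta: {ram_peak - ram_base:.2f} GB')
print(f'peak VRAM delta: {(vram_peak - vram_base) / 1024:.2f} GiB')
```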
