
Fix output table column names and float numbers format #732

Open · wants to merge 1 commit into master

Conversation

@spital commented May 6, 2022

Small fix/improvement to the output table markdown file:

1. Added PSNR and MSE columns.
2. Changed the float format to `%.5g` so the numbers print in a slightly nicer form.

Tested with NVIDIA PyTorch containers on an RTX 3080 Ti; results look fine:

| Name | Data Type | Input Shapes | torch2trt kwargs | Max Error | Peak Signal to Noise Ratio | Mean Squared Error | Throughput (PyTorch) | Throughput (TensorRT) | Latency (PyTorch) | Latency (TensorRT) |
|---|---|---|---|---|---|---|---|---|---|---|
| torch2trt.tests.torchvision.classification.alexnet | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 9.82E-05 | nan | 0.00E+00 | 1706.2 | 3989.4 | 0.66819 | 0.3019 |
| torch2trt.tests.torchvision.classification.squeezenet1_0 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 1.95E-03 | 83.18 | 7.37E-08 | 443.78 | 5379.2 | 2.2734 | 0.23403 |
| torch2trt.tests.torchvision.classification.squeezenet1_1 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 1.95E-03 | 78.75 | 8.20E-08 | 467.02 | 6372.6 | 2.1582 | 0.20253 |
| torch2trt.tests.torchvision.classification.resnet18 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 9.77E-03 | 68.15 | 7.10E-06 | 437.28 | 4456.4 | 2.3051 | 0.27306 |
| torch2trt.tests.torchvision.classification.resnet34 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 2.50E-01 | 67.56 | 4.39E-03 | 246.57 | 2500.8 | 4.0779 | 0.45222 |
| torch2trt.tests.torchvision.classification.resnet50 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 1.41E-01 | 69.97 | 2.16E-03 | 167.71 | 1978.3 | 5.9556 | 0.55828 |
| torch2trt.tests.torchvision.classification.resnet101 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 0.00E+00 | nan | nan | 86.54 | 1054.1 | 11.533 | 0.92494 |
| torch2trt.tests.torchvision.classification.resnet152 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 0.00E+00 | nan | nan | 58.125 | 768.77 | 17.16 | 1.3138 |
| torch2trt.tests.torchvision.classification.densenet121 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 1.17E-02 | 65.21 | 3.00E-06 | 70.819 | 421.54 | 14.134 | 2.2273 |
| torch2trt.tests.torchvision.classification.densenet169 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 1.95E-03 | 77.21 | 2.89E-07 | 50.035 | 238.08 | 19.78 | 3.9731 |
| torch2trt.tests.torchvision.classification.densenet201 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 1.95E-03 | 75.21 | 2.52E-07 | 41.468 | 175.19 | 23.495 | 5.6947 |
| torch2trt.tests.torchvision.classification.densenet161 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 3.91E-03 | 74.17 | 6.70E-07 | 52.874 | 236.35 | 18.864 | 3.9961 |
| torch2trt.tests.torchvision.classification.vgg11 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 2.29E-03 | 52.95 | 4.43E-07 | 1004.5 | 1771.8 | 1.1096 | 0.61939 |
| torch2trt.tests.torchvision.classification.vgg13 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 2.08E-03 | 51.72 | 4.25E-07 | 802.06 | 1513.3 | 1.329 | 0.71259 |
| torch2trt.tests.torchvision.classification.vgg16 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 2.10E-03 | 50.12 | 3.53E-07 | 650.73 | 1298.2 | 1.6187 | 0.82363 |
| torch2trt.tests.torchvision.classification.vgg19 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 2.05E-03 | 49.17 | 4.50E-07 | 546.91 | 1130.8 | 1.9119 | 0.93664 |
| torch2trt.tests.torchvision.classification.vgg11_bn | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 3.99E-03 | 49.57 | 1.19E-06 | 853.21 | 1756.4 | 1.3895 | 0.62139 |
| torch2trt.tests.torchvision.classification.vgg13_bn | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 2.38E-03 | 49.36 | 4.63E-07 | 703.61 | 1503.9 | 1.5963 | 0.716 |
| torch2trt.tests.torchvision.classification.vgg16_bn | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 2.00E-03 | 49.34 | 4.13E-07 | 566.44 | 1289.8 | 1.9695 | 0.82836 |
| torch2trt.tests.torchvision.classification.vgg19_bn | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 2.44E-03 | 48.31 | 6.53E-07 | 471.05 | 1132.4 | 2.3146 | 0.93495 |
| torch2trt.tests.torchvision.classification.mobilenet_v2 | float16 | [(1, 3, 224, 224)] | {'fp16_mode': True} | 0.00E+00 | nan | 0.00E+00 | 201.1 | 3408.5 | 4.96 | 0.34366 |
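For reviewers, here is a minimal sketch of how the new columns and the `%.5g` formatting could be computed. The helper names (`error_metrics`, `format_row`) are hypothetical, not the PR's actual code, and the PSNR is taken relative to the peak magnitude of the reference output, which reproduces the `nan` entries where the MSE is zero; torch2trt's own definition may differ.

```python
import numpy as np


def error_metrics(y_ref, y_test):
    """Compute the three error columns: max abs error, PSNR, MSE.

    Assumption: PSNR = 10*log10(peak^2 / MSE) with peak taken from the
    reference output, and reported as nan when MSE is exactly zero.
    """
    y_ref = np.asarray(y_ref, dtype=np.float64)
    y_test = np.asarray(y_test, dtype=np.float64)
    diff = np.abs(y_ref - y_test)
    max_error = float(diff.max())
    mse = float(np.mean(diff ** 2))
    peak = float(np.abs(y_ref).max())
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("nan")
    return max_error, psnr, mse


def format_row(values):
    # '%.5g' keeps five significant digits and drops trailing zeros,
    # which is the compact style used in the table above.
    return " | ".join("%.5g" % v for v in values)
```

With `%.5g`, a throughput like `1706.2345` prints as `1706.2` and a max error like `0.0000982` prints as `9.82e-05`, matching the mix of plain and scientific notation in the table.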
