What is the right way to calibrate a hybrid-quantization model?
I built my TensorRT engine from an ONNX model with the code below, using class Calibrator(trt.IInt8EntropyCalibrator2) to set config.int8_calibrator.
My hybrid-quantized super-resolution model's inference results are biased towards magenta. I have already applied clipping; what could be the cause? Is there an issue with my calibration code, or could it be a poor distribution of the calibration dataset? I am certain that my inference program is correct.
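A color cast like this often comes from a preprocessing mismatch between the calibration batches and the inference path (e.g. RGB vs. BGR channel order, or different scaling), because the calibrator then observes per-channel activation ranges that do not match what the engine sees at run time. A minimal numpy-only sketch of a sanity check, with hypothetical function names standing in for the poster's actual pipeline:

```python
import numpy as np

def preprocess_for_inference(img_hwc_uint8):
    """Hypothetical inference preprocessing: HWC uint8 -> NCHW float32 in [0, 1]."""
    x = img_hwc_uint8.astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))   # HWC -> CHW
    return x[np.newaxis, ...]        # add batch dim -> NCHW

def preprocess_for_calibration(img_hwc_uint8):
    """Calibration batches must go through the SAME transform; reusing the
    inference function is the simplest way to guarantee that."""
    return preprocess_for_inference(img_hwc_uint8)

# Sanity check: a random image must produce bit-identical tensors on both paths.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
a = preprocess_for_inference(img)
b = preprocess_for_calibration(img)
assert a.shape == (1, 3, 64, 64) and a.dtype == np.float32
assert np.array_equal(a, b)

# A channel-order slip in only one of the two paths (here simulated by
# reversing the channel axis, RGB -> BGR) skews the calibrated ranges and
# shows up as a color cast in the INT8 output:
swapped = preprocess_for_inference(img[..., ::-1])
assert not np.array_equal(a, swapped)
```

If the two paths cannot share one function, comparing their outputs on a few calibration images like this will catch a silent mismatch before an engine is ever built.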
Try calling
profile.set_shape(input_name, opt_shape, opt_shape, opt_shape)  # for a fixed shape
before config.add_optimization_profile(profile).
Also check your preprocessing code, or try the MinMax calibrator (trt.IInt8MinMaxCalibrator) instead of the entropy calibrator.
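The suggested ordering can be sketched as follows. This is only an illustration under assumed names: builder, network, config, and calibrator are the TensorRT objects from the poster's existing build script (trt.Builder, the parsed network, trt.IBuilderConfig, and an IInt8EntropyCalibrator2 or IInt8MinMaxCalibrator instance); the function name configure_int8_build is hypothetical.

```python
def configure_int8_build(builder, network, config, calibrator, input_name, opt_shape):
    """Sketch: attach a fixed-shape optimization profile and an INT8 calibrator.

    The INT8 builder flag itself is assumed to be set elsewhere,
    e.g. config.set_flag(trt.BuilderFlag.INT8).
    """
    profile = builder.create_optimization_profile()
    # min == opt == max pins the engine to a single fixed input shape.
    profile.set_shape(input_name, opt_shape, opt_shape, opt_shape)
    # The profile must be fully populated before it is attached to the config.
    config.add_optimization_profile(profile)
    # Calibration runs with the same shapes as the profile.
    config.set_calibration_profile(profile)
    config.int8_calibrator = calibrator
    return config
```

Keeping set_shape ahead of add_optimization_profile matters because the profile's shape constraints are what the builder validates when the profile is attached.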
Thanks a lot, I will try the MinMax calibrator. But don't network.get_input(0).shape = opt_shape and profile.set_shape(input_name, opt_shape, opt_shape, opt_shape) (for a fixed shape) serve the same purpose? The exported model information is as follows:
Environment
TensorRT Version: 10.0.1
NVIDIA GPU: RTX4090
NVIDIA Driver Version: 12.0
CUDA Version: 12.0
CUDNN Version: 8.2.0
Operating System: Linux interactive11554 5.11.0-27-generic #29 SMP Wed Aug 11 15:58:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Python Version (if applicable): 3.8.19