tensorrt error #19
Comments
Yep, after changing to tensorrt I got an error, but the example scripts work.
I also encountered the same issue.

```diff
diff --git a/src/streamdiffusion/acceleration/tensorrt/__init__.py b/src/streamdiffusion/acceleration/tensorrt/__init__.py
index e629567..90a6a07 100644
--- a/src/streamdiffusion/acceleration/tensorrt/__init__.py
+++ b/src/streamdiffusion/acceleration/tensorrt/__init__.py
@@ -139,6 +139,7 @@ def accelerate_with_tensorrt(
             unet_model,
             create_onnx_path("unet", onnx_dir, opt=False),
             create_onnx_path("unet", onnx_dir, opt=True),
+            engine_path=unet_engine_path,
             opt_batch_size=max_batch_size,
             **engine_build_options,
         )
@@ -151,6 +152,7 @@ def accelerate_with_tensorrt(
             vae_decoder_model,
             create_onnx_path("vae_decoder", onnx_dir, opt=False),
             create_onnx_path("vae_decoder", onnx_dir, opt=True),
+            engine_path=vae_decoder_engine_path,
             opt_batch_size=max_batch_size,
             **engine_build_options,
         )
@@ -162,6 +164,7 @@ def accelerate_with_tensorrt(
             vae_encoder_model,
             create_onnx_path("vae_encoder", onnx_dir, opt=False),
             create_onnx_path("vae_encoder", onnx_dir, opt=True),
+            engine_path=vae_encoder_engine_path,
             opt_batch_size=max_batch_size,
             **engine_build_options,
         )
```
The error `got multiple values for keyword argument 'opt_batch_size'` means `opt_batch_size` is being given more than once: once explicitly as `opt_batch_size=max_batch_size` and once inside `**engine_build_options`. Just delete one of them.
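For anyone unfamiliar with this error, here is a minimal sketch of how it arises; the function signature and option names below are simplified stand-ins, not the real streamdiffusion API:

```python
# Minimal reproduction of the duplicate-keyword TypeError, using a
# hypothetical compile function that mirrors the call shape above.
def compile_model(model, onnx_path, onnx_opt_path, engine_path,
                  opt_batch_size=1, **kwargs):
    print(f"building {engine_path} with opt_batch_size={opt_batch_size}")

engine_build_options = {"opt_batch_size": 2}  # already contains the key

# Passing the key both explicitly and via **engine_build_options raises:
# TypeError: compile_model() got multiple values for keyword argument 'opt_batch_size'
try:
    compile_model("unet", "unet.onnx", "unet.opt.onnx", "unet.engine",
                  opt_batch_size=2, **engine_build_options)
except TypeError as e:
    print(e)

# Dropping either the explicit keyword or the dict entry fixes it:
compile_model("unet", "unet.onnx", "unet.opt.onnx", "unet.engine",
              **engine_build_options)
```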
Sorry, but can you explain the solution? I've tried the default example from the README and still get this error: #61
Did you comment out this line: `opt_batch_size=max_batch_size,`?
`engine_path` is a bug too, but I got an error in `compile_vae_decoder` (`compile_unet` is fine): `RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[2, 4, 64, 64] to have 3 channels, but got 4 channels instead`
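That message says a convolution built for 3 input channels (the weight `[64, 3, 3, 3]` matches the VAE encoder's first RGB conv) is being fed a 4-channel, latent-shaped tensor. A minimal sketch that reproduces the same error outside of streamdiffusion:

```python
# Reproduce the channel-mismatch RuntimeError with a bare Conv2d:
# a layer built for 3-channel (RGB) input receives a 4-channel latent.
import torch
import torch.nn as nn

first_conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
latent_like = torch.randn(2, 4, 64, 64)  # 4 channels, like the tensor in the error

try:
    first_conv(latent_like)
except RuntimeError as e:
    print(e)  # ...expected input[2, 4, 64, 64] to have 3 channels, but got 4 channels instead
```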
Thank you, now I've got the `engine_path` error as you mentioned above.
How do you resolve it? I also encountered it: `TypeError: streamdiffusion.acceleration.tensorrt.compile_unet() got multiple values for keyword argument 'opt_batch_size'`
I have fixed the batch size and path errors, and now I am getting the same error as @8600862.

This appears to be caused by the VAE model class providing the wrong shape of sample input:

```python
def get_sample_input(self, batch_size, image_height, image_width):
    latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
    return torch.randn(
        batch_size,
        # FIX
        # 4,
        3,
        latent_height,
        latent_width,
        dtype=torch.float32,
        device=self.device,
    )
```

I have no idea if the model will actually work with the above change, because after fixing this there appears to be another issue with the model.
The tensor shape issue was also reported in #53. They submitted #53 with this fix, which is more likely to work:

```python
if not os.path.exists(vae_decoder_engine_path):
    # FIX: https://github.com/cumulo-autumn/StreamDiffusion/pull/54/files
    vae.forward = vae.decode
    compile_vae_decoder(
        vae,
        vae_decoder_model,
        create_onnx_path("vae_decoder", onnx_dir, opt=False),
        create_onnx_path("vae_decoder", onnx_dir, opt=True),
        # FIX: SEE: https://github.com/cumulo-autumn/StreamDiffusion/issues/19
        # opt_batch_size=max_batch_size,
        engine_path=vae_decoder_engine_path,
        **engine_build_options,
    )
```

However, I am still getting the "TensorRT does not support UINT8 types for intermediate tensors" error after fixing this. Worth mentioning that I had to tell TensorRT where to find CUDA before running, in case anyone else gets this issue.
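A short sketch of why rebinding `forward` helps, assuming a diffusers `AutoencoderKL` with the default Stable Diffusion-style channel layout (3-channel images, 4-channel latents): the ONNX export presumably traces `forward`, so pointing it at `decode` lets the traced graph accept the latent-shaped sample.

```python
# Sketch only: shows the shape expectations of forward() vs decode() on a
# small, randomly initialized AutoencoderKL (not the actual SD weights).
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL()  # default config: 3-channel images in, 4-channel latents

image = torch.randn(1, 3, 64, 64)   # forward() = encode -> decode, expects RGB
latent = torch.randn(1, 4, 64, 64)  # decode() expects a 4-channel latent

with torch.no_grad():
    vae.forward(image)   # OK: full round trip on an image
    vae.decode(latent)   # OK: decoder only, on a latent

    # Feeding a latent to the original forward() would hit the encoder's
    # 3-channel conv and raise the "expected ... to have 3 channels" error.
    vae.forward = vae.decode
    vae.forward(latent)  # OK now: the call goes straight to the decoder
```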
Turns out to be a bug in CUDA 11.8: NVIDIA/TensorRT#3124. After switching to CUDA 12.1 and making the fixes above (batch size, engine path, vae.forward), it works for me.
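If you are unsure which CUDA build your environment is on, here is a quick check; note it only reports the CUDA version PyTorch was built against, and a separately installed CUDA toolkit or TensorRT bundle may differ:

```python
# Print the CUDA version PyTorch was built with, to see whether you are on
# the affected 11.8 toolkit or a 12.x one.
import torch

print(torch.version.cuda)         # e.g. "11.8" or "12.1"
print(torch.cuda.is_available())  # confirms the GPU runtime is usable
```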
I fixed the bugs above (batch size, engine path, vae.forward); finally, I get this error: `AttributeError: module 'polygraphy.backend.trt.util' has no attribute 'get_bindings_per_profile'`
`def accelerate_with_tensorrt(` ---------------can work--------------------
I am also getting `TypeError: streamdiffusion.acceleration.tensorrt.compile_unet() got multiple values for keyword argument 'opt_batch_size'`. Any idea how to fix this?