Reuse engine for multiple consecutive runs #3868
Comments
Are you referring to our Stable-Diffusion sample?
Yes, I'm using the Stable-Diffusion pipeline from the samples in the demo folder.
The parameters used to init the pipeline and build the engines were the following:
It seems that something is wrong with loadResources. To make sure my own enhancements weren't adding any mess, I switched to simple txt2img, initialized like this:
then generate with this function:
Here I use SIZE2 because I want to test that I can change the input size (I set static_shape to False when the engines were built), and I get incorrect images in the second call to gen_t2p. If I comment out the line … I've also tried making a new function that only allocates memory for the new sizes, without any operations on events and the stream, but the result was the same:
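The re-allocation step being discussed can be sketched with a minimal pure-NumPy mock. This is not the demo's actual code: the `Engine` / `allocate_buffers` / `load_resources` names only loosely mirror the TensorRT demo/Diffusion pipeline, and the tensor names and shapes here are made up. The point is the pattern: every per-shape buffer must be recreated for the new resolution before each run, because reusing stale buffers from a previous size corrupts later outputs.

```python
import numpy as np

class Engine:
    """Toy stand-in for one TensorRT engine's I/O buffer bookkeeping."""
    def __init__(self):
        self.tensors = {}

    def allocate_buffers(self, shape_dict):
        # Re-create every I/O buffer for the new shapes; keeping stale
        # buffers from a previous resolution is what breaks later runs.
        self.tensors = {name: np.zeros(shape, dtype=np.float32)
                        for name, shape in shape_dict.items()}

def load_resources(engines, height, width):
    # Called once per target resolution, before inference, analogous to
    # the pipeline's loadResources(image_height, image_width, ...).
    latent_shape = (1, 4, height // 8, width // 8)  # SD latents are 1/8 scale
    for engine in engines.values():
        engine.allocate_buffers({"latent": latent_shape,
                                 "images": (1, 3, height, width)})

engines = {"unet": Engine(), "vae": Engine()}
load_resources(engines, 1024, 1024)   # first run at SIZE1
load_resources(engines, 768, 768)     # second run at SIZE2
print(engines["vae"].tensors["images"].shape)  # (1, 3, 768, 768)
```

If `load_resources` is skipped before the second, differently-sized run, the `vae` buffers would still hold the 1024x1024 shapes, which matches the symptom described above.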
I figured out that these lines in the code lead to the error:
In the function:
When I run it after the first inference, I get NaNs and Infs in some places in the engines' output tensors. The problem may also be in how the tensor addresses are loaded at inference time:
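The NaN/Inf check described above can be packaged into a small helper that scans every engine output after each inference, to localize which engine first produces bad values. The helper name `report_nonfinite` and the tensor names below are hypothetical; only plain NumPy is assumed.

```python
import numpy as np

def report_nonfinite(outputs):
    """Return the names of output tensors containing any NaN or Inf."""
    bad = []
    for name, arr in outputs.items():
        if not np.isfinite(arr).all():
            bad.append(name)
    return bad

# Example: one healthy output and one corrupted one.
outputs = {"text_embeddings": np.ones((1, 77, 768), dtype=np.float32),
           "latent": np.array([[1.0, np.nan, np.inf]], dtype=np.float32)}
print(report_nonfinite(outputs))  # ['latent']
```

Running a check like this after the first and second inference, per engine, narrows down whether the corruption starts in the CLIP, UNet, or VAE outputs.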
@zerollzeng Hello, if you need the code to reproduce, I can share it.
@KyriaAnnwyn Hello, are you using StableDiffusionPipeline from the diffusers library or from TensorRT's implementation? I'm wondering if you could share the modified code to integrate IP Adapter?
Description
I build engines for SDXL, then init the pipeline and do several runs. On the first run I get a good picture, but the second run gives an all-grey image.
I've added ControlNet and IP-Adapter to the original code.
The init function in my code:
Then I have a generate function, which uses the loaded engines to generate images:
In main:
pic1.png is good, but pic2.png is all grey
![image](https://private-user-images.githubusercontent.com/36903488/330693435-2daa6562-12ae-4ea6-b65b-9a081153f555.png)
It seems like I have to re-initialize something to get a correct result, but I don't do that anywhere.
In the original code, pipe.teardown() is called after generation, but this deletes the engines; to make another call we would need to load them again, so fast inference would not be possible.
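The intended usage pattern, keeping inference fast by deferring teardown until the process exits, can be sketched with a toy mock. The method names below (`load_engines`, `infer`, `teardown`) only loosely mirror the demo pipeline's API and are stand-ins, not the real implementation.

```python
class MockPipeline:
    """Toy stand-in for the demo pipeline: load once, infer many times,
    tear down once at the very end."""
    def __init__(self):
        self.engines_loaded = False
        self.runs = 0

    def load_engines(self):
        self.engines_loaded = True  # expensive step: do it once

    def infer(self, prompt):
        assert self.engines_loaded, "engines must stay loaded between runs"
        self.runs += 1
        return f"image_{self.runs}.png"

    def teardown(self):
        self.engines_loaded = False  # deletes engines: only at exit

pipe = MockPipeline()
pipe.load_engines()
outputs = [pipe.infer("a cat"), pipe.infer("a dog")]  # no teardown in between
pipe.teardown()
print(outputs)  # ['image_1.png', 'image_2.png']
```

The open question in this issue is precisely which per-run state (buffers, events, stream, tensor addresses) must be refreshed between the two `infer` calls so that the second output is not corrupted.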
Please help me solve this problem.
Environment
TensorRT Version: 10.0.0b6
NVIDIA GPU: A100
NVIDIA Driver Version: 550.54.15
CUDA Version: 11.8
CUDNN Version:
Operating System: Ubuntu 20.04
Python Version (if applicable): 3.10
Tensorflow Version (if applicable): -
PyTorch Version (if applicable): 2.2.1
Baremetal or Container (if so, version): -