
MaxWorkSpace #51

Closed
JadBatmobile opened this issue May 10, 2019 · 6 comments

Comments

@JadBatmobile

Hey Andres,

In the NetTRT.cpp code, the line:

_builder->setMaxWorkspaceSize(1 << size);

I assume this sets the amount of GPU memory allocated for the network? You use size = 32. What does that mean in this context?

If I wanted to deploy two models (two .uff files) on one GPU, how do you recommend I proceed?

@tano297
Member

tano297 commented May 10, 2019

That is a first attempt to get roughly 4 GB from your GPU. If your GPU is not big enough to allocate this, I try decreasing by halving each time: 2 GB, 1 GB, etc. I would recommend setting it to half of your usual free GPU memory per model.

TensorRT API docs for setMaxWorkspaceSize

@JadBatmobile
Author

Cool! Not to ask too much... to allocate half, would you do size = 15; (1 << size)?

@JadBatmobile
Author

Sorry, I'll ask again in a better way: when you use size = 32, (1 << size) evaluates to 1073741824... can you explain how this corresponds to 4 GB?

@tano297
Member

tano297 commented May 10, 2019

No, it would be 1 << 31. It is a bit shift, so 1 << 32 means 2^32, and 1 << 31 is 2^31, effectively half :)
That's why inside the loop I do size--;

You could also write 4294967296 and 2147483648 directly, but that is not as fancy, is it? 😛

@JadBatmobile
Author

Thank you, Andres!

@tano297
Member

tano297 commented May 10, 2019

My pleasure.

@tano297 tano297 closed this as completed May 10, 2019