multi gpu inference #19
Thanks for your great work. I tried to run inference on a machine with multiple GPUs. It detects all the GPUs (I also set n_gpus to the number of GPUs), and the dlib part works fine, but after that it halts just after printing "Model Created" and "Model Loaded".
Could I have some hints on how to run this on a multi-GPU machine? Thanks for your help!
Comments
Hi, I think inference is pretty fast with a single GPU as it is. Any reason you specifically want to use multi-GPU?
Hi @prajwalkr , I am trying to make a real-time chatbot for fun, but she replies slowly: each reply takes about 1 min and 20 sec :D
Hi @prajwalkr , when I run it on a multi-GPU machine, I observed that every GPU allocates the same amount of memory, whether there is one GPU or eight. E.g., when I run it on a single-GPU machine, it uses up 7 GB of GPU RAM; when I run the same thing on an 8-GPU machine, each GPU also uses up 7 GB.
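(For context: TensorFlow 1.x maps every visible GPU and pre-allocates most of its memory by default, even if the model only runs on one of them. A minimal sketch of how one might limit this, assuming the repo's Keras/TF1 setup; this is not code from the repo itself:)

```python
import os

# Hide all but GPU 0 *before* TensorFlow is imported; otherwise TF maps
# (and reserves memory on) every GPU it can see.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf
import keras.backend as K

# Also stop TF from pre-allocating nearly all memory on the visible GPU.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate on demand instead
K.set_session(tf.Session(config=config))
```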
@chikiuso I noticed the bot is following facial movements, while LipGAN on a single image only follows mouth movements. Are you using some other model as well? Also, how are you getting the speech input?
Hi @ak9250 , yes, I use another model as well; the speech input is normal TTS.
By default, TensorFlow uses up all available GPUs. Run it as:
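(a sketch of the idea, assuming the repo's usual batch_inference.py entry point; substitute your actual script and arguments:)

```bash
# Make only GPU 0 visible to TensorFlow for this process.
CUDA_VISIBLE_DEVICES=0 python batch_inference.py <your usual arguments>
```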
Are you running inference on a single static image or a video? A static image would be faster, as it does not have to do face detection on each frame.
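(To make the cost difference concrete, a rough sketch using dlib, which this repo uses for detection; the file paths and loop structure here are illustrative, not the repo's actual pipeline:)

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()

# Static image: the face is detected exactly once, and the same crop is
# reused for every audio chunk.
img = cv2.imread("face.jpg")  # illustrative path
faces = detector(img, 1)

# Video: detection has to run on every frame, which dominates runtime.
cap = cv2.VideoCapture("input.mp4")  # illustrative path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    faces = detector(frame, 1)  # once per frame -> much slower
cap.release()
```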
Closing due to inactivity. Please re-open if needed.