Only 1 Tesla T4 GPU is used for 25 streams and encoding can't keep up on 2.6.2 #5590
Comments
Along with the nvidia-smi output, here is the top output:
I reduced the stream count to 15 and am still having the same issue.
Thank you for the issue. It seems that Ant Media Server cannot utilize all of the GPUs for some reason. Regards
Hi @alfred-stokespace The problem has been resolved and will soon be merged into the master branch for the next release. If you prefer not to wait, we can offer you an early build snapshot that includes the fix.
Hi guys, please check it and feel free to re-open if it does not work for you.
Short description
I have 25 inbound RTMP streams (1080p), and only some of them can sustain a broadcast status of around
Broadcasting 1.00x
The rest drop to 0.01x. I have evidence that only one of the four T4 GPUs on this EC2 instance is being engaged.
Environment
Steps to reproduce
Check the nvidia-smi output.
Expected behavior
Same performance as 2.4.3 (which is able to handle the exact same camera sources and rendition count/type)
All 4 GPUs are utilized on 2.4.3, and 2.4.3 keeps all 25 streams at 99 to 101 percent broadcast status.
Actual behavior
Only a fraction of the streams can keep up, and nvidia-smi shows the following...
Logs
will send to support upon request.
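Since only one GPU shows load, it may help to attach a per-GPU utilization snapshot alongside the logs. A minimal sketch of how that could be collected (the helper names are my own; it assumes nvidia-smi is on PATH and uses its standard --query-gpu CSV output):

```python
import subprocess

# Standard nvidia-smi query properties; one CSV row is emitted per GPU.
QUERY_FIELDS = "index,name,utilization.gpu,memory.used"


def parse_gpu_csv(csv_text: str) -> list[dict]:
    """Parse `nvidia-smi --query-gpu=... --format=csv,noheader` output
    into a list of one dict per GPU."""
    rows = []
    for line in csv_text.strip().splitlines():
        idx, name, util, mem = [field.strip() for field in line.split(",")]
        rows.append({"index": int(idx), "name": name,
                     "gpu_util": util, "mem_used": mem})
    return rows


def snapshot() -> list[dict]:
    """Run nvidia-smi once and return the parsed per-GPU rows."""
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY_FIELDS}",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_csv(out)


if __name__ == "__main__":
    # On the instance described above this should print four Tesla T4 rows;
    # with the reported bug, only one of them would show non-zero load.
    for gpu in snapshot():
        print(gpu)
```

Running this while the 25 streams are live, on both 2.4.3 and 2.6.2, would make the single-GPU behavior easy to compare side by side.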