ROCm support #545
base: main
Conversation
Can't comment on the build instructions as I don't operate in an Ubuntu environment. However, with the new changes, the plugin builds successfully on my machine. Upon installation, OBS loads the plugin and evidently makes use of the GPU when "GPU - TensorRT" is selected in the settings (a label I would recommend renaming to avoid confusion). With the SINet, Mediapipe, and PPHumanSeg models, the plugin runs perfectly in my environment; I can't discern any difference when switching between running over the CPU or the GPU.

Unfortunately, the remaining segmentation models don't work. The Selfie Segmentation model produces garbage output, and my OBS application outright crashes with a "memory access fault" in the HIP backend when I select either Robust Video Matting or TCMonoDepth.

My environment:
@payom Thank you for your feedback! It would be very helpful for us if you posted the whole part of the OBS log from when OBS crashed. You can get OBS logs and crash reports from the OBS Help menu.
Here's the log from running OBS with verbose logging enabled. Unfortunately, the crash doesn't appear to have been captured in the log. With the MIOPEN_ENABLE_LOGGING_CMD flag set and OBS launched from my terminal, I also get this printout, which is what I used to identify what crashed in my earlier message.
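For anyone trying to reproduce this, the capture boiled down to setting MIOpen's command-logging variable before launching OBS from the same shell. A minimal sketch (the `obs --verbose` invocation is shown as a comment; adjust to however you launch OBS):

```shell
# Enable MIOpen command logging so each MIOpen kernel invocation is
# printed to the terminal, which helps pinpoint where the HIP backend
# faults.
export MIOPEN_ENABLE_LOGGING_CMD=1

# Then launch OBS from this same shell, e.g.:
#   obs --verbose

# Confirm the variable is set in the current environment.
echo "MIOPEN_ENABLE_LOGGING_CMD=$MIOPEN_ENABLE_LOGGING_CMD"
```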
@royshil How should we implement ROCm support?
I think we need to research the ROCm execution provider outside of the plugin to see why it behaves this way with these particular models and not with others. The other option would be to switch away from ONNX Runtime to a different neural-net framework that has more seamless support for the various accelerator vendors (Nvidia, AMD, Intel, DirectX, etc.).
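One way to do that research outside the plugin is to load a failing model (e.g. Robust Video Matting or TCMonoDepth) directly through ONNX Runtime's ROCm execution provider and see whether the memory access fault reproduces with no OBS involvement. A hedged sketch, assuming an `onnxruntime` build with ROCm support is installed; the model path, input name, and input shape are placeholders, not the plugin's actual values:

```python
def run_isolated(model_path, input_name, shape):
    """Run a single inference through the ROCm EP, outside the plugin.

    If the HIP memory access fault lives in the execution provider
    itself, it should reproduce here too.
    """
    import numpy as np
    import onnxruntime as ort  # needs a ROCm-enabled onnxruntime build

    sess = ort.InferenceSession(
        model_path,
        # Fall back to CPU if the ROCm EP is unavailable, so the same
        # script can sanity-check the model on a working backend.
        providers=["ROCMExecutionProvider", "CPUExecutionProvider"],
    )
    x = np.random.rand(*shape).astype(np.float32)
    return sess.run(None, {input_name: x})


# Hypothetical invocation -- substitute the real model and input spec:
# outputs = run_isolated("rvm_mobilenetv3.onnx", "src", (1, 3, 720, 1280))
```

Comparing the two providers' outputs on the same random input would also separate the "garbage output" case (Selfie Segmentation) from the outright-crash case.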
I've added documentation for the ROCm-supported build.