Replies: 3 comments 5 replies
-
cc @lzhangzz
-
@joshuafc we have plans to support passing external device memory to the API, but it's likely to be ready in v0.9 (the release after next)
-
@lzhangzz @RunningLeon I know this post is 3 years old. I wonder if you still have a plan to add support for passing external GPU device memory to the API.
-
In some cases we already have the image in GPU memory, for example after calling NVDEC to decode an H.264 stream, or nvJPEG to decode a JPEG.
But currently the C++ inference can only call `Apply` with a `Mat` object, which cannot specify where the image data resides (CPU, GPU 0, or GPU 1?). In `mmdeploy/csrc/mmdeploy/apis/c/mmdeploy/common.cpp:mmdeploy_common_create_input`, I found that an `mmdeploy::Mat` is created with the `device` parameter hardcoded to 'cpu'. Is there any plan to expose the `device` parameter in the high-level API, such as `Detector::Apply`, so that at inference time we can specify which memory space holds the image?