Hi, I'd like to ask a question. When I run inference with the CUDA build of libtorch, GPU memory keeps growing until it runs out. What could be causing this? Below is the method I call; it runs inference once for each incoming image. Both the model and the data are moved to CUDA, and after inference the results are moved back to the CPU. Calling c10::cuda::CUDACachingAllocator::emptyCache() doesn't seem to help.
A possible cause: if this function is called frequently, it moves the model from CPU to GPU and then back to CPU on every call. When a PyTorch model is moved from the GPU back to the CPU, the GPU memory does not appear to be fully released.
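A minimal sketch of the suggested fix, assuming a TorchScript model: load the module and move it to CUDA once (e.g. in a constructor), then keep it resident on the GPU and only move the per-image input and output across the PCIe bus. The class and method names (`Detector`, `infer`) are illustrative, not taken from the issue; only the libtorch calls themselves (`torch::jit::load`, `Module::to`, `forward`, `torch::NoGradGuard`) are real API.

```cpp
#include <string>
#include <torch/script.h>

// Hypothetical wrapper; the point is that the module is transferred to the
// GPU exactly once and never shuttled back to the CPU between calls.
class Detector {
public:
    explicit Detector(const std::string& model_path)
        : module_(torch::jit::load(model_path)) {
        module_.to(torch::kCUDA);  // one-time CPU -> GPU transfer
        module_.eval();
    }

    torch::Tensor infer(const torch::Tensor& image_cpu) {
        torch::NoGradGuard no_grad;               // don't build an autograd graph
        auto input = image_cpu.to(torch::kCUDA);  // per call: only the data moves
        auto output = module_.forward({input}).toTensor();
        return output.to(torch::kCPU);            // copy the result back; GPU
                                                  // temporaries go out of scope and
                                                  // return to the caching allocator
    }

private:
    torch::jit::script::Module module_;
};
```

Note that `torch::NoGradGuard` matters on its own: without it, each forward pass keeps intermediate activations alive for backpropagation, which also makes GPU memory grow across calls.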