I'm currently facing an issue and would greatly appreciate any assistance you can offer.
I have a Paddle model that I'm serving through a Docker image based on version 2.5.1 of the paddlepaddle/paddle image. On one workstation it works well with the 'use_gpu' attribute set to either True or False. On another workstation, however, the model's outputs are incorrect when it runs on GPU. I have attached the model's results in both situations.
CPU result: [screenshot attached]
GPU result: [screenshot attached]
It appears that the computation of the model is incorrect when it's running on GPU. It's important to note that I'm not encountering any errors or warnings, just inaccurate results.
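When there is no error or warning, it can help to quantify the divergence numerically rather than eyeballing decoded output. Below is a minimal sketch, assuming you can dump the raw output tensors of both runs as NumPy arrays (the function name, file handling, and tolerances are my own assumptions, not part of Paddle's API):

```python
import numpy as np

def compare_outputs(cpu_out: np.ndarray, gpu_out: np.ndarray,
                    rtol: float = 1e-3, atol: float = 1e-4) -> float:
    """Return max |cpu - gpu| and report whether it exceeds the tolerance.

    Small differences (~1e-5) are normal float reordering on GPU;
    large ones point at a genuinely broken computation.
    """
    diff = float(np.max(np.abs(cpu_out.astype(np.float64)
                               - gpu_out.astype(np.float64))))
    if not np.allclose(cpu_out, gpu_out, rtol=rtol, atol=atol):
        print(f"Outputs diverge: max |cpu - gpu| = {diff:.6g}")
    return diff

# Synthetic stand-ins for the detection maps:
cpu = np.ones((2, 3), dtype=np.float32)
gpu = cpu + 0.5  # a gap this large indicates broken GPU kernels
print(compare_outputs(cpu, gpu))  # 0.5
```

Running this on the saved CPU and GPU tensors tells you whether the error appears already in the raw network output or only in the post-processing.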
Some additional context:
This issue did not occur with version 2.4 of the Paddle library, but I need to upgrade my Paddle version.
I have attempted to make the environments of the two workstations as similar as possible, but because they have different GPUs (an RTX 3090 Ti and a Tesla P40), complete parity is not feasible. And although everything runs in Docker, I'm not sure how the host environment affects the model's results inside the container.
The images above show the result of the text detection model, but I've had the same experience with the text recognition and layout models as well.
What could be the root cause of this inconsistency?
Thank you in advance for any insights or suggestions you can provide.
Unfortunately, updating Paddle to version 2.5.2 doesn't fix the problem. I've added a more detailed description on the issue that Vvsmile mentioned.
It's indeed very strange. Maybe the CUDA version in the Docker image does not match the one the installed Paddle package was built against.
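One quick way to check for such a mismatch from inside the container (assuming `nvcc` and `python` are on the path; `paddle.version.cuda()` and `paddle.version.cudnn()` report the versions the wheel was built against):

```shell
# CUDA toolkit shipped inside the image
nvcc --version

# CUDA/cuDNN versions the installed Paddle wheel was built against
python -c "import paddle; print(paddle.version.cuda(), paddle.version.cudnn())"

# Driver-side CUDA version exposed by the host
nvidia-smi
```

If the toolkit, wheel, and driver report incompatible versions, silent miscomputation on GPU is a plausible outcome.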
As a workaround, you can try running GPU inference through the official image; see https://www.paddlepaddle.org.cn/en for the full list of tags.
For example, if your environment is CUDA 11.7, use the following command:
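The command itself did not survive formatting. A representative pull-and-run for a CUDA 11.7 build might look like this (the exact image tag is an assumption; check the Paddle site for the current one):

```shell
# Tag is an assumption -- verify it against the official image list.
docker pull registry.baidubce.com/paddlepaddle/paddle:2.5.1-gpu-cuda11.7-cudnn8.4-trt8.4
docker run --gpus all -it \
    registry.baidubce.com/paddlepaddle/paddle:2.5.1-gpu-cuda11.7-cudnn8.4-trt8.4 /bin/bash
```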