PaddleSharp

💗 .NET wrapper for the PaddleInference C API; includes PaddleOCR and PaddleDetection, and supports Windows (x64), NVIDIA GPU, and Linux (Ubuntu 20.04 x64).

PaddleOCR supports on-demand download of models for 14 OCR languages, rotated text angle detection, and 180-degree text detection.

PaddleDetection supports the PPYolo and PicoDet detection models.

NuGet Packages/Docker Images

| NuGet Package | Description |
| --- | --- |
| Sdcb.PaddleInference | Paddle Inference C API .NET binding |
| Sdcb.PaddleInference.runtime.win64.openblas | Paddle Inference native windows-x64-openblas binding |
| Sdcb.PaddleInference.runtime.win64.mkl | Paddle Inference native windows-x64-mkldnn binding |
| Sdcb.PaddleInference.runtime.win64.cuda10_cudnn7 | Paddle Inference native windows-x64 (CUDA 10/cuDNN 7.x) binding |
| Sdcb.PaddleInference.runtime.win64.cuda11_cudnn8_tr7 | Paddle Inference native windows-x64 (CUDA 11/cuDNN 8.0/TensorRT 7) binding |
| Sdcb.PaddleOCR | PaddleOCR library (based on Sdcb.PaddleInference) |
| Sdcb.PaddleOCR.KnownModels | Helper to download PaddleOCR models |
| Sdcb.PaddleDetection | PaddleDetection library (based on Sdcb.PaddleInference) |

Note: Linux does not need a native binding NuGet package like Windows does (e.g. Sdcb.PaddleInference.runtime.win64.mkl); instead, you can/should base your development on one of the following Docker images:

| Docker Image | Description |
| --- | --- |
| sdflysha/dotnet6-focal-paddle2.2.2 | PaddleInference 2.2.2, OpenCV 4.5.5, based on the official Ubuntu 20.04 .NET 6 Runtime image |
| sdflysha/dotnet6sdk-focal-paddle2.2.2 | PaddleInference 2.2.2, OpenCV 4.5.5, based on the official Ubuntu 20.04 .NET 6 SDK image |

Usage
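
A minimal end-to-end sketch of running OCR on an image, assuming the Sdcb.PaddleOCR and Sdcb.PaddleOCR.KnownModels packages listed above; the type and member names here (KnownOCRModel, PaddleOcrAll, Run, and the namespaces) follow the 2.x-era samples and may differ in your installed version:

```csharp
// Sketch only: API names assumed from Sdcb.PaddleOCR 2.x-era samples; verify against your package version.
using OpenCvSharp;                 // from the OpenCvSharp4 packages
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.KnownModels;  // assumed namespace of the KnownModels helper

// Download the PP-OCRv2 model files on demand (models for 14 languages are available).
OCRModel model = KnownOCRModel.PPOcrV2;
await model.EnsureAll();

// Detection + angle classification + recognition pipeline.
using PaddleOcrAll all = new PaddleOcrAll(model.RootDirectory, model.KeyPath)
{
    AllowRotateDetection = true,      // allow detection of rotated text
    Enable180Classification = false,  // skip 180-degree text classification
};

using Mat src = Cv2.ImRead("sample.png");
Console.WriteLine(all.Run(src).Text);  // print all recognized text
```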

FAQ

Why does my code run fine on my Windows machine but throw DllNotFoundException on another machine?

  1. Please ensure the latest Visual C++ Redistributable is installed on Windows (it is typically installed automatically if Visual Studio is installed). Otherwise it will fail with the following error (Windows only):

    DllNotFoundException: Unable to load DLL 'paddle_inference_c' or one of its dependencies (0x8007007E)
    
  2. Many older CPUs do not support AVX instructions. Please ensure your CPU supports AVX, or download the x64-noavx-openblas DLLs and disable MKL-DNN: PaddleConfig.Defaults.UseMkldnn = false; (see the sketch after this list).
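
For example (a minimal sketch; the PaddleConfig.Defaults.UseMkldnn flag is quoted from the item above, and the Sdcb.PaddleInference namespace is assumed):

```csharp
using Sdcb.PaddleInference;

// Disable MKL-DNN globally before any predictor is created,
// e.g. when running on a non-AVX CPU with the x64-noavx-openblas DLLs.
PaddleConfig.Defaults.UseMkldnn = false;

// ... then build your OCR/detection pipeline as usual.
```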

How to enable GPU?

Enabling GPU support can significantly improve throughput and lower CPU usage.

Steps to use the GPU on Windows:

  1. (For Windows) Install the package Sdcb.PaddleInference.runtime.win64.cuda11_cudnn8_tr7 instead of Sdcb.PaddleInference.runtime.win64.mkl; do not install both.
  2. Install CUDA from NVIDIA and add it to PATH (or LD_LIBRARY_PATH on Linux).
  3. Install cuDNN from NVIDIA and add it to PATH (or LD_LIBRARY_PATH on Linux).
  4. Install TensorRT from NVIDIA and add it to PATH (or LD_LIBRARY_PATH on Linux).

You can refer to this blog post for GPU usage on Windows: 关于PaddleSharp GPU使用 常见问题记录 (notes on common PaddleSharp GPU issues).

If you're using Linux, you need to build your own OpenCvSharp4 environment following the Docker build scripts, and complete the CUDA/cuDNN/TensorRT configuration yourself.

After these steps are completed, set PaddleConfig.Defaults.UseGpu = true at the beginning of your code (see the sketch below) and enjoy 😁.
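
A minimal sketch of that last step (the UseGpu flag is the setting quoted above; the Sdcb.PaddleInference namespace is assumed):

```csharp
using Sdcb.PaddleInference;

// Enable GPU inference globally before creating any predictor.
// Requires the cuda11_cudnn8_tr7 runtime package plus CUDA/cuDNN/TensorRT on PATH.
PaddleConfig.Defaults.UseGpu = true;

// ... build your PaddleOCR/PaddleDetection pipeline afterwards; it now runs on the GPU.
```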

Thanks & Sponsors

Contact

QQ group for C#/.NET computer vision technical discussion (C#/.NET计算机视觉技术交流群): 579060605
