PaddlePaddle: PArallel Distributed Deep LEarning

Welcome to the PaddlePaddle GitHub repository.

PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible, and scalable deep learning platform, originally developed by Baidu scientists and engineers to apply deep learning to many products at Baidu.

Our vision is to enable deep learning for everyone via PaddlePaddle. Please refer to our release announcement to track the latest features of PaddlePaddle.

Latest PaddlePaddle Version: Fluid


  • Flexibility

    PaddlePaddle supports a wide range of neural network architectures and optimization algorithms. It is easy to configure complex models, such as a neural machine translation model with an attention mechanism or complex memory connections.

  • Efficiency

    To unleash the power of heterogeneous computing resources, optimization occurs at different levels of PaddlePaddle, including computing, memory, architecture, and communication. Some examples:

    • Optimized math operations through SSE/AVX intrinsics, BLAS libraries (e.g. MKL, OpenBLAS, cuBLAS), or customized CPU/GPU kernels.
    • Optimized CNN networks through the MKL-DNN library.
    • Highly optimized recurrent networks that can handle variable-length sequences without padding.
    • Optimized local and distributed training for models with high dimensional sparse data.
  • Scalability

    With PaddlePaddle, it is easy to use many CPUs/GPUs and machines to speed up your training. PaddlePaddle can achieve high throughput and performance via optimized communication.

  • Connected to Products

    In addition, PaddlePaddle is designed to be easily deployable. At Baidu, it has been deployed into products and services with vast numbers of users, including ad click-through rate (CTR) prediction, large-scale image classification, optical character recognition (OCR), search ranking, computer virus detection, and recommendation. It is widely used in products at Baidu and has had a significant impact. We hope you will explore PaddlePaddle's capabilities to make an impact on your own products.
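To illustrate the padding-free handling of variable-length sequences mentioned above, here is a minimal, hypothetical pure-Python sketch of an offset-based packing scheme, similar in spirit to Fluid's LoDTensor. The `pack` and `unpack` names are illustrative only and not part of the PaddlePaddle API.

```python
def pack(seqs):
    """Concatenate variable-length sequences into one flat list,
    recording the start offset of each sequence (no padding needed)."""
    data, offsets = [], [0]
    for s in seqs:
        data.extend(s)
        offsets.append(len(data))
    return data, offsets

def unpack(data, offsets):
    """Recover the original sequences by slicing between offsets."""
    return [data[offsets[i]:offsets[i + 1]] for i in range(len(offsets) - 1)]

seqs = [[1, 2, 3], [4], [5, 6]]
data, offsets = pack(seqs)
print(data)     # [1, 2, 3, 4, 5, 6]
print(offsets)  # [0, 3, 4, 6]
assert unpack(data, offsets) == seqs
```

Because the flat buffer contains no padding, memory use and compute scale with the total number of tokens rather than with `batch_size * max_sequence_length`.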


Installation

It is recommended to check out the Docker installation guide before looking into the build-from-source guide.


Documentation

We provide English and Chinese documentation.

Ask Questions

You are welcome to submit questions and bug reports as GitHub Issues.

Copyright and License

PaddlePaddle is provided under the Apache-2.0 license.