PArallel Distributed Deep LEarning (PaddlePaddle core framework: high-performance single-machine and distributed training, and cross-platform deployment)



Welcome to the PaddlePaddle GitHub.

PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible, and scalable deep learning platform, originally developed by Baidu scientists and engineers to apply deep learning to many products at Baidu.

Our vision is to enable deep learning for everyone via PaddlePaddle. Please refer to our release announcement to track the latest features of PaddlePaddle.




Latest PaddlePaddle Release: Fluid 1.2.0

Install the latest stable release:

# Linux CPU
pip install paddlepaddle
# Linux GPU, CUDA 9 + cuDNN 7
pip install paddlepaddle-gpu
# Linux GPU, CUDA 8 + cuDNN 7
pip install paddlepaddle-gpu==1.2.0.post87
# Linux GPU, CUDA 8 + cuDNN 5
pip install paddlepaddle-gpu==1.2.0.post85

# For installation on other platforms, refer to
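The `post` suffix encodes the CUDA/cuDNN pairing of each GPU wheel. As a quick reference, the mapping above can be captured in a small helper (`paddle_wheel` is a hypothetical name for illustration; it is not part of pip or PaddlePaddle):

```python
def paddle_wheel(cuda=None, cudnn=None):
    """Map a machine's CUDA/cuDNN versions to the 1.2.0 pip package spec.

    Hypothetical helper, mirroring the install commands listed above.
    """
    if cuda is None:
        return "paddlepaddle"                    # CPU-only build
    if (cuda, cudnn) == (9, 7):
        return "paddlepaddle-gpu"                # default GPU wheel
    if (cuda, cudnn) == (8, 7):
        return "paddlepaddle-gpu==1.2.0.post87"  # CUDA 8 + cuDNN 7
    if (cuda, cudnn) == (8, 5):
        return "paddlepaddle-gpu==1.2.0.post85"  # CUDA 8 + cuDNN 5
    raise ValueError("no prebuilt 1.2.0 wheel for this combination")
```

For example, `paddle_wheel(8, 7)` returns the spec to pass to `pip install` on a CUDA 8 / cuDNN 7 machine.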



  • Flexibility

    PaddlePaddle supports a wide range of neural network architectures and optimization algorithms. It is easy to configure complex models, such as a neural machine translation model with an attention mechanism or complex memory connections.

  • Efficiency

    To unleash the power of heterogeneous computing resources, optimization occurs at different levels of PaddlePaddle, including computing, memory, architecture, and communication. Some examples:

    • Optimized math operations through SSE/AVX intrinsics, BLAS libraries (e.g., MKL, OpenBLAS, cuBLAS), or customized CPU/GPU kernels.
    • Optimized CNNs through the MKL-DNN library.
    • Highly optimized recurrent networks that handle variable-length sequences without padding.
    • Optimized local and distributed training for models with high-dimensional sparse data.
  • Scalability

    With PaddlePaddle, it is easy to use many CPUs/GPUs and machines to speed up your training. PaddlePaddle can achieve high throughput and performance via optimized communication.

  • Connected to Products

    In addition, PaddlePaddle is designed to be easily deployable. At Baidu, PaddlePaddle has been deployed into products and services with a vast number of users, including ad click-through rate (CTR) prediction, large-scale image classification, optical character recognition (OCR), search ranking, computer virus detection, and recommendation, where it has achieved significant impact. We hope you can also explore the capability of PaddlePaddle to make an impact on your own products.
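The padding-free handling of variable-length sequences mentioned above relies on a compact batch layout: sequences are concatenated into one flat buffer with per-sequence offsets, in the spirit of PaddlePaddle's LoD (level-of-detail) tensors. A minimal plain-Python sketch of the idea (conceptual only, not PaddlePaddle's actual implementation):

```python
def pack(sequences):
    """Concatenate variable-length sequences into a flat list plus offsets."""
    flat, offsets = [], [0]
    for seq in sequences:
        flat.extend(seq)
        offsets.append(len(flat))  # offsets[i]:offsets[i+1] spans sequence i
    return flat, offsets

def unpack(flat, offsets):
    """Recover the original sequences from the flat buffer and offsets."""
    return [flat[offsets[i]:offsets[i + 1]] for i in range(len(offsets) - 1)]

seqs = [[1, 2, 3], [4], [5, 6]]
flat, offsets = pack(seqs)
# flat == [1, 2, 3, 4, 5, 6]; offsets == [0, 3, 4, 6]
assert unpack(flat, offsets) == seqs
```

Because no padding tokens are inserted, compute and memory scale with the total number of real tokens rather than with batch size times the longest sequence.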




It is recommended to read this document on our website.




We provide English and Chinese documentation.



Ask Questions

You are welcome to submit questions and bug reports as GitHub issues.



Copyright and License

PaddlePaddle is provided under the Apache-2.0 license.

