5 changes: 0 additions & 5 deletions _typos.toml
@@ -28,17 +28,12 @@ datas = "datas"
feeded = "feeded"

# These words need to be fixed
Learing = "Learing"
Operaton = "Operaton"
Optimizaing = "Optimizaing"
Optimzier = "Optimzier"
Setment = "Setment"
Simle = "Simle"
Sovler = "Sovler"
libary = "libary"
matrics = "matrics"
metrices = "metrices"
mutbale = "mutbale"
occurence = "occurence"
opeartor = "opeartor"
opeartors = "opeartors"
4 changes: 2 additions & 2 deletions docs/api/paddle/linalg/svd_cn.rst
@@ -26,9 +26,9 @@ svd
返回
::::::::::::

- Tensor U,奇异值分解的 U 矩阵。如果 full_matrics 设置为 False,则 Shape 为 ``[*, M, K]``,如果 full_metrices 设置为 True,那么 Shape 为 ``[*, M, M]``。其中 K 为 M 和 N 的最小值。
- Tensor U,奇异值分解的 U 矩阵。如果 full_matrices 设置为 False,则 Shape 为 ``[*, M, K]``,如果 full_matrices 设置为 True,那么 Shape 为 ``[*, M, M]``。其中 K 为 M 和 N 的最小值。
- Tensor S,奇异值向量,Shape 为 ``[*, K]`` 。
- Tensor VH,奇异值分解的 VH 矩阵。如果 full_matrics 设置为 False,则 Shape 为 ``[*, K, N]``,如果 full_metrices 设置为 True,那么 Shape 为 ``[*, N, N]``。其中 K 为 M 和 N 的最小值。
- Tensor VH,奇异值分解的 VH 矩阵。如果 full_matrices 设置为 False,则 Shape 为 ``[*, K, N]``,如果 full_matrices 设置为 True,那么 Shape 为 ``[*, N, N]``。其中 K 为 M 和 N 的最小值。

代码示例
::::::::::
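The code example referenced by this heading is collapsed in the diff view. As a reading aid only (not the documented example), here is a minimal sketch of the shapes described above, assuming a small input with M=3, N=2 and hence K=2:

```python
import paddle

x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # shape [M, N] = [3, 2]
u, s, vh = paddle.linalg.svd(x, full_matrices=False)
print(u.shape)   # [3, 2] -> [M, K], K = min(M, N)
print(s.shape)   # [2]    -> [K]
print(vh.shape)  # [2, 2] -> [K, N]
```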
2 changes: 1 addition & 1 deletion docs/api/paddle/utils/cpp_extension/load_cn.rst
@@ -29,7 +29,7 @@ load
from paddle.utils.cpp_extension import load

custom_op_module = load(
name="op_shared_libary_name", # 生成动态链接库的名称
name="op_shared_library_name", # 生成动态链接库的名称
sources=['relu_op.cc', 'relu_op.cu'], # 自定义 OP 的源码文件列表
extra_cxx_cflags=['-g', '-w'], # 可选,指定编译。cc/.cpp 文件时额外的编译选项
extra_cuda_cflags=['-O2'], # 可选,指定编译。cu 文件时额外的编译选项
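For context, `load` returns a Python module that exposes the compiled custom operators as callables. A hedged usage sketch follows; the operator name `relu` is assumed from the `relu_op.cc`/`relu_op.cu` source names and is illustrative, not taken from the documented example:

```python
import paddle

x = paddle.randn([4, 10], dtype='float32')
# Call the custom operator through the module returned by load();
# the attribute name matches the registered op and is assumed here.
out = custom_op_module.relu(x)
```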
2 changes: 1 addition & 1 deletion docs/design/modules/optimizer.md
@@ -38,7 +38,7 @@ In this design, we propose a high-level API that automatically derives the optim
2. Users create a certain kind of Optimizer with some argument.

```python
optimizer = AdagradOptimizer(learing_rate=0.001)
optimizer = AdagradOptimizer(learning_rate=0.001)
```

3. Users use the optimizer to `minimize` a certain `cost` through updating parameters in parameter_list.
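The step-3 snippet is collapsed in this diff. A hedged sketch of what the design text describes (the names `cost` and `parameter_list` follow the surrounding prose; the exact return value is not shown in the excerpt):

```python
optimizer = AdagradOptimizer(learning_rate=0.001)
# minimize() derives the backward pass and the parameter-update operators
# for every parameter in parameter_list.
update_ops = optimizer.minimize(cost, parameter_list=parameter_list)
```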
8 changes: 4 additions & 4 deletions docs/design/phi/kernel_migrate_cn.md
@@ -155,10 +155,10 @@ void LogSoftmaxKernel(const Context& dev_ctx,
| `framework::DenseTensor` | `DenseTensor` |
| 模板参数 `DeviceContext` | 模板参数 `Context` |
| `platform::XXXDeviceContext` | `XXXContext` |
| `out->mutbale_data(ctx.GetPlace()/place)` | `dev_ctx.template Alloc(out)` |
| `auto* ptr = out->mutbale_data()` | `auto* ptr = out->data()` |
| `out->mutbale_data(dims, place)` | `out->Resize(dims); dev_ctx.template Alloc(out)` |
| `out->mutbale_data(place, dtype)` | `dev_ctx.Alloc(out, dtype)` |
| `out->mutable_data(ctx.GetPlace()/place)` | `dev_ctx.template Alloc(out)` |
| `auto* ptr = out->mutable_data()` | `auto* ptr = out->data()` |
| `out->mutable_data(dims, place)` | `out->Resize(dims); dev_ctx.template Alloc(out)` |
| `out->mutable_data(place, dtype)` | `dev_ctx.Alloc(out, dtype)` |
| `platform::errors::XXX` | `phi::errors::XXX` |
| `platform::float16/bfloat16/complex64/complex128` | `dtype::float16/bfloat16/complex64/complex128` |
| `framework::Eigen***` | `Eigen***` |
8 changes: 4 additions & 4 deletions docs/design/phi/kernel_migrate_en.md
@@ -155,10 +155,10 @@ Secondly, it is necessary to replace some of the types or functions that were on
| `framework::DenseTensor` | `DenseTensor` |
| template parameter `DeviceContext` | template parameter `Context` |
| `platform::XXXDeviceContext` | `XXXContext` |
| `out->mutbale_data(ctx.GetPlace()/place)` | `dev_ctx.template Alloc(out)` |
| `auto* ptr = out->mutbale_data()` | `auto* ptr = out->data()` |
| `out->mutbale_data(dims, place)` | `out->Resize(dims); dev_ctx.template Alloc(out)` |
| `out->mutbale_data(place, dtype)` | `dev_ctx.Alloc(out, dtype)` |
| `out->mutable_data(ctx.GetPlace()/place)` | `dev_ctx.template Alloc(out)` |
| `auto* ptr = out->mutable_data()` | `auto* ptr = out->data()` |
| `out->mutable_data(dims, place)` | `out->Resize(dims); dev_ctx.template Alloc(out)` |
| `out->mutable_data(place, dtype)` | `dev_ctx.Alloc(out, dtype)` |
| `platform::errors::XXX` | `phi::errors::XXX` |
| `platform::float16/bfloat16/complex64/complex128` | `dtype::float16/bfloat16/complex64/complex128` |
| `framework::Eigen***` | `Eigen***` |
4 changes: 2 additions & 2 deletions docs/eval/evaluation_of_docs_system.md
@@ -195,7 +195,7 @@ TensorFlow 的文档规划,比较直接地匹配了本文所介绍的分类标
- Model Understanding with Captum
- Learning PyTorch
- Deep Learning with PyTorch: A 60 Minute Blitz
- Learing PyTorch with Examples
- Learning PyTorch with Examples
- What is torch.nn really?
- Visualizing Models, Data, and Training with TensorBoard
- Image and Video
@@ -547,7 +547,7 @@ MindSpore 的有自己独立的文档分类标准和风格,所以硬套本文
| ---------------------------- | ------------------------------------------------------------ | ---- | ------------------------------------------------------------ | ---- | ------------------------------------------------------------ | ---- | ------------------------------------------------------------ | ------ |
| 基本数据(Tensor)和基本算子 | Tensors Variables Tensor slicing Ragged tensor Sparse tensor DTensor concepts | 6 | Tensors Transforms Introduction to PyTorch Tensors | 3 | 张量 Tensor | 1 | Tensor 概念介绍 | 1 |
| 数据加载与预处理 | Images CSV Numpy pandas.DataFrame TFRecord and tf.Example Additional formats with tf.io Text More text loading Classifying structured data with preprocessing layers Classification on imbalanced data Time series forecasting Decision forest models | 13 | Datasets & Dataloaders | 1 | 数据处理 数据处理(进阶) 自动数据增强 轻量化数据处理 单节点数据缓存 优化数据处理 | 6 | 数据集的定义和加载 数据预处理 | 2 |
| 如何组网 | Modules, layers, and models | 1 | Build the Neural Network Building Models with PyTorch What is torch.nn really? Learing PyTorch with Examples | 4 | 创建网络 网络构建 | 2 | 模型组网 飞桨高层 API 使用指南 层与模型 | 3 |
| 如何组网 | Modules, layers, and models | 1 | Build the Neural Network Building Models with PyTorch What is torch.nn really? Learning PyTorch with Examples | 4 | 创建网络 网络构建 | 2 | 模型组网 飞桨高层 API 使用指南 层与模型 | 3 |
| 如何训练 | Training loops NumPy API Checkpoint SavedModel | 4 | Optimization Model Parameters Training with PyTorch | 2 | 模型训练 训练与评估 | 2 | 训练与预测验证 自定义指标 | 2 |
| 保存与加载模型 | Save and load Save and load(Distributed Training) | 2 | Save and Load the Model | 1 | 保存与加载 | 1 | 模型保存与载入 模型保存及加载(应用实践) | 2 |
| 可视化、调优技巧 | Overfit and underfit Tune hyperprameters with Keras Tuner Better performance with tf.function Profile TensorFlow performance Graph optimizaition Optimize GPU Performance Mixed precision | 7 | PyTorch TensorBoard Support Model Understanding with Captum Visualizing Models, Data, and Training with TensorBoard Profiling your PyTorch Module PyTorch Profiler with TensorBoard Hyperparameter tuning with Ray Tune Optimizing Vision Transformer Model for Deployment Parametrization Tutorial Pruning Tutorial Grokking PyTorch Intel CPU performance from first principles | 11 | 查看中间文件 Dump 功能调试 自定义调试信息 调用自定义类 算子增量编译 算子调优工具 自动数据加速 固定随机性以复现脚本运行结果 | 8 | VisualDL 工具简介 VisualDL 使用指南 飞桨模型量化 | 3 |
@@ -6,7 +6,7 @@ torch.Tensor.svd(some=True, compute_uv=True)

### [paddle.linalg.svd](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/linalg/svd_cn.html#svd)
```python
paddle.linalg.svd(x, full_matrics=False, name=None)
paddle.linalg.svd(x, full_matrices=False, name=None)
```

两者参数用法不一致,具体如下:
@@ -15,7 +15,7 @@ paddle.linalg.svd(x, full_matrics=False, name=None)

| PyTorch | PaddlePaddle | 备注 |
| ------------- | ------------ | ------------------------------------------------------ |
| some | full_matrics | 是否计算完整的 U 和 V 矩阵,两者参数功能相反,需要转写。 |
| some | full_matrices | 是否计算完整的 U 和 V 矩阵,两者参数功能相反,需要转写。 |
| compute_uv | - | 是否返回零填充的 U 和 V 矩阵, 默认为 `True`, Paddle 无此参数。暂无转写方式。 |


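The transcription example for this API is not shown in the diff. A hedged sketch based on the parameter mapping above (it mirrors the `torch.svd` example later in this PR and assumes the same `u, s, v` naming):

```python
# PyTorch
u, s, v = x.svd(some=True)

# Paddle: some=True corresponds to full_matrices=False (opposite meaning)
u, s, v = paddle.linalg.svd(x, full_matrices=False)
```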
@@ -6,7 +6,7 @@ torch.svd(input, some=True, compute_uv=True, *, out=None)

### [paddle.linalg.svd](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/linalg/svd_cn.html#svd)
```python
paddle.linalg.svd(x, full_matrics=False, name=None)
paddle.linalg.svd(x, full_matrices=False, name=None)
```

PyTorch 相比 Paddle 支持更多其他参数,具体如下:
@@ -16,7 +16,7 @@ PyTorch 相比 Paddle 支持更多其他参数,具体如下:
| PyTorch | PaddlePaddle | 备注 |
| ------------- | ------------ | ------------------------------------------------------ |
| input | x | 输入 Tensor ,仅参数名不一致。 |
| some | full_matrics | 表示需计算的奇异值数目。 Paddle 与 PyTorch 默认值不同,需要转写。 |
| some | full_matrices | 表示需计算的奇异值数目。 Paddle 与 PyTorch 默认值不同,需要转写。 |
| compute_uv | - | 表示是否计算 U 和 V 。Paddle 无此参数,暂无转写方式。 |
| out | - | 表示输出的 Tensor 元组。 Paddle 无此参数,需要转写。 |

@@ -27,7 +27,7 @@ PyTorch 相比 Paddle 支持更多其他参数,具体如下:
u, s, v = torch.svd(x, some = True )

# Paddle 写法
u, s, v = paddle.linalg.svd(x, full_matrics = False)
u, s, v = paddle.linalg.svd(x, full_matrices = False)
```
#### out:指定输出
```python
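# The body of this example is collapsed in the diff view. What follows is a
# hedged sketch of the usual transcription for the `out` parameter, not the
# collapsed example itself: drop `out` and take the returned tuple instead.

# PyTorch
torch.svd(x, out=(u, s, v))

# Paddle
u, s, v = paddle.linalg.svd(x)
```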