Commit: add npu optim docs
luomaoling committed Apr 21, 2023
1 parent 191b6a7 commit 97bd284
Showing 2 changed files with 11 additions and 15 deletions.
24 changes: 11 additions & 13 deletions docs/en/common_usage/speed_up_training.md
@@ -118,16 +118,14 @@ This feature is only available for PyTorch >= 2.0.0.

If an Ascend device is used, Ascend's fused optimizers can be used to reduce the model's training time. The optimizers supported by Ascend devices are as follows:

-```
-NpuFusedAdadelta
-NpuFusedAdam
-NpuFusedAdamP
-NpuFusedAdamW
-NpuFusedBertAdam
-NpuFusedLamb
-NpuFusedRMSprop
-NpuFusedRMSpropTF
-NpuFusedSGD
-```
-
-The usage method is the same as the native optimizer, and the more detailed usage method in MMEngine can be referred to [optimizers](https://mmengine.readthedocs.io/zh_CN/latest/tutorials/optim_wrapper.html?highlight=%E4%BC%98%E5%8C%96%E5%99%A8).
+- NpuFusedAdadelta
+- NpuFusedAdam
+- NpuFusedAdamP
+- NpuFusedAdamW
+- NpuFusedBertAdam
+- NpuFusedLamb
+- NpuFusedRMSprop
+- NpuFusedRMSpropTF
+- NpuFusedSGD
+
+The usage is the same as with native optimizers; more detailed usage in MMEngine can be found in [optimizers](https://mmengine.readthedocs.io/en/latest/tutorials/optim_wrapper.html).
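For illustration, selecting one of these fused optimizers in an MMEngine config could look like the sketch below. This is an assumption-laden example, not part of the commit: it presumes `torch_npu` is installed so that MMEngine can register the `NpuFused*` optimizer classes by name, and the surrounding model and Runner setup is omitted.

```python
# Minimal sketch (hypothetical values): an MMEngine optim_wrapper config
# that selects an Ascend fused optimizer by name. Assumes torch_npu is
# installed, which lets MMEngine register the NpuFused* optimizers.
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(
        type='NpuFusedAdamW',   # any NpuFused* name from the list above
        lr=1e-3,                # placeholder hyperparameters
        weight_decay=0.01,
    ),
)

# The config is then passed to the Runner like any native optimizer, e.g.:
# runner = Runner(model=model, optim_wrapper=optim_wrapper, ...)
```

Because the fused optimizer is addressed purely by its registered `type` string, switching from a native optimizer only requires changing that one field.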
2 changes: 0 additions & 2 deletions docs/zh_cn/common_usage/speed_up_training.md
@@ -119,7 +119,6 @@ runner = Runner(

If an Ascend device is used, Ascend's optimizers can be used to shorten the model's training time. The optimizers supported by Ascend devices are as follows:

- NpuFusedAdadelta
- NpuFusedAdam
- NpuFusedAdamP
@@ -130,5 +129,4 @@ runner = Runner(
- NpuFusedRMSpropTF
- NpuFusedSGD

The usage is the same as with native optimizers; more detailed usage in MMEngine can be found in [optimizers](https://mmengine.readthedocs.io/zh_CN/latest/tutorials/optim_wrapper.html?highlight=%E4%BC%98%E5%8C%96%E5%99%A8).
