
Conversation

@ngoyal2707 ngoyal2707 commented Apr 30, 2019

Co-authored-by: jingfeidu jingfeidu@fb.com

The implementation is by Jingfei Du, from the "bigbert" branch. It is copied over to this PR so it can be merged in isolation, since the other changes are already in master.

Small changes from original:
Added the following line in `__init__`, as discovered by @myleott:

`self.optimizer.set_lr(self.warmup_factor * self.lr)`
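For context, a polynomial-decay schedule with linear warmup can be sketched roughly as follows. This is an illustrative sketch only, not the fairseq implementation; all names (`warmup_updates`, `total_updates`, `end_lr`, `power`) are hypothetical:

```python
def polynomial_lr(step, base_lr, warmup_updates=1000, total_updates=10000,
                  end_lr=0.0, power=1.0):
    """Illustrative polynomial-decay LR with linear warmup (hypothetical names)."""
    if step < warmup_updates:
        # linear warmup from 0 up to base_lr over the first warmup_updates steps
        return base_lr * step / warmup_updates
    if step >= total_updates:
        return end_lr
    # polynomial decay from base_lr down to end_lr over the remaining steps
    remaining = (total_updates - step) / (total_updates - warmup_updates)
    return (base_lr - end_lr) * remaining ** power + end_lr
```

With `power=1.0` this reduces to linear decay; the warmup branch is why the constructor must also initialize the optimizer with `warmup_factor * lr`, so the very first update does not run at the full base learning rate.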

@ngoyal2707 ngoyal2707 requested a review from myleott April 30, 2019 15:00
@ngoyal2707 ngoyal2707 force-pushed the adding_polynomial_lr_scheduler branch from d51fc2e to 7ddc0f4 Compare April 30, 2019 16:05
Co-authored-by: jingfeidu <jingfeidu@fb.com>
@ngoyal2707 ngoyal2707 force-pushed the adding_polynomial_lr_scheduler branch from 7ddc0f4 to d222b38 Compare April 30, 2019 16:23

@facebook-github-bot facebook-github-bot left a comment


@ngoyal2707 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot

@myleott merged this pull request in 9421e97.

@myleott myleott deleted the adding_polynomial_lr_scheduler branch May 8, 2019 11:49
facebook-github-bot pushed a commit that referenced this pull request Jan 22, 2020
Summary:
Pull Request resolved: pytorch/translate#683

Pull Request resolved: #1612

Make SinusoidalPositionalEmbedding scriptable. Mostly adding types. The only change that affects lots of downstream code is to have max_positions as member variable instead of method.

Reviewed By: myleott

Differential Revision: D18924939

fbshipit-source-id: 2b6486563e9ec5cc34bcf11acdff9054658f4674
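The commit above makes `SinusoidalPositionalEmbedding` scriptable mostly by adding type annotations, which TorchScript requires. As a rough illustration of what such an embedding computes (a hedged sketch of the standard sinusoidal scheme, not fairseq's code; the function name and layout are hypothetical):

```python
import math

def sinusoidal_embedding(position: int, dim: int) -> list:
    """One position's embedding: sine at even indices, cosine at odd ones.

    Illustrative sketch of the standard sinusoidal positional-embedding
    formula; not the fairseq implementation.
    """
    half = dim // 2
    emb = [0.0] * dim
    for i in range(half):
        # geometric progression of frequencies from 1 down to ~1/10000
        freq = math.exp(-math.log(10000.0) * i / half)
        emb[2 * i] = math.sin(position * freq)
        emb[2 * i + 1] = math.cos(position * freq)
    return emb
```

Because the whole table is a pure function of position and dimension, exposing `max_positions` as a member variable rather than a method is a small interface change, but any downstream code that called it as a method has to drop the parentheses, which is why it touches a lot of call sites.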
louismartin pushed a commit to louismartin/fairseq that referenced this pull request Mar 24, 2020
moussaKam pushed a commit to moussaKam/language-adaptive-pretraining that referenced this pull request Sep 29, 2020
yfyeung pushed a commit to yfyeung/fairseq that referenced this pull request Dec 6, 2023
…arch#683)

* init files

* add ctc as auxiliary loss and ctc_decode.py

* tuning the scalar of HLG score for 1best, nbest and nbest-oracle

* rename to pruned_transducer_stateless7_ctc

* fix doc

* fix bug, recover the hlg scores

* modify ctc_decode.py, move out the hlg scale

* fix hlg_scale

* add export.py and pretrained.py, and so on

* upload files, update README.md and RESULTS.md

* add CI test
Harleen8118 pushed a commit to Harleen8118/IBERT that referenced this pull request Jun 26, 2025
Summary:
Co-authored-by: jingfeidu <jingfeidu@fb.com>

The implementation is by Jingfei Du from branch "bigbert". Copied over to this CR to get it merged in isolation since other changes seem to be already in master.

**Small changes from original:**
Added the following line in `__init__`, as discovered by @myleott:

```
self.optimizer.set_lr(self.warmup_factor * self.lr)
```
Pull Request resolved: facebookresearch/fairseq#683

Reviewed By: myleott

Differential Revision: D15149628

Pulled By: myleott

fbshipit-source-id: 5f715611182cdd111e636c66d5f24aa88fa03e29