
added sentence ranking task and loss (#809) #1002

Closed

Conversation

jingfeidu
Contributor

Summary:
This task and loss are used for sentence ranking and multiple-choice tasks such as RACE.
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/809

Reviewed By: myleott

Differential Revision: D16715745

Pulled By: myleott

fbshipit-source-id: b3f3eae048017910e8c7e881026603a5e427ddbc
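As a rough illustration (not the code in this PR), a sentence-ranking / multiple-choice criterion of this kind typically scores each candidate with the model and applies cross-entropy over the candidate scores. The names `score_model`, `candidates`, and `target` below are illustrative assumptions:

```python
# Illustrative sketch of a sentence-ranking / multiple-choice loss:
# score each candidate independently, then cross-entropy over candidates.
import torch
import torch.nn.functional as F


def sentence_ranking_loss(score_model, candidates, target):
    """candidates: list of num_choices tensors, each of shape (bsz, seq_len);
    target: (bsz,) index of the correct candidate;
    score_model: assumed to map a token batch to a (bsz, 1) score."""
    logits = torch.cat([score_model(tokens) for tokens in candidates], dim=1)
    # logits: (bsz, num_choices); target: (bsz,)
    return F.cross_entropy(logits, target)
```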

label_path = f"{get_path('label', split)}.label"
Contributor

This format does not work in Python 3.5; f-strings require Python 3.6+.
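For example, one possible Python 3.5-compatible rewrite would use `str.format()` in place of the f-string (a sketch only, not necessarily the fix that was applied):

```python
# f-strings were introduced in Python 3.6; a 3.5-compatible equivalent:
label_path = "{}.label".format(get_path("label", split))
```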

Contributor

Fixed in #1005.

@myleott closed this Aug 10, 2019
facebook-github-bot pushed a commit that referenced this pull request Jan 22, 2020
Summary:
Pull Request resolved: fairinternal/fairseq-py#1002

Pull Request resolved: pytorch/translate#681

Pull Request resolved: #1524

Make fairseq MultiheadAttention scriptable. Looking for feedback.

1. Add types.
2. Move the incremental state management logic from util functions into the initializers. TorchScript does not generally support global dicts, so each module containing multihead attention assigns itself a fairseq_instance_id in its initializer (see the sketch after this commit message).
3. There may be opportunities to make the assertions and annotations cleaner.

Reviewed By: myleott

Differential Revision: D18772594

fbshipit-source-id: 377aef4bbb7ef51da5b6bac9a87a6f7b03b16fe1
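A minimal sketch of the pattern described in point 2, with illustrative names rather than fairseq's exact implementation: a per-instance id assigned in `__init__` namespaces the incremental state, replacing a module-level registry that TorchScript cannot script.

```python
import uuid
from typing import Dict, Optional

import torch.nn as nn
from torch import Tensor


class IncrementalStateModule(nn.Module):
    """Namespaces incremental state with a per-instance id assigned in the
    initializer, instead of a global dict keyed by module instance."""

    def __init__(self):
        super().__init__()
        # Unique id assigned at construction time; a plain string attribute
        # is scriptable, unlike a module-level registry.
        self._incremental_state_id = str(uuid.uuid4())

    def _full_key(self, key: str) -> str:
        return "{}.{}".format(self._incremental_state_id, key)

    def get_incremental_state(
        self,
        incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]],
        key: str,
    ) -> Optional[Dict[str, Optional[Tensor]]]:
        full_key = self._full_key(key)
        if incremental_state is None or full_key not in incremental_state:
            return None
        return incremental_state[full_key]

    def set_incremental_state(
        self,
        incremental_state: Dict[str, Dict[str, Optional[Tensor]]],
        key: str,
        value: Dict[str, Optional[Tensor]],
    ) -> None:
        incremental_state[self._full_key(key)] = value
```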
louismartin pushed a commit to louismartin/fairseq that referenced this pull request Mar 24, 2020
moussaKam pushed a commit to moussaKam/language-adaptive-pretraining that referenced this pull request Sep 29, 2020
yfyeung pushed a commit to yfyeung/fairseq that referenced this pull request Dec 6, 2023
* add RNNLM rescore

* add shallow fusion and lm rescore for streaming zipformer

* minor fix

* update RESULTS.md

* fix yesno workflow, change from ubuntu-18.04 to ubuntu-latest
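For background on the technique these bullets name (a generic sketch, not this commit's code): shallow fusion interpolates the model's next-token log-probabilities with an external LM's at decode time; `lm_weight` below is an illustrative hyperparameter.

```python
import torch


def shallow_fusion_log_probs(am_log_probs: torch.Tensor,
                             lm_log_probs: torch.Tensor,
                             lm_weight: float = 0.3) -> torch.Tensor:
    # Both inputs: (beam, vocab) next-token log-probabilities.
    # The external LM score is scaled and added to the model's score.
    return am_log_probs + lm_weight * lm_log_probs
```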