
fix run_seq2seq.py; porting trainer tests to it #10162

Merged: 9 commits into huggingface:master on Feb 15, 2021

Conversation

@stas00 (Contributor) commented Feb 13, 2021

This PR:

  • restores some of the essential functionality that was dropped from finetune_trainer.py; I'm almost sure this is still far from complete, since so much was just dropped
  • ports the wmt_en_ro test data to jsonlines; I moved the test dataset into the root of examples so that it can be accessed by a variety of sub-projects (a sketch of the record shape follows after this list)
  • ports DeepSpeed tests to use run_seq2seq.py
  • ports the other trainer script to use run_seq2seq.py
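
For context, JSON Lines stores one JSON object per line of the file. A minimal sketch of what one wmt_en_ro record might look like (the field names and sentence pair here are an assumption for illustration, not copied from the PR):

```python
import json

# Hypothetical shape of one wmt_en_ro translation record
# (field names are an assumption, not taken from the PR).
record = {"translation": {"en": "Hello, world.", "ro": "Salut, lume."}}

# JSON Lines: exactly one JSON object per line of the file.
with open("val.json", "w", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```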

@sgugger (Collaborator) left a comment:

Thanks for all your work on this! The test looks perfect to me; I just have a few comments on the script itself.

Four review comments on examples/seq2seq/run_seq2seq.py were marked outdated and resolved.
Comment on lines +377 to +384
```python
if model.config.decoder_start_token_id is None and isinstance(tokenizer, (MBartTokenizer, MBartTokenizerFast)):
    assert (
        data_args.target_lang is not None and data_args.source_lang is not None
    ), "mBart requires --target_lang and --source_lang"
    if isinstance(tokenizer, MBartTokenizer):
        model.config.decoder_start_token_id = tokenizer.lang_code_to_id[data_args.target_lang]
    else:
        model.config.decoder_start_token_id = tokenizer.convert_tokens_to_ids(data_args.target_lang)
```
@sgugger (Collaborator):

Thanks for putting that logic back. @patil-suraj @patrickvonplaten, since you know this better than I do: shouldn't this be done by set_tgt_lang_special_tokens inside mBART?

@stas00 (Contributor, Author), Feb 13, 2021:

Can we make merging this a priority, so that I can finish porting the other tests? This question can then be dealt with separately. This PR isn't introducing anything new, only restoring what was there in the first place.

I can open an issue so that it doesn't fall between the cracks.

Thank you!

@patil-suraj (Contributor):

set_tgt_lang_special_tokens is a method on MBartTokenizer, so this needs to be done outside of the model.

Also, IMO MBartTokenizerFast should also have the lang_code_to_id attribute; I'm not sure why it was treated differently.
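
For illustration only, not part of the PR: if the fast tokenizer carried the same mapping, the if/else quoted above would collapse into a single lookup. A minimal sketch of the equivalence being discussed, assuming the language codes exist as tokens in the checkpoint's vocabulary:

```python
from transformers import MBartTokenizer, MBartTokenizerFast

slow = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
fast = MBartTokenizerFast.from_pretrained("facebook/mbart-large-en-ro")

# The slow tokenizer exposes an explicit mapping...
id_slow = slow.lang_code_to_id["ro_RO"]
# ...while the fast one can resolve the same language-code token by name.
id_fast = fast.convert_tokens_to_ids("ro_RO")
assert id_slow == id_fast
```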

@stas00 (Contributor, Author):

OK, I decided to go ahead and port the other scripts instead of waiting for the first set to be merged.

I had to make some more fixes in the script while at it.

So no rush.

Contributor:

I agree with @patil-suraj here: IMO MBartTokenizerFast should have the lang_code_to_id attribute so that we don't need an if/else here. But this is not necessarily the responsibility of this PR, so it's fine for me as it is.

Contributor:

For models such as mBART that don't have a unique decoder_start_token_id, maybe we should in the future just leave config.decoder_start_token_id=None and then throw an error or warning when generating.
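
As an illustration of that suggestion, a hypothetical guard along those lines (this check is not in the library; the function name is an assumption):

```python
def check_decoder_start_token_id(config):
    # Hypothetical guard: mBART-like models have no single sensible
    # decoder_start_token_id, so fail loudly at generation time rather
    # than silently starting from a wrong token.
    if config.decoder_start_token_id is None:
        raise ValueError(
            "config.decoder_start_token_id is None; for multilingual models "
            "such as mBART, pass the target-language code id explicitly "
            "when calling generate()."
        )
```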

Contributor:

Setting model.config.decoder_start_token_id manually is the correct thing to do here, but it's not super pretty IMO; it would be better to incentivize the user to set it either at init or when calling generate. But because of backward compatibility we can't really change it now for mBART anyway...
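
For example, rather than mutating the config, the caller could supply the start token per call to generate (a sketch; the checkpoint name and language code are illustrative):

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")

inputs = tokenizer("Hello, world.", return_tensors="pt")
# Supply the target-language start token at call time instead of
# writing it into model.config.decoder_start_token_id.
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"],
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```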

@stas00 stas00 changed the title fix run_seq2seq.py; porting DeepSpeed tests to it fix run_seq2seq.py; porting trainer tests to it Feb 13, 2021
@stas00 (Contributor, Author) commented Feb 13, 2021:

OK, I decided to go ahead and port the other scripts instead of waiting for the first set to be merged. I had to make some more fixes in the script while at it.

@stas00 stas00 requested a review from sgugger February 13, 2021 05:58
@sgugger (Collaborator) left a comment:

LGTM; we can merge and deal with the mBART thing later on (it's true it's something in the model, not the tokenizer; I will think about what the best API could be).

I just have one naming nit.

Thanks for porting those!

```diff
@@ -60,44 +63,46 @@ def require_apex(test_case):
     return test_case


-class TestFinetuneTrainer(TestCasePlus):
-    def finetune_trainer_quick(self, distributed=None, extra_args_str=None):
+class TestTrainerExt(TestCasePlus):
```
@sgugger (Collaborator):

I don't get what Ext means. Is it for extended? I don't really care about the class name, but the filename should spell out the full name, so test_trainer_extended.py.
It's also actually testing the Seq2SeqTrainer in practice.

@stas00 (Contributor, Author), Feb 14, 2021:

Well, I just wanted to differentiate it from test_trainer. I think it'll go through a refinement later, probably separating the seq2seq distributed tests from the integration tests, which are in one pile at the moment only because they reuse the same helper methods. It will evolve and improve as we expand the tests.

I don't think the filename has to match the class name; we have multiple cases where a file contains multiple classes.

Bottom line, this is a WIP and will evolve over time, but I had to change the original Finetune name to reflect the new reality. It's totally your call, @sgugger, on how you prefer I name it.

@sgugger (Collaborator):

I don't mind if the filename doesn't match the class; it's just that test_trainer_ext.py is not very informative, and I have no idea what's tested inside just by looking at it ;-)
Since we are in tests/trainer, the trainer part can be dropped, so there is plenty of room to be more informative. With your plans to separate, it could become test_integration and test_distributed once split.

@stas00 (Contributor, Author), Feb 14, 2021:

Yes, that's the idea for when the dust settles; there are at least three types of tests in there at the moment.

That is, it won't be called test_trainer_ext.py for long. How about I call it test_to_be_sorted.py for now, so it's loud and clear that it's a pile of things? Or I can just rename it back to what it is now under seq2seq.

The purpose of this PR is to transition to the new tool so that Suraj can move along. I will work in future PRs to make things intuitive and well named. I will not forget.

@sgugger (Collaborator):

Sounds good to me!

@stas00 stas00 merged commit 0b1f552 into huggingface:master Feb 15, 2021
@stas00 stas00 deleted the ds-tests-emergency branch February 15, 2021 17:12