Slue PR configs #5087
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #5087      +/-   ##
==========================================
+ Coverage   76.54%   76.57%   +0.03%
==========================================
  Files         720      720
  Lines       66639    66695      +56
==========================================
+ Hits        51008    51072      +64
+ Misses     15631    15623       -8

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Hi @ftshijt, this PR is ready for review. The primary modifications are the addition of configuration files for the best-trained model in the SLT paper and the ESPnet-SLUE toolkit, which was presented at the ASRU workshop. I've also made some minor fixes to the scoring function and the 2-pass SLU implementation. Could you please review it?
Hi @sw005320, this PR adds the slue-voxceleb and slue-voxpopuli recipes for the ESPnet-SLUE toolkit. Please let me know if you have any comments before we merge this PR!
OK, I'll start to review. |
Thanks for the update! It looks very good to me. Two minor comments:
egs2/slue-voxpopuli/slu1/run.sh
Outdated
valid_set="devel"
test_sets="test devel"

slu_config=conf/tuning/train_asr_no_pretrain.yaml
Consider standardizing this by setting it to conf/train.yaml.
Done!
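The suggested convention can be sketched as follows. This is an illustrative shell snippet, not code from the PR: the directory names, file contents, and symlink layout are assumptions modeled on the common ESPnet recipe pattern of keeping concrete tuning files under `conf/tuning/` and exposing the chosen one under the stable name `conf/train.yaml`.

```shell
# Illustrative sketch (paths and contents are hypothetical, not from the PR):
# keep concrete hyperparameter files under conf/tuning/ and expose the chosen
# one under the standard name conf/train.yaml, so run.sh stays generic.
mkdir -p demo/conf/tuning
printf 'specaug: specaug\n' > demo/conf/tuning/train_asr_no_pretrain.yaml

# Use a relative link target so the recipe directory remains relocatable.
ln -sf tuning/train_asr_no_pretrain.yaml demo/conf/train.yaml

# run.sh can then always reference the stable path:
slu_config=demo/conf/train.yaml
cat "$slu_config"
```

With this layout, swapping the experiment's config means repointing one symlink rather than editing run.sh.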
@@ -0,0 +1,88 @@
specaug: specaug
It seems to me that the changes across many of the configs are quite similar. Do we want to keep all of them? (I'm OK with it if that's for a public benchmark, but ideally it would be better to summarize them into fewer configs, which would also be easier for users.)
Yes, I considered it, but seemingly similar configs (e.g. the ones using pre-trained ASR) have different sets of hyperparameters (such as warmup steps and learning rate) selected based on validation performance. Merging them would hurt reproducibility, so I think it is better to keep all of the configs.
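To illustrate why such configs cannot simply be merged, the sketch below writes two hypothetical tuning files that share their structure but differ only in validated optimizer settings. All file names and values here are invented for illustration; they are not the PR's actual configs or hyperparameters.

```shell
# Hypothetical example: two tuning configs with the same architecture but
# different validated hyperparameters, so neither can replace the other
# without changing reported results. All values are illustrative only.
mkdir -p demo_conf
cat > demo_conf/train_slu_asr_pretrain.yaml <<'EOF'
optim_conf:
  lr: 0.0005
scheduler_conf:
  warmup_steps: 25000
EOF
cat > demo_conf/train_slu_no_pretrain.yaml <<'EOF'
optim_conf:
  lr: 0.001
scheduler_conf:
  warmup_steps: 15000
EOF
# Only the tuned values differ between the two files:
diff demo_conf/train_slu_asr_pretrain.yaml demo_conf/train_slu_no_pretrain.yaml || true
```

Because each file's values were chosen by validation, collapsing them into one config would discard the very numbers a user needs to reproduce the reported results.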
@sw005320 I think this PR is ready to be merged! Let me know if you have any comments.
egs2/slue-voxpopuli/slu1/README.md
Outdated
Thanks for catching this! It is fixed now.
Upload configs and provide access to the best pre-trained models for the SLT paper (https://arxiv.org/abs/2211.05869).