Multilingual dIalogAct benchMark (miam) #2047
Conversation
Nice thank you for adding MIAM :) (nice name btw, I need some chocolate now)
And good job on the dataset card and the script as well.
Could you run `make style` to fix the code formatting for the CI, please?
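The formatting steps discussed in this thread can be sketched as the shell commands below. This is a sketch only: the `style` Makefile target and the file path `miam.py` are assumptions about the repository layout, and the exact tools the target invokes may differ.

```shell
# Hypothetical sequence, assuming the repository's Makefile defines a
# `style` target that runs the formatters checked by the CI.
make style        # auto-format the codebase for the CI style check

# The individual tools mentioned in the thread can also be run directly
# on the dataset script (path assumed):
black miam.py     # reformat the code with black
isort miam.py     # sort the imports checked by check_code_quality
```

Running the tools directly is useful when only one file changed; `make style` is the safer option before pushing, since it applies whatever configuration the CI expects.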
Hello. All aforementioned changes have been made. I've also re-run black on miam.py. :-)
I will run isort again. Hopefully it resolves the current check_code_quality test failure.
LGTM! Good job :)
Once the review period is over, feel free to open a PR to add all the missing information ;)
Hi! I will follow up right now with one more pull request, as I have new anonymous citation information to include.
My collaborators (@EmileChapuis, @PierreColombo) and I, within the Affective Computing team at Telecom Paris, would like to publish the miam dataset anonymously. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over.