
[20210509] Weekly AI Arxiv 만담 #9

Closed
jungwoo-ha opened this issue May 5, 2021 · 6 comments

Comments

@jungwoo-ha
Owner

jungwoo-ha commented May 5, 2021

@qqueing

qqueing commented May 9, 2021

  • SUPERB: Speech processing Universal PERformance Benchmark
    • https://github.com/s3prl/s3prl
    • A benchmark that takes various self-supervised speech pre-trained models and, using the toolkit linked above, evaluates them on most speech-related downstream tasks (Phoneme Recognition, ASR, Keyword Spotting, Query by Example Spoken Term Detection, Speaker Identification, Automatic Speaker Verification, Speaker Diarization, Intent Classification, Slot Filling, Emotion Recognition). A minimal usage sketch follows below.
    • They say a leaderboard will be released, most likely after the Interspeech presentation.
    • HuBERT (ICASSP 2021) is reported to perform considerably better than wav2vec 2.0.
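
For reference, a minimal sketch of how a pre-trained upstream from the s3prl toolkit might be queried for frame-level features. This is an untested sketch: the torch.hub entry point, the upstream name 'hubert', and the 'hidden_states' output key are assumptions based on the s3prl README and may differ across versions.

```python
import torch

# Load a self-supervised upstream model through s3prl's torch.hub entry point.
# (Assumption: the 'hubert' entry point and the output dict format follow the
# s3prl README; names and keys may vary across s3prl versions.)
upstream = torch.hub.load('s3prl/s3prl', 'hubert')
upstream.eval()

# s3prl upstreams take a list of variable-length 16 kHz waveforms (1-D tensors).
wavs = [torch.randn(16000 * 10)]  # 10 seconds of dummy audio

with torch.no_grad():
    outputs = upstream(wavs)
    hidden_states = outputs['hidden_states']  # layer-wise frame-level representations

# Each element is [batch, frames, feature_dim]; SUPERB downstream tasks
# typically learn a weighted sum over these layers.
print(len(hidden_states), hidden_states[-1].shape)
```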

@hollobit

hollobit commented May 9, 2021

Healthcare’s AI Future: A Conversation with Fei-Fei Li & Andrew Ng
https://www.youtube.com/watch?v=Gbnep6RJinQ&t=1312s

@jungwoo-ha
Owner Author

@qqueing Thanks for sharing. Taking a quick look at HuBERT, it leverages distillation and is token-prediction-based SSL. Since the pretraining data is only around LS960, though, I'm now curious how it would behave when trained on a larger dataset. Also, I had been wondering about speech self-supervised downstream tasks other than ASR, so this is very well organized and useful. Thanks!

@j-min

j-min commented May 9, 2021

@hollobit

hollobit commented May 9, 2021

There was also this event, but it seems you didn't introduce it since it's a NAVER event ^^
http://naversearchconf.naver.com/

@jungwoo-ha
Owner Author

NAVER R&D US
https://naver-career.gitbook.io/en/
