Closed
Description
System Info
So I ran a few experiments with Whisper and Seamless M4Tv2 on the FLEURS dataset (files concatenated into 5-minute samples). I used the batching functionality by setting chunk_length_s to 30 s, and as it turns out the WER increases by 20% across all languages compared to long-form transcription (going through each file sequentially). Do you see the same behaviour? Is this a bug, or is it expected because of the chunking? 20% seems far too much from my point of view.
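For reference, here is a minimal sketch of how I understand WER to be computed: word-level Levenshtein distance divided by reference length. This is plain Python for illustration only, not the exact scorer used in my experiments (libraries like jiwer or evaluate apply their own text normalization, which changes the numbers):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution out of four reference words -> WER of 0.25
print(wer("the cat sat down", "the cat stood down"))  # 0.25
```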
Who can help?
No response
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
Just use the default pipeline implementation of Whisper on files that are a few minutes long. It's far worse when chunking is enabled.
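To make the chunking concrete, here is a rough sketch of how a long file gets sliced when chunk_length_s=30 is set. This is an illustrative re-implementation, not the pipeline's actual code; I'm assuming the documented default stride of chunk_length_s / 6 on each side, so adjacent windows overlap and the pipeline later merges the overlapping tokens:

```python
def chunk_boundaries(duration_s: float, chunk_length_s: float, stride_s: float):
    """Split [0, duration_s] into overlapping windows, roughly mimicking
    how the ASR pipeline slices audio: each window advances by
    chunk_length_s - 2 * stride_s, so neighbours overlap by 2 * stride_s."""
    step = chunk_length_s - 2 * stride_s
    starts = []
    t = 0.0
    while t < duration_s:
        starts.append(t)
        t += step
    return [(s, min(s + chunk_length_s, duration_s)) for s in starts]

# A 5-minute (300 s) file, chunk_length_s=30, assumed default stride 30/6 = 5 s:
windows = chunk_boundaries(300, 30, 5)
print(len(windows))   # 15 overlapping 30 s chunks
print(windows[0])     # (0.0, 30.0)
```

Every one of those window boundaries is a place where the model loses the surrounding context, which is presumably where the extra errors come from.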
Expected behavior
I would expect the same transcription quality, or maybe a few percentage points worse, but a 20% increase is far from that.