
OpenAI Whisper medium-model error while processing timestamps #51

Open
nachoh8 opened this issue Apr 27, 2023 · 7 comments

Comments

@nachoh8 commented Apr 27, 2023

I am getting the following error when using the "openai/whisper-medium" model with timestamp prediction:

There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?

This error comes from transformers/models/whisper/tokenization_whisper.py, line 885. The generated tokens do not include any timestamps, except for the first one (0.0).

I have tested audios of different lengths (1 min to 1 h) and different parameters (half precision, stride), and the same error always occurs. With the base and large-v2 models, on the other hand, the error does not occur.

Code:

import jax.numpy as jnp
from whisper_jax import FlaxWhisperPipline

model = "openai/whisper-medium"
whisper = FlaxWhisperPipline(model, dtype=jnp.float32)
# audio_file is a path to the input audio
res: dict = whisper(audio_file, stride_length_s=0.0, language="es", return_timestamps=True)
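For reference, here is roughly what I'd expect the per-chunk output to look like when timestamps come back correctly (the "chunks" structure is inferred from the snippets later in this thread, so the field names are an assumption):

# Inspect the per-chunk timestamps in the pipeline output.
# With return_timestamps=True the result should contain a "chunks" list
# (assumed structure), where each entry carries a (start, end) tuple
# and the chunk text.
for chunk in res["chunks"]:
    start, end = chunk["timestamp"]
    print(f"[{start} -> {end}] {chunk['text']}")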

My computer:

  • Python 3.8.10
  • OS: Ubuntu 20.04 LTS 64-bit (WSL on Windows 11)
  • CPU: 12th Gen Intel® Core™ i7-12700
  • GPU: Nvidia RTX 3060
  • RAM: 32.0 GB
@luisroque

I am having exactly the same issue.

@sanchit-gandhi (Owner)

This is fixed on main in transformers. Can you run:

pip install git+https://github.com/huggingface/transformers.git

to install transformers from main?
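A quick way to confirm the dev build is the one actually being picked up (an install from main usually reports a ".dev0" version suffix):

# Print the installed transformers version; an install from main
# typically shows a ".dev0" suffix, e.g. "4.29.0.dev0"
import transformers
print(transformers.__version__)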

@nachoh8 (Author) commented Apr 28, 2023

I have tried that, but the error persists.

@diegofer25 commented May 1, 2023

Any update on this? I'm having exactly the same error when I try to embed the result with SpeechBrain audio diarization.

# Imports assumed for this snippet (pyannote.audio for Audio/Segment,
# scikit-learn for the clustering)
import os
import time

import numpy as np
from pyannote.audio import Audio
from pyannote.core import Segment
from sklearn.cluster import AgglomerativeClustering


def transform_timestamp_list(input_list, duration):
    output_list = []

    for item in input_list:
        output_item = {
            "start": item["timestamp"][0],
            # Fall back to the audio duration when the final end timestamp is None
            "end": item["timestamp"][1] if item["timestamp"][1] is not None else duration,
            "text": item["text"]
        }
        output_list.append(output_item)

    return output_list


# Excerpt from inside the request handler (pipeline, temp_file, duration,
# embedding_model, num_speakers, convert_time, time_start and Response are
# defined elsewhere in the app):

        result = pipeline(temp_file.name, task="transcribe", language="pt", return_timestamps=True)
        print("transcribe result", result)

        segments = transform_timestamp_list(result["chunks"], duration)

        # Create a speaker embedding per segment
        def segment_embedding(segment):
            audio = Audio()
            start = segment["start"]
            # Whisper overshoots the end timestamp in the last segment
            end = min(duration, segment["end"])
            clip = Segment(start, end)
            waveform, sample_rate = audio.crop(temp_file.name, clip)
            return embedding_model(waveform[None])

        print("starting embedding")
        embeddings = np.zeros(shape=(len(segments), 192))
        for i, segment in enumerate(segments):
            embeddings[i] = segment_embedding(segment)
        embeddings = np.nan_to_num(embeddings)
        print(f'Embedding shape: {embeddings.shape}')

        # Assign speaker labels by clustering the embeddings
        clustering = AgglomerativeClustering(num_speakers).fit(embeddings)
        labels = clustering.labels_
        for i in range(len(segments)):
            segments[i]["speaker"] = 'SPEAKER ' + str(labels[i] + 1)

        # Build the output payload
        output = []
        for segment in segments:
            output.append({
                'start': str(convert_time(segment["start"])),
                'end': str(convert_time(segment["end"])),
                'speaker': segment["speaker"],
                'text': segment["text"]
            })

        print("done with embedding")
        time_end = time.time()
        time_diff = time_end - time_start

        system_info = f"""-----Processing time: {time_diff:.5} seconds-----"""
        print(system_info)

        # Remove the temp file before returning
        os.remove(temp_file.name)

        return Response(
            json=output,
            status=200
        )
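For what it's worth, a small usage sketch of transform_timestamp_list with hypothetical chunks, showing the duration fallback for a missing end timestamp:

# Hypothetical data: the last chunk has no end timestamp, which is
# exactly the case the `duration` fallback above is meant to cover
chunks = [
    {"timestamp": (0.0, 4.2), "text": "olá a todos"},
    {"timestamp": (4.2, None), "text": "bem-vindos"},
]
print(transform_timestamp_list(chunks, duration=7.5))
# [{'start': 0.0, 'end': 4.2, 'text': 'olá a todos'},
#  {'start': 4.2, 'end': 7.5, 'text': 'bem-vindos'}]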

LOGS

01 May, 11:19:36
There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?
01 May, 11:19:37
starting embedding
01 May, 11:19:38
Embedding shape: (12, 192)
01 May, 11:19:38
-----Processing time: 38.261 seconds-----
01 May, 11:19:38
done with embedding

@jkf87 commented May 3, 2023

I am having exactly the same issue too.

@sanchit-gandhi (Owner)

Hey @nachoh8 - I just double-checked your code sample: we shouldn't be using stride_length_s=0.0, since this means there is no overlap between chunks (which will severely degrade the quality of your transcription). Could you try leaving it set to None, so that it defaults to chunk_length_s / 6 = 30 / 6 = 5? This probably explains why only your first batch had timestamps, and not the successive ones.
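Concretely, something like this (a sketch based on your snippet above, with the stride left at its default):

# Leaving stride_length_s unset lets it default to chunk_length_s / 6
# (5 s of overlap for 30 s chunks), which the chunking algorithm needs
# to stitch timestamps across chunk boundaries
res = whisper(audio_file, language="es", return_timestamps=True)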

@phineas-pta

Any update? I'm getting the same error, running on a Google Colab GPU.
