Hi!
I have been trying for a long time to run inference with my fine-tuned model, but it keeps throwing an error saying that the tokenizer is missing.
Steps to reproduce:
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="nodlehs/whisper_finetune")  # change to "your-username/the-name-you-picked"

def transcribe(audio):
    text = pipe(audio)["text"]
    return text
It seems I am missing a tokenizer file, but no such file was uploaded while running the Whisper fine-tune.
Could someone please help me out?
P.S. This is my model on HF: https://huggingface.co/nodlehs/whisper_finetune/tree/main
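For reference, a quick way to confirm what the repo actually contains is to list its files with huggingface_hub (a minimal sketch; the repo id is taken from the link above):

from huggingface_hub import list_repo_files

# Print every file currently stored in the model repo on the Hub. If no tokenizer
# files (e.g. tokenizer_config.json, vocab.json, merges.txt) appear here, the
# pipeline has nothing to build the tokenizer from.
print(list_repo_files("nodlehs/whisper_finetune"))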
Hi @skanda1005 - There appears to be an issue with the way your model was uploaded to the Hub: the tokenizer files are missing. It would be a good idea to check that all the required files are present in your model repo.
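As a hedged sketch of how that could be fixed (not necessarily how the model was trained): if the fine-tune started from an official Whisper checkpoint, the base checkpoint's processor (tokenizer + feature extractor) can be pushed into the fine-tuned repo, or passed to the pipeline explicitly. The base model name "openai/whisper-small" below is an assumption; substitute whichever checkpoint was actually used.

from transformers import WhisperProcessor, pipeline

base = "openai/whisper-small"      # assumption: the checkpoint the fine-tune started from
repo = "nodlehs/whisper_finetune"  # the fine-tuned model repo from this issue

# Option 1: repair the repo by uploading the missing tokenizer/preprocessor files
# (requires being logged in with write access to the repo).
processor = WhisperProcessor.from_pretrained(base)
processor.push_to_hub(repo)

# Option 2: as a quick workaround, point the pipeline at the base checkpoint's
# tokenizer and feature extractor explicitly instead of modifying the repo.
pipe = pipeline(
    "automatic-speech-recognition",
    model=repo,
    tokenizer=base,
    feature_extractor=base,
)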