Modify pipeline_tutorial.mdx (#22726)
generator(model="openai/whisper-large") always raises an error. As the error says, the generator expects an input, such as the .flac file shown above. The generator object also has no parameter called `model`; while some parameters, such as `batch_size`, can be passed to the generator at call time, the model itself has to be specified when instantiating the pipeline, not passed as a parameter to the instance.

I believe the correct form is:
generator = pipeline(model="openai/whisper-large", device=0)
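
For context, a minimal sketch of the corrected usage (the filename `audio.flac` is a placeholder, not taken from the tutorial):

```py
from transformers import pipeline

# The model is specified when the pipeline is instantiated...
generator = pipeline(model="openai/whisper-large", device=0)

# ...and the instance is then called with an input, such as an audio file.
out = generator("audio.flac")
```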
ARKA1112 committed Apr 12, 2023
1 parent 370f0ca commit d87ef00
Showing 1 changed file with 8 additions and 8 deletions.
16 changes: 8 additions & 8 deletions docs/source/en/pipeline_tutorial.mdx
@@ -81,10 +81,10 @@ If you want to iterate over a whole dataset, or want to use it for inference in
In general you can specify parameters anywhere you want:

```py
-generator(model="openai/whisper-large", my_parameter=1)
-out = generate(...) # This will use `my_parameter=1`.
-out = generate(..., my_parameter=2) # This will override and use `my_parameter=2`.
-out = generate(...) # This will go back to using `my_parameter=1`.
+generator = pipeline(model="openai/whisper-large", my_parameter=1)
+out = generator(...) # This will use `my_parameter=1`.
+out = generator(..., my_parameter=2) # This will override and use `my_parameter=2`.
+out = generator(...) # This will go back to using `my_parameter=1`.
```

Let's check out 3 important ones:
@@ -95,14 +95,14 @@ If you use `device=n`, the pipeline automatically puts the model on the specified device.
This will work regardless of whether you are using PyTorch or Tensorflow.

```py
-generator(model="openai/whisper-large", device=0)
+generator = pipeline(model="openai/whisper-large", device=0)
```

If the model is too large for a single GPU, you can set `device_map="auto"` to allow 🤗 [Accelerate](https://huggingface.co/docs/accelerate) to automatically determine how to load and store the model weights.

```py
#!pip install accelerate
-generator(model="openai/whisper-large", device_map="auto")
+generator = pipeline(model="openai/whisper-large", device_map="auto")
```

Note that if `device_map="auto"` is passed, there is no need to add the argument `device=device` when instantiating your `pipeline` as you may encounter some unexpected behavior!
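
To make that caveat concrete, a small sketch (assuming the same Whisper checkpoint as above; the commented-out line shows the combination to avoid):

```py
from transformers import pipeline

# Let 🤗 Accelerate decide where the weights go; pass only `device_map`.
generator = pipeline(model="openai/whisper-large", device_map="auto")

# Avoid passing both placement arguments together; it may behave unexpectedly:
# generator = pipeline(model="openai/whisper-large", device=0, device_map="auto")
```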
@@ -114,7 +114,7 @@ By default, pipelines will not batch inference for reasons explained in detail
But if it works in your use case, you can use:

```py
-generator(model="openai/whisper-large", device=0, batch_size=2)
+generator = pipeline(model="openai/whisper-large", device=0, batch_size=2)
audio_filenames = [f"audio_{i}.flac" for i in range(10)]
texts = generator(audio_filenames)
```
@@ -287,4 +287,4 @@ pipe = pipeline(model="facebook/opt-1.3b", device_map="auto", model_kwargs={"load_in_8bit": True})
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
```

-Note that you can replace the checkpoint with any of the Hugging Face models that support large model loading, such as BLOOM!
+Note that you can replace the checkpoint with any of the Hugging Face models that support large model loading, such as BLOOM!
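
As a hedged illustration of that closing note, the same 8-bit loading pattern with a BLOOM checkpoint swapped in (`bigscience/bloom-560m` is just one example of a checkpoint that supports large model loading):

```py
#!pip install accelerate bitsandbytes
from transformers import pipeline

# Same pattern as above, with a BLOOM checkpoint in place of OPT.
pipe = pipeline(model="bigscience/bloom-560m", device_map="auto", model_kwargs={"load_in_8bit": True})
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
```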
