Commit 27f9063

Update moonshine.md
1 parent d199bc0 commit 27f9063

File tree

1 file changed

+68 −27 lines changed

docs/source/en/model_doc/moonshine.md

Lines changed: 68 additions & 27 deletions
@@ -14,35 +14,76 @@ rendered properly in your Markdown viewer.
 
 -->
 
-# Moonshine
-
-<div class="flex flex-wrap space-x-1">
-<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
-<img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
-<img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
+<div style="float: right;">
+    <div class="flex flex-wrap space-x-1">
+        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+        <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
+        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
+    </div>
 </div>
 
-## Overview
-
-The Moonshine model was proposed in [Moonshine: Speech Recognition for Live Transcription and Voice Commands
-](https://arxiv.org/abs/2410.15608) by Nat Jeffries, Evan King, Manjunath Kudlur, Guy Nicholson, James Wang, Pete Warden.
-
-The abstract from the paper is the following:
-
-*This paper introduces Moonshine, a family of speech recognition models optimized for live transcription and voice command processing. Moonshine is based on an encoder-decoder transformer architecture and employs Rotary Position Embedding (RoPE) instead of traditional absolute position embeddings. The model is trained on speech segments of various lengths, but without using zero-padding, leading to greater efficiency for the encoder during inference time. When benchmarked against OpenAI's Whisper tiny-en, Moonshine Tiny demonstrates a 5x reduction in compute requirements for transcribing a 10-second speech segment while incurring no increase in word error rates across standard evaluation datasets. These results highlight Moonshine's potential for real-time and resource-constrained applications.*
-
-Tips:
-
-- Moonshine improves upon Whisper's architecture:
-  1. It uses SwiGLU activation instead of GELU in the decoder layers
-  2. Most importantly, it replaces absolute position embeddings with Rotary Position Embeddings (RoPE). This allows Moonshine to handle audio inputs of any length, unlike Whisper which is restricted to fixed 30-second windows.
-
-This model was contributed by [Eustache Le Bihan (eustlb)](https://huggingface.co/eustlb).
-The original code can be found [here](https://github.com/usefulsensors/moonshine).
-
-## Resources
+# Moonshine
 
-- [Automatic speech recognition task guide](../tasks/asr)
+[Moonshine](https://huggingface.co/papers/2410.15608) is an encoder-decoder speech recognition model optimized for real-time transcription and voice command recognition. Instead of traditional absolute position embeddings, Moonshine uses Rotary Position Embedding (RoPE) to handle speech segments of varying lengths without padding. This improves efficiency during inference, making it ideal for resource-constrained devices.
+
+You can find all the original Moonshine checkpoints under the [Useful Sensors](https://huggingface.co/UsefulSensors) organization.
+
+> [!TIP]
+> Click on the Moonshine models in the right sidebar for more examples of how to apply Moonshine to different speech recognition tasks.
+
+The example below demonstrates how to transcribe speech into text with [`Pipeline`] or the [`AutoModel`] class.
+
+<hfoptions id="usage">
+<hfoption id="Pipeline">
+
+```py
+import torch
+from transformers import pipeline
+
+pipeline = pipeline(
+    task="automatic-speech-recognition",
+    model="UsefulSensors/moonshine-base",
+    torch_dtype=torch.float16,
+    device=0
+)
+pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
+```
+
+</hfoption>
+<hfoption id="AutoModel">
+
+```py
+# pip install datasets
+import torch
+from datasets import load_dataset
+from transformers import AutoProcessor, MoonshineForConditionalGeneration
+
+processor = AutoProcessor.from_pretrained(
+    "UsefulSensors/moonshine-base",
+)
+model = MoonshineForConditionalGeneration.from_pretrained(
+    "UsefulSensors/moonshine-base",
+    torch_dtype=torch.float16,
+    device_map="auto",
+    attn_implementation="sdpa"
+).to("cuda")
+
+ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", split="validation")
+audio_sample = ds[0]["audio"]
+
+input_features = processor(
+    audio_sample["array"],
+    sampling_rate=audio_sample["sampling_rate"],
+    return_tensors="pt"
+)
+input_features = input_features.to("cuda", dtype=torch.float16)
+
+predicted_ids = model.generate(**input_features, cache_implementation="static")
+transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
+transcription[0]
+```
+</hfoption>
+</hfoptions>
 
 ## MoonshineConfig
 
@@ -58,4 +99,4 @@ The original code can be found [here](https://github.com/usefulsensors/moonshine
 
 [[autodoc]] MoonshineForConditionalGeneration
     - forward
-    - generate
+    - generate
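
The rewritten intro attributes Moonshine's variable-length audio handling to Rotary Position Embedding (RoPE). As a companion to the diff above, here is a minimal, self-contained sketch of the RoPE mechanism; it illustrates the general technique, not Moonshine's actual implementation (the `rope` helper, its shapes, and the base frequency are our own illustrative choices):

```py
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate channel pairs of x (seq_len, dim) by position-dependent angles."""
    seq_len, dim = x.shape
    half = dim // 2
    # One rotation frequency per channel pair, decaying geometrically.
    inv_freq = base ** (-torch.arange(half, dtype=torch.float32) / half)
    # angles[p, i] = position p times frequency i
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    # Standard 2D rotation applied to each (x1, x2) channel pair.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Any sequence length works -- no fixed 30-second window as in Whisper.
print(rope(torch.randn(17, 64)).shape)   # torch.Size([17, 64])
print(rope(torch.randn(500, 64)).shape)  # torch.Size([500, 64])
```

Because positions enter only through these relative rotations, nothing in the encoder is tied to a maximum input length, which is the property the new intro highlights.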

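The AutoModel example passes `cache_implementation="static"` to `generate`, which replaces the growing key-value cache with fixed-size tensors. A common follow-up, shown in the Transformers docs for other speech models and assumed here to carry over to Moonshine, is compiling the forward pass so repeated decoding reuses one graph:

```py
import torch

# Continues from the AutoModel example above; assumes `model` and
# `input_features` are already defined. torch.compile pairs well with a
# static cache because tensor shapes stay fixed across decode steps.
model.generation_config.cache_implementation = "static"
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

# First call compiles (slow); subsequent calls reuse the compiled graph.
predicted_ids = model.generate(**input_features)
```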