The Whisper library offers a convenient way to transcribe or translate audio files from Node.js. It is built on the whisper.cpp C++ library, which performs the underlying audio processing, and is exposed to Node.js applications through a native addon.
Before installing, make sure the following prerequisites are available:

- C++ Compiler: A compatible C++ compiler must be installed on your system. Refer to your operating system or compiler documentation for installation instructions.
- CMake: CMake is required to build the whisper.cpp library. You can download it from the official website: https://cmake.org
To install the Whisper library, follow these steps:
- Ensure you have Node.js installed on your machine. You can download it from the official Node.js website: https://nodejs.org
- Open your terminal or command prompt.
- Navigate to your project directory.
- Run the following command to install the Whisper library:

  ```bash
  npm install @tech9app/whisper.js
  ```
To transcribe or translate an audio file using the Whisper library, follow these steps:
- Import the necessary types and the `whisper` function from the Whisper library:

  ```typescript
  import { WhisperParams, SpeechData, whisper } from '@tech9app/whisper.js';
  ```

- Define the parameters for the Whisper process by creating an instance of `WhisperParams`. This object holds the options and settings for the transcription or translation process. For example:

  ```typescript
  const whisperParams: WhisperParams = {
    language: 'en',
    model: '/path/to/models',
    fname_inp: '/path/to/input/file.wav',
    output_txt: true,
  };
  ```

- Call the `whisper` function, passing `whisperParams` as an argument. The `whisper` function returns a promise that resolves to an array of `SpeechData` objects, or `null` if the process fails. For example (inside an `async` function):

  ```typescript
  const speechData: SpeechData[] | null = await whisper(whisperParams);
  ```

- Handle the result of the whisper process as needed. The result is an array of `SpeechData` objects, where each object represents a segment of speech with its start and end timestamps and the corresponding speech content.
Note: Make sure to handle any potential errors or exceptions during the process.
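For example, here is a minimal error-handling sketch. It relies only on the documented API (`whisper` resolving to `SpeechData[]` or `null`); the `transcribe` wrapper is a hypothetical helper, not part of the library:

```typescript
import { WhisperParams, SpeechData, whisper } from '@tech9app/whisper.js';

// Hypothetical wrapper that turns the library's null result into an error.
async function transcribe(params: WhisperParams): Promise<SpeechData[]> {
  try {
    const result = await whisper(params);
    if (result === null) {
      // The library signals failure by resolving to null.
      throw new Error('Whisper process failed');
    }
    return result;
  } catch (err) {
    // The promise may also reject, e.g. on a missing model or input file.
    console.error('Transcription error:', err);
    throw err;
  }
}
```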
The `WhisperParams` interface represents the parameters for the Whisper process. It includes the following properties:
| Parameter | Description | Default Value |
|---|---|---|
| n_threads | The number of threads to use for processing | System-dependent |
| n_processors | The number of processors to use for processing | 1 |
| offset_t_ms | The time offset in milliseconds | 0 |
| offset_n | The segment index offset | 0 |
| duration_ms | The duration of audio to process, in milliseconds | 0 |
| max_context | The maximum number of text context tokens to store | -1 |
| max_len | The maximum segment length in characters | 0 |
| best_of | The number of best candidates to keep | 5 |
| beam_size | The beam size for beam search | -1 |
| word_thold | The word timestamp probability threshold | 0.01 |
| entropy_thold | The entropy threshold for decoder failure | 2.4 |
| logprob_thold | The log probability threshold for decoder failure | -1.0 |
| speed_up | Whether to enable speed-up optimization | false |
| translate | Whether to translate the output to English | false |
| diarize | Whether to enable speaker diarization | false |
| output_txt | Whether to output as text | false |
| output_vtt | Whether to output as VTT | false |
| output_srt | Whether to output as SRT | false |
| output_wts | Whether to output as WTS | false |
| output_csv | Whether to output as CSV | false |
| print_special | Whether to print special tokens | false |
| print_colors | Whether to print with colors | false |
| print_progress | Whether to print progress information | false |
| no_timestamps | Whether to exclude timestamps | false |
| language | The language to use for processing | undefined |
| prompt | The initial prompt | undefined |
| model (required) | The path to the model file | - |
| fname_inp (required) | The path to the input audio file | - |
| fname_out | The path to the output file | - |
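As a further illustration, the following sketch combines several of the parameters above to translate an audio file and also emit an SRT subtitle file. The parameter names come from the table; the values and file paths are placeholders:

```typescript
import { WhisperParams } from '@tech9app/whisper.js';

// All parameter names are documented above; the values are illustrative.
const translateParams: WhisperParams = {
  n_threads: 4,        // use four processing threads
  translate: true,     // translate the audio (Whisper's translate task targets English)
  output_srt: true,    // also write the result as an SRT subtitle file
  model: '/path/to/models',             // path to the model file
  fname_inp: '/path/to/input/file.wav', // path to the input audio file
};
```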
The `SpeechData` type represents the speech data of a single segment. It includes the following properties:
| Field | Description |
|---|---|
| start | The start timestamp of the segment |
| end | The end timestamp of the segment |
| speech | The speech content of the segment |
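For instance, a small helper can format each segment using these three fields. This sketch relies only on the documented fields; the exact timestamp format is determined by the library:

```typescript
import { SpeechData } from '@tech9app/whisper.js';

// Hypothetical helper: print one line per transcribed segment.
function printSegments(segments: SpeechData[]): void {
  for (const segment of segments) {
    // start/end are the segment timestamps; speech is the transcribed content.
    console.log(`[${segment.start} --> ${segment.end}] ${segment.speech}`);
  }
}
```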
The `whisper` function is the library's main entry point. It processes audio based on the provided parameters and returns a promise that resolves to an array of `SpeechData` objects, or `null` if the process fails:

```typescript
whisper(params: WhisperParams): Promise<Array<SpeechData> | null>
```
Here's an example usage of the Whisper library:

```typescript
import { WhisperParams, SpeechData, whisper } from '@tech9app/whisper.js';

const whisperParams: WhisperParams = {
  language: 'en',
  model: '/path/to/models',
  fname_inp: '/path/to/input/file.wav',
  output_txt: true,
};

whisper(whisperParams).then((result: Array<SpeechData> | null) => {
  console.log('Result from whisper:', result);
});
```
In this example, the whisperParams object specifies the parameters for the Whisper process, such as the language, model path, input file path, and output format. The whisper function is called with these parameters, and the resulting promise is handled to obtain the transcribed speech segments.
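The same flow can also be written with async/await. This is a sketch under the same assumptions (the documented `whisper` API); the explicit `null` check mirrors the library's failure signal:

```typescript
import { WhisperParams, SpeechData, whisper } from '@tech9app/whisper.js';

async function main(): Promise<void> {
  const whisperParams: WhisperParams = {
    language: 'en',
    model: '/path/to/models',
    fname_inp: '/path/to/input/file.wav',
    output_txt: true,
  };

  // whisper resolves to null when the underlying process fails.
  const result: SpeechData[] | null = await whisper(whisperParams);
  if (result === null) {
    console.error('Whisper process failed');
    return;
  }
  console.log(`Transcribed ${result.length} segment(s)`);
}

main();
```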
Whisper.js also includes a download utility that lets you download a model. It supports the following options:

- `-m, --modelName <modelName>`: Specify the model name.
- `-p, --storagePath <storagePath>`: Specify the storage path.
- `-h, --help`: Display help for the command.
To use the Whisper.js Download Utility, make sure you have Node.js installed on your system. Then, open your terminal and run the following command:

```bash
npx @tech9app/whisper.js download -m <modelName> -p <storagePath>
```

Replace `<modelName>` with the desired model name and `<storagePath>` with the desired storage path.
The available models and their approximate resource requirements are:

| Model | Disk | RAM |
|---|---|---|
| tiny | 75 MB | ~390 MB |
| tiny.en | 75 MB | ~390 MB |
| base | 142 MB | ~500 MB |
| base.en | 142 MB | ~500 MB |
| small | 466 MB | ~1.0 GB |
| small.en | 466 MB | ~1.0 GB |
| medium | 1.5 GB | ~2.6 GB |
| medium.en | 1.5 GB | ~2.6 GB |
| large-v1 | 2.9 GB | ~4.7 GB |
| large | 2.9 GB | ~4.7 GB |
Download the "base.en" model and store it in the "models" directory:
npx @tech9app/whisper.js download -m base.en -p models
This library is released under the MIT License. See the LICENSE file for more details.