- A C99-compatible compiler is required to build the demos.
- CMake version 3.13 or higher is required.
- For Windows only, MinGW is required to build the demo.
Rhino requires a valid Picovoice AccessKey at initialization. The AccessKey acts as your credentials when using Rhino SDKs. You can get your AccessKey for free; make sure to keep it secret. Sign up or log in to the Picovoice Console to get your AccessKey.
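For reference, the AccessKey is the first argument to the Rhino C API's initializer. The sketch below shows where it goes; the exact signature lives in the SDK's pv_rhino.h and may differ slightly between versions, and the context path shown is a placeholder.

```c
#include <stdio.h>

#include "pv_rhino.h"

int main(void) {
    pv_rhino_t *rhino = NULL;

    // The AccessKey from Picovoice Console is supplied at initialization.
    pv_status_t status = pv_rhino_init(
            "${ACCESS_KEY}",               // your Picovoice AccessKey
            "lib/common/rhino_params.pv",  // model file shipped with the SDK
            "path/to/context.rhn",         // placeholder context file
            0.5f,                          // sensitivity in [0, 1]
            1.0f,                          // endpoint duration in seconds
            true,                          // require an endpoint (silence) before finalizing
            &rhino);
    if (status != PV_STATUS_SUCCESS) {
        fprintf(stderr, "failed to initialize Rhino: %s\n", pv_status_to_string(status));
        return 1;
    }

    pv_rhino_delete(rhino);
    return 0;
}
```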
cmake -S demo/c/. -B demo/c/build && cmake --build demo/c/build --target rhino_demo_mic
cmake -S demo/c/. -B demo/c/build -G "MinGW Makefiles" && cmake --build demo/c/build --target rhino_demo_mic
Running the executable without any command-line arguments prints the usage info to the console.
./demo/c/build/rhino_demo_mic
Usage : ./demo/c/build/rhino_demo_mic -a ACCESS_KEY -l LIBRARY_PATH -m MODEL_PATH -c CONTEXT_PATH [-d AUDIO_DEVICE_INDEX] [-t SENSITIVITY] [-u, --endpoint_duration_sec] [-e, --require_endpoint (true,false)]
./demo/c/build/rhino_demo_mic [-s, --show_audio_devices]
.\\demo\\c\\build\\rhino_demo_mic.exe
Usage : .\\demo\\c\\build\\rhino_demo_mic.exe -a ACCESS_KEY -l LIBRARY_PATH -m MODEL_PATH -c CONTEXT_PATH [-d AUDIO_DEVICE_INDEX] [-t SENSITIVITY] [-u, --endpoint_duration_sec] [-e, --require_endpoint (true,false)]
.\\demo\\c\\build\\rhino_demo_mic.exe [-s, --show_audio_devices]
The following commands print the available audio input devices to the console.
./demo/c/build/rhino_demo_mic --show_audio_devices
.\\demo\\c\\build\\rhino_demo_mic.exe --show_audio_devices
The following commands start a microphone audio stream and infer commands within the context of a smart lighting system. Replace ${AUDIO_DEVICE_INDEX} with the index of the audio device and ${ACCESS_KEY} with your Picovoice AccessKey.
./demo/c/build/rhino_demo_mic -l lib/linux/x86_64/libpv_rhino.so -m lib/common/rhino_params.pv \
-c resources/contexts/linux/smart_lighting_linux.rhn -d ${AUDIO_DEVICE_INDEX} -a ${ACCESS_KEY}
./demo/c/build/rhino_demo_mic -l lib/mac/${PROCESSOR}/libpv_rhino.dylib -m lib/common/rhino_params.pv \
-c resources/contexts/mac/smart_lighting_mac.rhn -d ${AUDIO_DEVICE_INDEX} -a ${ACCESS_KEY}
Replace ${PROCESSOR} with one of the Raspberry Pi processors defined here (e.g., for Raspberry Pi 4 this would be "cortex-a72") and run:
./demo/c/build/rhino_demo_mic -l lib/raspberry-pi/${PROCESSOR}/libpv_rhino.so -m lib/common/rhino_params.pv \
-c resources/contexts/raspberry-pi/smart_lighting_raspberry-pi.rhn -d ${AUDIO_DEVICE_INDEX} -a ${ACCESS_KEY}
.\\demo\\c\\build\\rhino_demo_mic.exe -l lib/windows/amd64/libpv_rhino.dll -m lib/common/rhino_params.pv -c resources/contexts/windows/smart_lighting_windows.rhn -d ${AUDIO_DEVICE_INDEX} -a ${ACCESS_KEY}
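Under the hood, the mic demo feeds the engine one frame of audio at a time until Rhino reports that inference is finalized. The sketch below outlines that loop; it links against libpv_rhino directly (the real demo dynamically loads the library passed via -l), and read_next_audio_frame() is a hypothetical stand-in for the demo's microphone capture code.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#include "pv_rhino.h"

// Hypothetical stand-in for microphone capture: fills `pcm` with
// pv_rhino_frame_length() samples of 16-bit, 16 kHz, single-channel audio.
extern bool read_next_audio_frame(int16_t *pcm);

void run_inference_loop(pv_rhino_t *rhino) {
    const int32_t frame_length = pv_rhino_frame_length();
    int16_t *pcm = malloc(frame_length * sizeof(int16_t));
    if (!pcm) {
        return;
    }

    bool is_finalized = false;
    while (!is_finalized && read_next_audio_frame(pcm)) {
        // Each call consumes exactly one frame; `is_finalized` flips to true
        // once Rhino has heard enough to conclude the inference.
        if (pv_rhino_process(rhino, pcm, &is_finalized) != PV_STATUS_SUCCESS) {
            break;
        }
    }
    // When `is_finalized` is true, query the result (see the sketch further below).

    free(pcm);
}
```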
Once the demo is running, it starts listening for commands within the given context. For example, you can say:
"Turn on the lights."
If understood correctly, the following prints to the console:
{
    'is_understood' : 'true',
    'intent' : 'changeLightState',
    'slots' : {
        'state' : 'on',
    }
}
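Once processing is finalized, the demo queries the result and prints it in the format shown above. A minimal sketch of that step follows, using the accessor functions declared in pv_rhino.h; signatures may vary slightly between SDK versions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#include "pv_rhino.h"

// Prints the inference result after pv_rhino_process() has reported is_finalized == true.
void print_inference(pv_rhino_t *rhino) {
    bool is_understood = false;
    if (pv_rhino_is_understood(rhino, &is_understood) != PV_STATUS_SUCCESS) {
        return;
    }

    if (is_understood) {
        const char *intent = NULL;
        int32_t num_slots = 0;
        const char **slots = NULL;
        const char **values = NULL;
        if (pv_rhino_get_intent(rhino, &intent, &num_slots, &slots, &values) != PV_STATUS_SUCCESS) {
            return;
        }

        printf("{\n    'is_understood' : 'true',\n    'intent' : '%s',\n    'slots' : {\n", intent);
        for (int32_t i = 0; i < num_slots; i++) {
            printf("        '%s' : '%s',\n", slots[i], values[i]);
        }
        printf("    }\n}\n");

        // The slot and value arrays are owned by the engine and must be released.
        pv_rhino_free_slots_and_values(rhino, slots, values);
    } else {
        printf("{\n    'is_understood' : 'false'\n}\n");
    }

    // Reset so the engine is ready for the next spoken command.
    pv_rhino_reset(rhino);
}
```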
cmake -S demo/c/. -B demo/c/build && cmake --build demo/c/build --target rhino_demo_file
Running the executable without any command-line arguments prints the usage info to the console.
./demo/c/build/rhino_demo_file
Usage : ./demo/c/build/rhino_demo_file -a ACCESS_KEY -l LIBRARY_PATH -m MODEL_PATH -c CONTEXT_PATH -w WAV_PATH [-t SENSITIVITY] [-u, --endpoint_duration_sec] [-e, --require_endpoint (true,false)]
.\\demo\\c\\build\\rhino_demo_file.exe
usage : .\\demo\\c\\build\\rhino_demo_file.exe -a ACCESS_KEY -l LIBRARY_PATH -m MODEL_PATH -c CONTEXT_PATH -w WAV_PATH [-t SENSITIVITY] [-u, --endpoint_duration_sec] [-e, --require_endpoint (true,false)]
Note that the demo expects a single-channel WAV file with a sampling rate of 16 kHz and 16-bit linear PCM encoding. The demo does not perform any format validation; providing a file in a different format will simply produce incorrect results.
The following processes a WAV file under the audio_samples directory and infers the intent in the context of a coffee-maker system. Replace ${ACCESS_KEY} with your Picovoice AccessKey.
./demo/c/build/rhino_demo_file -l lib/linux/x86_64/libpv_rhino.so -m lib/common/rhino_params.pv \
-c resources/contexts/linux/coffee_maker_linux.rhn -w resources/audio_samples/test_within_context.wav -a ${ACCESS_KEY}
./demo/c/build/rhino_demo_file -l lib/mac/${PROCESSOR}/libpv_rhino.dylib -m lib/common/rhino_params.pv \
-c resources/contexts/mac/coffee_maker_mac.rhn -w resources/audio_samples/test_within_context.wav -a ${ACCESS_KEY}
Replace ${PROCESSOR} with one of the Raspberry Pi processors defined here (e.g., for Raspberry Pi 4 this would be "cortex-a72") and run:
./demo/c/build/rhino_demo_file -l lib/raspberry-pi/${PROCESSOR}/libpv_rhino.so -m lib/common/rhino_params.pv \
-c resources/contexts/raspberry-pi/coffee_maker_raspberry-pi.rhn -w resources/audio_samples/test_within_context.wav -a ${ACCESS_KEY}
.\\demo\\c\\build\\rhino_demo_file.exe -l lib/windows/amd64/libpv_rhino.dll -m lib/common/rhino_params.pv -c resources/contexts/windows/coffee_maker_windows.rhn -w resources/audio_samples/test_within_context.wav -a ${ACCESS_KEY}
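As noted above, the file demo assumes 16 kHz, 16-bit, single-channel linear PCM. A rough sketch of how such a file can be streamed into Rhino frame by frame is shown below; to stay short it skips a fixed 44-byte WAV header rather than parsing RIFF chunks, which is a simplification rather than what the demo actually does.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#include "pv_rhino.h"

// Feeds a 16 kHz, 16-bit, mono PCM WAV file to an initialized Rhino handle.
// Simplification: assumes a canonical 44-byte header instead of parsing RIFF chunks.
static void process_wav(pv_rhino_t *rhino, const char *wav_path) {
    FILE *f = fopen(wav_path, "rb");
    if (!f) {
        fprintf(stderr, "failed to open '%s'\n", wav_path);
        return;
    }
    fseek(f, 44, SEEK_SET);

    const int32_t frame_length = pv_rhino_frame_length();
    int16_t *pcm = malloc(frame_length * sizeof(int16_t));
    if (!pcm) {
        fclose(f);
        return;
    }

    bool is_finalized = false;
    while (!is_finalized &&
           fread(pcm, sizeof(int16_t), frame_length, f) == (size_t) frame_length) {
        if (pv_rhino_process(rhino, pcm, &is_finalized) != PV_STATUS_SUCCESS) {
            break;
        }
    }
    // When `is_finalized` is true, query the intent as in the earlier sketch.

    free(pcm);
    fclose(f);
}
```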
The following prints to the console:
{
    'is_understood' : 'true',
    'intent' : 'orderBeverage',
    'slots' : {
        'size' : 'medium',
        'numberOfShots' : 'double shot',
        'beverage' : 'americano',
    }
}
real time factor : 0.011
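The real time factor on the last line is the ratio of processing time to audio duration, so 0.011 means roughly 11 ms of compute per second of audio. A sketch of how such a figure can be computed (the demo's own timing code may differ):

```c
#include <stdint.h>
#include <time.h>

#include "pv_rhino.h"

// Real time factor: CPU time spent processing divided by the duration of the
// audio, where `start` and `end` come from clock() taken around the processing loop.
static double real_time_factor(clock_t start, clock_t end, int64_t total_samples) {
    const double processing_sec = (double) (end - start) / CLOCKS_PER_SEC;
    const double audio_sec = (double) total_samples / pv_sample_rate();
    return processing_sec / audio_sec;
}
```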