
Made in Vancouver, Canada by Picovoice

Porcupine is a highly-accurate and lightweight wake word engine. It enables building always-listening voice-enabled applications. It is

  • using deep neural networks trained in real-world environments.
  • compact and computationally-efficient making it perfect for IoT.
  • cross-platform. It is implemented in fixed-point ANSI C. Raspberry Pi (all variants), BeagleBone, Android, iOS, watchOS, Linux (x86_64), Mac (x86_64), Windows (x86_64), and web browsers are supported. Furthermore, support for various ARM Cortex-A microprocessors and ARM Cortex-M microcontrollers is available for enterprise customers.
  • scalable. It can detect multiple always-listening voice commands with no added CPU/memory footprint.
  • self-service. Developers can train custom wake phrases using Picovoice Console.

License

This repository is licensed under Apache 2.0, which allows running the engine on all supported platforms (except microcontrollers) using a set of freely-available models. You may create custom wake-word models using Picovoice Console for non-commercial and personal use free of charge. The free tier only allows model training for x86_64 (Linux, Mac, and Windows).

Custom wake-words for other platforms are only provided with the purchase of the Picovoice enterprise license. To enquire about the Picovoice development and commercial license terms and fees, contact Picovoice.

Use Cases

Porcupine is the right product if you need to detect one or a few simple voice commands. Voice activation (wake word detection), music control (e.g. volume up/down, play next/last), and voice navigation are a few examples.

  • If you need to understand complex and naturally-spoken voice commands within a specific domain, check out Rhino.
  • If you need open-domain transcription, check out Leopard.
  • If you need open-domain transcription with real-time feedback (i.e. partial results), check out Cheetah.

Try It Out

Porcupine in Action

  • Porcupine and Rhino on an ARM Cortex-M4

Performance

A comparison of accuracy and runtime metrics between Porcupine and two other widely-used libraries, PocketSphinx and Snowboy, is provided here. Compared to the better-performing of the two, Porcupine's standard model is 5.4 times more accurate and 6.5 times faster (on a Raspberry Pi 3).

Model Variants

The library in this repository is the standard trim of the engine. The standard trim is suitable for applications running on microprocessors (e.g. Raspberry Pi and BeagleBone) and mobile devices (Android and iOS). Picovoice has developed several trims of the engine targeted at a wide range of applications. These are only available to enterprise customers.

Structure of Repository

Porcupine is shipped as an ANSI C precompiled library. The binary files for supported platforms are located under lib and header files are at include. Bindings are available at binding to facilitate usage from higher-level languages. Demo applications are located at demo. Finally, resources is a placeholder for data used by various applications within the repository.

Running Demo Applications

Python Demos


Install Porcupine using pip. Then, with a working microphone connected to your device, run the following in the terminal:

pvporcupine_mic --keywords porcupine

The engine starts processing the audio input from the microphone in real time and outputs to the terminal when it detects utterances of the wake word "porcupine".

To process audio files (e.g. WAV), run:

pvporcupine_file --input_audio_file_path ${PATH_TO_AN_AUDIO_FILE} --keywords bumblebee

The engine scans the given audio file for occurrences of the keyword "bumblebee". For more information about the Python demos, go to demo/python.
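For reference, below is a minimal sketch of what the file demo does, using the pvporcupine PIP package together with Python's standard library. The WAV path is a placeholder, and the audio must be single-channel, 16-bit PCM at the engine's sample rate.

import struct
import wave

import pvporcupine

keywords = ['picovoice', 'bumblebee']
handle = pvporcupine.create(keywords=keywords)

with wave.open('path/to/audio.wav', 'rb') as f:
    assert f.getframerate() == handle.sample_rate  # Porcupine expects 16 kHz audio
    assert f.getnchannels() == 1 and f.getsampwidth() == 2  # mono, 16-bit PCM
    while True:
        frame = f.readframes(handle.frame_length)
        if len(frame) < 2 * handle.frame_length:  # 2 bytes per 16-bit sample
            break
        pcm = struct.unpack_from('%dh' % handle.frame_length, frame)
        keyword_index = handle.process(pcm)
        if keyword_index >= 0:
            print('detected', keywords[keyword_index])

handle.delete()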


This demo application allows testing Porcupine using your computer's microphone. It opens an input audio stream, monitors it, and logs detection events to the console. Below is an example of running the demo for the hotword "picovoice" from the command line. Replace ${SYSTEM} with the name of the operating system on your machine (e.g. linux, mac, windows, or raspberry-pi).

python3 demo/python/ \
--keyword_file_paths resources/keyword_files/${SYSTEM}/picovoice_${SYSTEM}.ppn

Android Demos

Using Android Studio, open demo/android/Activity as an Android project and then run the application. You will need an Android device (with developer options enabled) connected to your machine.

To learn how to use Porcupine in long-running services, go to demo/android/Service.

iOS Demos

Using Xcode, open demo/ios/PorcupineDemoNoWatch.xcodeproj and run the application. You will need an iOS device connected to your machine and a valid Apple developer account.

JavaScript Demos

You need yarn or npm installed first. Install the demo dependencies by executing either of the following sets of yarn or npm commands from demo/javascript:


Using yarn:

yarn copy
yarn start

Using npm:

npm install
npm install -g copy-files-from-to
npx serve


The last command will launch a local server running the demo. Open http://localhost:5000 in your web browser and follow the instructions on the page.

C Demos

This demo runs on Linux-based systems (e.g. Ubuntu, Raspberry Pi, and BeagleBone) and Mac. You need GCC and ALSA installed to compile it. Compile the demo using

gcc -O3 -o demo/c/porcupine_demo_mic -I include/ demo/c/porcupine_demo_mic.c -ldl -lasound -std=c99

Find the name of the audio input device (microphone) on your computer using arecord -L. Finally, execute the following:

demo/c/porcupine_demo_mic ${LIBRARY_PATH} lib/common/porcupine_params.pv \
resources/keyword_files/${SYSTEM}/porcupine_${SYSTEM}.ppn 0.5 ${INPUT_AUDIO_DEVICE}

Replace ${LIBRARY_PATH} with the path to the appropriate library under lib, ${SYSTEM} with the name of the operating system on your machine (e.g. linux, mac, windows, or raspberry-pi), and ${INPUT_AUDIO_DEVICE} with the name of your microphone device. The demo opens an audio stream and detects utterances of the keyword "porcupine".

To learn more about the file-based C demo, go to demo/c.


Integration

Below are code snippets showcasing how Porcupine can be integrated into different applications.



Python

The PIP package exposes a factory method to create instances of the engine as below:

import pvporcupine

handle = pvporcupine.create(keywords=['picovoice', 'bumblebee'])

The keywords argument is a shorthand for accessing the default keyword files shipped with the library. The available default keyword files can be retrieved via:

import pvporcupine

print(pvporcupine.KEYWORDS)

If you wish to use a non-default keyword file, specify its path as below:

import pvporcupine

handle = pvporcupine.create(keyword_file_paths=['path/to/non/default/keyword/file'])

To learn how to use the created object, continue reading the section below.


binding/python provides a Python binding for the Porcupine library. Below is a quick demonstration of how to construct an instance of it to detect multiple keywords concurrently:

library_path = ...  # path to Porcupine's C library available under lib/
model_file_path = ...  # available at lib/common/porcupine_params.pv
keyword_file_paths = ['path/to/keyword/1', 'path/to/keyword/2', ...]
sensitivities = [0.5, 0.4, ...]
handle = Porcupine(
    library_path,
    model_file_path,
    keyword_file_paths=keyword_file_paths,
    sensitivities=sensitivities)

Sensitivity is the parameter that enables developers to trade miss rate for false alarm rate. It is a floating-point number within [0, 1]. A higher sensitivity reduces the miss rate at the cost of an increased false alarm rate.

Once initialized, the valid sample rate can be obtained using handle.sample_rate and the expected frame length (number of audio samples in an input array) using handle.frame_length. The object can be used to monitor incoming audio as below:

def get_next_audio_frame():
    pass  # grab the next frame of audio (handle.frame_length 16-bit samples)

while True:
    keyword_index = handle.process(get_next_audio_frame())
    if keyword_index >= 0:
        # detection event logic/callback
        pass

Finally, when done, be sure to explicitly release the resources, as the binding class does not rely on the garbage collector.
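The binding exposes a delete method for this:

handle.delete()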



Android

There are two possibilities for integrating Porcupine into an Android application.

Low-Level API

Porcupine provides a binding for Android using JNI. It can be initialized as follows:

    final String modelFilePath = ... // It is available at lib/common/porcupine_params.pv
    final String keywordFilePath = ...
    final float sensitivity = 0.5f;

    Porcupine porcupine = new Porcupine(modelFilePath, keywordFilePath, sensitivity);

Sensitivity is the parameter that enables developers to trade miss rate for false alarm rate. It is a floating-point number within [0, 1]. A higher sensitivity reduces the miss rate at the cost of an increased false alarm rate.

Once initialized, porcupine can be used to monitor incoming audio.

    private short[] getNextAudioFrame();

    while (true) {
        final boolean result = porcupine.process(getNextAudioFrame());
        if (result) {
            // detection event logic/callback
        }
    }

Finally, be sure to explicitly release the resources acquired by porcupine, as the binding class does not rely on the garbage collector for releasing native resources.
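In the Android binding this is done via the delete method:

    porcupine.delete();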


High-Level API

PorcupineManager provides a high-level API for integrating Porcupine into Android applications. It manages all activities related to creating an input audio stream, feeding it into the Porcupine library, and invoking a user-provided detection callback. The class can be initialized as below.

    final String modelFilePath = ... // It is available at lib/common/porcupine_params.pv
    final String keywordFilePath = ...
    final float sensitivity = 0.5f;

    PorcupineManager manager = new PorcupineManager(
            modelFilePath,
            keywordFilePath,
            sensitivity,
            new KeywordCallback() {
                public void run() {
                    // detection event logic/callback
                }
            });

Sensitivity is the parameter that enables developers to trade miss rate for false alarm rate. It is a floating-point number within [0, 1]. A higher sensitivity reduces the miss rate at the cost of an increased false alarm rate.

Once initialized, input audio can be monitored using manager.start(). When done, be sure to stop the manager using manager.stop().


iOS

There are two approaches for integrating Porcupine into an iOS application.


Porcupine is shipped as a precompiled ANSI C library and can be used directly in Swift via module maps. It can be initialized to detect multiple wake words concurrently using:

let modelFilePath: String = ... // It is available at lib/common/porcupine_params.pv
let keywordFilePaths: [String] = ["path/to/keyword/1", "path/to/keyword/2", ...]
let sensitivities: [Float] = [0.3, 0.7, ...]
var handle: OpaquePointer?

let status = pv_porcupine_init(
    modelFilePath,
    Int32(keywordFilePaths.count), // number of different keywords to monitor for
    keywordFilePaths.map { UnsafePointer(strdup($0)) }, // keyword file paths
    sensitivities,
    &handle)
if status != PV_STATUS_SUCCESS {
    // error handling logic
}

Then handle can be used to monitor the incoming audio stream:

func getNextAudioFrame() -> UnsafeMutablePointer<Int16> {
    // grab the next frame of audio (pv_porcupine_frame_length() 16-bit samples)
}

while true {
    let pcm = getNextAudioFrame()
    var keyword_index: Int32 = -1

    let status = pv_porcupine_process(handle, pcm, &keyword_index)
    if status != PV_STATUS_SUCCESS {
        // error handling logic
    }
    if keyword_index >= 0 {
        // detection event logic/callback
    }
}

When finished, release the resources via

pv_porcupine_delete(handle)


The PorcupineManager class manages all activities related to creating an input audio stream, feeding it into the Porcupine library, and invoking a user-provided detection callback. The class can be initialized as below:

let modelFilePath: String = ... // It is available at lib/common/porcupine_params.pv
let keywordCallback: ((WakeWordConfiguration) -> Void) = { _ in
    // detection event callback
}
let wakeWordConfiguration1 = WakeWordConfiguration(
    name: "1",
    filePath: "path/to/keyword/1",
    sensitivity: 0.5)
let wakeWordConfiguration2 = WakeWordConfiguration(
    name: "2",
    filePath: "path/to/keyword/2",
    sensitivity: 0.7)
let configurations = [ wakeWordConfiguration1, wakeWordConfiguration2 ]

let manager = try PorcupineManager(
    modelFilePath: modelFilePath,
    wakeKeywordConfigurations: configurations,
    onDetection: keywordCallback)

Once initialized, input audio can be monitored using manager.startListening(). When done, be sure to stop the manager using manager.stopListening().


JavaScript

Porcupine is available on modern web browsers via WebAssembly. The JavaScript binding makes it trivial to use Porcupine within a JavaScript environment. Instantiate a new instance of the engine using the factory method as below:

let keywordModels = [new Uint8Array([...]), ...];
let sensitivities = new Float32Array([0.5, ...]);

let handle = Porcupine.create(keywordModels, sensitivities);

Once instantiated, handle can process audio via its .process method:

    let getNextAudioFrame = function() {
        // grab the next frame of audio (handle.frameLength 16-bit samples)
    };

    while (true) {
        let keywordIndex = handle.process(getNextAudioFrame());
        if (keywordIndex !== -1) {
            // detection event callback
        }
    }
When done, be sure to release the resources acquired by WebAssembly using .release:
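handle.release();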



C

Porcupine is implemented in ANSI C and therefore can be directly linked into C applications. The header file include/pv_porcupine.h contains the relevant API. An instance of the Porcupine object can be constructed as follows:

const char *model_file_path = ... // The file is available at lib/common/porcupine_params.pv
const char *keyword_file_path = ...
const float sensitivity = 0.5f;

pv_porcupine_t *handle = NULL;

const pv_status_t status = pv_porcupine_init(
    model_file_path,
    1, // number of keywords to monitor for
    &keyword_file_path,
    &sensitivity,
    &handle);

if (status != PV_STATUS_SUCCESS) {
    // error handling logic
}

Sensitivity is the parameter that enables developers to trade miss rate for false alarm rate. It is a floating-point number within [0, 1]. A higher sensitivity reduces the miss rate (false reject rate) at the cost of an increased false alarm rate.

Now the handle can be used to monitor the incoming audio stream. Porcupine accepts single-channel, 16-bit PCM audio. The sample rate can be retrieved using pv_sample_rate(). Porcupine accepts input audio in consecutive chunks (frames); the length of each frame can be retrieved using pv_porcupine_frame_length():

extern const int16_t *get_next_audio_frame(void);

while (true) {
    const int16_t *pcm = get_next_audio_frame();
    int32_t keyword_index;
    const pv_status_t status = pv_porcupine_process(handle, pcm, &keyword_index);
    if (status != PV_STATUS_SUCCESS) {
        // error handling logic
    }
    if (keyword_index != -1) {
        // detection event logic/callback
    }
}

Finally, when done, be sure to release the acquired resources:
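pv_porcupine_delete(handle);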



Releases

v1.8.0 - May 27th, 2020

  • Improved accuracy.
  • Runtime optimization.

v1.7.0 - Feb 13th, 2020

  • Improved accuracy.
  • Runtime optimization.
  • Added support for Raspberry Pi 4.
  • Added service-based Android demo application.
  • Added C demo applications.
  • Updated documentation.

v1.6.0 - April 25th, 2019

  • Improved accuracy across all models.
  • Runtime optimization across all models.
  • Added support for BeagleBone.
  • iOS build can run on simulator now.

v1.5.0 - November 13, 2018

  • Improved optimizer's accuracy.
  • Runtime optimization.
  • Added support for running within web browsers (WebAssembly).

v1.4.0 - July 20, 2018

  • Improved accuracy across all models (specifically compressed variant).
  • Runtime optimizations.
  • Updated documentation.

v1.3.0 - June 19, 2018

  • Added compressed model (200 KB) for deeply-embedded platforms.
  • Improved accuracy.
  • Runtime optimizations and bug fixes.

v1.2.0 - April 21, 2018

  • Runtime optimizations across platforms.
  • Added support for watchOS.

v1.1.0 - April 11, 2018

  • Added multiple command detection capability. Porcupine can now detect multiple commands with virtually no added CPU/memory footprint.

v1.0.0 - March 13, 2018

  • Initial release.


FAQ

[Q] Which Picovoice speech product should I use?

[A] If you need to recognize a single phrase or a number of predefined phrases (dozens or fewer) in an always-listening fashion, then you should use Porcupine (wake word engine). If you need to recognize complex voice commands within a confined and well-defined domain with a limited vocabulary and limited variations of spoken forms (thousands or fewer), then you should use Rhino (speech-to-intent engine). If you need to transcribe free-form speech in an open domain, then you should use Cheetah (speech-to-text engine).

[Q] What are the benefits of implementing voice interfaces on-device, instead of using cloud services?

[A] Privacy, minimal latency, improved reliability, runtime efficiency, and cost savings, to name a few. More detail is available here.

[Q] Does Picovoice technology work in far-field applications?

[A] It depends on many factors including the distance, ambient noise level, reverberation (echo), quality of microphone, and audio frontend used (if any). It is recommended to try out our technology using the freely-available sample models in your environment. Additionally, we often publish open-source benchmarks of our technology in noisy environments 1 2 3. If the target environment is noisy and/or reverberant and the user is few meters away from the microphone, a multi-microphone audio frontend can be beneficial.

[Q] Does Picovoice software work in my target environment and noise conditions?

[A] It depends on a variety of factors. You should test it yourself with the free samples made available on Picovoice GitHub pages. If it does not work, we can fine-tune it for your target environment.

[Q] Does Picovoice software work in presence of noise and reverberation?

[A] Picovoice software is designed to function robustly in the presence of noise and reverberation. We have benchmarked and published the performance results under various noisy conditions 1 2 3. The end-to-end performance depends on the type and amount of noise and reverberation. We highly recommend testing the software using the freely-available models in your target environment and application.

[Q] Can I use Picovoice software for telephony applications?

[A] We expect audio with a 16000Hz sampling rate. PSTN networks usually sample at 8000Hz. It is possible to upsample, but the frequency content above 4000Hz will be missing and performance will be suboptimal. It is possible to train acoustic models for telephony applications if the commercial opportunity justifies it.

[Q] My audio source is 48kHz/44.1kHz. Does Picovoice software support that?

[A] Picovoice software expects a 16000Hz sampling rate. You will need to resample (downsample). Typically, operating systems or sound cards (audio codecs) provide such functionality; otherwise, you will need to implement it.
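For illustration, here is a sketch of downsampling to 16000Hz in Python, assuming NumPy and SciPy are available; resample_poly applies the required anti-aliasing low-pass filter internally.

import numpy as np
from scipy.signal import resample_poly

def to_16k(pcm: np.ndarray, rate: int) -> np.ndarray:
    # 48000 -> 16000 is a 1:3 decimation; 44100 -> 16000 is 160:441
    up, down = {48000: (1, 3), 44100: (160, 441)}[rate]
    resampled = resample_poly(pcm.astype(np.float32), up, down)
    return np.clip(resampled, -32768, 32767).astype(np.int16)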

[Q] Can Picovoice help with building my voice enabled product?

[A] Our core business is software licensing. That being said, we do have a wide variety of expertise internally in voice, software, and hardware. We consider such requests on a case-by-case basis and assist clients who can guarantee a certain minimum licensing volume.

[Q] If I am using GitHub to evaluate the software, do you provide technical support?

[A] Prior to commercial engagement, basic support solely pertaining to software issues or bugs is provided via GitHub issues by the open-source community or a member of our team. We do not offer any free support with integration or support with any platform (operating system or hardware) that is not officially supported via GitHub.

[Q] Why does Picovoice have GitHub repositories?

[A] To facilitate performance evaluation, for commercial prospects, and to enable the open source community to use the technology for personal and non-commercial applications.

[Q] What is the engagement process?

[A] You may use what is available on GitHub while respecting its governing license terms, without engaging with us. This facilitates initial performance evaluation. Subsequently, you may acquire a development license to get access to custom speech models or use the software for development and internal evaluation within a company; the development license is for building a proof-of-concept or prototype. When ready to commercialize your product, you need to acquire a commercial license.

[Q] Does Picovoice offer AEC, VAD, noise suppression, or microphone array beamforming?

[A] No. But we do have partners who provide such algorithms. Please add this to your inquiry when reaching out and we can help to connect you.

[Q] Can you build a voice-enabled app for me?

[A] We do not provide software development services, so most likely the answer is no. However, via a professional services agreement we can help with proofs-of-concept (these will typically be rudimentary apps focused on voice user interface or building the audio pipeline), evaluations on a specific domain/task, integration of SDKs in your app, training of custom acoustic and language models, and porting to custom hardware platforms.

[Q] How do I evaluate Porcupine software performance?

[A] We have benchmarked the performance of Porcupine software rigorously and published the results here. We have also open-sourced the code and audio files used for benchmarking on the same repository to make it possible to reproduce the results. You can also use the code with your own audio files (noise sources collected from your target environment or utterances of your own wake word) to benchmark the performance. Additionally, we have made a set of sample wake words freely available on this GitHub repository on all platforms to facilitate evaluation, testing, and integration.

[Q] Can Porcupine wake word detection software detect non-English keywords?

[A] It depends. If English speakers can easily pronounce the non-English wake word, then we can most likely generate it for you. We recommend sending us a few audio samples including the utterance of the requested wake word so that our engineering team can review and provide feedback on feasibility.

[Q] What is Porcupine’s wake word detection accuracy?

[A] We have benchmarked Porcupine's accuracy extensively against alternatives and published the results here. Porcupine can achieve 91%+ accuracy (detection rate) with less than 1 false alarm in 10 hours in the presence of ambient noise with 10dB SNR at the microphone level.

[Q] Can Porcupine detect the wake word if the speaker is yelling/shouting in anger, excitement, or pain?

[A] Porcupine does not have a profile for recognizing emotionally-coloured utterances such as yelling, dragging, or mumbling. We do require the speaker to vocalize the phrase reasonably clearly.

[Q] Does Porcupine’s detection accuracy depend on the choice of wake word?

[A] Generally speaking, yes; however, it is difficult to quantify the cause and effect accurately. We have published a guide here to help you pick a wake word that achieves optimal performance. Avoid short phrases, and make sure your wake word includes diverse sounds and at least six phonemes. Very long phrases are also not recommended, due to the poor user experience.

[Q] Is there a guideline for picking a wake word?

[A] We have published a guide here to help you pick a wake word that would achieve optimal performance.

[Q] How much CPU and memory does Picovoice wake word detection software consume?

[A] We offer several trims of our wake word detection model. The standard model, which is recommended for most platforms, uses roughly 1.5MB of read-only memory (ROM/flash) and 5% of a single core on a Raspberry Pi 3.

[Q] What should I set the sensitivity value to?

[A] You should pick a sensitivity value that suits your application's requirements. A higher sensitivity value gives a lower miss rate at the expense of a higher false alarm rate. If your application places tighter requirements on false alarms but can tolerate misses, you should lower the sensitivity value.

[Q] What is an ROC curve?

[A] The accuracy of a binary classifier (any decision-making algorithm with a “yes” or “no” output) can be measured by two parameters: false rejection rate (FRR) and false acceptance rate (FAR). A wake word detector is a binary classifier. Hence, we use these metrics to benchmark it.

The detection threshold of a binary classifier can be tuned to balance FRR and FAR. A lower detection threshold yields higher sensitivity. A highly sensitive classifier has a high FAR and a low FRR (i.e. it accepts almost everything). A receiver operating characteristic (ROC) curve plots FRR values against the corresponding FAR values for varying sensitivity values.

To learn more about ROC curves and benchmarking wake word detection, you may read the blog post here and the Porcupine benchmark published here.
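As a toy illustration of how an ROC curve is assembled (the counts below are made-up placeholders): at each sensitivity, count misses over a set of test utterances (FRR) and false alarms over hours of pure noise (FAR), then plot the resulting pairs.

sensitivities = [0.25, 0.50, 0.75]
misses = {0.25: 18, 0.50: 9, 0.75: 4}        # out of 100 test utterances
false_alarms = {0.25: 1, 0.50: 4, 0.75: 11}  # over 10 hours of background noise

# one ROC point per sensitivity value: (FAR per hour, FRR)
roc = [(false_alarms[s] / 10.0, misses[s] / 100.0) for s in sensitivities]
for far_per_hour, frr in roc:
    print('FAR: %.1f/hour, FRR: %.2f' % (far_per_hour, frr))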

[Q] If I use Porcupine wake word detection in my mobile application, does it function when the app is running in the background?

[A] Developers have been able to successfully run Porcupine wake word detection software on iOS and Android in background mode. However, this feature is controlled by the operating system, and we cannot guarantee that this will be possible in future releases of iOS or Android. Please check iOS and Android guidelines, technical documentation, and terms of service before choosing to run Porcupine wake word detection in the background. We recommend using the sample demo applications made available on this repository to test this capability in your end application before acquiring a development or commercial license.

[Q] Which platforms does Porcupine wake word detection support?

[A] Porcupine wake word detection software is supported on Raspberry Pi (all models), BeagleBone, Android, iOS, Linux (x86_64), macOS, Windows, and modern web browsers (excluding Internet Explorer). Additionally, we have support for various ARM Cortex-A and ARM Cortex-M (M4/M7) MCUs by NXP and STMicro.

[Q] What is required to support additional languages?

[A] Porcupine is architected to work with any language, and there are no technical limitations on supporting most languages. However, supporting a new language requires significant effort and investment. The undertaking is a business decision which depends on our current priorities, pipeline, and the scale of commercial opportunity for which the language support is required.

[Q] Does Porcupine wake word detection software work with everyone’s voice (universal) or does it only work with my voice (personal)?

[A] Porcupine wake word detection software is universal and trained to work with a variety of accents and people’s voices.

[Q] Does Porcupine wake word detection work with children’s voices?

[A] Porcupine may not work well with very young children as their voices are different from adult voices. We have made the software available for free evaluation with a set of sample wake words. We recommend that you test the engine with speech of children within your target age range before acquiring a development or commercial license.

[Q] Do users need to pause and remain silent before saying the wake word?

[A] By default, no. But if that is a requirement, we can customize the software (as part of our professional services for you) to require silence either before or after the wake word.

[Q] If my wake phrase is made of two words (e.g., “Hey Siri”), does the software detect if the user inserts silence/pause in between each word?

[A] By default, the engine ignores silence in between the words. However, if that is a requirement, we can customize the software (as part of our standard professional services) to require silence between each word.

[Q] Our marketing team is having difficulty deciding on the choice for wake word, can you help?

[A] Yes, we can help you with the process of choosing the right wake word for your brand. We also offer the option for revision if you change your mind after the purchase of a development license.

[Q] Does Porcupine wake word detection work with accents?

[A] Yes, it works generally well with accents. However, it’s impossible to objectively quantify it. We recommend you try the engine for yourself and perhaps evaluate with an accented dataset of your choice to see if it meets your requirements.

[Q] How does Picovoice wake word detection software work when UK and US wake word pronunciations sometimes differ?

[A] For words that have different pronunciations in UK and US English, like “tomato”, we recommend listening for both pronunciations simultaneously with two separate wake word model files, each targeting a distinct pronunciation.
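As a sketch of this approach in the Python binding, assuming the factory method also accepts per-keyword sensitivities and that the two .ppn files (hypothetical names) were each trained on one pronunciation:

import pvporcupine

handle = pvporcupine.create(
    keyword_file_paths=['tomato_us.ppn', 'tomato_uk.ppn'],
    sensitivities=[0.5, 0.5])

# handle.process(frame) returns 0 or 1 on detection; either index means
# "tomato" was heard, so treat both identically downstream.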

[Q] How many wake words can Porcupine detect simultaneously?

[A] There is no technical limit on the number of wake words the software can listen to simultaneously.

[Q] How much additional memory and CPU is needed for detecting additional wake word or trigger phrases?

[A] Listening to additional wake words does not increase CPU usage. However, it requires 1 KB of memory per additional wake word model.

[Q] Is the Picovoice “Alexa” wake word verified by Amazon?

[A] Amazon Alexa Certification requirements are different for near, mid, and far-field applications (AVS, AMA, etc.). Also, the certification is typically performed on the end hardware, and the outcome depends on many design choices such as microphone, enclosure acoustics, audio front end, and wake word. Picovoice can assist with new product introduction (NPI) and Alexa certification under our technical support package.

[Q] Does Picovoice wake word detection software work with Google Assistant?

[A] Yes. However, your product may have to go through a certification procedure with Google. Please check Google’s guidelines and terms of service for related information.

[Q] Can you use Picovoice wake word detection software with Cortana, IBM Watson, or Samsung Bixby?

[A] Yes, Picovoice can generate any third-party wake words at your request. However, you are responsible for any necessary integration with such platforms and potential areas of compliance.

[Q] What’s the power consumption of Picovoice wake word detection engine?

[A] The absolute power consumption (in wattage) depends on numerous factors such as processor architecture, vendor, fabrication technology, and system level power management design. If your design requires low power consumption in the (sub) milliwatt range for always-listening wake word detection, you will likely need to consider MCU (ARM Cortex-M) or DSP implementation.

[Q] Can Porcupine distinguish words with similar pronunciation?

[A] Strictly rejecting words with similar pronunciations would have side effects, such as rejecting accented pronunciations and a higher rejection rate in noisy conditions. By lowering the detection sensitivity you can achieve lower false acceptance of similar-sounding words at the cost of a higher miss rate.

[Q] How can I run Picovoice software on my ARM-based MPU running a Yocto customized embedded Linux?

[A] As part of our standard professional services, we can port our software to custom platforms for a one-time engineering fee and prepaid license royalties. We review these on a case-by-case basis and provide a quotation based on the complexity and type of the platform. Please note that the port must be performed in-house by our engineering team, since it requires direct access to our IP, proprietary technology, and toolchains. We would also require at least one development board running your target OS to perform this task.

[Q] What is your software licensing model?

[A] The software published in this repository is available under Apache 2.0. If you need custom wake word models on a specific platform for commercial development (building a PoC, prototyping, or product development), you need to acquire a development license. To install and use Picovoice software with custom wake word models in commercial products, you need to acquire a commercial license. If you are developing a product within a company and working towards commercialization, please reach out to us to acquire the appropriate license by filling out the form here.

[Q] Can I use wake word models generated by the Picovoice Console in a commercial product?

[A] The Picovoice Console and keyword files it generates can only be used for non-commercial and evaluation purposes. If you are developing a commercial product, you must acquire a development license. To acquire a development license fill out the form here.
