59 changes: 40 additions & 19 deletions fern/customization/speech-configuration.mdx

## Overview

Speech configuration lets you control exactly when your assistant starts and stops speaking during a conversation. By tuning these settings, you can make your assistant feel more natural, avoid interrupting the customer, and reduce awkward pauses.

<Note>
  Speech speed can currently be adjusted only with PlayHT, via the `speed` field. Other providers do not support speed control.
</Note>

The two main components are:

- **Speaking Plan**: Controls when the assistant begins speaking after the customer finishes or pauses.
- **Stop Speaking Plan**: Controls when the assistant stops speaking if the customer starts talking.

Fine-tuning these plans helps you adapt the assistant's responsiveness to your use case—whether you want fast, snappy replies or a more patient, human-like conversation flow.

<Note>Currently, these configurations can only be set via API.</Note>

The rest of this page explains each setting and provides practical examples for different scenarios.

## Start Speaking Plan

This plan defines the parameters for when the assistant begins speaking after the customer pauses or finishes.

- **Wait Time Before Speaking**: You can set how long the assistant waits before speaking after the customer finishes. The default is 0.4 seconds, but you can increase it if the assistant is speaking too soon, or decrease it if there's too much delay.
  **Example:** For tech support calls, set `waitSeconds` to more than 1.0 seconds to give customers time to complete their thoughts, even if they pause in between.
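As a sketch, that tech-support configuration could look like this (1.2 is an illustrative value; the field placement mirrors the Stop Speaking Plan snippet later on this page):

```json
"startSpeakingPlan": {
  "waitSeconds": 1.2
}
```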

- **Smart Endpointing Plan**: This feature uses advanced processing to detect when the customer has truly finished speaking, especially if they pause mid-thought.

In general, turn-taking includes the following tasks:

- **End-of-turn prediction** - predicting when the current speaker is likely to finish their turn
- **Backchannel prediction** - detecting moments where a listener may provide short verbal acknowledgments like "uh-huh", "yeah", etc. to show engagement, without intending to take over the speaking turn.

We offer different providers that can be audio-based, text-based, or audio-text based:

**Audio-based providers:**

- **Krisp**: Audio-based model that analyzes prosodic and acoustic features such as changes in intonation, pitch, and rhythm to detect when users finish speaking. Since it's audio-based, it always notifies when the user is done speaking, even for brief acknowledgments. Vapi offers configurable acknowledgement words and stop speaking plan settings to handle this properly.

Configure Krisp with a threshold between 0 and 1 (default 0.5), where 1 means the user definitely stopped speaking and 0 means they're still speaking. Use lower values for snappier conversations and higher values for more conservative detection.

When interacting with an AI agent, users may genuinely want to interrupt to ask a question or shift the conversation, or they might simply be using backchannel cues like "right" or "okay" to signal they're actively listening. The core challenge lies in distinguishing meaningful interruptions from casual acknowledgments. Since the audio-based model signals end-of-turn after each word, configure the stop speaking plan with the right number of words to interrupt, interruption settings, and acknowledgement phrases to handle backchanneling properly.
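A minimal Krisp endpointing sketch using the default threshold described above:

```json
"startSpeakingPlan": {
  "smartEndpointingPlan": {
    "provider": "krisp",
    "threshold": 0.5
  }
}
```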

**Audio-text based providers:**

- **Assembly**: Transcriber that also reports end-of-turn detection. To use Assembly, choose it as your transcriber without setting a separate smart endpointing plan. As transcripts arrive, we consider the `end_of_turn` flag that Assembly sends to mark the end-of-turn, stream to the LLM, and generate a response.
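As a sketch, choosing Assembly as the transcriber (with no `smartEndpointingPlan` set) might look like the following; the turn-detection values shown are the balanced settings from the voice pipeline configuration docs:

```json
"transcriber": {
  "provider": "assembly",
  "endOfTurnConfidenceThreshold": 0.4,
  "minEndOfTurnSilenceWhenConfident": 400,
  "maxTurnSilence": 1280
}
```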

**Text-based providers:**

- **Smart Endpointing Plan**: Detects when the customer has truly finished speaking, especially if they pause mid-thought. It can be configured in three ways:
- **Off**: Disabled by default
- **LiveKit**: Recommended for English conversations as it provides the most sophisticated solution for detecting natural speech patterns and pauses. LiveKit can be fine-tuned using the `waitFunction` parameter to adjust response timing based on the probability that the user is still speaking.
- **Vapi**: Recommended for non-English conversations or as an alternative when LiveKit isn't suitable

**LiveKit Smart Endpointing Configuration:**
When using LiveKit, you can customize the `waitFunction` parameter which determines how long the bot will wait to start speaking based on the likelihood that the user has finished speaking:

```
waitFunction: "200 + 8000 * x"
```

This function maps probabilities (0-1) to milliseconds of wait time. A probability of 0 means high confidence the caller has stopped speaking, while 1 means high confidence they're still speaking. The default function (`200 + 8000 * x`) creates a wait time between 200ms (when x=0) and 8200ms (when x=1). You can customize this with your own mathematical expression, such as `4000 * (1 - cos(pi * x))` for a different response curve.
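Putting this together, a LiveKit endpointing sketch with a custom `waitFunction` (the nesting under `startSpeakingPlan` is assumed to match the Krisp example; verify against the API reference):

```json
"startSpeakingPlan": {
  "smartEndpointingPlan": {
    "provider": "livekit",
    "waitFunction": "200 + 8000 * x"
  }
}
```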

**Example:** In insurance claims, smart endpointing helps avoid interruptions while customers think through complex responses. For instance, when the assistant asks "do you want a loan," the system can intelligently wait for the complete response rather than interrupting after the initial "yes" or "no." For responses requiring number sequences like "What's your account number?", the system can detect natural pauses between digits without prematurely ending the customer's turn to speak.

- **Transcription-Based Detection**: Customize how the assistant determines that the customer has stopped speaking based on what they're saying. This offers more control over the timing. **Example:** When a customer says, "My account number is 123456789, I want to transfer $500."
  - The system detects the number "123456789" and waits for 0.5 seconds (`waitSeconds`) to ensure the customer isn't still speaking.
  - If the customer finishes with an additional line, "I want to transfer $500.", the system uses `onPunctuationSeconds` to confirm the end of the speech and then proceeds with processing the request.
  - If the customer has been silent for a while and has already finished speaking, but the transcriber isn't confident enough to punctuate the transcription, `onNoPunctuationSeconds` is used, with a value of 1.5 seconds.
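A sketch of these transcription-based timeouts as a plan object; the `transcriptionEndpointingPlan` and `onNumberSeconds` names are assumptions based on the field names above, and the 0.1 value for `onPunctuationSeconds` is illustrative, so verify both against the API reference:

```json
"startSpeakingPlan": {
  "transcriptionEndpointingPlan": {
    "onNumberSeconds": 0.5,
    "onPunctuationSeconds": 0.1,
    "onNoPunctuationSeconds": 1.5
  }
}
```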

## Stop Speaking Plan

The Stop Speaking Plan defines when the assistant stops talking after detecting customer speech.

- **Words to Stop Speaking**: Define how many words the customer needs to say before the assistant stops talking. If you want immediate reaction, set this to 0. Increase it to avoid interruptions by brief acknowledgments like "okay" or "right". **Example:** While setting an appointment with a clinic, set `numWords` to 2-3 words to allow customers to finish brief clarifications without triggering interruptions.

- **Voice Activity Detection**: Adjust how long the customer needs to be speaking before the assistant stops. The default is 0.2 seconds, but you can tweak this to balance responsiveness and avoid false triggers.
  **Example:** For a banking call center, setting a higher `voiceSeconds` value ensures accuracy by reducing false positives. This avoids interruptions caused by background sounds, even if it slightly delays the detection of speech onset. This tradeoff is essential to ensure the assistant processes only correct and intended information.

- **Pause Before Resuming**: Control how long the assistant waits before starting to talk again after being interrupted. The default is 1 second, but you can adjust it depending on how quickly the assistant should resume.
  **Example:** For quick queries (e.g., "What's the total order value in my cart?"), set `backoffSeconds` to 1 second.

Here's a code snippet for the Stop Speaking Plan:

```json
"stopSpeakingPlan": {
"numWords": 0,
"voiceSeconds": 0.2,
"backoffSeconds": 1
"backoffSeconds": 1
}
```


## Considerations for Configuration

- **Customer Style**: Think about whether the customer pauses mid-thought or provides continuous speech. Adjust wait times and enable smart endpointing as needed.

- **Background Noise**: If there's a lot of background noise, you may need to tweak the settings to avoid false triggers. Default for phone calls is 'office' and default for web calls is 'off'.


```json
"backgroundSound": "off",
```
152 changes: 143 additions & 9 deletions fern/customization/voice-pipeline-configuration.mdx
```
</Tab>
<Tab title="Providers">
**Text-based providers:**
- **livekit**: Advanced model trained on conversation data (English only)
- **vapi**: VAPI-trained model (non-English conversations or LiveKit alternative)

**Audio-based providers:**
- **krisp**: Audio-based model analyzing prosodic features (intonation, pitch, rhythm)

**Audio-text based providers:**
- **assembly**: Transcriber with built-in end-of-turn detection (English only)

</Tab>
</Tabs>

**When to use:**

- **LiveKit**: English conversations requiring sophisticated speech pattern analysis
- **Vapi**: Non-English conversations with default stop speaking plan settings
- **Krisp**: Non-English conversations with a robustly configured stop speaking plan
- **Assembly**: Best used when Assembly is already your transcriber provider for English conversations with integrated end-of-turn detection

### LiveKit's Wait function

Mathematical expression that determines wait time based on speech completion probability. The function takes a confidence value (0-1) and returns a wait time in milliseconds.

- **Use case:** Healthcare, formal settings, sensitive conversations
- **Timing:** ~2700ms wait at 50% confidence, ~700ms at 90% confidence

### Krisp threshold configuration

Krisp's audio-based model returns a probability between 0 and 1, where 1 means the user definitely stopped speaking and 0 means they're still speaking.

**Threshold settings:**

- **0.0-0.3:** Very aggressive detection - responds quickly but may interrupt users mid-sentence
- **0.4-0.6:** Balanced detection (default: 0.5) - good balance between responsiveness and accuracy
- **0.7-1.0:** Conservative detection - waits longer to ensure users have finished speaking

**Configuration example:**

```json
{
"startSpeakingPlan": {
"smartEndpointingPlan": {
"provider": "krisp",
"threshold": 0.5
}
}
}
```

**Important considerations:**
Since Krisp is audio-based, it always notifies when the user is done speaking, even for brief acknowledgments. Configure the stop speaking plan with appropriate `acknowledgementPhrases` and `numWords` settings to handle backchanneling properly.

### Assembly turn detection

AssemblyAI's turn detection model uses a neural network to detect when someone has finished speaking. The model understands the meaning and flow of speech to make better decisions about when a turn has ended.

When the model detects an end-of-turn, it returns `end_of_turn=True` in the response.

**Quick start configurations:**

To use Assembly's turn detection, set Assembly as your transcriber provider and configure these fields in the assistant's transcriber (**do not set any smartEndpointingPlan**):

**Aggressive (Fast Response):**

```json
{
"endOfTurnConfidenceThreshold": 0.4,
"minEndOfTurnSilenceWhenConfident": 160,
"maxTurnSilence": 400
}
```

- **Use cases:** Agent Assist, IVR replacements, Retail/E-commerce, Telecom
- **Behavior:** Ends turns very quickly, optimized for short responses

**Balanced (Natural Flow):**

```json
{
"endOfTurnConfidenceThreshold": 0.4,
"minEndOfTurnSilenceWhenConfident": 400,
"maxTurnSilence": 1280
}
```

- **Use cases:** Customer Support, Tech Support, Financial Services, Travel & Hospitality
- **Behavior:** Natural middle ground, allowing enough pause for conversational turns

**Conservative (Patient Response):**

```json
{
"endOfTurnConfidenceThreshold": 0.7,
"minEndOfTurnSilenceWhenConfident": 800,
"maxTurnSilence": 3600
}
```

- **Use cases:** Healthcare, Mental Health Support, Sales & Consulting, Legal & Insurance
- **Behavior:** Holds the floor longer, optimized for reflective or complex speech

For detailed information about how Assembly's turn detection works, see the [AssemblyAI Turn Detection documentation](https://www.assemblyai.com/docs/speech-to-text/universal-streaming/turn-detection).

### Wait seconds

Final audio delay applied after all processing completes, before the assistant speaks.
User Interrupts → Assistant Audio Stopped → backoffSeconds Blocks All Output

**Optimized for:** Text-based endpointing with longer timeouts for different speech patterns and international support.

### Audio-based endpointing (Krisp example)

```json
{
"startSpeakingPlan": {
"waitSeconds": 0.4,
"smartEndpointingPlan": {
"provider": "krisp",
"threshold": 0.5
}
},
"stopSpeakingPlan": {
"numWords": 2,
"voiceSeconds": 0.2,
"backoffSeconds": 1.0,
"acknowledgementPhrases": [
"okay",
"right",
"uh-huh",
"yeah",
"mm-hmm",
"got it"
]
}
}
```

**Optimized for:** Non-English conversations with robust backchanneling configuration to handle audio-based detection limitations.

### Audio-text based endpointing (Assembly example)

```json
{
"transcriber": {
"provider": "assembly",
"endOfTurnConfidenceThreshold": 0.4,
"minEndOfTurnSilenceWhenConfident": 400,
"maxTurnSilence": 1280
},
"startSpeakingPlan": {
"waitSeconds": 0.4
},
"stopSpeakingPlan": {
"numWords": 0,
"voiceSeconds": 0.2,
"backoffSeconds": 1.0
}
}
```

**Optimized for:** English conversations with integrated transcriber and sophisticated end-of-turn detection.

### Education and training

```json