Conversation
Force-pushed 1c061bb to 0efcbe5
Force-pushed a281561 to 172a2d3
tinalenguyen left a comment:

thank you for the PR! could you also add the plugin to this pyproject file as well: https://github.com/livekit/agents/blob/main/livekit-agents/pyproject.toml
livekit-plugins/livekit-plugins-keyframe/livekit/plugins/keyframe/types.py (outdated, resolved)
```python
logger.warning("set_emotion() called before start()")
return

await self._room.local_participant.publish_data(
```
just wondering, are there no plans to support setting the emotion via an API call? i think that would be more ideal if possible
Do you mean by way of a REST call to api.keyframelabs.com?
If so, the latency would be pretty high. Additionally, the avatar session itself on the backend isn't listening to any central server calls once connected to the room, it's just listening to data channels (which is what we use here).
Are you imagining that the user of the avatar plugin wouldn't want to access the underlying avatar object and call this function themselves? Or that accessing the underlying avatar isn't ergonomic?
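To make the data-channel path concrete, here is a minimal sketch of what publishing an emotion update might look like. The message shape (`"type"`/`"emotion"` keys) and the `build_emotion_payload` helper are illustrative assumptions, not the plugin's actual wire format:

```python
import json

# Illustrative sketch only: the payload keys here are assumptions,
# not the keyframe plugin's actual wire format.
def build_emotion_payload(emotion: str) -> bytes:
    """Encode a set_emotion message for publishing over a LiveKit data channel."""
    return json.dumps({"type": "set_emotion", "emotion": emotion}).encode("utf-8")

# The plugin would then publish this on the room's data channel, e.g.:
#   await self._room.local_participant.publish_data(build_emotion_payload("happy"))
# The avatar backend is already subscribed to the data channel, so the update
# applies without an extra round-trip through a central API server.
payload = build_emotion_payload("happy")
print(payload.decode("utf-8"))
```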
ah yes, i was just curious how users would update settings mid-session in other scenarios/use cases. the latency using the data channels is much better, as you said. iirc most of our other providers don't allow dynamic updates like this, so this would be a new feature (very cool to see the changes in real time!)
livekit-plugins/livekit-plugins-keyframe/livekit/plugins/keyframe/avatar.py (outdated, resolved)
livekit-plugins/livekit-plugins-keyframe/livekit/plugins/keyframe/version.py (outdated, resolved)
Force-pushed 3fa87a0 to ca5a831
Force-pushed bec11d0 to b7a84e7
tinalenguyen left a comment:
looks good to me! small ask: could you update the examples to use this setup instead for the AgentSession:

```python
session = AgentSession(
    stt=inference.STT("deepgram/nova-3"),
    llm=inference.LLM("google/gemini-2.5-flash"),
    tts=inference.TTS("cartesia/sonic-3"),
    resume_false_interruption=False,
)
```
i noticed that this setup called `set_emotion` more often as well, so it really seems like the avatar is reacting throughout the conversation. i didn't notice much of a difference in latency either.
Keyframe Labs API Docs