🔑 This is a muli component. You can find the main repository here.
A Live2D View.
Go to the main repository for more information.
Open in a browser:
http://localhost:51070/#/?driver=ws://localhost:51071/live2d
There is also a debug interface:
http://localhost:51070/#/debug/
Once the view is connected to the driver, the driver controls the Live2D model by sending WebSocket messages:
`{ model: "http://url.to.live2d/xxx.model.json" }`
- Load a Live2D model from the URL.
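For illustration, here is a minimal driver sketch: a WebSocket server that the view connects to, which sends the model message on connection. It assumes Node.js with the `ws` package; the port and path match the driver URL above, and the model URL is a placeholder.

```ts
// Minimal driver sketch (assumes Node.js + the `ws` package: npm install ws).
import { WebSocketServer } from "ws";

// The view connects to ws://localhost:51071/live2d, so serve on that port/path.
const wss = new WebSocketServer({ port: 51071, path: "/live2d" });

wss.on("connection", (socket) => {
  // Tell the freshly connected view which model to load.
  // Placeholder URL: point it at a real .model.json served over HTTP.
  socket.send(JSON.stringify({ model: "http://localhost:8000/shizuku.model.json" }));
});
```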
`{ motion: { group?: string, index?: number, priority?: number } }`
- Start and keep playing (looping) the given motion until a new motion, or undefined (i.e. idle), is passed in.
- For the parameters, see the pixi-live2d-display docs: motions.
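Inside the connection handler of the driver sketch above, a motion message could look like this. The group name is a placeholder (real group names come from the model's own motion definitions), and the priority values follow pixi-live2d-display's MotionPriority (e.g. 3 = FORCE).

```ts
// Loop the first motion of a (placeholder) "tap_body" group at FORCE priority.
socket.send(JSON.stringify({ motion: { group: "tap_body", index: 0, priority: 3 } }));
```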
`{ expression: { id: string | number | undefined } }`
- id: expression index (number) or name (string), or undefined for a random one.
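Likewise for expressions. Since JSON has no undefined, sending an empty object is assumed here to be the wire encoding for "pick a random one"; the expression name is a placeholder.

```ts
socket.send(JSON.stringify({ expression: { id: 2 } }));       // by index
socket.send(JSON.stringify({ expression: { id: "smile" } })); // by name (placeholder)
socket.send(JSON.stringify({ expression: {} }));              // no id: random (assumed encoding of undefined)
```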
`{ speak: { audio?: string, volume?: number, motion?: Motion, expression?: Expression } }`
- Speak an audio clip with lip sync.
- audio: URL to an audio file (mp3 or wav) or base64-encoded audio data (wav).
- volume: audio volume, 0 ~ 1.
- motion: motion to play while speaking. The value is the same as the `motion` field in `{ motion: { ... } }`.
- expression: expression to play while speaking. The value is the same as the `expression` field in `{ expression: { ... } }`.
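Putting the pieces together, a speak message from the same driver sketch might look like this (the URL and names are placeholders):

```ts
socket.send(JSON.stringify({
  speak: {
    audio: "http://localhost:8000/hello.wav", // placeholder; base64-encoded wav data also works
    volume: 0.8,
    motion: { group: "talk", index: 0 },      // same shape as the motion message
    expression: { id: "smile" },              // same shape as the expression message
  },
}));
```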
A current shortcoming is that there is no feedback for when a speak starts or ends, so the driver cannot precisely control when it can "say the next line". Possible solutions:
- Have RaSan147/pixi-live2d-display provide a callback notifying speak start/end; I would then relay that signal back to the driver over WebSocket.
- Have the driver estimate it on its own from the audio duration. Best-effort playback. (See the sketch after this list.)
- Use audioview for playback, with live2dview only doing the visual lip sync. That is, relax the synchronization between audio and motion. Best-effort lip sync. (I prefer this one. More progressive. Will work on it.)
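As a sketch of the second option above, the driver can estimate the duration itself and wait it out before sending the next line. The parsing below assumes a canonical 44-byte PCM WAV header; a real driver should use a proper parser or library.

```ts
import { readFileSync } from "fs";
import type { WebSocket } from "ws";

// Estimate a WAV file's duration from its header (canonical 44-byte PCM layout assumed).
function wavDurationMs(path: string): number {
  const buf = readFileSync(path);
  const byteRate = buf.readUInt32LE(28); // bytes per second, from the fmt chunk
  const dataSize = buf.readUInt32LE(40); // size of the data chunk in bytes
  return (dataSize / byteRate) * 1000;
}

// Best-effort playback: send the speak message, then wait out the estimated duration.
async function speakAndWait(socket: WebSocket, localPath: string, url: string) {
  socket.send(JSON.stringify({ speak: { audio: url } }));
  await new Promise((resolve) => setTimeout(resolve, wavDurationMs(localPath)));
}
```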
Install the dependencies:

```bash
yarn
# or
npm install
```

Start the app in development mode (hot-code reloading, error reporting, etc.):

```bash
quasar dev
```

Lint the files:

```bash
yarn lint
# or
npm run lint
```

Format the files:

```bash
yarn format
# or
npm run format
```

Build the app for production:

```bash
quasar build
```