
Using synthesized audio stream #24

Closed
DC2009 opened this issue Sep 23, 2020 · 3 comments

@DC2009

DC2009 commented Sep 23, 2020

Hi, is it possible to use already synthesized audio streams instead of text in order to animate visemes/phonemes? Or can we control viseme/phoneme animations?
We connect to Polly from our server to synthesize speech and provide the frontend with ready audio streams.

@c-morten

Hi @DC2009. Unfortunately we don't currently offer this as an option for hosts through the exposed feature API. Part of the reason is that it opens up room for error and confusion: lipsync requires both speech audio and matching speechmarks, and we didn't want people to assume they could pass in any audio file and get working lipsync. We also keep track of Polly usage so we can determine how much use people are getting out of our open-source hosts solution. But you could certainly fork the repository and create a custom build that allows for this. There is only a single method you would need to override: AbstractTextToSpeechFeature._updateSpeech. At the bottom, where we wait for the speechmarks and speech audio to be synthesized, you could skip that step and use pre-existing objects instead.
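A rough sketch of what that override could look like. Only the method name `_updateSpeech` comes from the library; the base class here is a minimal stand-in so the snippet runs on its own, and the subclass, constructor parameters, and return shape are all illustrative, not the library's actual API:

```javascript
// Minimal stand-in for the library's AbstractTextToSpeechFeature so this
// sketch is self-contained; in a real custom build you would extend the
// actual class from the hosts package.
class AbstractTextToSpeechFeature {
  async _updateSpeech(text, config) {
    // The real implementation synthesizes audio and speechmarks via Polly.
    throw new Error('stub: the real class calls Polly here');
  }
}

// Hypothetical subclass that skips synthesis entirely and hands back
// objects your server already generated.
class PreSynthesizedSpeechFeature extends AbstractTextToSpeechFeature {
  constructor(audio, speechmarks) {
    super();
    this._audio = audio;             // e.g. a base64 data URI from your backend
    this._speechmarks = speechmarks; // Polly speechmark JSON objects
  }

  // Override: instead of waiting for Polly, resolve with pre-existing data.
  async _updateSpeech() {
    return { audio: this._audio, speechmarks: this._speechmarks };
  }
}
```

The key point is only that the synthesis step is replaced by a resolved promise; everything downstream that consumes the audio and speechmarks stays untouched.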

You also have full control over the visemes: you can manually blend them on and off using the AnimationFeature.setAnimationBlendWeight method.
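The on/off blending pattern could look roughly like this. The class below is a toy stand-in, not the real AnimationFeature, and the layer and viseme names are invented; check the hosts API docs for the real setAnimationBlendWeight signature, which takes additional parameters:

```javascript
// Toy stand-in for AnimationFeature, just to illustrate manually driving
// viseme blend weights from your own speechmark timing.
class ToyAnimationFeature {
  constructor() {
    this._weights = new Map();
  }

  // Record a blend weight for an animation on a named layer.
  setAnimationBlendWeight(layerName, animName, weight) {
    this._weights.set(`${layerName}/${animName}`, weight);
  }

  getWeight(layerName, animName) {
    return this._weights.get(`${layerName}/${animName}`) ?? 0;
  }
}

// As each viseme speechmark fires, blend the previous viseme off and the
// current one on. 'Viseme', 'sil', and 'p' are placeholder names.
const anim = new ToyAnimationFeature();
anim.setAnimationBlendWeight('Viseme', 'sil', 0); // previous viseme off
anim.setAnimationBlendWeight('Viseme', 'p', 1);   // current viseme on
```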


DC2009 commented Oct 1, 2020

Instead of forking the repository, we decided to extend the HOST.aws.TextToSpeechFeature class and override a few methods so it works without a direct connection to Polly. I called this feature VcaFeature (after the name of our service).

I receive a base64-encoded audio stream and a string of speechmarks from our server, both generated by Polly. The audio plays correctly using Babylon.

However, there is no lipsync. From what I found, LipsyncFeature listens for TextToSpeechFeature events, so I renamed our feature to TextToSpeechFeature. LipsyncFeature now seems to receive EVENT.play properly, but I get the following error:

Uncaught (in promise) Error: Cannot interpolate property blendValueX to value NaN. Target value must be numeric.
    at Function.interpolateProperty (AnimationUtils.js?1148:96)
    at Blend2dState.setBlendWeight (Blend2dState.js?8163:105)
    at AnimationLayer.setAnimationBlendWeight (AnimationLayer.js?beec:272)
    at AnimationFeature.setAnimationBlendWeight (AnimationFeature.js?ebde:706)
    at eval (PointOfInterestFeature.js?dee5:869)
    at Array.forEach (<anonymous>)
    at PointOfInterestFeature.update (PointOfInterestFeature.js?dee5:803)
    at eval (HostObject.js?75e7:84)
    at Array.forEach (<anonymous>)
    at HostObject.update (HostObject.js?75e7:83)
    at r.callback (host.js:26)
    at e.notifyObservers (babylon.js:16)
    at t.render (babylon.js:16)
    at t._renderFrame (babylon.js:16)
    at t._renderLoop (babylon.js:16)

The error is thrown during the host update:

  23 |  // Add the host to the render loop
  24 |  const host = new HOST.HostObject({ owner: character });
  25 |  scene.onBeforeAnimationsObservable.add(() => {
> 26 |    host.update();
  27 |  });
  28 |

Any idea what is going on? Is it possible to extend the TextToSpeechFeature class like this, or is it better to fork the project? We didn't want to fork the project since it's quite new and likely to change.


c-morten commented Oct 1, 2020

Hi @DC2009. I think you're on the right track extending the TextToSpeechFeature class rather than forking; that's definitely a valid alternative. The error you pasted above is actually related to the PointOfInterestFeature, not LipsyncFeature or TextToSpeechFeature. Have you made any other changes along with speech and lipsync? I have seen this error occur when the object the PointOfInterestFeature is targeting has an invalid transformation matrix. I would try removing the PointOfInterestFeature temporarily to see whether your lipsync works without it. If that is the case, we can close this out and open a separate issue specific to point of interest for further investigation.
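One hypothetical way to test the invalid-matrix theory before removing the feature is to check the target's world matrix for non-finite values. The helper below is purely illustrative debugging code (not part of the hosts API); it assumes a flat 16-element matrix such as the array Babylon exposes on its Matrix objects:

```javascript
// Hypothetical debugging helper: returns true only when every entry of a
// flat 16-element transformation matrix is a finite number. A NaN here
// would explain the "Cannot interpolate ... NaN" error, since the
// PointOfInterestFeature derives blend weights from the target transform.
function isMatrixValid(m) {
  return m.length === 16 && Array.from(m).every(Number.isFinite);
}

// Example: an identity matrix is valid; a matrix with a NaN entry is not.
const identity = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];
const broken = identity.slice();
broken[12] = NaN; // corrupt one translation component
```

Logging this each frame for the point-of-interest target would show whether the bad matrix appears before the interpolation error fires.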

DC2009 closed this as completed Nov 16, 2020