I'm building 3D models driven by text-to-voice AI conversations.

My use case for natural is as follows:

I have a few parameters I can move the 3D mouth with:
```ts
export declare const VRMExpressionPresetName: {
  readonly Aa: "aa";
  readonly Ih: "ih";
  readonly Ou: "ou";
  readonly Ee: "ee";
  readonly Oh: "oh";
};
```
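As a sketch of how those presets could be driven at runtime, here is a pure function that samples a timed viseme track into per-preset weights at an elapsed time `t` in milliseconds. The `{ vowel, duration }` frame format and the `sampleTrack` name are my own assumptions for illustration, not part of natural or three-vrm:

```javascript
// Sketch: given a timed viseme track, compute an expression weight for each
// preset at elapsed time t (ms). Keys mirror the VRMExpressionPresetName
// values above; the track format is hypothetical.
function sampleTrack(frames, t) {
  const weights = { aa: 0, ih: 0, ou: 0, ee: 0, oh: 0 };
  let start = 0;
  for (const f of frames) {
    if (t >= start && t < start + f.duration) {
      weights[f.vowel.toLowerCase()] = 1; // hold this mouth shape fully open
      break;
    }
    start += f.duration; // frames play back-to-back
  }
  return weights; // all zeros once the track has finished
}
```

In a render loop you would call this once per tick and feed each weight to the corresponding expression; cross-fading between frames instead of the hard 0/1 switch here would look smoother.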
You get a sentence like:

"Hey there NaturalNode, love the project"
Passing this to OpenAI, we can generate output like this:

Is this good enough? Almost. But I'd rather not rely on AI and instead do it algorithmically in Node.js.
Another key piece missing is that it can't infer the duration of each spoken vowel, so the voice and mouth movements will drift out of sync.

The dream output format would look like this:
```json
[
  { "vowel": "Aa", "duration": 500 },
  { "vowel": "Ih", "duration": 600 },
  { "vowel": "Ou", "duration": 400 },
  { "vowel": "Oh", "duration": 300 }
]
```
If we can achieve this, major things in 3D lip generation + AI voice can happen.

I'm hacking on it now. If you have any thoughts or advice, I'd appreciate the support.
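Here's a rough stab at the algorithmic (no-AI) version. It is grapheme-based rather than phonetic, and the duration model, a flat milliseconds-per-word budget split evenly across each word's vowel runs, is a placeholder assumption of mine, but it emits the dream format above:

```javascript
// Naive letter-to-viseme mapping -- an assumption, not a real phonetic
// transcription. Runs like "ee" or "ou" are keyed on their first letter.
const VOWEL_TO_PRESET = {
  a: "Aa",
  i: "Ih", y: "Ih",
  u: "Ou",
  e: "Ee",
  o: "Oh",
};

// Convert text into [{ vowel, duration }] frames. msPerWord is a crude
// stand-in for real speech timing; wiring in actual TTS timestamps would
// fix the sync problem properly.
function textToVisemes(text, msPerWord = 400) {
  const words = text.toLowerCase().match(/[a-z]+/g) || [];
  const frames = [];
  for (const word of words) {
    // Collapse consecutive vowels into one viseme per run.
    const runs = word.match(/[aeiouy]+/g) || [];
    if (runs.length === 0) continue;
    const duration = Math.round(msPerWord / runs.length);
    for (const run of runs) {
      const preset = VOWEL_TO_PRESET[run[0]];
      if (preset) frames.push({ vowel: preset, duration });
    }
  }
  return frames;
}
```

For better timing, natural's tokenizers could feed this, and a syllable or phoneme layer (e.g. a CMU-dict style lookup) could replace the letter mapping; the frame format stays the same either way.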