
Inference from Raw Input #165

Closed
Tox1cPhantom opened this issue Feb 3, 2024 · 5 comments

Comments

@Tox1cPhantom

This might not be related to this repo: I was using the original DiffSinger, and since you are maintaining it, I thought you might be able to help with inference from raw input on English words.

When I try to run inference with English text, it keeps saying either that notes need to be separated with |, or that the notes don't align with the number of words. Is there any way to fix that?

This is according to the README.md from the original repo:

```python
inp = {
    'text': '小酒窝长睫毛AP是你最美的记号',
    'notes': 'C#4/Db4 | F#4/Gb4 | G#4/Ab4 | A#4/Bb4 F#4/Gb4 | F#4/Gb4 C#4/Db4 | C#4/Db4 | rest | C#4/Db4 | A#4/Bb4 | G#4/Ab4 | A#4/Bb4 | G#4/Ab4 | F4 | C#4/Db4',
    'notes_duration': '0.407140 | 0.376190 | 0.242180 | 0.509550 0.183420 | 0.315400 0.235020 | 0.361660 | 0.223070 | 0.377270 | 0.340550 | 0.299620 | 0.344510 | 0.283770 | 0.323390 | 0.360340',
    'input_type': 'word'
}
```

And this is the input I'm trying to run inference on:

```python
inp = {
    'text': 'I paid my dues Time after times I done my sentences but committed no crime',
    'notes': 'C4 | A3 | C4 | E4 | C4 | B3 | A3 | E4 | D4 | C4 | G4 | B4 | C5 | D5 | E5',
    'notes_duration': '0.25 | 0.25 | 1.5 | 2.0 | 0.25 | 1.75 | 2.0 | 0.25 | 0.25 | 1.5 | 2.0 | 0.375 | 0.25 | 1.375 | 0.875',
    'input_type': 'word'
}
```
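For what it's worth, here is a quick sanity check in plain Python (independent of DiffSinger) that confirms whether the word, note, and duration counts in this input actually line up the way the word-level format from the README implies: one note group (split on |) per word, and matching note/duration counts within each group.

```python
# Sanity check: one '|'-separated note group and duration group per word,
# and equal note/duration counts inside each group.
text = 'I paid my dues Time after times I done my sentences but committed no crime'
notes = 'C4 | A3 | C4 | E4 | C4 | B3 | A3 | E4 | D4 | C4 | G4 | B4 | C5 | D5 | E5'
notes_duration = ('0.25 | 0.25 | 1.5 | 2.0 | 0.25 | 1.75 | 2.0 | 0.25 | 0.25 | '
                  '1.5 | 2.0 | 0.375 | 0.25 | 1.375 | 0.875')

words = text.split()
note_groups = [g.strip() for g in notes.split('|')]
dur_groups = [g.strip() for g in notes_duration.split('|')]

print(len(words), len(note_groups), len(dur_groups))  # → 15 15 15

for ng, dg in zip(note_groups, dur_groups):
    assert len(ng.split()) == len(dg.split()), (ng, dg)
```

Note that the counts here do match, so even a well-aligned English input can be rejected: the original repo's word splitting is written for Chinese (character-level) text, which is likely where the error actually comes from.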

@yqzhishen
Member

The pretrained model provided by the original DiffSinger repo is Chinese-only and cannot sing English. Also, as far as I know, the code of the original DiffSinger is not compatible with languages like English; it is only suitable for two-phase phoneme systems like Chinese. Many things are hard-coded and cannot be changed easily :(.

@Tox1cPhantom
Author

Thanks for the reply. I assume English will not be compatible with the fork you are maintaining either, right?
Also, do you know of any other tool that is compatible with English and can generate singing given notes and note durations aligned with lyrics? So far, everything I've tested seems heavily focused on Chinese, Japanese, or Korean.

@yqzhishen
Member

This repo supports any language. You can find documentation for the making process.

@Tox1cPhantom
Author

Oh I see. A few things I would like some clarity on:

  • Are there any test models available somewhere that I can use to try out inference?
  • I see from the sample .ds files that there are quite a few values in there (offset, text, ph_seq, ph_dur, ph_num, note_seq, note_dur, note_slur, f0_seq, f0_timestep). If I want to use this with English text, will I need to provide all of these values? When I aligned lyrics for the original DiffSinger, I only had text, notes, and notes_duration. I also assume all the values within {} in a .ds file describe the tone and style of each lyric line or chorus part.
  • Can you shed some light on these variables, and on how I would go about extracting f0, for example?
  • What's the main difference between the acoustic and variance models?

I am planning to use all of this via the command line, which is why I'm asking. Thanks in advance for the help!

@yqzhishen
Member

  1. There are no test models; you need to train one yourself, or you can ask people from the English voicebank developing community.
  2. The fields mean the following:
     • offset: the start position of each segment, in seconds
     • text: not in use now
     • ph_seq: phone sequence
     • ph_dur: phone duration sequence, in seconds
     • ph_num: the number of phones in each word/syllable (each word/syllable usually starts with an onset vowel)
     • note_seq: note name sequence
     • note_dur: note duration sequence, in seconds
     • note_slur: whether each note is a slur (1) or not (0); notes that share the same word/syllable with other notes are slurs
     • f0_seq: F0 sequence, in Hz
     • f0_timestep: the interval between two neighboring F0 curve points
     For the alignment method, see https://github.com/openvpi/MakeDiffSinger/tree/main/variance-temp-solution#4-estimate-note-values.
  3. There is a method called get_pitch_parselmouth in utils/binarizer_utils.py that you can use to extract F0.
  4. The difference between the two models is shown in the image in the README.
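To make the field list above concrete, here is a sketch of a minimal single-segment .ds entry built and validated in Python. All phone names, durations, and notes are made up for illustration (the f0_seq/f0_timestep fields are omitted here), but the consistency rules between the sequences follow the definitions above:

```python
import json

# Illustrative single-segment .ds entry; field names follow the list above,
# all concrete values are invented for demonstration purposes.
segment = {
    'offset': 0.0,                         # segment start position, seconds
    'text': 'my dues',                     # not used by the model
    'ph_seq': 'm ay d uw z',               # phone sequence
    'ph_dur': '0.08 0.32 0.10 0.45 0.15',  # one duration (seconds) per phone
    'ph_num': '2 3',                       # phones per word: 'my' = 2, 'dues' = 3
    'note_seq': 'C4 E4 E4',                # note name sequence
    'note_dur': '0.4 0.35 0.35',           # note durations, seconds
    'note_slur': '0 0 1',                  # 3rd note shares the word 'dues' -> slur
}

# Basic consistency checks implied by the field definitions:
assert len(segment['ph_seq'].split()) == len(segment['ph_dur'].split())
assert sum(int(n) for n in segment['ph_num'].split()) == len(segment['ph_seq'].split())
assert len(segment['note_seq'].split()) \
       == len(segment['note_dur'].split()) \
       == len(segment['note_slur'].split())

# A .ds file is a JSON list of such segment objects.
print(json.dumps([segment], indent=2))
```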
