
Effects on WaveNet predicted wavs #57

Closed
begeekmyfriend opened this issue May 29, 2018 · 59 comments

@begeekmyfriend
Contributor

It seems the decrease of the loss during WaveNet training is unsteady. Are the results all right, or should I wait for more steps? The predicted wavs under logs-WaveNet/wavs sound OK, but the ones under logs-WaveNet/eval-dir/wavs sound like a mess...
[screenshot: WaveNet training loss curve]

@Rayhane-mamah
Owner

I think it's normal that the WaveNet loss is shaky. The log wavs sound better than the eval wavs simply because the model makes predictions conditioned on ground truth during training, while the eval wavs are synthesized sequentially, which means the model is conditioned on its own previous outputs. Since WaveNet is still at an early stage of training, conditioning on previously sampled outputs causes errors to accumulate, and thus trash wavs.
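To illustrate the difference, here is a minimal, repo-agnostic sketch (model.predict_next is a hypothetical helper, not part of this repository) contrasting teacher-forced training with autoregressive evaluation:

def teacher_forced_predictions(model, ground_truth):
    # Every step is conditioned on the *true* previous samples (training / log wavs).
    return [model.predict_next(ground_truth[:t]) for t in range(1, len(ground_truth))]

def autoregressive_synthesis(model, length):
    # Every step is conditioned on the model's *own* previous outputs (eval wavs),
    # so an under-trained model accumulates errors over time.
    samples = [0.0]
    for _ in range(length - 1):
        samples.append(model.predict_next(samples))
    return samples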

@begeekmyfriend
Contributor Author

Here are the log wav plot and the eval one. I will report again when it reaches 200K steps.
[waveplots at step 95000: training (log) and eval]

@begeekmyfriend
Contributor Author

Should the hyperparameter train_with_GTA be set to True for WaveNet training? The wave plot still does not look good at 190K steps, while in issue r9y9/wavenet_vocoder#79 the plot at 250K steps looks better than this one.
[waveplot at step 190000]

@butterl

butterl commented May 31, 2018

@begeekmyfriend that plot in r9y9/wavenet_vocoder#79 was trained on real wavs, not GTA, and for me the GTA-training waveplot is still a mess at 50K steps.

@begeekmyfriend
Contributor Author

begeekmyfriend commented Jun 4, 2018

It seems the WaveNet model fails to converge within 360K steps on a 10h dataset. @Rayhane-mamah you have found ways to reduce the quantity of training data needed for Tacotron. I still need to find similar ways to reach convergence on a small dataset with WaveNet...
[waveplot at step 360000]

@begeekmyfriend
Contributor Author

I have used the 34h THCHS-30 dataset, which should be long enough for training, but it still failed to converge within 110K steps, while in r9y9/wavenet_vocoder#79 r9y9's version seems to converge. I suspect there is something wrong with the porting...
[waveplot at step 110000]

@begeekmyfriend
Contributor Author

This is from r9y9's original repo, on the 10h dataset. We can see the waveform beginning to converge.
[waveplot at step 99972, r9y9's repo]

@butterl

butterl commented Jun 7, 2018

@begeekmyfriend what are the step count and loss in your latest post?

@azraelkuan

@begeekmyfriend which mode do you use, raw or mu-law-quantize?
I am now testing the WaveNet code, and I have found that there may be errors in the incremental step.
Here is a training step (I am using mu-law-quantize):
[waveplot at step 26000, training]
But when I use the incremental step:
[waveplot at step 25000, incremental]
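For reference, a minimal NumPy sketch of the standard mu-law companding behind the mu-law-quantize input mode (illustrative only, not this repo's audio code):

import numpy as np

def mulaw_encode(x, mu=255):
    # compand a signal in [-1, 1] into [-1, 1]
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mulaw_quantize(x, mu=255):
    # map the companded signal to integer classes 0..mu (targets for a 256-way softmax)
    return ((mulaw_encode(x, mu) + 1) / 2 * mu + 0.5).astype(np.int64)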

@begeekmyfriend
Contributor Author

[screenshots: step count and loss at the latest checkpoint]

@begeekmyfriend
Contributor Author

I am just using the raw mode and I have not looked into the code closely.

@QLQL

QLQL commented Jun 7, 2018

The same thing happened to my WaveNet training (tested until step 250000). Note that I used the ground-truth mels to train the vocoder instead of the force-aligned mels synthesised via Tacotron.

Under logs-WaveNet/plots, the target waveform is almost identical to the predicted waveform, which is also confirmed by listening to the audio clips under logs-WaveNet/wavs. Yet the results under logs-WaveNet/eval-dir/plots have not converged, even though the envelope of the predicted signal does look similar to that of the target. The predicted audio clips under logs-WaveNet/eval-dir/wavs sound like a total mess: pure noise.

An example under logs-WaveNet/plots:
[waveplot at step 250000, training]

An example under logs-WaveNet/eval-dir/plots:
[waveplot at step 250000, eval]

@azraelkuan

@QLQL Do you use the raw mode?

@begeekmyfriend
Contributor Author

@QLQL Please use r9y9's original repo.

@a3626a

a3626a commented Jun 8, 2018

In my case, I am training WaveNet on mel spectrograms from SpectrogramNet, always with GTA on.

I found that the mel spectrograms from SpectrogramNet are really bad when GTA is turned off for synthesis. This affects WaveNet's quality, too.
Which mel spectrograms are you using during synthesis? STFT-generated? GTA on? GTA off?

@begeekmyfriend
Contributor Author

begeekmyfriend commented Jun 8, 2018

@a3626a STFT-generated ground truth, for both this repo and r9y9's, on a 10h dataset, and the evaluation results are quite different, as you can see in #57 (comment).

@QLQL

QLQL commented Jun 8, 2018

@azraelkuan Yes, I was using raw mode. But I will also test with r9y9's repo as suggested by @begeekmyfriend. BTW, @begeekmyfriend did you manage to train the WaveNet vocoder part with Rayhane-mamah's repo? If so, how many steps did you use, and what is your batch size? I can only manage a batch size of 2 instead of the default 4 due to an OOM problem.

@begeekmyfriend
Contributor Author

@QLQL It failed to converge with Rayhane's WaveNet vocoder even at 360K steps. I trained it with a batch size of 3 on an 11GB GTX 1080 Ti.

@azraelkuan

Indeed, there are serious problems in the incremental step:

if self.convolution_queue is None:
    self.convolution_queue = tf.zeros((batch_size, (kw - 1) + (kw - 1) * (dilation - 1), tf.shape(inputs)[2]))
else:
    # shift queue
    self.convolution_queue = self.convolution_queue[:, 1:, :]

In the while_loop, when we call incremental_step, the queue will be defined as None, so we don't get the correct convolution queue. Is there any way to solve the problem?

  1. Like the queues in ibab's implementation, we can create separate queues to save the state. Not good...
  2. We can use TensorFlow eager mode (tfe) to run the WaveNet model; it keeps the attribute values during the running process. I have written a small test and it works well (see the sketch below this comment).
  3. Use tf.get_variable to save the convolution queue.

Hope somebody can propose some better solutions!
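Regarding option 2, here is a minimal sketch (my own illustration, not the repo's code) of why eager execution sidesteps the problem: ops run immediately, so an ordinary Python attribute keeps the queue between incremental calls, whereas in graph mode an attribute assigned inside tf.while_loop's body does not persist across iterations.

import tensorflow as tf
tf.enable_eager_execution()  # TF 1.x eager mode

class CausalQueue:
    # Toy per-layer buffer for fast incremental inference (illustrative only).
    def __init__(self, batch_size, kernel_size, dilation, channels):
        self.size = (kernel_size - 1) * dilation + 1
        self.queue = tf.zeros((batch_size, self.size, channels))

    def push(self, sample):
        # sample: [batch, 1, channels]; shift left and append the newest sample.
        # In eager mode this plain Python attribute persists across calls.
        self.queue = tf.concat([self.queue[:, 1:, :], sample], axis=1)
        return self.queue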

@QLQL

QLQL commented Jun 11, 2018

@begeekmyfriend I waited three days for another ~200K steps, and there is no improvement in the loss, which is still between 6 and 7, as also shown in the following example under eval-dir/plots. I assume this may have something to do with the issue mentioned by @azraelkuan?

[waveplot at step 455000, eval]

@azraelkuan

@QLQL I think you can predict a wav using just the train mode; you will find that the predicted wav is very good.

@QLQL

QLQL commented Jun 11, 2018

@azraelkuan, yes. As quoted from @Rayhane-mamah's reply earlier, the train mode is

conditioned on ground truth during training, while eval wavs are synthesized sequentially, which means the model is conditioned on its own previous outputs.

In real synthesis applications (eval mode), we don't have ground-truth samples; we have to rely on previously predicted samples.

@a3626a

a3626a commented Jun 15, 2018

@azraelkuan

in the while_loop, when we call incremental_step, the queue will be defined as None, so we don't get the correct convolution queue. Is there any way to solve the problem?

I have tested the value of self.convolution_queue during synthesis using tf.Print. It is always 0. You are right.
I think using variables is the better choice. ibab's implementation requires multiple sess.run calls, so the synthesizer would have to be modified a lot, and it is also inefficient (multiple sess.run calls slow things down).

def __init__(self, ...):
    (...)
    if kernel_size > 1:
        # persistent (non-trainable) variable holding the layer's input buffer
        self.convolution_queue = tf.get_variable(
            "conv_queue_{}".format(name),
            (1, kernel_size + (kernel_size - 1) * (dilation - 1), in_channels),
            tf.float32,
            initializer=tf.zeros_initializer(),
            trainable=False)
    (...)

def incremental_step(self, inputs):
    (...)
    # shift the queue and append the next input
    op_assign = tf.assign(
        self.convolution_queue,
        tf.concat([self.convolution_queue[:, 1:, :],
                   tf.expand_dims(inputs[:, -1, :], axis=1)], axis=1))

    with tf.control_dependencies([op_assign]):
        inputs = self.convolution_queue
        if dilation > 1:
            inputs = inputs[:, 0::dilation, :]
    (...)

def clear_queue(self):
    pass

@a3626a

a3626a commented Jun 15, 2018

@QLQL
Why don't you lower your learning rate? I am using 1e-4 instead of the default value of 1e-3. The loss is around 0.2 now (it was 3~4 with 1e-3).

@QLQL

QLQL commented Jun 15, 2018

@a3626a, thank you very much for the nice suggestion! I didn't think of that earlier!

@azraelkuan

@a3626a I found a much better way to handle this problem: we can pass an input_buffer list to the while_loop and return the input_buffer from the incremental step. I have tested this and it works well (a minimal sketch follows below).
Creating a variable means we need to assign it back to zero, and the assign function returns an op, so I think it is not good for the code structure.
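A minimal sketch of that idea (illustrative names and shapes, not the repo's actual code): the buffer is threaded through tf.while_loop as a loop variable and returned from the body, so its updated value is carried into the next iteration.

import tensorflow as tf

batch_size, channels, receptive_field, time_length = 1, 256, 8, 100

def condition(time, queue, outputs_ta):
    return tf.less(time, time_length)

def body(time, queue, outputs_ta):
    # In the real model this would be the sample produced by the incremental step.
    new_sample = tf.zeros((batch_size, 1, channels))
    # Shift the buffer, append the new sample, and hand it back as a loop variable.
    queue = tf.concat([queue[:, 1:, :], new_sample], axis=1)
    outputs_ta = outputs_ta.write(time, new_sample[:, 0, :])
    return time + 1, queue, outputs_ta

initial_queue = tf.zeros((batch_size, receptive_field, channels))
initial_outputs_ta = tf.TensorArray(dtype=tf.float32, size=time_length)
_, final_queue, outputs_ta = tf.while_loop(
    condition, body,
    loop_vars=[tf.constant(0), initial_queue, initial_outputs_ta])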

@JK1532

JK1532 commented Jun 24, 2018

@azraelkuan Have you gotten acceptable results after using the input_buffer? And does the input_buffer list contain the convolution_queue for every dilation layer? Thanks.

@a3626a

a3626a commented Jun 28, 2018

@JK1532
I also fixed incremental() by providing more loop variables, like input_buffer, to tf.while_loop. However, the synthesized audio is not very good yet.

    def incremental(self, initial_input, c=None, g=None,
        time_length=100, test_inputs=None,
        softmax=True, quantize=True, log_scale_min=-7.0):

        (...)

        initial_queue = self.clear_queue()
        # this returns a python dictionary of empty queues, one per layer,
        # e.g. {"conv0": tf.zeros(...), "conv1": tf.zeros(...)}

        (...)

        def condition(time, unused_outputs_ta, unused_current_input, unused_loss_outputs_ta, queue):
            return tf.less(time, time_length)

        def body(time, outputs_ta, current_input, loss_outputs_ta, queue):
            #conditioning features for single time step
            ct = None if self.c is None else tf.expand_dims(self.c[:, time, :], axis=1)
            gt = None if self.g_btc is None else tf.expand_dims(self.g_btc[:, time, :], axis=1)

            x = self.first_conv.incremental_step(current_input, queue)
            skips = None
            for conv in self.conv_layers:
                x, h = conv.incremental_step(x, ct, gt, queue)
                skips = h if skips is None else (skips + h)
            x = skips
            for conv in self.last_conv_layers:
                try:
                    x = conv.incremental_step(x, queue)
                except AttributeError: #When calling Relu activation
                    x = conv(x)

            #Save x for eval loss computation
            loss_outputs_ta = loss_outputs_ta.write(time, tf.squeeze(x, [1])) #squeeze time_length dimension (=1)

            #Generate next input by sampling
            if self.scalar_input:
                x = sample_from_discretized_mix_logistic(
                    tf.reshape(x, [batch_size, -1, 1]), log_scale_min=log_scale_min)
            else:
                x = tf.nn.softmax(tf.reshape(x, [batch_size, -1]), axis=1) if softmax \
                    else tf.reshape(x, [batch_size, -1])
                if quantize:
                    x = tf.reshape(x, [batch_size, -1])
                    sample = tf.multinomial(tf.reshape(x, [batch_size, -1]), 1)[0] #Pick a sample using x as probability
                    x = tf.one_hot(sample, depth=self.quantize_channels)

            outputs_ta = outputs_ta.write(time, x)
            time = time + 1
            #output = x (maybe next input)
            if test_inputs is not None:
                next_input = tf.expand_dims(test_inputs[:, time, :], axis=1)
            else:
                if is_mulaw_quantize(self.input_type):
                    next_input = tf.expand_dims(x, axis=1) #Expand on the time dimension
                else:
                    next_input = tf.expand_dims(x, axis=-1) #Expand on the channels dimension

            return (time, outputs_ta, next_input, loss_outputs_ta, queue)

        res = tf.while_loop(
            condition,
            body,
            loop_vars=[
                initial_time,
                initial_outputs_ta,
                initial_input,
                initial_loss_outputs_ta,
                initial_queue
            ],
            parallel_iterations=32,
            swap_memory=self.wavenet_swap_with_cpu)

        outputs_ta = res[1]
        #[time_length, batch_size, channels]
        outputs = outputs_ta.stack()

        #Save eval prediction for eval loss computation
        eval_outputs = res[3].stack()

        if is_mulaw_quantize(self.input_type):
            self.y_hat_eval = tf.transpose(eval_outputs, [1, 0, 2])
        else:
            self.y_hat_eval = tf.transpose(eval_outputs, [1, 2, 0])

        return tf.transpose(outputs, [1, 2, 0])

@azraelkuan

@JK1532 sorry, I have been busy with my exams recently.
@a3626a I think there are some problems with your code. Did you check input_queue with tf.Print() while it runs? I think you should return the queue of each conv layer.
[screenshots: suggested changes to the incremental step]

@azraelkuan

This is my code:
[screenshots: azraelkuan's modified incremental-step code]

@Rayhane-mamah
Owner

@azraelkuan,
It should give exactly the same results during training, actually, and I would have preferred to use that. Unfortunately, I think I found some issues with retrieving the kernels and biases of such a layer at inference time (to use the fast WaveNet synthesis approach). If I remember well, I was facing an issue where, at inference time, the kernel and bias variables were only initialized at the build call of the convolution layer, and that didn't work with how my WaveNet code is structured, or something like that. In other words, I just found it easier to create and use my own kernel and bias variables.
Plus, I was somewhat worried that tf.layers.Conv1D would, one way or another, not behave as I expect, so going to a lower level was also a safety measure to make sure the network is exactly as I picture it: I used tf.nn.conv1d with our own kernel and bias variables to "rewrite" tf.layers.Conv1D, checking that both give the same results.

When using tf.layers.Conv1D, are you able to get its kernel and bias without problems during synthesis mode?

@azraelkuan

Yes, I also have the same issue, but I can get the kernel and bias from the collections: because TensorFlow saves all variables in tf.GraphKeys.GLOBAL_VARIABLES, we can get them by scope.
I think this is a better way to implement it.
[screenshot: fetching the layer's kernel and bias by scope]
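For reference, a minimal sketch of that approach (my own illustration with assumed scope names, not the repo's code): build the tf.layers.Conv1D layer once, then fetch its kernel and bias from the global variables collection by scope.

import tensorflow as tf

with tf.variable_scope('residual_block_0'):
    conv = tf.layers.Conv1D(filters=64, kernel_size=2, dilation_rate=2,
                            padding='valid', name='dilated_conv')
    # Calling the layer once builds its kernel/bias variables under this scope.
    _ = conv(tf.zeros((1, 100, 64)))

# Later (e.g. for fast incremental synthesis), retrieve the variables by scope.
conv_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
                              scope='residual_block_0/dilated_conv')
kernel = [v for v in conv_vars if 'kernel' in v.name][0]
bias = [v for v in conv_vars if 'bias' in v.name][0]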

@Rayhane-mamah
Owner

Ah yes, this is very clever @azraelkuan! Really well thought out! I will try it out for train+eval and synthesis time; if everything goes well and the results are also correct, I will most probably switch to this approach. If you get samples in the meantime using your code, feel free to share them and suggest a PR ;)

@azraelkuan

@Rayhane-mamah yeah, I have finished my exams, so I will fix the bugs in my code in the coming days. :)

@HyperGD1994

@Rayhane-mamah hi, I'm training the WaveNet with your revised code, feeding raw audio input, and the eval result does not look good at 20K steps. I wonder whether it will improve with more training steps.

Meanwhile, I have trained ibab's WaveNet with local conditioning, mu-law-quantized audio and mel-spec input, and it can generate correct audio after a day of training. I'm not satisfied with the acoustic quality, so I am trying your code with raw input and a bigger net.

I'm confused about the eval result: is the revised code still not right, or do I just need to train for more steps? Thanks ~

@azraelkuan

@HyperGD1994 yes, there is a problem in the eval code. I am trying to fix the bugs, but like @a3626a, I still cannot get good results.

@HyperGD1994

@azraelkuan have you tried ibab's generation method? Would that be simpler?

@butterl

butterl commented Jul 3, 2018

@azraelkuan

I found it gives good result wavs in the training samples.

[screenshot: training-sample waveplot]

But it gets stuck when running synthesis with a pretrained model; I'm not sure whether that's related to the problem you found.

In synthesizer.py in wavenet_vocoder:

	def synthesize(self, mel_spectrogram, speaker_id, index, out_dir, log_dir):
		hparams = self._hparams
		local_cond, global_cond = self._check_conditions()

		c = mel_spectrogram
		g = speaker_id
		feed_dict = {}
		print("Hi I'm here")
		if local_cond:
			feed_dict[self.local_conditions] = [np.array(c, dtype=np.float32)]
		else:
			feed_dict[self.synthesis_length] = 100
		print("Hi I'm here 2")
		if global_cond:
			feed_dict[self.global_conditions] = [np.array(g, dtype=np.int32)]

		generated_wav = self.session.run(self.model.y_hat, feed_dict=feed_dict)   <<< stuck here
		print("Hi I'm here 3")

@azraelkuan

I tested the incremental step using test_inputs and it works well; maybe I need to train longer for the case without test inputs.
This is the incremental result using test inputs:
[waveplot: incremental synthesis with test inputs]

@begeekmyfriend
Contributor Author

@azraelkuan @butterl Can you open your forks for everyone interested?

@HyperGD1994

@azraelkuan wow, that's wonderful. May I ask how many steps you trained to get this? Do you input only the mel spec, or the audio file too?

@azraelkuan

@HyperGD1994 this result uses test inputs, and only 1500 steps. I am testing the real evaluation step now.

@butterl

butterl commented Jul 4, 2018

@begeekmyfriend @HyperGD1994 I used the HEAD of the repo, only adapted for the THCHS-30 dataset. The waveplot is from the eval during WaveNet training; Tacotron was trained for 100K steps and WaveNet for 160K.

@azraelkuan any suggestions on modifying the real evaluation step? I can only see it stuck at tqdm 0%.

@v-yunbin

@butterl I have the same problem as you. Have you solved it?

@WendongGan

@azraelkuan @begeekmyfriend @HyperGD1994 I also encountered the same problem. Do you have a final solution? Looking forward to your help.

@Yeongtae
Contributor

Has anyone found a solution?
Even if you do not release your code, please share your results and ideas on how to fix it.

@HyperGD1994

@UESTCgan what's your problem? tqdm at 0%? I do not use the whole code, just part of it, so I did not encounter this problem, but I think this bug will not be difficult to fix; you can try to debug it.

As for the true evaluation problem, has @azraelkuan fixed it? The raw input is very hard to train, and the mu-law input seems to have a lot of bugs. However, I modified ibab's code with local conditioning and a bigger net, and I finally got wonderful results with mu-law input. I suggest you guys try that.

@azraelkuan

Indeed, an answer has been given: https://github.com/Rayhane-mamah/Tacotron-2/files/2145382/models.tar.gz. Because training a MoL model needs a lot of time, about two weeks, I haven't tested that, but in my tests mu-law works well.
Below is the plot for mu-law (eval step):
[waveplot: mu-law eval step]

@Yeongtae
Contributor

@azraelkuan Thank you for your answer.

@WendongGan

@HyperGD1994 Thanks for your help! On my current 20h Chinese dataset, I have trained for 240K steps. The predictions produced during training sound good, but only noise is obtained when synthesizing. I will try the latest code next.

@Yeongtae
Contributor

Yeongtae commented Jul 27, 2018

@azraelkuan Did you test on the LJSpeech dataset?
If so, how many iterations did you train to get the above result?

In my case, using 'mulaw', it produces bad wave files full of noise.
[waveplot: noisy mulaw eval output]

@azraelkuan

@Yeongtae at about 40K I can already generate good wavs; the plot above is at about 300K.
Attached is a sample at 44K: 44000.zip

@Yeongtae
Contributor

Yeongtae commented Jul 27, 2018

I made a mistake: I didn't change the value of quantize_channels in hparams.py.
[screenshot: hparams settings]

I'm testing with the parameters in the image above.
I can see that the model's loss is decreasing.
[screenshot: training loss curve]

@azraelkuan Thank you for your advice.
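For anyone hitting the same issue, the relevant hparams.py entries look like this (the values shown are assumed typical settings; double-check against the repo's hparams.py): quantize_channels must match input_type.

input_type = "mulaw-quantize"   # one of: "raw", "mulaw", "mulaw-quantize"
quantize_channels = 256         # 2**16 for "raw"/"mulaw"; 256 for "mulaw-quantize"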

@Rayhane-mamah
Owner

Thank you all for your valuable contributions; this issue has been fixed with the latest commit. If any further problems appear, feel free to open new issues :)
