From 30056f72d4a2a9bf5314dfd9313903eee49ff176 Mon Sep 17 00:00:00 2001
From: 0xflotus <0xflotus@gmail.com>
Date: Sat, 2 Jan 2021 02:55:22 +0100
Subject: [PATCH] fix: small error

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 9f86585..d0ce519 100644
--- a/README.md
+++ b/README.md
@@ -37,7 +37,7 @@ Implementation (PyTorch) of Google Brain's high-fidelity WaveGrad vocoder ([pape
 ___
 ## About
-WaveGrad is a conditional model for waveform generation through estimating gradients of the data density with WaveNet-similar sampling quality. **This vocoder is neither GAN, nor Normalizing Flow, nor classical autoregressive model**. The main concept of vocoder is based on *Denoising Diffusion Probabilistic Models* (DDPM), which utilize *Langevin dynamics* and *score matching* frameworks. Furthemore, comparing to classic DDPM, WaveGrad achieves super-fast convergence (6 iterations and probably lower) w.r.t. Langevin dynamics iterative sampling scheme.
+WaveGrad is a conditional model for waveform generation through estimating gradients of the data density with WaveNet-similar sampling quality. **This vocoder is neither GAN, nor Normalizing Flow, nor classical autoregressive model**. The main concept of vocoder is based on *Denoising Diffusion Probabilistic Models* (DDPM), which utilize *Langevin dynamics* and *score matching* frameworks. Furthermore, comparing to classic DDPM, WaveGrad achieves super-fast convergence (6 iterations and probably lower) w.r.t. Langevin dynamics iterative sampling scheme.
 ___
 ## Installation
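
For context, the About paragraph touched by this patch alludes to DDPM-style iterative sampling (Langevin dynamics, a short noise schedule of around 6 iterations). Below is a minimal, illustrative PyTorch sketch of such a reverse-diffusion loop; `DummyEpsModel`, `betas`, and `mel` are hypothetical placeholders and not part of the WaveGrad codebase.

```python
# Illustrative sketch of a DDPM-style reverse (denoising) loop with a short
# 6-step noise schedule. Not the actual WaveGrad implementation.
import torch

class DummyEpsModel(torch.nn.Module):
    """Stand-in for the conditional network that predicts the added noise."""
    def forward(self, y_noisy, mel, noise_level):
        return torch.zeros_like(y_noisy)  # a real model would predict epsilon here

@torch.no_grad()
def sample(model, mel, betas, audio_len):
    alphas = 1.0 - betas
    alphas_cum = torch.cumprod(alphas, dim=0)
    y = torch.randn(1, audio_len)                  # start from pure Gaussian noise
    for t in reversed(range(len(betas))):          # iterate the (short) schedule
        eps = model(y, mel, alphas_cum[t].sqrt())  # predict noise given the mel condition
        y = (y - betas[t] / (1 - alphas_cum[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:                                  # inject noise except at the final step
            y = y + betas[t].sqrt() * torch.randn_like(y)
    return y

betas = torch.linspace(1e-4, 0.5, steps=6)         # 6-iteration schedule, as in the README claim
waveform = sample(DummyEpsModel(), mel=None, betas=betas, audio_len=16000)
print(waveform.shape)  # torch.Size([1, 16000])
```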