forked from alshedivat/al-folio
new papers and removed a gem for compilation on my home computer
1 parent 60557fc · commit a1ec432
Showing 10 changed files with 52 additions and 28 deletions.
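The commit message mentions removing a gem so the site compiles on a home computer, but the Gemfile change itself is not rendered below. As a hypothetical illustration only (the gem names here are assumptions, not taken from this commit), dropping a gem that fails to build locally from an al-folio-style Gemfile looks like commenting out or deleting its line:

```ruby
# Gemfile — hypothetical sketch, not the actual diff from this commit
source 'https://rubygems.org'

gem 'jekyll'

group :jekyll_plugins do
  gem 'jekyll-archives'
  gem 'jekyll-scholar'
  # gem 'mini_racer'  # removed: its native extension fails to compile on some machines
end
```

After editing the Gemfile, running `bundle install` regenerates `Gemfile.lock` without the removed gem.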
@@ -1,8 +1,10 @@
 ---
 layout: post
-date: 2023-06-03 15:59:00-0400
+date: 2023-07-15 15:59:00-0400
 inline: true
 related_posts: false
 ---
 I am moving my old [personal page](https://sites.google.com/site/marcandrecarbonneau/publications) to github.
 Ubisoft had published a [blog page](https://www.ubisoft.com/en-us/studio/laforge/news/5ADkkY0BMG9vNSDuUMtkeg/zeroeggs-zeroshot-examplebased-gesture-generation-from-speech) describing our system for gesture generation conditioned on speech.
+\
+This system was presented in ["ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech"](https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14734) and showcased on [2 minute papers](https://www.youtube.com/watch?v=Dt0cA2phKfU&ab_channel=TwoMinutePapers).
@@ -0,0 +1,14 @@
+---
+layout: post
+date: 2023-10-27 15:59:00-0400
+inline: true
+related_posts: false
+---
+
+Our paper "EDMSound: Spectrogram Based Diffusion Models for Efficient and High-Quality Audio Synthesis" has been accepted for presentation at the NeurIPS Workshop on ML for Audio. This work was done in collaboration with colleagues from the University of Rochester.
+\
+\
+In this paper, we propose a diffusion-based generative model in the spectrogram domain, built on the framework of elucidated diffusion models (EDM). We also reveal a potential concern with diffusion-based audio generation models: they tend to duplicate their training data.
+\
+\
+Check out the [project page](https://agentcooper2002.github.io/EDMSound/)!
@@ -1,11 +1,13 @@
 ---
 layout: post
-date: 2023-07-31 15:59:00-0400
+date: 2023-09-21 15:59:00-0400
 inline: true
 related_posts: false
 ---

-We released on Arxiv our latest research effort on voice conversion. In this paper we model the natural rhythm of speakers to perform conversion while respecting the target speaker's natural rhythm. We do more than approximating the global speech rate, we model duration for sonorants, obstruents, and silences.
-
+Our paper ["Rhythm Modeling for Voice Conversion"](https://ieeexplore.ieee.org/document/10246359) has been published in IEEE Signal Processing Letters. We also released it on [Arxiv](https://arxiv.org/abs/2307.06040).
+\
+In this paper we model the natural rhythm of speakers to perform conversion while respecting the target speaker's rhythm. We do more than approximate the global speech rate: we model durations for sonorants, obstruents, and silences.
+\
 Check out the [demo page](https://ubisoft-laforge.github.io/speech/urhythmic/)!
The remaining changed files (two binary files and one file GitHub could not display) are not shown.