Commit

v2meow website
qqhuang-google committed Jun 24, 2023
1 parent 35eb0a3 commit 82f463d
Showing 61 changed files with 702 additions and 0 deletions.
4 changes: 4 additions & 0 deletions v2meow/README.md
@@ -0,0 +1,4 @@
# V2Meow: Meowing to the Visual Beat via Music Generation
This website accompanies the paper:

[V2Meow: Meowing to the Visual Beat via Music Generation](https://arxiv.org/pdf/2305.06594v1.pdf).
1 change: 1 addition & 0 deletions v2meow/README.txt
@@ -0,0 +1 @@
This is not an officially supported Google product.
25 changes: 25 additions & 0 deletions v2meow/helper.js
@@ -0,0 +1,25 @@
// Per-table pagination state: index of the currently visible page.
window.table_1_page = 0;
window.table_2_page = 0;
window.table_3_page = 0;
window.table_4_page = 0;

// Switch the given table to `page`, hiding the rows of the current page and
// revealing the rows of the requested page. `page_state_name` is the name of
// the window-level variable that tracks the table's current page.
function paginateTable(table_name, page, page_state_name) {
  let current_page = window[page_state_name];
  if (current_page === page) {
    return;
  }

  // Deactivate the current page's button and hide its rows.
  let active_button = document.getElementById(table_name + "-page-" + current_page.toString() + "-button");
  active_button.parentElement.classList.toggle("active");
  let active_rows = document.getElementById(table_name + "-page-" + current_page.toString());
  active_rows.toggleAttribute("hidden");

  // Activate the requested page's button and reveal its rows.
  window[page_state_name] = page;
  let next_button = document.getElementById(table_name + "-page-" + page.toString() + "-button");
  next_button.parentElement.classList.toggle("active");
  let next_rows = document.getElementById(table_name + "-page-" + page.toString());
  next_rows.toggleAttribute("hidden");

  // Eagerly preload any audio players on the newly revealed page.
  let audio_players = next_rows.getElementsByTagName("audio");
  for (let audio_player of audio_players) {
    audio_player.setAttribute("preload", "auto");
  }
}
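
For reference, paginateTable assumes that each paginated table has, for every page n, a row group with id "<table>-page-<n>" (carrying the hidden attribute when inactive) and a page button with id "<table>-page-<n>-button" whose parent element holds the "active" class for the current page. No such controls are included in this commit, so the markup below is only an illustrative sketch; the "table-2" / "table_2_page" names are placeholders.

<ul class="pagination">
  <li class="active"><a id="table-2-page-0-button"
      onclick="paginateTable('table-2', 0, 'table_2_page')">1</a></li>
  <li><a id="table-2-page-1-button"
      onclick="paginateTable('table-2', 1, 'table_2_page')">2</a></li>
</ul>
<table id="table-2">
  <tbody id="table-2-page-0"><!-- page 0 rows, visible initially --></tbody>
  <tbody id="table-2-page-1" hidden><!-- page 1 rows, revealed on click --></tbody>
</table>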
270 changes: 270 additions & 0 deletions v2meow/index.html
@@ -0,0 +1,270 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<link
rel="stylesheet"
href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css"
integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u"
crossorigin="anonymous"
/>
<link
href="https://fonts.googleapis.com/css?family=Roboto:300,400,900"
rel="stylesheet"
type="text/css"
/>
<link href="style.css" rel="stylesheet" />
<script src="helper.js"></script>
<title>V2Meow</title>
</head>
<body>
<div id="header" class="container-fluid">
<div class="row" style="text-align: center">
<div class="row">
<h1>V2Meow</h1>
<div class="title">V2Meow: Meowing to the Visual Beat via Music Generation</div>
<div class="authors">Google Research, Google Deepmind</div>

<div class="assets">
<a href="https://arxiv.org/pdf/2305.06594v1.pdf" target="_blank"> [paper] </a>
</div>
<!-- <div class="assets">
<a href="go/arxiv-paper" target="_blank"> [paper] </a>
</div> -->
</div>
</div>
</div>

<div id="overview" class="container">
<h2>Overview</h2>
<span>
<h3>Controllable Video to Music Generation via Multi-stage Sequential Modeling</h3>
We propose V2Meow, a novel approach that generates high-quality music audio aligned with the visual semantics of an input video and controllable via text prompts. Specifically, the proposed music generation system is a multi-stage autoregressive model trained on O(100K) music audio clips paired with video frames mined from in-the-wild music videos; no parallel symbolic music data is involved. V2Meow synthesizes high-fidelity music audio waveforms conditioned solely on an arbitrary silent video clip, and it also supports high-level control over the music style of the generated examples via text prompts in addition to the video-frame conditioning. Through both qualitative and quantitative evaluations, we demonstrate that our model outperforms several existing music generation systems in terms of both visual-audio correspondence and audio quality.
<br /><br />
<br />
<img src="v2meow_overview.png" alt="" class="center">
<br>
</span>
</div>
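<!--
  Illustrative sketch only, not the actual V2Meow implementation: the
  multi-stage generation flow described above, with placeholder names
  (extractVisualFeatures, combine, stages, decodeToWaveform are assumptions
  made for illustration).

    // visual features (CLIP + I3D flow) extracted from video frames at 1 fps,
    // plus an optional text prompt for high-level style control
    const conditioning = combine(extractVisualFeatures(frames), textPrompt);
    // each autoregressive stage refines the token sequence from the previous stage
    let tokens = [];
    for (const stage of stages) {
      tokens = stage.generate(conditioning, tokens);
    }
    // the final tokens are decoded into a ~10 s music audio waveform
    const waveform = decodeToWaveform(tokens);
-->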

<div id="table-0" class="container">

<h2>Zero-shot Evaluation on AIST++ Video Inputs</h2>
<span>
Music samples generated by V2Meow on <a href="https://google.github.io/aistplusplus_dataset/factsfigures.html" target="_blank"> AIST++ </a> dance videos. No dance motion data is used as input; only visual features (CLIP + I3D FLOW embeddings) at 1 fps are used to generate 10 s of music audio.
<br /><br />
</span>
<table class="table table-responsive">
<tbody id="table-3-page-0">
<tr class="pure-table-odd">
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table4/table4_row1_col1.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table4/table4_row1_col2.mp4" type="video/mp4">
</video>
</td>
</tr>

<tr class="pure-table-odd">
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table4/table4_row2_col1.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table4/table4_row2_col2.mp4" type="video/mp4">
</video>
</td>
</tr>

<tr class="pure-table-odd">
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table4/table4_row3_col1.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table4/table4_row3_col2.mp4" type="video/mp4">
</video>
</td>
</tr>

<tr class="pure-table-odd">
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table4/table4_row4_col1.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table4/table4_row4_col2.mp4" type="video/mp4">
</video>
</td>
</tr>

<tr class="pure-table-odd">
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table4/table4_row5_col1.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table4/table4_row5_col2.mp4" type="video/mp4">
</video>
</td>
</tr>
</tbody>
</table>
</div>

<div id="table-1" class="container">

<h2>V2Meow Generates Visually Relevant Music for Non-Dance Video Inputs</h2>
<span>
Music samples generated by V2Meow on cat videos. Visual features (CLIP + I3D FLOW embeddings) at 1 fps are used to generate 10 s of music audio. Compared to dance videos, cat videos lack easily identifiable visual cues, such as dance motions, that directly correlate with music rhythm. Since cat videos are not included in the training data, this is also considered a zero-shot evaluation.
<br /><br />
</span>
<table class="table table-responsive">
<tbody id="table-1-page-0">
<tr class="pure-table-odd">
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row1_col1.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row1_col2.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row1_col3.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row1_col4.mp4" type="video/mp4">
</video>
</td>
</tr>
<tr class="pure-table-odd">
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row2_col1.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row2_col2.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row2_col3.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row2_col4.mp4" type="video/mp4">
</video>
</td>
</tr>
<tr class="pure-table-odd">
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row3_col1.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row3_col2.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row3_col3.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row3_col4.mp4" type="video/mp4">
</video>
</td>
</tr>
<tr class="pure-table-odd">
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row4_col1.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row4_col2.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row4_col3.mp4" type="video/mp4">
</video>
</td>
<td class="text-center">
<video width="100%" height="auto" controls>
<source src="table3/table3_row4_col4.mp4" type="video/mp4">
</video>
</td>
</tr>
</tbody>
</table>
</div>


<div class="container">
<h2>Broader Impact</h2>
<span>
Controllable generative models such as V2Meow can serve as the foundation for new tools, technologies, and practices for content creators. While our motivation is to support creators to enrich their creative pursuits, we acknowledge that these models need to be developed and deployed in a manner that takes into account the values and wellbeing of creators, their communities, and society.
<br /><br />
In particular, large generative models learn to imitate patterns and biases inherent in their training sets, and in our case, the model can propagate the potential biases built into the video and music corpora used to train it. Such biases can be hard to detect as they often manifest in subtle, unpredictable ways that are not fully captured by our current evaluation benchmarks. Demeaning or other harmful language may be generated in model outputs, due to learned associations or by chance. A thorough analysis of our training dataset shows that the genre distribution is skewed towards a few genres, and within each genre, gender, age and ethnic groups are not represented equally. For example, males are dominant in the hip-hop and heavy metal genres. These concerns extend to learned visual-audio associations, which may lead to stereotypical associations between video content (e.g. people, body movements/dance styles, locations, objects) and a narrow set of musical genres, or to demeaning associations between choreography in video content and audio output (e.g. minstrelsy, parody, miming). ML fairness testing is required to understand the likelihood of these patterns in any given model and to intervene in them effectively. We expand on these concerns in our data and model card.
<br /><br />
As such, in tandem with our algorithmic advances, we are actively working both internally and externally on initiatives to support the understanding and mitigation of possible risks of bias inherited from the training data, cultural appropriation and stereotyping, and erasure of the cultural and political context of music. Further work is required to determine whether the generated audio is contextually appropriate, which extends beyond technical measurements such as tempo or rhythmic alignment. This requires an understanding of social and musical context, and is best done in collaboration with cultural and musical experts. We stress that these issues, among others, are as important and valuable as the algorithmic advances that sometimes overshadow the broader context in which models exist.
</span>
</div>

<div class="container">
<h2>Authors</h2>
<span>
Kun Su<sup>2</sup>*, &nbsp;
Judith Yue Li<sup>1</sup>*, &nbsp;
Qingqing Huang<sup>1,†</sup>, &nbsp;
Dima Kuzmin<sup>1,†</sup>, &nbsp;
Joonseok Lee<sup>1,†</sup>, &nbsp;
Chris Donahue<sup>1</sup>, &nbsp;
Fei Sha<sup>1</sup>, &nbsp;
Aren Jansen<sup>1</sup>, &nbsp;
Yu Wang<sup>2</sup>, &nbsp;
Mauro Verzetti<sup>1</sup>, &nbsp;
Timo Denk<sup>1</sup>
<br />
<br />
*Lead author &nbsp; <sup>†</sup>Core contributor &nbsp; <sup>1</sup>Google Research &nbsp; <sup>2</sup>Work done while at Google. Correspondence to: Judith Li (judithyueli@google.com).
</span>
</div>

<div class="container">
<h2>Acknowledgements</h2>
<span>
We are grateful for the support from Jesse Engel, Ian Simon, Hexiang Hu, Christian Frank, Neil Zeghidour, Andrea Agostinelli, David Ross and the authors of the MusicLM project, who shared their research insights, tutorials and demos. Many thanks to Austin Tarango, Leo Lipsztein, Fernando Diaz, Renee Shelby, Rida Qadri and Cherish Molezion for reviewing the paper and supplementary materials and sharing valuable feedback on responsible AI practice. Thanks to Sarvjeet Singh, John Anderson, Hugo Larochelle, Blake Cunningham and Jessica Colnago for supporting the publication process. We owe thanks to Muqthar Mohammad, Rama Krishna Devulapalli, Sally Goldman, Yuri Vasilevski and Alena Butryna for supporting our human study. Special thanks to Joe Ng, Zheng Xu, Yu-Siang Wang, Ravi Ganti, Arun Chaganty, Megan Leszczynski and Li Yang for exchanging research ideas and sharing engineering best practices. Thanks to Li Li, Jun Wang, Jeff Wang, Bruno Costa and Mukul Gupta for sharing early feedback on our demo.
</span>
</div>


</body>
</html>