
Style transfer for two whole songs with trained model #16

Closed
sbenthall opened this issue Nov 8, 2019 · 1 comment

@sbenthall (Owner)

Building on #12

Somehow (lots of free parameters here) do a style transfer of two whole songs, reconstructing the new merged song from the fragments.
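Reassembling the merged song from per-fragment outputs could look like the following sketch (a hypothetical `reconstruct` helper, not part of this repo, assuming fragments are numpy audio arrays and crossfading neighbouring fragments to hide seams):

```python
import numpy as np

def reconstruct(fragments, overlap):
    """Concatenate styled fragments into one song, crossfading
    `overlap` samples between neighbours so the joins are smooth."""
    fade_in = np.linspace(0.0, 1.0, overlap)
    fade_out = fade_in[::-1]
    song = fragments[0].astype(float).copy()
    for frag in fragments[1:]:
        frag = frag.astype(float)
        # Linear crossfade: fade the tail of the song out
        # while fading the head of the next fragment in.
        song[-overlap:] = song[-overlap:] * fade_out + frag[:overlap] * fade_in
        song = np.concatenate([song, frag[overlap:]])
    return song

# Three 1000-sample fragments with a 100-sample crossfade:
frags = [np.ones(1000), np.ones(1000), np.ones(1000)]
out = reconstruct(frags, overlap=100)
```

Whether a plain linear crossfade is good enough (versus overlap-add with windows matched to the fragmenting step) is one of the free parameters mentioned above.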

Since the style transfer from #12 is between two fragments, this raises the question of how to match fragments from one song to fragments of the other.

We can do something messy to start with, such as looping the style song. Using 15-second beat loops from the Yrevocnu Organ could be appropriate here.
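The looping idea could be sketched like this (a hypothetical `pair_fragments` helper, assuming fixed-length fragments, numpy arrays for audio, and cyclic reuse of the shorter style song):

```python
import numpy as np

def pair_fragments(content, style, fragment_len):
    """Split both songs into fixed-length fragments and pair them,
    looping the style fragments so every content fragment gets one."""
    def split(signal):
        n = len(signal) // fragment_len
        return [signal[i * fragment_len:(i + 1) * fragment_len]
                for i in range(n)]

    content_frags = split(content)
    style_frags = split(style)
    # Cycle through the style fragments to cover the whole content song.
    return [(c, style_frags[i % len(style_frags)])
            for i, c in enumerate(content_frags)]

# Example: a 60 s content song against a 15 s style loop at 22050 Hz.
sr = 22050
pairs = pair_fragments(np.zeros(60 * sr), np.zeros(15 * sr),
                       fragment_len=15 * sr)
```

Each `(content, style)` pair would then be fed to the per-fragment style transfer from #12. Smarter matching (e.g. by tempo or spectral similarity) could replace the modular loop later.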

sbenthall added this to the 0.1 prototype milestone on Nov 8, 2019
@sbenthall (Owner, Author)

Be responsible and do #19 first
