Encountered errors while executing training process #2 #39
@Ma5onic try `pip install --upgrade numpy`
Had the same issue. Fixed by installing old dependencies from around 2021.
@KimberleyJensen Thanks, but the newest version of numpy is incompatible.
Maybe I should have used conda to update instead. Thanks anyway.

@Satisfy256 Ouhhh, interesting! Okay, I'll nuke my current install and start over lol.
@KimberleyJensen, you're onto something though; the current requirements.txt seems to also contain an issue related to the one you mentioned here, as does the requirements.txt that @Satisfy256 mentioned.

Still waiting for conda to solve the environment 😢
@Ma5onic I modified the requirements.txt to use old versions. I tested it out and it works for me on Ubuntu 20.04.
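For anyone reproducing this, pinning old versions generally looks like the sketch below. The package names and version numbers are placeholders for illustration, not the actual set @Satisfy256 used:

```shell
# Hypothetical pinned entries in requirements.txt (versions are placeholders,
# not the real ones from this thread):
#   numpy==1.21.6
#   librosa==0.8.1
# Then install exactly what the file pins, bypassing any cached newer wheels:
pip install -r requirements.txt --no-cache-dir
```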
@Satisfy256 okay, sick. That gives me hope; I'll start from scratch and try again.
yay! it works!!!
Linux users with RTX cards, or anyone using a cloud instance, will encounter dependency issues unrelated to the solution above. The PyTorch landing page shows how the install commands differ based on your OS/env.
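As an illustration, the selector on pytorch.org produces commands along these lines for a Linux + pip + CUDA setup; the `cu118` tag is just an example and has to match the CUDA version of your driver or cloud image:

```shell
# Example output of the pytorch.org install selector (Linux, pip, CUDA 11.8).
# Swap the index-url tag (e.g. cu118) for the CUDA build your machine supports:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```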
(Using Leaderboard_B)
First I was stuck solving the environment and I let it sit for 30 min, but conda never finished creating the env from the yml.
Because I was using a cloud instance, I didn't have time to wait and I did this instead:
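For reference (this is a common workaround for stuck conda solves in general, not necessarily what was done here), two options are mamba or conda's libmamba solver; the `environment.yml` filename is assumed:

```shell
# Option 1: create the env with mamba, a faster drop-in solver
conda install -n base -c conda-forge mamba
mamba env create -f environment.yml

# Option 2: switch conda itself to the libmamba solver, then retry
conda install -n base conda-libmamba-solver
conda config --set solver libmamba
conda env create -f environment.yml
```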
It seems that the model doesn't allow me to train it with songs that don't contain vocals.
I deleted the songs that didn't contain vocals, and then the data augmentation succeeded, but all attempts to train failed and I didn't have time to debug in the cloud GPU instance.
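Since the failure mode was songs without vocals, an RMS check can flag near-silent vocal stems before augmentation instead of deleting songs by hand. This is only a sketch: the function name, the threshold, and the idea of loading each stem as a float array (e.g. via soundfile) are my assumptions, not part of the repo:

```python
import numpy as np

def is_effectively_silent(audio: np.ndarray, rms_threshold: float = 1e-4) -> bool:
    """Return True if the waveform's RMS energy is below the threshold.

    `audio` is a mono float waveform; the 1e-4 threshold is a guess and
    would need tuning against real vocal stems.
    """
    if audio.size == 0:
        return True
    rms = float(np.sqrt(np.mean(np.square(audio.astype(np.float64)))))
    return rms < rms_threshold

# Example: a truly silent "vocal" stem vs. a quiet 440 Hz tone at 44.1 kHz
silence = np.zeros(44100, dtype=np.float32)
tone = 0.1 * np.sin(2 * np.pi * 440.0 * np.arange(44100) / 44100).astype(np.float32)
print(is_effectively_silent(silence))  # True
print(is_effectively_silent(tone))     # False
```

In practice you would run this over each song's vocal stem and skip (or move aside) the flagged ones before starting augmentation.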
Here is the output from:
`python run.py experiment=multigpu_other model=ConvTDFNet_other`