Batch size and GPU out of memory #37

Open
danielemirabilii opened this issue Nov 26, 2021 · 0 comments
@danielemirabilii

Hi, I have been trying to train the FullSubNet model for a while using the code in this repo. In my experience, I can use a batch size of at most 12, which results in very slow and inefficient training (the loss decreases quite slowly). If I try a larger batch size, I get a GPU out-of-memory error.

I have two Nvidia RTX 2080 Ti GPUs with 11 GB of memory each. I see from train.toml that the default batch size is 48. Any suggestions on that?
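In case it helps frame the question: one generic workaround I have been considering is gradient accumulation, which keeps the per-step batch at 12 but accumulates gradients over several steps before updating, so the effective batch matches the 48 from train.toml. This is only a minimal sketch, not code from this repo; the model, loss, and data below are dummy placeholders standing in for the actual FullSubNet training pipeline.

```python
import torch
import torch.nn as nn

# Minimal gradient-accumulation sketch; everything here is a placeholder,
# not the FullSubNet model or data loader from this repository.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(257, 257)).to(device)   # dummy stand-in model
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

micro_batch = 12          # what fits on an 11 GB card
accumulation_steps = 4    # 4 x 12 = effective batch of 48, matching train.toml

# Dummy loader yielding (noisy, clean) pairs of spectral frames.
loader = [(torch.randn(micro_batch, 257), torch.randn(micro_batch, 257))
          for _ in range(8)]

optimizer.zero_grad()
for step, (noisy, clean) in enumerate(loader):
    noisy, clean = noisy.to(device), clean.to(device)
    loss = loss_fn(model(noisy), clean)
    # Scale the loss so the accumulated gradient matches one large batch of 48.
    (loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

I assume mixed precision (torch.cuda.amp) or shorter training sequences could also reduce memory, but I am not sure which approach the authors intended for the default batch size of 48.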
