What is the number of training epochs? #66
Comments
Hello @apavlo89. The default number of epochs is 300. If you want to know the default values for this and other arguments, you can have a look at the API Reference section in our documentation: https://sdv-dev.github.io/CTGAN/api/ctgan.synthesizer.html#ctgan.synthesizer.CTGANSynthesizer
Thank you very much! Is there a specific reason for choosing 300 epochs as the default? Is there some kind of optimum metric for the number of epochs based on the dataset?
I assumed it was just a default number. Not sure if this helps, but in the demo in the README you can set the epochs by …
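A hedged sketch of what that call looks like, assuming the ctgan 0.2.x-era API where `fit()` accepts an `epochs` keyword (the variable names `train_data` and `discrete_columns` are illustrative, not defined here):

```python
# Hedged sketch: assumes the ctgan 0.2.x API, where fit() takes an
# epochs keyword (default 300). The import guard keeps the snippet
# runnable even when ctgan is not installed.
try:
    from ctgan import CTGANSynthesizer

    model = CTGANSynthesizer()
    # train_data / discrete_columns would be your own dataset and its
    # categorical column names (illustrative only):
    # model.fit(train_data, discrete_columns, epochs=50)
except ImportError:
    model = None  # ctgan not installed in this environment
```

Later releases moved `epochs` to the synthesizer's constructor, so check the API reference linked above for the version you have installed.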
I'm quite new to machine learning, especially neural network techniques, so would you say there's a pattern to look for in each epoch, or after a few epochs? What am I aiming for? I'd say after epoch 150 my Loss D and Loss G values were hovering around a specific range of values. My computer then ran out of RAM at 215 epochs. At epoch 215 the generator and discriminator losses were Loss G: 1.6974, Loss D: -77.0800. It gave me the error `DefaultCPUAllocator: not enough memory: you tried to allocate 2709625764 bytes`. Buy new RAM! :(
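For scale, the allocation that failed in that error message works out to roughly 2.5 GiB:

```python
# Size of the single allocation that DefaultCPUAllocator rejected.
nbytes = 2_709_625_764
gib = nbytes / 2**30
print(f"{gib:.2f} GiB")  # about 2.52 GiB
```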
I'm learning also, so I can't be too much help, but I think you can experiment with different settings and datasets and see if the results make sense to you. As for memory, try Google Colab; you can add the line …
Wow, that is totally amazing! THANKS!
The default values for the model hyperparameters are, in most cases, the ones that were used to generate the results in the paper. Regarding the value 300 in particular, the number was decided based on the performance obtained on the different datasets that were used for benchmarking, but different datasets might require different settings. In most cases, a lower number of epochs, just a few dozen, can be more than enough to explore a particular problem a bit faster and get an idea of what the model can do on your data. However, if you want to get the most out of the model, you will probably need to tweak it a little and find the optimal value for each dataset you work on.
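One rough way to formalize the earlier observation that losses were "hovering around a specific range" is a window-based plateau check. This helper is a hypothetical illustration of that idea, not part of CTGAN:

```python
def losses_stabilized(losses, window=20, tol=0.05):
    """Rough plateau test: True when the last `window` loss values
    vary by less than `tol` relative to their mean magnitude."""
    if len(losses) < 2 * window:
        return False  # not enough history to judge
    recent = losses[-window:]
    mean = sum(recent) / window
    spread = max(recent) - min(recent)
    return spread <= tol * max(abs(mean), 1e-8)

# Synthetic loss curve: decays linearly, then flattens at 0.1.
curve = [max(0.1, 1.0 - 0.05 * i) for i in range(100)]
print(losses_stabilized(curve[:30]))  # False: too little history
print(losses_stabilized(curve))       # True: recent values are flat
```

In practice you would record Loss G and Loss D each epoch and stop (or at least checkpoint) once both stop moving, rather than training for a fixed count.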
Yeah, that's indeed quite an annoying error message to get, but it comes directly from PyTorch. There isn't much that we can do about it!
Closing this, as the question has already been answered.
Description
Not so much an issue, but more of a question: what is the default number of training epochs if I don't specify the number?
What I Did