How to change the logging setting for SGD and layerwise trainer? #27

Closed
AminSuzani opened this issue Jun 27, 2014 · 3 comments

@AminSuzani

Hi,

I was wondering if there is a way to control the logging output for the layerwise and SGD optimizers. For example, I'd like to see the training error only every 50 updates (not on every update). My training takes a couple of days, and whenever I get back to my computer I can only see the log output from the last hour or so, which doesn't give me a good sense of what's going on.

Thanks for your great package,
Amin

@kastnerkyle
Contributor

If you are using Linux (and maybe Mac too?), try

python -u myfile.py 2>&1 | tee log.log

This logs all output to a file called log.log while still letting you see the most recent output in the terminal. The -u flag runs Python in unbuffered mode, which is what allows tee to work properly. While logging thresholds would be nice, being able to see the full execution history is still important!
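When you get back to the machine, the saved log lets you review more than the terminal scrollback ever will, for instance (log.log is just the file name from the command above):

tail -n 100 log.log    # show the last 100 logged lines
less log.log           # page through the whole run so far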

@lmjohns3
Owner

I agree with @kastnerkyle that it's useful to have a log with lots of information in it, and the tee strategy is one that I use personally (then later you can grep through the on-disk log file and make quick learning curve graphs, etc.).
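For example, here is a quick way to pull a learning curve out of an on-disk log; the error= pattern below is only a placeholder assumption, so match whatever your log lines actually print:

# Extract the training-error values from the saved log into a plain column of numbers.
# The 'error=' pattern is a placeholder assumption about the log format.
grep -o 'error=[0-9.]*' log.log | cut -d= -f2 > curve.txt
# curve.txt can then be fed to any plotting tool to graph the learning curve.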

However, you can also configure the number of training batches that are processed per log line by using the --train-batches (command-line) or train_batches (programmatic) argument when running your model. Set this to a large number to see less frequent updates.
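For example, combining this with the tee approach above (myfile.py and the value 100 are placeholders; --train-batches is the flag described here):

# Log one line per 100 training batches instead of one per batch,
# while still capturing everything to log.log.
python -u myfile.py --train-batches 100 2>&1 | tee log.log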

@AminSuzani
Author

Thanks, Kyle and Leif, for your answers. Using tee was a really good idea. Increasing train_batches made the training faster, but led to lower accuracy.

lmjohns3 closed this as completed Jul 1, 2014