
The "Test of Epoch [number]" displays an incorrect number #1425

Closed

Franck-Dernoncourt opened this issue Jul 2, 2018 · 3 comments

@Franck-Dernoncourt

The "Test of Epoch [number]" seems to sometimes display an incorrect number.

In the following example, it says Test of Epoch 77263, even though, to my understanding, there should be just 1 epoch, since I passed --display_step 1 --limit_train 1 --limit_dev 1 --limit_test 1 --early_stop False --epoch 1 as arguments:

dernoncourt@ilcomp:~/asr/DeepSpeech$ ./DeepSpeech.py \
--train_files data/common-voice-v1/cv-valid-train.csv,data/common-voice-v1/cv-other-train.csv  \
--dev_files data/common-voice-v1/cv-valid-dev.csv --test_files data/common-voice-v1/cv-valid-test.csv \
--decoder_library_path /asr/DeepSpeech/libctc_decoder_with_kenlm.so --fulltrace True \
--display_step 1  --limit_train 1  --limit_dev 1  --limit_test 1 --early_stop False --epoch 1
W Parameter --validation_step needs to be >0 for early stopping to work
I Test of Epoch 77263 - WER: 1.000000, loss: 60.50202560424805, mean edit distance: 0.894737
I --------------------------------------------------------------------------------
I WER: 1.000000, loss: 58.900837, mean edit distance: 0.894737
I  - src: "how do you like her"
I  - res: "i "
I --------------------------------------------------------------------------------
I WER: 1.000000, loss: 60.517113, mean edit distance: 0.894737
I  - src: "how do you like her"
I  - res: "i "
I --------------------------------------------------------------------------------
I WER: 1.000000, loss: 60.668221, mean edit distance: 0.894737
I  - src: "how do you like her"
I  - res: "i "
I --------------------------------------------------------------------------------
I WER: 1.000000, loss: 61.921925, mean edit distance: 0.894737
I  - src: "how do you like her"
I  - res: "i "
I --------------------------------------------------------------------------------

Corresponding Discourse thread: https://discourse.mozilla.org/t/what-does-the-test-of-epoch-number-mean/29770/2

@Franck-Dernoncourt Franck-Dernoncourt changed the title The "Test of Epoch [number]" displays a incorrect number The "Test of Epoch [number]" displays an incorrect number Jul 2, 2018
@tilmankamp
Contributor

tilmankamp commented Jul 2, 2018

@Franck-Dernoncourt This is actually not a bug: the current epoch is computed from your current parameters and the global step count persisted in the snapshot. Take a close look at this excerpt:

# Number of GPUs per worker - fixed for now by local reality or cluster setup
gpus_per_worker = len(available_devices)

# Number of batches processed per job per worker
batches_per_job  = gpus_per_worker * max(1, FLAGS.iters_per_worker)

# Number of batches per global step
batches_per_step = gpus_per_worker * max(1, FLAGS.replicas_to_agg)

# Number of global steps per epoch - to be at least 1
steps_per_epoch = max(1, model_feeder.train.total_batches // batches_per_step)

# The start epoch of our training
self._epoch = step // steps_per_epoch
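
For illustration (my own numbers, not stated in the thread): with --limit_train 1 the training set amounts to a single batch, so total_batches is 1 and steps_per_epoch collapses to max(1, 1 // batches_per_step) = 1. The displayed epoch then equals the persisted global step itself, so a snapshot restored at global step 77263 would print exactly "Test of Epoch 77263".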

So what happens is that the set size you used during training differs from your current set size; hence the strange epoch number.

Simplified example (leaving batch size out to avoid confusion):
If you once trained 5 epochs on a training set of 1000 samples, you accumulated 5000 "global steps" (persisted as a number in your snapshot).
After this training you change the command-line parameters to a set of size 1 (your --limit_* parameters).
"Suddenly" epoch 5000 is displayed, because 5000 global steps now corresponds to applying a data set of size 1 a total of 5000 times.

@Franck-Dernoncourt
Author

Got it, thanks for the explanation!

@lock

lock bot commented Jan 2, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked and limited conversation to collaborators Jan 2, 2019