
The variable "end_training" in Bert_Large training is wrongly used. #170

Open
taotod opened this issue Feb 26, 2024 · 1 comment

Comments

@taotod

taotod commented Feb 26, 2024

In the code below, the variable `end_training` is defined as a boolean that decides when to end training.

https://github.com/IntelAI/models/blob/cdd842a33eb9d402ff18bfb79bd106ae132a8e99/models/language_modeling/pytorch/bert_large/training/gpu/run_pretrain_mlperf.py#L838

In the code below, which measures the time of one training iteration, `end_training` is wrongly reused to record the end-of-iteration timestamp.
https://github.com/IntelAI/models/blob/cdd842a33eb9d402ff18bfb79bd106ae132a8e99/models/language_modeling/pytorch/bert_large/training/gpu/run_pretrain_mlperf.py#L1006

Because line 1006 assigns `end_training` a non-zero (always truthy) value, the loop below exits after the first data file is consumed and never advances to the next data file.
https://github.com/IntelAI/models/blob/cdd842a33eb9d402ff18bfb79bd106ae132a8e99/models/language_modeling/pytorch/bert_large/training/gpu/run_pretrain_mlperf.py#L1079
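A minimal sketch of the failure mode and a possible fix. The function names, loop structure, and `steps_per_file` parameter below are hypothetical simplifications, not the actual script; only the `end_training` naming clash is taken from the report.

```python
import time

def buggy_loop(data_files, steps_per_file=3):
    """Simplified sketch of the reported bug."""
    end_training = False  # meant to stay a boolean flag
    files_processed = 0
    for _ in data_files:
        for _ in range(steps_per_file):
            start = time.time()
            # ... one training step ...
            end_training = time.time()  # BUG: flag overwritten with a timestamp
            iter_time = end_training - start
        files_processed += 1
        if end_training:  # timestamp is always truthy, so we exit after one file
            break
    return files_processed

def fixed_loop(data_files, steps_per_file=3):
    """Possible fix: use a separate variable for the timestamp."""
    end_training = False
    files_processed = 0
    for _ in data_files:
        for _ in range(steps_per_file):
            start = time.time()
            # ... one training step ...
            end_time = time.time()  # distinct name keeps the flag intact
            iter_time = end_time - start
        files_processed += 1
        if end_training:  # still False unless the real stop condition fires
            break
    return files_processed
```

With three input files, the buggy version stops after one file while the fixed version processes all three; renaming the timestamp variable (e.g. to `end_time`) at line 1006 would be one way to resolve the issue.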

@sramakintel
Contributor

@taotod could you submit a PR with a fix if possible?
