
high train loss #130

Open
eleveneee opened this issue Mar 28, 2023 · 0 comments

Comments

@eleveneee

First of all, thank you for your excellent work!
I have not modified any of the default parameters, but my loss is still high after 25 epochs of training. Do I need to continue training, or is this normal?
Looking forward to your reply!
Here is the specific situation:

23100.7s 1674 Mon Mar 27 23:05:44 2023 Epoch: 24 Sample 7120/7200 Loss: 0.4667869508266449
23104.6s 1675 Mon Mar 27 23:05:48 2023 Epoch: 24 Sample 7130/7200 Loss: 0.5839558243751526
23108.4s 1676 Mon Mar 27 23:05:52 2023 Epoch: 24 Sample 7140/7200 Loss: 0.4183468520641327
23112.2s 1677 Mon Mar 27 23:05:55 2023 Epoch: 24 Sample 7150/7200 Loss: 0.6217041611671448
23116.2s 1678 Mon Mar 27 23:05:59 2023 Epoch: 24 Sample 7160/7200 Loss: 0.5632153153419495
23120.8s 1679 Mon Mar 27 23:06:03 2023 Epoch: 24 Sample 7170/7200 Loss: 0.47710853815078735
23123.9s 1680 Mon Mar 27 23:06:07 2023 Epoch: 24 Sample 7180/7200 Loss: 0.6275425553321838
23127.7s 1681 Mon Mar 27 23:06:11 2023 Epoch: 24 Sample 7190/7200 Loss: 0.42571431398391724
23143.9s 1682 Current learning rate> 0.0001
23143.9s 1683 -------------------------------------------------------
23143.9s 1684 DexiNed, # of Parameters:
23143.9s 1685 35215245
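The individual per-sample losses above fluctuate between roughly 0.4 and 0.6, so a single line is hard to judge. One way to see whether training has plateaued is to average the loss over the logged samples instead of eyeballing individual values. A minimal sketch, assuming only the `... Loss: <value>` log format shown above (the helper name and example lines are illustrative, not part of DexiNed):

```python
import re

def mean_loss(log_lines):
    """Pull the floating-point value after 'Loss:' from each log line
    and return the mean of all values found (NaN if none match)."""
    losses = [float(m.group(1))
              for line in log_lines
              if (m := re.search(r"Loss:\s*([0-9.]+)", line))]
    return sum(losses) / len(losses) if losses else float("nan")

# Two lines taken from the log above:
log = [
    "Epoch: 24 Sample 7180/7200 Loss: 0.6275425553321838",
    "Epoch: 24 Sample 7190/7200 Loss: 0.42571431398391724",
]
print(round(mean_loss(log), 4))  # → 0.5266
```

Tracking this running mean per epoch makes it easier to tell whether the loss is still trending down or has genuinely flattened out.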