EarlyStopping's occurrence #33
Hi, please try to set a larger …
Hi.
Hello. Thank you for your swift response. I did what cookieminions told me, and the code made it to Epoch 20 with an EarlyStopping counter of 18 out of 100 before terminating. The resulting MSE and MAE were 0.45 and 0.50 respectively. However, soon after the code terminated, it ran the test again on its own, but again only made it to Epoch 20, this time with an EarlyStopping counter of 19 out of 100. The MSE and MAE were similar to those from the previous run. I have tried to find the error log files in the code but could not, and the code itself did not produce any error prompt anywhere during its run, so I was wondering if you could point me to a place where I can look for them. Thank you.
Hi,
I see. But is running the code with its default configuration supposed to produce the same result as the one on the chart here? Also, what could cause the validation loss to stop dropping continuously?
On Feb 24, 2021, at 19:13, Cookie <notifications@github.com> wrote:
Hi,
The program ran again because the default number of repeated experiments, itr, is set to 2. The EarlyStopping patience means that if the number of consecutive epochs in which the validation loss does not drop reaches patience, the experiment stops. But if train_epochs is reached first, the experiment also stops.
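The stopping rule described above can be sketched as follows. This is a hedged illustration, not the repository's actual implementation; the class name, counter message, and toy losses are assumptions modeled on the log output quoted in this thread.

```python
# Hypothetical sketch of the early-stopping behaviour described above.
# Names (EarlyStopping, patience, counter) are assumptions, not the
# repository's exact code.

class EarlyStopping:
    def __init__(self, patience=3):
        self.patience = patience      # allowed consecutive epochs without improvement
        self.counter = 0              # consecutive epochs without improvement so far
        self.best_loss = None
        self.early_stop = False

    def __call__(self, val_loss):
        if self.best_loss is None or val_loss < self.best_loss:
            self.best_loss = val_loss  # improvement: remember it, reset the counter
            self.counter = 0
        else:
            self.counter += 1          # no improvement this epoch
            print(f"EarlyStopping counter: {self.counter} out of {self.patience}")
            if self.counter >= self.patience:
                self.early_stop = True

# Toy validation losses: two improvements, then three epochs with no improvement.
losses = [1.0, 0.9, 0.95, 0.96, 0.97, 0.8]
stopper = EarlyStopping(patience=3)
stopped_at = None
for epoch, val_loss in enumerate(losses):
    stopper(val_loss)
    if stopper.early_stop:
        stopped_at = epoch  # training would break out of its loop here
        break
```

Note that train_epochs acts as a second stop condition: whichever limit is hit first ends the run, and with itr set to 2 the whole experiment repeats once more from scratch, which is why the counter appears to start over.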
If you use the default configuration, you will get a multivariate prediction result with a prediction length of 24, which is shown in the upper-left corner of Figure 5.
Oh, now I understand. I was looking at Figure 4 and was confused about why the result would be so far off. Now that I am looking at the correct figure, everything seems to be working just fine.
Maybe you can reduce the number of …
If there is no more discussion, I will close this issue in 12 hours.
Sorry that I forgot to check back on this thread. The issue is solved for now, and I will close it.
Hello,
I am currently trying to run your code to see how it works, but every time it terminates too soon because of EarlyStopping, and the resulting MSE and MAE are quite far off from the results shown here. I have not been involved with programming for a long time, so my knowledge is too limited to solve the problem myself. That said, I did try setting the EarlyStopping patience to 100, but the code still ended on its own even though the EarlyStopping counter was only at 3 out of 100. Also, at the start the code prints "Use GPU: cuda: 0", which made me wonder whether the training was actually being done on the CPU. When I checked with Task Manager, GPU use was at almost 100%, so I believed it was fine, but the fact that the code terminates too early every time still makes me wonder if it is using the GPU properly. It would be great if you could provide some help with this.
In case any information on my specs is needed:
OS: Windows Server 2019 64-bit
Processor: Intel Xeon CPU @ 2.20GHz 2.20GHz
Memory: 30GB
GPU: Nvidia Tesla V100
Thank you in advance. Let me know if there is any additional information you need.
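For the GPU concern above, a quick way to confirm PyTorch is actually using the V100 is to query torch.cuda directly before training. This is a minimal check, assuming the project runs on PyTorch (which the "Use GPU: cuda: 0" message suggests):

```python
import torch

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("CUDA available:", torch.cuda.is_available())
if device.type == "cuda":
    print("Device name:", torch.cuda.get_device_name(0))  # e.g. a Tesla V100

# A tensor placed on `device` reports where it lives; during training the
# model and every batch should all sit on the same device.
x = torch.randn(4, 4, device=device)
print("Tensor device:", x.device)
```

If "CUDA available" prints True and the tensor reports cuda:0, the GPU is being used; early termination would then come from the stopping logic rather than from a device problem.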