A trained model #12

Open
hongzimao opened this issue Jun 20, 2020 · 9 comments


@hongzimao
Owner

We recently received many requests for our trained model. Here's one after 20,000 steps, trained with the code in the master branch. Hope this helps save some training time when reproducing our results. https://www.dropbox.com/sh/62niiuaa7103cth/AABxu3ekjOYakmr86gMECZ3Ca?dl=0

@Qinghao-Hu

This link seems to have expired; could you please check it?

@hongzimao
Owner Author

hongzimao commented Aug 18, 2021

The link is indeed expired. I lost access to the MIT Dropbox after graduation, so I can no longer retrieve the model from there. I searched my local storage but unfortunately couldn't find the exact trained model. However, I believe others downloaded this model before. Could someone upload a copy here? Thanks a lot!

@hongzimao
Owner Author

hongzimao commented Aug 18, 2021

Here are some models I was able to retrieve from our local machine. Although they were created a few months before this post, they should come from a similar setting. I'm attaching a few model snapshots taken after 20,000 iterations; please check whether the performance is good: models.zip

Still, if someone has the original model from this post, please contact us with a copy and we will upload it here. Thanks!
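
For anyone picking these snapshots up, here is a minimal sketch of restoring one, assuming the files in models.zip are standard TensorFlow 1.x checkpoints saved with tf.train.Saver (the directory and prefix below are placeholders, not the actual file names):

```python
# Hedged sketch: restore a snapshot as a plain TF 1.x checkpoint.
# "./models/model_ep_20000" is a placeholder prefix; point it at the
# .meta/.index/.data files actually unpacked from models.zip.
import tensorflow as tf

with tf.Session() as sess:
    # Rebuild the saved graph from the .meta file, then load the weights.
    saver = tf.train.import_meta_graph('./models/model_ep_20000.meta')
    saver.restore(sess, './models/model_ep_20000')
    # From here, evaluate the restored agent as in the repo's testing code.
```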

@Qinghao-Hu

Thanks for your reply.

I spent some time testing the performance (the RL models with 20,000+ epochs are the ones you provided):

| Scheduler | Avg. JCT | Executor Usage |
| --- | --- | --- |
| FIFO | 1842803 | 0.8197 |
| Dynamic Partition | 62783 | 0.7074 |
| RL 20000 | 61190 | 0.6456 |
| RL 24000 | 64663 | 0.6469 |
| RL 25000 | 64044 | 0.6399 |
| RL 26000 | 63494 | 0.6462 |
| RL 10000 (trained by us) | 57241 | 0.6634 |

All settings are at their defaults. Does this performance meet expectations?
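
For reference, the two columns are computed roughly as follows; a hypothetical sketch (the input names are illustrative, not the repo's actual variables):

```python
# Hypothetical sketch of the two metrics in the table above; the inputs
# (per-job start/finish times, executor busy time) are illustrative names,
# not variables from the repo.
import numpy as np

def avg_jct(start_times, finish_times):
    # Mean job completion time: average of (finish - start) over all jobs.
    return float(np.mean(np.asarray(finish_times) - np.asarray(start_times)))

def executor_usage(busy_executor_time, total_executor_time):
    # Fraction of total executor-time that executors spent running tasks.
    return busy_executor_time / total_executor_time
```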

@hongzimao
Owner Author

Looks like your trained model performs better :) Would you mind sharing it so that others can use it too? Thank you!

@Qinghao-Hu

Here are our trained_models. We trained for 23,000 epochs over 10 days on a server with a 56-core CPU and 4 V100 GPUs. (For training reference; all settings are at their defaults.)

We also ran more evaluations on our trained models (for testing reference; each evaluation takes about 30 minutes):

| Scheduler | Avg. JCT | Executor Usage |
| --- | --- | --- |
| RL 100 | 99540 | 0.6392 |
| RL 500 | 89455 | 0.6397 |
| RL 1000 | 62610 | 0.6456 |
| RL 2000 | 58918 | 0.6493 |
| RL 6000 | 56739 | 0.6665 |
| RL 10000 | 57241 | 0.6634 |
| RL 10900 | 105744 | 0.7236 |
| RL 11000 | 66811 | 0.6396 |
| RL 12000 | 62434 | 0.6886 |
| RL 15000 | 73518 | 0.6029 |
| RL 16000 | 61234 | 0.6694 |
| RL 18000 | 61145 | 0.6629 |
| RL 20000 | 65519 | 0.6518 |
| RL 22000 | 61105 | 0.6689 |

The RL training seems unstable, and the performance is not much better than Dynamic Partition. How should one choose the best model checkpoint without testing each one (which metric in TensorBoard is the most significant)? Do you have any insight? Thank you.
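
One heuristic for shortlisting checkpoints without a full test sweep: read the logged scalars straight from the TensorBoard event files and keep only the checkpoints near the best training reward, then test just those. A sketch using TensorBoard's EventAccumulator (the log directory and which tag to use are assumptions about the local setup; list the tags first to see what was actually logged):

```python
# Sketch: rank checkpoints by a logged scalar instead of testing all of them.
# './logs' and the tag choice are assumptions about the local setup.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator('./logs')   # directory containing the event files
acc.Reload()
print(acc.Tags()['scalars'])       # inspect which scalar tags were logged

tag = acc.Tags()['scalars'][0]     # replace with e.g. an average-reward tag
events = acc.Scalars(tag)          # records with .wall_time, .step, .value
best = max(events, key=lambda e: e.value)
print('best step: %d, value: %.4f' % (best.step, best.value))
```

Note that the training reward is noisy, so this only narrows the candidates; as the table above shows (e.g., the RL 10900 outlier), single-checkpoint results can swing a lot between neighboring snapshots.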

@jahidhasanlinix

@Tonyhao96 Could you give me some instructions on which command you used to train your model and how you compared the performance?

@Qinghao-Hu

@jahidhasanlinix I just used the command provided by the authors without modification. I trained Decima several months ago and have since forgotten the details.

@jahidhasanlinix

@Tonyhao96 Thank you for your response.
