
how to compress in my dataset #51

Closed · as754770178 opened this issue Nov 13, 2018 · 6 comments

@as754770178
When I compress a model on my custom dataset, the function `setup_lrn_rate(global_step, model_name, dataset_name)` raises a NotImplementedError. I want to write a function that returns the learning rate and the number of batches for an ordinary dataset and model. How should I set the values of `nb_epoches`, `decay_rates`, and so on? And how do I choose between the piecewise-constant strategy and the exponential-decay strategy?

@jiaxiang-wu (Contributor)

You can set these hyper-parameters the same way you would when training a normal (uncompressed) model on your own dataset, without PocketFlow.
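For illustration, here is a minimal sketch of the kind of branch one might add to `setup_lrn_rate` for a custom dataset. The dataset name `'my_dataset'`, the sample count, batch size, epoch count, and decay constants are all hypothetical placeholders, not PocketFlow's actual values; the schedule ops are standard TensorFlow 1.x APIs. As a rule of thumb, reuse whatever schedule worked when you trained the uncompressed model.

```python
# Sketch only -- a hypothetical custom-dataset branch for setup_lrn_rate().
import tensorflow as tf

def setup_lrn_rate(global_step, model_name, dataset_name):
  """Return (learning-rate tensor, total number of training batches)."""
  if dataset_name == 'my_dataset':  # hypothetical dataset name
    nb_smpls = 50000    # number of training samples (placeholder)
    batch_size = 128    # per-step batch size (placeholder)
    nb_epochs = 100     # total training epochs (placeholder)
    batches_per_epoch = nb_smpls / batch_size
    nb_batches = int(nb_epochs * batches_per_epoch)

    # Option A: piecewise-constant strategy, e.g. divide the learning rate
    # by 10 at 50% and 75% of training.
    boundaries = [int(nb_batches * r) for r in (0.5, 0.75)]
    values = [1e-1, 1e-2, 1e-3]
    lrn_rate = tf.train.piecewise_constant(global_step, boundaries, values)

    # Option B: exponential-decay strategy (use instead of Option A):
    # lrn_rate = tf.train.exponential_decay(
    #     1e-1, global_step, decay_steps=int(10 * batches_per_epoch),
    #     decay_rate=0.96, staircase=True)

    return lrn_rate, nb_batches
  raise NotImplementedError(
      'unsupported combination: %s / %s' % (model_name, dataset_name))
```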

@as754770178 (Author)

Can you give me an example? Which parameters should I set?

@jiaxiang-wu (Contributor)

Before using PocketFlow, had you trained any models on this dataset? What were your hyper-parameter settings; in other words, how did you adjust the learning rate during training back then?

@as754770178 (Author)

What I mean is: can I set some hyper-parameter so that PocketFlow does not call the `setup_lrn_rate` function at all?

@jiaxiang-wu (Contributor)

Almost all model-compression components need this function to set up the learning rate schedule and the number of training iterations, so I suggest you do not skip it. If you really do want to skip it, you would have to modify the corresponding model-compression component and set up the learning rate schedule manually.
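If you do go that route, the manual replacement could look like the sketch below (all constants are hypothetical placeholders; it hard-codes an exponential-decay schedule in place of the `setup_lrn_rate(...)` call inside the component, rather than resolving one from `(model_name, dataset_name)`):

```python
# Sketch only -- hard-coding a schedule instead of calling setup_lrn_rate().
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()

# original call inside the compression component (to be replaced):
# lrn_rate, nb_iters = setup_lrn_rate(global_step, model_name, dataset_name)

nb_iters = 40000  # total training iterations (placeholder)
lrn_rate = tf.train.exponential_decay(
    learning_rate=1e-2,  # initial learning rate (placeholder)
    global_step=global_step,
    decay_steps=4000,    # decay every N iterations (placeholder)
    decay_rate=0.9,
    staircase=True)
```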

@as754770178 (Author)

OK, thanks.
