how to compress in my dataset #51
You can set these hyper-parameters the same way you would when training a normal (uncompressed) model on your own dataset without PocketFlow.
Can you give me an example? Which parameters should I set?
Before using PocketFlow, had you trained any models on this dataset? What were your hyper-parameter settings at that time, and how did you adjust the learning rate during training?
I mean, can I set some hyper-parameters so that PocketFlow does not call `setup_lrn_rate`?
Almost all model compression components need this function to set up the learning rate schedule and the number of training iterations, so I suggest you do not skip it. If you really want to skip it, you will have to modify the corresponding model compression component and set up the learning rate schedule manually.
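To illustrate what such a manually written schedule might look like, here is a minimal pure-Python sketch of the two strategies discussed in this thread. PocketFlow itself builds these schedules with TensorFlow ops, so the helper names, signatures, and the example numbers below are assumptions for illustration only, not PocketFlow's actual API:

```python
# Hedged sketch: the two learning-rate schedules mentioned in the thread,
# in plain Python. Helper names and example values are hypothetical.

def piecewise_constant(step, boundaries, values):
    """Piecewise constant strategy: return values[i] for the interval
    that `step` falls into.

    boundaries: sorted iteration counts, e.g. [10000, 20000]
    values: one more entry than boundaries, e.g. [1e-1, 1e-2, 1e-3]
    """
    for boundary, value in zip(boundaries, values):
        if step < boundary:
            return value
    return values[-1]

def exponential_decay(step, init_lr, decay_rate, decay_steps):
    """Exponential decaying strategy:
    init_lr * decay_rate ** (step / decay_steps)."""
    return init_lr * decay_rate ** (step / decay_steps)

# Example: at step 15000, between the two boundaries, the piecewise
# schedule returns the middle value.
print(piecewise_constant(15000, [10000, 20000], [1e-1, 1e-2, 1e-3]))

# Example: exponential decay after 10 decay periods.
print(exponential_decay(10000, 1e-1, 0.96, 1000))
```

A common rule of thumb is to reuse whatever schedule worked for the uncompressed baseline model: piecewise constant when you already know good drop points, exponential decay when you want a smooth default.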
OK, thanks.
When I compress a model on my custom dataset, the function

```python
def setup_lrn_rate(global_step, model_name, dataset_name)
```

raises a NotImplementedError. I want to write a function that returns the learning rate and the number of batches for an ordinary dataset and model. How should I set the values of `nb_epoches`, `decay_rates`, etc.? And how do I choose between the piecewise constant strategy and the exponential decaying strategy?
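For the "number of batches" part of the question above, the iteration count is usually derived from the dataset size, batch size, and epoch count. The sketch below shows that arithmetic in plain Python; the function name and the example figures (a CIFAR-10-sized training set) are assumptions for illustration, not PocketFlow's actual interface:

```python
# Hedged sketch: deriving the total training-iteration count that a
# learning-rate setup function would need for a custom dataset.
# Names and example numbers are hypothetical.

def nb_train_iters(nb_samples, batch_size, nb_epochs):
    """Total iterations = (iterations per epoch) * (number of epochs),
    dropping the final partial batch of each epoch."""
    iters_per_epoch = nb_samples // batch_size
    return iters_per_epoch * nb_epochs

# Example: 50,000 training samples, batch size 128, 100 epochs.
print(nb_train_iters(50000, 128, 100))
```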