iteration_limit hit in OpRegularizerManager #35
Also here is the model I am considering, for the example above
Hi Vashisht, `OpRegularizerManager` makes several passes over the ops to determine the grouping of the channels. If you hit the `ITERATION_LIMIT`, that means the manager ran into some configuration of ops that it did not know how to handle. Looking at the model, I suspect the issue is with the `reduce_mean` and `argmax` at the end. There are a couple of options to try:
Hopefully that helps.
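To illustrate the idea behind excluding the trailing ops: regularizers of this kind typically walk the graph backwards from an output boundary, so anything downstream of that boundary (such as the final `reduce_mean` and `argmax`) is never visited and cannot confuse the grouping. The sketch below is a hypothetical illustration of that traversal, not MorphNet's actual API; the `Op` class and `collect_regularized_ops` are invented names.

```python
# Hypothetical sketch: a backward graph walk that stops at an output
# boundary, so trailing ops (e.g. reduce_mean/argmax) are excluded.
# Not MorphNet's API -- the names here are illustrative only.

class Op:
    def __init__(self, name, inputs=()):
        self.name = name
        self.inputs = list(inputs)

def collect_regularized_ops(output_boundary):
    """Walk backwards from the boundary ops; ops downstream of the
    boundary are never reached, so they are never grouped."""
    seen, stack = set(), list(output_boundary)
    while stack:
        op = stack.pop()
        if op.name in seen:
            continue
        seen.add(op.name)
        stack.extend(op.inputs)
    return seen

# Tiny model: conv -> logits -> reduce_mean -> argmax
conv = Op("conv")
logits = Op("logits", [conv])
mean = Op("reduce_mean", [logits])
pred = Op("argmax", [mean])

# With the boundary at logits, reduce_mean/argmax stay out of view.
print(sorted(collect_regularized_ops([logits])))  # ['conv', 'logits']
```

In practice this corresponds to passing the op *before* the problematic tail (e.g. the logits op) as the regularizer's output boundary, rather than the final prediction op.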
I used it in a BERT model and got the same error.
Does anyone know how to fix it?
Hello, I got the same error. Did you resolve it?
No. I just gave up on using MorphNet with BERT.
I have a situation in which the `ITERATION_LIMIT` is hit even when `len(self._all_ops)` is only about 3000 in `OpRegularizerManager`. It seems that certain ops keep cycling back onto `self._op_deque`. Is there a specific reason this happens? More generally, why are the same ops processed multiple times, even when they have the same type and handler? In a simple LeNet training example, I go through 34 iterations of `assign_grouping` when there are far fewer ops to process.