latent weights performance measure #503
Comments
Sorry for the late response, have you checked out this discussion about a very similar request? It also includes an example notebook and a workaround for your use case.

In general, the reason why we explicitly set … We set the quantized scope to … The original idea behind …

Another question where I am not sure about the desired behaviour is how to treat input quantizers, since to be consistent I guess they would also need a flag that turns them on or off.

Looking at this from a higher level, it seems you are looking for a global flag that changes the behaviour of the model during inference. Supporting this directly at the framework level seems quite tricky to get right, since it could become a source of hard-to-debug issues and confusion if not done right. What is the reason why using a separate non-quantized model without kernel quantizers to evaluate the performance of your algorithm with latent weights doesn't work for you? This would help me understand your use case a bit better, so I can have a think about whether we can support this in a nice way without breaking existing code or reducing training performance.
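For context, a minimal sketch of that separate-model workaround, assuming a small model built from `larq.layers.QuantDense` layers (the architecture and data here are placeholders, not taken from the linked discussion):

```python
import larq as lq
import tensorflow as tf

def build_model(quantized=True):
    """Two structurally identical models: with or without kernel quantizers."""
    kernel_quantizer = "ste_sign" if quantized else None
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        lq.layers.QuantDense(256, kernel_quantizer=kernel_quantizer,
                             kernel_constraint="weight_clip", activation="relu"),
        lq.layers.QuantDense(10, kernel_quantizer=kernel_quantizer,
                             kernel_constraint="weight_clip", activation="softmax"),
    ])

quantized_model = build_model(quantized=True)
quantized_model.compile(optimizer="adam",
                        loss="sparse_categorical_crossentropy",
                        metrics=["accuracy"])
# ... train quantized_model as usual ...

# To evaluate with the latent (real-valued) weights, copy them into the
# quantizer-free copy. Outside of `lq.context.quantized_scope(True)`,
# get_weights() returns the latent weights.
latent_model = build_model(quantized=False)
latent_model.compile(loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])
latent_model.set_weights(quantized_model.get_weights())
# latent_model.evaluate(x_test, y_test)
```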
Thanks for the detailed answer. I looked at the notebook and I think it might be good enough for my purpose, and with the previous changes to `BaseLayer` it seems to work fine; I am just afraid I broke something. I still need to test your solution; even though it's a bit more complicated, it might be more stable. Another issue, like you stated, is performance: that's why I tried to avoid copying the weights to another non-quantized model each time, and that's also my motivation for overriding the `keras.Model` methods, which, as far as I know, run faster. Thanks again for the great help!
Feature motivation
For a new optimizer I am trying to write, I need to test the model's performance with the learned real-valued weights. I also think it's very interesting to see how the latent weights behave for other optimizers.
Feature description
I would like to have something like:
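Something along these lines, where the `quantized` argument is purely hypothetical (it does not exist in larq or Keras today):

```python
# Hypothetical API: evaluate using the latent (real-valued) weights
# instead of the quantized ones.
loss, accuracy = model.evaluate(x_test, y_test, quantized=False)
```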
and in my case it's also very useful to have:
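For example (again purely hypothetical, just to illustrate the idea), being able to track latent-weight metrics while training:

```python
# Hypothetical flag: also report validation metrics computed with the
# latent weights after each epoch, alongside the quantized ones.
model.fit(x_train, y_train,
          validation_data=(x_test, y_test),
          epochs=10,
          latent_validation=True)  # does not exist today
```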
Feature implementation
I think I solved this issue by changing the `BaseLayer` behaviour (under `larq/layers_base.py`). Now the low-level evaluation loop works as expected, but the high-level behaviour of `fit` and `evaluate` seems to be off.
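Roughly the kind of change meant here, sketched as a standalone layer rather than a patch to larq's actual `BaseLayer` (the global flag and class below are illustrative assumptions, not larq code):

```python
import tensorflow as tf

# Illustrative global switch: when True, the forward pass uses the latent
# (real-valued) kernel instead of its binarized version.
USE_LATENT_WEIGHTS = False

class ToyQuantDense(tf.keras.layers.Dense):
    """Dense layer that optionally binarizes its kernel in the forward pass.

    Forward-pass sketch only: a real implementation would use a
    straight-through estimator so gradients flow to the latent kernel.
    """

    def call(self, inputs):
        if USE_LATENT_WEIGHTS:
            kernel = self.kernel
        else:
            kernel = tf.where(self.kernel >= 0,
                              tf.ones_like(self.kernel),
                              -tf.ones_like(self.kernel))
        outputs = tf.matmul(inputs, kernel)
        if self.use_bias:
            outputs = tf.nn.bias_add(outputs, self.bias)
        return self.activation(outputs)
```

One caveat with a plain Python flag like this: Keras's `fit` and `evaluate` wrap the forward pass in a `tf.function`, so the flag's value gets baked in when the step function is traced, which may be why a custom low-level loop behaves as expected while `fit` and `evaluate` do not. The separate-model workaround sketched earlier sidesteps this.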
Maybe there is a simpler way of achieving this?
Thanks a lot for the good work!