
Don't override precision directly in the QKeras optimizer #567

Merged
merged 1 commit into from Jun 15, 2022

Conversation

@vloncar (Contributor) commented Jun 10, 2022

The output_rounding_saturation_mode optimizer directly manipulates the precision object of the node's output variable, which can trip up later optimizers and make them skip the type conversion. This was observed in testing of the accelerator backend with FIFO depth optimization (by @thesps), as well as in some GNN models (by @sznajder). This change ensures that updating the precision also updates the entire type. Eventually we will make the precision a read-only property so that this kind of error cannot affect future optimizers, but that change requires a bit more work.
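The failure mode described above can be sketched with a minimal example. The class and attribute names below are illustrative, not the actual hls4ml API: the point is that a type object may carry state derived from its precision, so mutating the precision attribute in place leaves that derived state (and anything a later pass compares against) stale, while rebuilding the whole type keeps everything consistent.

```python
# Hypothetical sketch of the bug this PR fixes; names are illustrative.

class FixedPrecisionType:
    def __init__(self, width, integer):
        self.width = width
        self.integer = integer

class NamedType:
    """Type wrapper whose C++ definition is derived from its precision."""
    def __init__(self, name, precision):
        self.name = name
        self.precision = precision
        # Derived state, computed once at construction time.
        self.definition = f"typedef ap_fixed<{precision.width},{precision.integer}> {name};"

var_type = NamedType("result_t", FixedPrecisionType(16, 6))

# Problematic pattern: overriding the precision in place leaves the
# derived definition stale, so later passes see an inconsistent type.
var_type.precision = FixedPrecisionType(18, 8)
assert "ap_fixed<16,6>" in var_type.definition  # stale definition

# Fixed pattern: rebuild the entire type so all derived state is updated.
var_type = NamedType("result_t", FixedPrecisionType(18, 8))
assert "ap_fixed<18,8>" in var_type.definition
```

This also motivates the follow-up mentioned in the description: making `precision` a read-only property would turn the problematic in-place assignment into an immediate error instead of a silent inconsistency.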

@vloncar vloncar requested a review from thesps June 10, 2022 14:58
@thesps (Contributor) commented Jun 13, 2022

Can you push to a local branch to trigger the GitLab CI? The convention so far has been to push to, in this case, `pr/567`, but any local name will do.

@vloncar (Contributor, Author) commented Jun 14, 2022

'tis done

@thesps thesps merged commit 8c45e42 into fastmachinelearning:master Jun 15, 2022
calad0i pushed a commit to calad0i/hls4ml that referenced this pull request Jul 1, 2023