
Implement one-padding and reduce number of SE blocks for QuickNet #136

Merged · 4 commits merged into master from quicknet-one-padding on Mar 20, 2020

Conversation

@koenhelwegen (Contributor) commented Mar 19, 2020

This PR introduces one-padding for QuickNet and QuickNet Large, and reduces the number of squeeze-and-excite (SE) blocks in QuickNet Large. This yields faster inference at no cost in accuracy.
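For context, a minimal NumPy sketch of the idea behind one-padding (this is an illustration, not the PR's implementation): with binarized activations in {-1, +1}, conventional zero-padding introduces a third value at the borders, while padding with ones keeps every input to the binary convolution strictly binary, which is cheaper to compute on binary kernels.

```python
import numpy as np

# Toy binarized activation map with values in {-1, +1}.
x = np.array([[ 1, -1],
              [-1,  1]])

# Zero-padding adds a third value (0) at the borders ...
zero_padded = np.pad(x, 1, constant_values=0)
# ... while one-padding keeps the tensor strictly binary.
one_padded = np.pad(x, 1, constant_values=1)

print(sorted(set(zero_padded.ravel())))  # [-1, 0, 1]
print(sorted(set(one_padded.ravel())))   # [-1, 1]
```

Keeping the padded tensor binary is what lets the inference engine use a pure binary convolution kernel at the borders, which is where the speedup comes from.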

The new benchmark results are described in larq/compute-engine#294.

The new model files are released in:

@lgeiger (Member) left a comment

I have a few minor comments for improved readability.

Three review threads on larq_zooquicknet_large.py (outdated, resolved)
Koen Helwegen and others added 2 commits Mar 20, 2020
Co-Authored-By: Lukas Geiger <lgeiger@users.noreply.github.com>
@lgeiger (Member) left a comment

Could you upload the JSON model architectures (model.to_json()) to the releases as well, so that the Netron graph stays up to date?

This requires updating this line and this one
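The export being requested is the standard Keras architecture serialization. A minimal sketch with a stand-in functional model (in the PR this would be a QuickNet model from larq_zoo; the layer shapes here are placeholders):

```python
import json

import tensorflow as tf

# Stand-in for the QuickNet model; any Keras model works the same way.
inputs = tf.keras.Input(shape=(224, 224, 3))
outputs = tf.keras.layers.Conv2D(8, 3, padding="same")(inputs)
model = tf.keras.Model(inputs, outputs)

# to_json() serializes the architecture only (no weights), which is
# exactly what graph viewers like Netron need.
arch_json = model.to_json()

with open("model.json", "w") as f:
    f.write(arch_json)

# The result is plain JSON describing the layer graph.
print(json.loads(arch_json)["class_name"])
```

Uploading this file to the GitHub release lets Netron render the layer graph without downloading the full weight file.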

@koenhelwegen koenhelwegen requested a review from lgeiger Mar 20, 2020
@lgeiger lgeiger added the feature label Mar 20, 2020
@lgeiger lgeiger merged commit d3e96b1 into master Mar 20, 2020
14 checks passed
@lgeiger lgeiger deleted the quicknet-one-padding branch Mar 20, 2020