
Optimize Tiny YoloV2 model by fusing the conv + batchNormalization #231

Merged
2 commits merged into webmachinelearning:master on May 10, 2024

Conversation

Honry
Collaborator

@Honry Honry commented May 7, 2024

  • Fusing conv + batchNormalization brings some performance benefit
  • Convert weights to fp16 for the GPU and NPU backends
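The diff itself isn't shown in this thread, but the standard conv + batchNormalization folding the first bullet refers to can be sketched in numpy (shapes and names are illustrative, not the PR's actual code). Because BN at inference time is an affine per-channel transform, it can be absorbed into the preceding conv's weights and bias, letting the BN op be dropped from the graph:

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batchNormalization parameters into the preceding conv.
    W: (out_ch, in_ch, kH, kW) conv filter, b: (out_ch,) conv bias.
    gamma/beta/mean/var: per-output-channel BN parameters."""
    scale = gamma / np.sqrt(var + eps)        # per-output-channel scale
    W_fused = W * scale[:, None, None, None]  # scale each output filter
    b_fused = (b - mean) * scale + beta       # fold the BN shift into the bias
    return W_fused, b_fused

# Sanity check with a 1x1 conv on a single pixel (conv reduces to a matmul).
rng = np.random.default_rng(0)
out_ch, in_ch = 4, 3
W = rng.standard_normal((out_ch, in_ch, 1, 1))
b = rng.standard_normal(out_ch)
gamma, beta = rng.standard_normal(out_ch), rng.standard_normal(out_ch)
mean, var = rng.standard_normal(out_ch), rng.random(out_ch) + 0.1
x = rng.standard_normal(in_ch)

y = W.reshape(out_ch, in_ch) @ x + b                    # conv
y_bn = (y - mean) * gamma / np.sqrt(var + 1e-5) + beta  # conv -> batchNorm
Wf, bf = fuse_conv_bn(W, b, gamma, beta, mean, var)
y_fused = Wf.reshape(out_ch, in_ch) @ x + bf            # fused conv only

# The second bullet (fp16 weights for GPU/NPU) is then just a cast of the
# already-fused constants.
Wf16, bf16 = Wf.astype(np.float16), bf.astype(np.float16)
```

The fused conv produces the same output as the original conv + BN pair (up to floating-point rounding), which is why the fusion is a pure graph optimization with no accuracy cost at fp32.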

@Honry
Collaborator Author

Honry commented May 7, 2024

@huningxin, PTAL. Though Tiny YoloV2 doesn't work on the NPU yet, this optimization still brings some benefit on GPU.

I am thinking of adding a data type option to support both fp32 and fp16, and of disabling the unsupported models for each backend, in a follow-up.

If this one looks good, please help merge the test-data change at webmachinelearning/test-data#21 first.

@huningxin
Contributor

though Tiny Yolo doesn't work on NPU now

Does it make sense to hide TinyYolo when the user selects NPU, until it is supported?

@Honry
Collaborator Author

Honry commented May 10, 2024

Does it make sense to hide TinyYolo when the user selects NPU, until it is supported?

I have a follow-up PR to support both fp16 and fp32; it will hide the TinyYolo fp16 variant for NPU.

Contributor

@huningxin huningxin left a comment

LGTM

@huningxin huningxin merged commit 09240eb into webmachinelearning:master May 10, 2024
3 checks passed
2 participants