
[New Bitnet Model Support Request] Deepgrove model Bonsai 0.5B - Add Channel Scales #12598

Open
zephrus9 opened this issue Mar 27, 2025 · 1 comment
Labels: enhancement (New feature or request)


zephrus9 commented Mar 27, 2025

A new SOTA BitNet model, Bonsai 0.5B, has come out. It seems to outperform larger BitNet models like Falcon 1B, Falcon 3B, and TriLM 700M, and it sounds like they are going to release a whole line of BitNet models, which is really exciting.

Support is needed for these models. They adopt channel-wise scaling factors instead of tensor-level ones. Maybe a separate kernel could be built to apply the scales outside of the matmul kernels? That would probably yield similar inference speeds. Note that the Hugging Face implementation does have a custom Q-linear layer that applies the scales (a rough sketch of the idea is below the link).

HF: https://huggingface.co/deepgrove/Bonsai
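
For reference, a minimal hypothetical sketch of what such a Q-linear layer could look like, assuming ternary weights in {-1, 0, +1} plus one scale per output channel applied after the matmul. This is only an illustration of the idea, not the actual deepgrove code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QLinearSketch(nn.Module):
    """Hypothetical channel-wise-scaled ternary linear layer.

    Illustrative only; not the actual deepgrove implementation.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Ternary weight matrix (kept as float here for simplicity).
        self.weight = nn.Parameter(
            torch.randint(-1, 2, (out_features, in_features)).float(),
            requires_grad=False,
        )
        # One scale per output channel (one per row of the weight matrix).
        self.scales = nn.Parameter(torch.ones(out_features), requires_grad=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Plain matmul on the ternary weights, then rescale each output channel.
        return F.linear(x, self.weight) * self.scales
```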

Seems super promising: from the looks of the report, it is pretty performant and matches full-precision models like Qwen2.5 0.5B.

Pinging @Eddie-Wang1120, @compilade, and other BitNet kernel contributors.

Other posts and information:

https://www.reddit.com/r/LocalLLaMA/comments/1jgkqio/new_bitnet_model_from_deepgrove/
https://x.com/deepgrove_ai/status/1903103798735761518

zephrus9 added the enhancement label on Mar 27, 2025
compilade (Collaborator) commented Mar 27, 2025

> They adopt channel-wise scaling factors instead of tensor-level ones. Maybe a separate kernel could be built to apply the scales outside of the matmul kernels?

Hmm, channel-wise scales are not really convenient, since they cannot be applied after the matmul; they have to be applied before. But that means they can be applied to the activations instead, if I understand correctly.
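
A quick NumPy check of that reasoning (the shapes here are illustrative, not taken from the actual model): scaling the input channels of the weight matrix before the matmul is equivalent to scaling the activations.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                      # activations, shape (batch, in)
q = rng.integers(-1, 2, size=(16, 8)).astype(float)  # ternary weights, shape (out, in)
s = rng.standard_normal(8)                           # one scale per *input* channel

# Scaling the input channels (columns) of the weight matrix before the matmul...
y_scaled_weights = x @ (q * s).T
# ...gives the same result as scaling the activations and using the raw weights:
y_scaled_acts = (x * s) @ q.T

assert np.allclose(y_scaled_weights, y_scaled_acts)
```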

But maybe I'm reading the shapes wrong and the scales are row-wise, in which case TQ1_0 and TQ2_0 should already work (since they include a block-wise scale), and the weights simply have to be prepared beforehand so that the model can actually be converted.

EDIT: they apply the scales post-matmul, so this means the block-wise scales of TQ1_0 and TQ2_0 should work correctly with this model. It's only a matter of making the convert script do the right thing.
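
For what it's worth, a minimal sketch of what that conversion step might look like: fold the per-output-channel scales into the ternary weights before quantization, so the block-wise scales of TQ1_0/TQ2_0 absorb them. The function name, tensor names, and shapes are assumptions, not the actual Bonsai checkpoint layout:

```python
import numpy as np

def fold_channel_scales(weight: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Fold per-output-channel scales into a ternary weight matrix.

    weight: ternary values in {-1, 0, +1}, shape (out_features, in_features)
    scales: one scale per output channel, shape (out_features,)

    Each row of the result is a uniformly scaled ternary row, which is
    exactly the kind of tensor the block-wise scales of TQ1_0/TQ2_0
    can represent.
    """
    return weight * scales[:, None]
```

Since every block within a row then shares the same scale, quantizing the folded tensor to TQ1_0 or TQ2_0 should in principle recover the channel scales exactly.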
