
Is this running in fp16 or fp32, or is it something different? #71

Closed
xzuyn opened this issue Apr 28, 2023 · 4 comments

Comments

@xzuyn

xzuyn commented Apr 28, 2023

With other methods of running LLMs using fp16, or quantization down to 4-bit/5-bit/8-bit, I'm wondering whether the web demo could be made faster/smaller in the future with quantization, or at least fp16.

@jinhongyii
Member

Thanks for your advice. We are testing fp16 correctness and speed internally and will make it public soon.

@xzuyn
Author

xzuyn commented Apr 28, 2023

> Thanks for your advice. We are testing fp16 correctness and speed internally and will make it public soon.

I'm wondering what the web demo uses currently. The model size is similar to a q4_0 ggml model, so is it running 4-bit? I couldn't find any specific info on what precision you're using.
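
For rough context on that size comparison, a back-of-the-envelope estimate, assuming a hypothetical 7B-parameter model (q4_0 also stores one fp16 scale per 32-weight block, so it works out to roughly 4.5 bits per weight rather than 4):

```python
# Rough model size per precision, assuming 7B parameters (illustrative only).
# Effective bits/weight include per-block scale overhead for the ggml formats.
params = 7e9
for name, bits in [("fp32", 32), ("fp16", 16), ("q8_0", 8.5), ("q4_0", 4.5)]:
    print(f"{name}: ~{params * bits / 8 / 1e9:.1f} GB")
# fp32: ~28.0 GB, fp16: ~14.0 GB, q8_0: ~7.4 GB, q4_0: ~3.9 GB
```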

@jinhongyii
Member

It is using 4-bit quantization and fp32 for compute.
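
As a minimal sketch of what "4-bit weights, fp32 compute" typically means (the function names and group size below are illustrative assumptions, not the project's actual kernels): weights are stored as 4-bit integers with a per-group scale, then dequantized to fp32 before an fp32 matmul:

```python
import numpy as np

def quantize_q4(w: np.ndarray, group: int = 32):
    """Symmetric 4-bit quantization: per-group fp32 scale, int4 codes in [-8, 7]."""
    w = w.reshape(-1, group)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale = np.maximum(scale, 1e-12)  # avoid division by zero for all-zero groups
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # int4 codes held in int8
    return q, scale.astype(np.float32)

def matmul_q4_fp32(x: np.ndarray, q: np.ndarray, scale: np.ndarray, shape):
    """Dequantize to fp32 on the fly, then run the matmul entirely in fp32."""
    w = (q.astype(np.float32) * scale).reshape(shape)
    return x @ w  # fp32 multiply and accumulate

# Toy usage: quantize a random weight matrix, then compute in fp32.
w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_q4(w)
x = np.random.randn(1, 64).astype(np.float32)
y = matmul_q4_fp32(x, q, s, w.shape)
```

The storage savings come entirely from the packed 4-bit codes; the arithmetic itself stays in fp32, which keeps accuracy close to the unquantized model at the cost of dequantization work per layer.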

@xzuyn
Author

xzuyn commented Apr 28, 2023

> It is using 4-bit quantization and fp32 for compute.

Thank you. Good luck with everything; I'm looking forward to seeing how this progresses.

xzuyn closed this as completed Apr 28, 2023