ZeroQuant quantization kernels and LKD #2207
Comments
It looks like the inference kernels for ZeroQuant have not been released.
@gsujankumar have you by any chance been able to quantize GPT-family models like GPT-2 or GPT-J?
Hi, the ZeroQuant inference engine has not been released yet. The code example in DeepSpeedExamples only helps verify the accuracy of ZeroQuant. Releasing the kernel/engine is on our calendar, and we are actively working on making it compatible with various models. Please stay tuned. We will also release LKD soon. As for the last question: the code for training and accuracy testing is different from the final inference engine. Here, everything is simulated, which lets us do quantization-aware training and other things.
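For readers unfamiliar with what "simulated" means here: the weights are rounded to an integer grid and immediately dequantized back to float, so accuracy reflects quantization error while all matmuls still run in fp16/fp32. A minimal sketch of this quantize-dequantize round trip (hypothetical illustration, not DeepSpeed's actual kernel):

```python
import torch

def fake_quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Simulated (fake) symmetric per-tensor quantization.

    The tensor is rounded to an int grid and immediately dequantized,
    so the result is still floating point. This reproduces the
    quantization error for accuracy testing / QAT, but gives no
    speed-up: no int8 storage, no int8 kernels.
    """
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for int8
    scale = w.abs().max().clamp(min=1e-8) / qmax   # per-tensor scale
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale                               # dequantize back to float

w = torch.randn(4, 4)
w_q = fake_quantize(w)
print(w_q.dtype)  # still torch.float32 -- the "quantized" model runs in float
```

This is why a model "compressed" this way shows the same latency as the original: real speed-ups require an inference engine with actual INT8 kernels, which is the part that had not shipped yet.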
Thanks for replying, @yaozhewei. Could you provide an estimate of when ZeroQuant inference will be released? Any rough estimate would help!
I have the same questions. Is there any guide to running inference on compressed models (especially ZeroQuant)?
Hi, when will ZeroQuant inference be released?
@yaozhewei any news on this?
@david-macleod The LKD example was just released (not merged yet): microsoft/DeepSpeedExamples#214. For the kernel, please stay tuned.
Thanks @yaozhewei! Do you know whether there is a rough timeline for this, e.g. 1 month, 6 months, 1 year? It would be very useful to know, as we'd like to decide whether to wait or explore other options. Thanks again!
I have the same problem: after running ZeroQuant with the DeepSpeedExamples repository's code, I didn't see any throughput/latency gain during inference, only a decrease in model size.
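One quick way to diagnose this symptom is to inspect the dtypes in the saved checkpoint: if every tensor is still fp16/fp32, the quantization was simulated and no int8 kernels can be used, so latency cannot improve. A small hypothetical helper (checkpoint path is a placeholder, substitute your own):

```python
import torch
from collections import Counter

def dtype_histogram(state_dict: dict) -> Counter:
    """Count tensor dtypes in a checkpoint's state dict.

    If the histogram contains only floating-point dtypes, the model
    was fake-quantized: the file may be smaller (e.g. weights cast to
    fp16 for storage), but inference still runs float kernels.
    """
    return Counter(t.dtype for t in state_dict.values() if torch.is_tensor(t))

# Usage (hypothetical path):
# sd = torch.load("compressed_model.pt", map_location="cpu")
# print(dtype_histogram(sd))   # e.g. Counter({torch.float16: 290})
```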
@yaozhewei any update on this? Has the ZeroQuant inference engine been released?
@yaozhewei the newest deepspeed>=0.9.0 can't run any model in INT8; many open issues about this remain unsolved. Can you tell us which version of DeepSpeed can run an INT8 model? I just want to reproduce the results in your ZeroQuant paper.
Hi,
I was trying out the compression library for ZeroQuant quantization (on a GPT-J model). While I was able to compress the model, I didn't see any throughput/latency gain from the quantization during inference. I have a few questions regarding this:
I also ran into a `CUDA error: an illegal memory access` error. Any help would be appreciated.