
gptq quantization fails ModuleNotFoundError #113

Closed
escorciav opened this issue Jun 6, 2023 · 5 comments
escorciav commented Jun 6, 2023

Dear team,

Thanks a lot for lowering the barrier to entry for working with and using open-source LLMs. I was not able to quantize a 2.8B model with GPTQ on my modest RTX 2080.
I got the following error:

python quantize/gptq.py --checkpoint_dir checkpoints/EleutherAI/pythia-2.8b-deduped --dtype bfloat16
Loading model 'checkpoints/EleutherAI/pythia-2.8b-deduped/lit_model.pth' with {'block_size': 2048, 'vocab_size': 50254, 'padding_multiple': 128, 'padded_vocab_size': 50304, 'n_layer': 32, 'n_head': 32, 'n_embd': 2560, 'rotary_percentage': 0.25, 'parallel_residual': True, 'bias': True, 'n_query_groups': 32, 'shared_attention_norm': False}
Time to load model: 9.79 seconds.
Traceback (most recent call last):
  File "/awesome-project/lit-parrot/quantize/gptq.py", line 376, in <module>
    CLI(main)
  File "/env/lib/python3.10/site-packages/jsonargparse/cli.py", line 85, in CLI
    return _run_component(component, cfg_init)
  File "/env/on-device-llm/lib/python3.10/site-packages/jsonargparse/cli.py", line 147, in _run_component
    return component(**cfg)
  File "/awesome-project/lit-parrot/quantize/gptq.py", line 357, in main
    test_string = get_sample_data()
  File "/awesome-project/lit-parrot/quantize/gptq.py", line 214, in get_sample_data
    from datasets import load_dataset
ModuleNotFoundError: No module named 'datasets'

The issue seems to be related to the missing datasets package. Could you kindly provide a pointer to fix it?

Thanks in advance!


escorciav commented Jun 6, 2023

The ModuleNotFoundError should disappear after running pip install datasets. Refer to this doc for more details.
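For context, gptq.py imports datasets lazily inside get_sample_data(), so the error only surfaces at quantization time. A minimal sketch of the same check, which prints a fix hint up front instead of crashing mid-run:

```python
import importlib.util

# `datasets` is imported lazily inside get_sample_data(), so a missing
# package only fails once quantization starts. Check it up front instead.
have_datasets = importlib.util.find_spec("datasets") is not None
print("datasets is installed" if have_datasets
      else "datasets missing: run `pip install datasets`")
```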


escorciav commented Jun 6, 2023

In case anyone is using an RTX 2080, you might face the following error 😓

Starting to quantize blocks
0 attn.attn collecting stats quantizing bin /env/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda118.so
time 17s quantization error 29958.6
0 attn.proj collecting stats Traceback (most recent call last):
  File "/awesome-project/lit-parrot/quantize/gptq.py", line 376, in <module>
    CLI(main)
  File "/env/lib/python3.10/site-packages/jsonargparse/cli.py", line 85, in CLI
    return _run_component(component, cfg_init)
  File "/env/lib/python3.10/site-packages/jsonargparse/cli.py", line 147, in _run_component
    return component(**cfg)
  File "/awesome-project/lit-parrot/quantize/gptq.py", line 363, in main
    llama_blockwise_quantization(model, encoded_text, device, bits=4)
  File "/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/awesome-project/lit-parrot/quantize/gptq.py", line 265, in llama_blockwise_quantization
    outs[j : j + 1], _ = block(
  File "/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/awesome-project/lit-parrot/lit_parrot/model.py", line 161, in forward
    h, new_kv_cache = self.attn(n_1, rope, mask, max_seq_length, input_pos, kv_cache)
  File "/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/awesome-project/lit-parrot/lit_parrot/model.py", line 198, in forward
    qkv = self.attn(x)
  File "/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/awesome-project/lit-parrot/quantize/bnb.py", line 332, in forward
    return qlinear_4bit_weight(inp, self.quant_weight, self.scales, self.zeros)
  File "/awesome-project/lit-parrot/quantize/bnb.py", line 253, in qlinear_4bit_weight
    linear_kernel_4bit_weight[grid](
  File "/env/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 97, in run
    timings = {config: self._bench(*args, config=config, **kwargs)
  File "/env/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 97, in <dictcomp>
    timings = {config: self._bench(*args, config=config, **kwargs)
  File "/env/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 80, in _bench
    return do_bench(kernel_call, quantiles=(0.5, 0.2, 0.8))
  File "/env/lib/python3.10/site-packages/triton/testing.py", line 44, in do_bench
    fn()
  File "/env/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 78, in kernel_call
    self.fn.run(*args, num_warps=config.num_warps, num_stages=config.num_stages, **current)
  File "<string>", line 42, in linear_kernel_4bit_weight
  File "/env/lib/python3.10/site-packages/triton/compiler/compiler.py", line 465, in compile
    next_module = compile_kernel(module)
  File "/env/lib/python3.10/site-packages/triton/compiler/compiler.py", line 361, in <lambda>
    lambda src: ptx_to_cubin(src, arch))
  File "/env/lib/python3.10/site-packages/triton/compiler/compiler.py", line 160, in ptx_to_cubin
    return _triton.compile_ptx_to_cubin(ptx, ptxas, arch)
RuntimeError: Internal Triton PTX codegen error: 
ptxas /tmp/compile-ptx-src-d48630, line 83; error   : Feature '.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-d48630, line 83; error   : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
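For reference, the ptxas errors mean the Triton kernel emits bf16 instructions, which require compute capability sm_80 (Ampere) or newer, while the RTX 2080 is Turing (sm_75). A minimal sketch of the check ptxas is effectively enforcing (the helper name is hypothetical; with PyTorch, torch.cuda.get_device_capability() returns such a (major, minor) tuple):

```python
# ptxas rejects bf16 instructions below sm_80, so bfloat16 kernels
# cannot compile on Turing cards like the RTX 2080 (sm_75).
ARCHS = {
    (7, 5): "Turing (e.g. RTX 2080)",
    (8, 0): "Ampere (e.g. A100)",
    (8, 6): "Ampere (e.g. RTX 3090)",
}

def bf16_kernels_ok(capability):
    """Mirror ptxas's 'requires .target sm_80 or higher' check."""
    return capability >= (8, 0)

for cap, name in ARCHS.items():
    print(f"{name}: {'bf16 ok' if bf16_kernels_ok(cap) else 'float16 only'}")
```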


dwahdany commented Jun 8, 2023

I think this should be re-opened and closed once datasets is added to requirements.txt.


escorciav commented Jun 8, 2023

Sure, I have an environment set up. Just let me know, and I'm happy to try again 😊

  • BTW, I'm not sure if the issue is due to an "old" GPU (RTX 2080).
  • I had to drop lit-parrot as the requirements of my project changed; namely, exporting any (L)LM to ONNX with opset 9.
  • I'm working directly with transformers and optimum. If you provide support for T5, I'm happy to keep playing with Lit-🦜

Motivation is mentioned here

cc @rasbt


rasbt commented Jun 9, 2023

Thanks for pointing this out and sharing @escorciav

I am pretty certain that the RTX 2080 only supports float16, not bfloat16. Bfloat16 is a relatively recent feature on NVIDIA cards: it requires compute capability sm_80 (Ampere), which matches the ptxas error above. The RTX 3000 series (and the A100, etc.) support it.

(The alternative here would be to run it with float16 instead of bfloat16.)
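A minimal sketch of that fallback (choose_dtype is a hypothetical helper; torch.cuda.is_bf16_supported() is the real PyTorch check, guarded here so the snippet also runs without torch or a GPU):

```python
import importlib.util

def choose_dtype():
    """Return a value for gptq.py's --dtype flag: bfloat16 on GPUs that
    support it, float16 otherwise (or when torch / CUDA is unavailable)."""
    if importlib.util.find_spec("torch") is None:
        return "float16"  # conservative default without torch
    import torch
    if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
        return "bfloat16"
    return "float16"

print(choose_dtype())
```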

@carmocca carmocca closed this as completed Jun 9, 2023