
AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear4bit'. Did you mean: 'Linear8bitLt'? #2227

Closed
StingrayA opened this issue May 20, 2023 · 10 comments
Labels
bug Something isn't working

Comments


StingrayA commented May 20, 2023

Describe the bug

Using the current one-click installer: after choosing a model from Hugging Face (I tried different ones), installing it, and opening the start_windows batch file again, I get the log below.
I don't know how to start the program from here.

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

Using the current one-click installer: after choosing a model from Hugging Face (I tried different ones), installing it, and opening the start_windows batch file again, I get the issue above.

Screenshot

No response

Logs

bin G:\text_generation\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
Traceback (most recent call last):
  File "G:\text_generation\oobabooga_windows\text-generation-webui\server.py", line 47, in <module>
    from modules import chat, shared, training, ui, utils
  File "G:\text_generation\oobabooga_windows\text-generation-webui\modules\training.py", line 14, in <module>
    from peft import (LoraConfig, get_peft_model, prepare_model_for_int8_training,
  File "G:\text_generation\oobabooga_windows\installer_files\env\lib\site-packages\peft\__init__.py", line 22, in <module>
    from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model
  File "G:\text_generation\oobabooga_windows\installer_files\env\lib\site-packages\peft\mapping.py", line 16, in <module>
    from .peft_model import (
  File "G:\text_generation\oobabooga_windows\installer_files\env\lib\site-packages\peft\peft_model.py", line 31, in <module>
    from .tuners import (
  File "G:\text_generation\oobabooga_windows\installer_files\env\lib\site-packages\peft\tuners\__init__.py", line 21, in <module>
    from .lora import LoraConfig, LoraModel
  File "G:\text_generation\oobabooga_windows\installer_files\env\lib\site-packages\peft\tuners\lora.py", line 735, in <module>
    class Linear4bit(bnb.nn.Linear4bit, LoraLayer):
AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear4bit'. Did you mean: 'Linear8bitLt'?

Done!
Press any key to continue . . .

System Info

Windows 11
GPU: Nvidia RTX 3060ti
StingrayA added the bug label on May 20, 2023

ikt100 commented May 20, 2023

Hey, I might have the solution; at least for me it worked to edit out the part of the lora.py file from line 735 to the end.

Instructions: open your lora.py file, "G:\text_generation\oobabooga_windows\installer_files\env\lib\site-packages\peft\tuners\lora.py", go to line 735, and either comment out everything until the end of the file or delete the whole segment.
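A less destructive alternative (a sketch, not taken from this thread): before editing peft's source, you can check whether your installed bitsandbytes build actually exposes the `Linear4bit` attribute that the traceback complains about. The guarded check below uses only `hasattr` and also handles the case where bitsandbytes is not installed at all:

```python
# Sketch: detect whether the installed bitsandbytes build provides
# nn.Linear4bit (the attribute the traceback says is missing).
def bnb_has_linear4bit() -> bool:
    try:
        import bitsandbytes as bnb  # may be absent, or an old build
    except ImportError:
        return False
    return hasattr(bnb.nn, "Linear4bit")
```

If this returns False, the 4-bit LoRA path that peft's lora.py takes at line 735 cannot work, and either bitsandbytes needs upgrading or that code path has to be avoided.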

@jasonmcaffee

We logged issues one minute apart.
I have an easy one-line fix that modifies requirements.txt, spelled out here:
#2228
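For readers who cannot open the linked PR: a one-line requirements.txt fix generally means raising a package's minimum version. As a purely hypothetical illustration (the actual line is in #2228, and the version number below is an assumption, not something stated in this thread), pinning bitsandbytes to a build that ships `Linear4bit` would look like:

```
# requirements.txt (illustrative line only; see #2228 for the real fix)
bitsandbytes>=0.39.0
```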

@FeuerDrachenEinhorn

Hey, I might have the solution; at least for me it worked to edit out the part of the lora.py file from line 735 to the end.

Instructions: open your lora.py file, "G:\text_generation\oobabooga_windows\installer_files\env\lib\site-packages\peft\tuners\lora.py", go to line 735, and either comment out everything until the end of the file or delete the whole segment.

When I do this, I get a bunch of other errors.

@numbersmason

I don't know about you, but I commented out each and every one of the remaining lines with "#" and it seems to work.

@StingrayA
Author

We logged issues one minute apart. I have an easy one-line fix that modifies requirements.txt, spelled out here: #2228

Where do I add the line? I pasted it into requirements.txt and it didn't work.

@VoodooDog

I have the same problem... :(
Does deleting that section work or not?

@oobabooga
Owner

See #2228 (comment)

@VoodooDog

Hey, I might have the solution; at least for me it worked to edit out the part of the lora.py file from line 735 to the end.

Instructions: open your lora.py file, "G:\text_generation\oobabooga_windows\installer_files\env\lib\site-packages\peft\tuners\lora.py", go to line 735, and either comment out everything until the end of the file or delete the whole segment.

Never mind, this works perfectly!

@SpacePea

Hey, I might have the solution; at least for me it worked to edit out the part of the lora.py file from line 735 to the end.

Instructions: open your lora.py file, "G:\text_generation\oobabooga_windows\installer_files\env\lib\site-packages\peft\tuners\lora.py", go to line 735, and either comment out everything until the end of the file or delete the whole segment.

Uhm, after that's done, what do I do now? Where do I get the local HTTP link? I tried running cmd and the start .bat files in the oobabooga folder, and nothing happened.


ihongxx commented Oct 27, 2023

$ sudo apt install gcc-10 g++-10
$ export CC=/usr/bin/gcc-10
$ export CXX=/usr/bin/g++-10
$ export CUDA_ROOT=/usr/local/cuda
$ ln -s /usr/bin/gcc-10 $CUDA_ROOT/bin/gcc
$ ln -s /usr/bin/g++-10 $CUDA_ROOT/bin/g++
$ git clone https://github.com/timdettmers/bitsandbytes.git
$ cd bitsandbytes
$ CUDA_HOME=/usr/local/cuda-11.6 CUDA_VERSION=116 make cuda11x
$ python setup.py install
I succeeded!
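Building from source is one route; the quicker check is whether the installed version is new enough in the first place. A small sketch that compares version strings using only the standard library, assuming (as a hypothesis made for illustration, not a fact stated in this thread) that 4-bit support arrived in bitsandbytes 0.39.0:

```python
# Sketch: decide whether a bitsandbytes version string is assumed to
# include nn.Linear4bit. The 0.39.0 threshold is an assumption.
MIN_4BIT = (0, 39, 0)

def supports_4bit(version: str) -> bool:
    # Keep only the leading numeric components, e.g. "0.38.1" -> (0, 38, 1)
    parts = []
    for piece in version.split(".")[:3]:
        if not piece.isdigit():
            break
        parts.append(int(piece))
    return tuple(parts) >= MIN_4BIT
```

If the installed version falls below the threshold, upgrading (or building from source as above) is the fix rather than editing peft.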
