Not working since commit 31f04dc: bitsandbytes problem #614
Comments
I can confirm the same issue on my side.
Same. I get `Starting the web UI...` and then the `===================================BUG REPORT===================================` banner.
I'm facing the exact same issue, Windows 11 with a 3090. The libcuda and libcudart files requested don't seem to exist on my system.
I was having the same issue. Once I found this thread and saw that everyone hitting it seemed to be on Windows, I figured that was probably the culprit. The steps from how_to_install_llama_8bit_and_4bit fixed it for me.
After re-doing those steps, I ran into another issue that I believe was caused by #615; it was complaining about the same error others describe below. It's possible you won't need to modify anything.
8-bit should work more reliably with the new one-click installer: https://github.com/oobabooga/text-generation-webui#one-click-installers
I had a similar issue on Linux, probably caused by #615: if I revert the changes as @bmoconno mentioned, it loads llama.
So I had to re-install GPTQ-for-LLaMa in ./repositories, and then it works.
I have a similar error, @hdkiller. How did you "reinstall" GPTQ-for-LLaMa? I did:

```shell
cd repositories/GPTQ-for-LLaMa
git pull
pip install -r requirements.txt
```

and was still getting the same error. Edit: It did work after removing the GPTQ-for-LLaMa directory and literally performing a fresh git clone and pip install. No idea why.
This worked for me as well. Seems like a fairly common occurrence; it happens every few commits. Might make myself a quick script to automate the fix in the future. haha. Edit: hmm, I thought it did, but maybe it didn't...? Edit 2: Okay, so it says that it won't use my GPU, yet my GPU clock speed still spikes when I generate text.
Same issue for me too under Windows 11. Tried removing the GPTQ folder, re-pulling, and reinstalling, but it is not working. Had to temporarily revert to 966168b.
Same here, fresh WSL install; got the `TypeError: make_quant() got an unexpected keyword argument 'faster'` message when trying to load ozcur's alpaca-native-4bit.
It's necessary to clone the GPTQ-for-LLaMa repository checking out the `cuda` branch now. The default branch in that repository has been changed to one that breaks backward compatibility. This has been updated in the one-click installer, which must be re-downloaded manually (just the install.bat script): oobabooga/one-click-installers@85e4ec6
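As a sketch of what the `-b` flag does during a clone (shown here on a throwaway local repo, since the real command pulls oobabooga's GPTQ-for-LLaMa fork over the network):

```shell
# `git clone -b <branch>` checks out the named branch immediately after
# cloning, instead of the remote's default branch. The real command in
# this thread is:
#   git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git -b cuda
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q upstream
git -C upstream -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "initial"
git -C upstream branch cuda          # stand-in for the fork's cuda branch

git clone -q -b cuda upstream demo   # clone and check out `cuda` directly
git -C demo branch --show-current
```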
Excellent, that fixes it! 👍 Glad to be able to use the latest version of your text-generation-webui again (and special thanks for merging my PR ❤).
Something seems to be going on there; I had to revert to this commit. It is the commit that removed a parameter from the function definition of make_quant, which throws the error @Azeirah had. After reverting, I am now able to CPU-offload llama.
Confirming that, both the problem and the workaround. Thanks @hdkiller for figuring out the commit that broke compatibility (f1af89a). Here's what my WSL console reported before I reverted:
The last working commit is 608f3ba. Reverting to that made text-generation-webui work again:
Please use my fork of GPTQ-for-LLaMa; it is pinned to a known-working commit of qwopqwop's code.

```shell
# activate the conda environment
conda activate textgen

# remove the existing GPTQ-for-LLaMa
cd text-generation-webui/repositories
rm -rf GPTQ-for-LLaMa
pip uninstall quant-cuda

# reinstall
git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git -b cuda
cd GPTQ-for-LLaMa
python setup_cuda.py install
```

I will keep using this until qwopqwop's branch stabilizes. Upstream changes will not be supported. This works with @USBhost's torrents for llama that are linked here.
This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment. |
Describe the bug
Starting with commit 31f04dc I am getting a lot of CUDA errors related to bitsandbytes when running start-webui.bat:

```
RuntimeError:
CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs above to fix your environment!
If you cannot find any issues and suspect a bug, please open an issue with detals about your environment:
https://github.com/TimDettmers/bitsandbytes/issues
```

Reverting to 966168b makes it run again.
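The revert itself is a plain git checkout; a minimal sketch on a throwaway local repo (in the real text-generation-webui directory the invocation would use the hash 966168b):

```shell
# Sketch of pinning a working tree to a known-good commit, demonstrated
# on a throwaway repo. The commit subjects here are stand-ins.
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q demo && cd demo
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "known-good"
good=$(git rev-parse --short HEAD)   # remember the working commit's hash
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "breaking"

git checkout -q "$good"              # detached HEAD at the known-good commit
git log --oneline -1
```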
Is there an existing issue for this?
Reproduction
Update to the latest version and run the start-webui.bat
Screenshot
No response
Logs
```
RuntimeError: CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs above to fix your environment! If you cannot find any issues and suspect a bug, please open an issue with detals about your environment: https://github.com/TimDettmers/bitsandbytes/issues
```
System Info