AMD thread #3759
Comments
Why no AMD for Windows? |
@MistakingManx there is, but you have to do a DIY llama-cpp-python build. It will be harder to set up than on Linux. |
Does anyone have a working AutoGPTQ setup? Mine was really slow when I installed the wheel: https://github.com/PanQiWei/AutoGPTQ/releases/download/v0.4.2/auto_gptq-0.4.2+rocm5.4.2-cp310-cp310-linux_x86_64.whl When building from source, text generation is much faster, but the output is just gibberish. I am running on an RX 6750 XT, if that matters. |
Why exactly do models prefer a GPU over a CPU? Mine runs quickly on CPU, but OBS kills it off because OBS itself uses so many resources. |
It's the speed users prefer: an AMD GPU comparable to a 3090 may run a 34B model at ~20 t/s. |
I have an AMD Radeon RX 5500 XT, is that good? |
I'm having trouble getting the WebUI to even launch. I'm using ROCm 6.1 on openSUSE Tumbleweed Linux with a 6700 XT. I used the one-click installer to set it up (and I selected ROCm support), but after the installation finished it just threw an error:
|
Same issue here, still no solution for me. Can anyone shed some light on this? Thanks in advance. |
Okay, so this is definitely not ideal, but I found that VERY carefully following the manual installation guide and then uninstalling bitsandbytes makes it work. I'm still figuring things out, but at least it works now. |
So you installed that modified version of bitsandbytes for ROCm? Or..? What exactly did you do? Thanks in advance. |
I am not sure which version is newer, but I used https://github.com/agrocylo/bitsandbytes-rocm.

git clone git@github.com:agrocylo/bitsandbytes-rocm.git
cd bitsandbytes-rocm/
export PATH=/opt/rocm/bin:$PATH # Add ROCm to $PATH
export HSA_OVERRIDE_GFX_VERSION=10.3.0 HCC_AMDGPU_TARGET=gfx1030
make hip
python setup.py install

Make sure the environment variables are also set when you start the webui. Depending on your GPU you might need to change the GPU target or GFX version. |
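For reference, here is a minimal sketch of launching the webui with those variables in scope (the server.py invocation and GFX values are illustrative assumptions; adjust them for your card):

```bash
# Assumes ROCm is installed under /opt/rocm and the webui's Python environment is active.
export PATH=/opt/rocm/bin:$PATH
export HSA_OVERRIDE_GFX_VERSION=10.3.0   # RDNA2 (gfx103x) cards are overridden to gfx1030
export HCC_AMDGPU_TARGET=gfx1030
python server.py                          # start the webui with the variables set
```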
Saying it takes 6 seconds is not that helpful for getting an idea of the performance you have, because that depends on the length of the output. Take a look at the console: after every generation it spits out the generation speed in tokens per second. With my RX 6750 XT I got about 35 t/s with a 7B GPTQ model. |
I have |
@RBNXI This is caused by
Yes, it worked really well on my PC until I broke my installation with an update of the repository. I plan on improving the one-click installer and/or the setup guide of the oobabooga webui for AMD to make the setup easier, if I ever get it running again :) |
Cool, I'll be waiting for that then.
I saw it and tried to build it, but it gave an error and I got tired of trying stuff. I just thought "well, having to do so many steps and then hitting so many errors must mean it's just not ready yet...". But I could try again another day when I have more time, if I can fix that error. Thanks. |
@RBNXI What error did you get?
Yes I can understand that. The setup with NVIDIA is definitely easier. |
I don't remember the error, I'm sorry. But I have a question for when I try again: the clone command you used (git clone git@github.com:agrocylo/bitsandbytes-rocm.git) gave me an error. Is it OK to just clone with the default link to the repo? It said the link you used is private or something like that. |
Yes you can of course use the link from the repo directly. You probably mean this one: https://github.com/agrocylo/bitsandbytes-rocm.git |
I tried again with the same result. I followed the installation tutorial and everything works fine, then I run it and get the split error; then I compiled bitsandbytes from that repo (this time it worked), tried to run again, and got the same split error... |
Installing bitsandbytes-rocm is the only way I've been able to make this work. The new install doesn't seem to work for the 7900 XTX |
AMD Setup Step-by-Step Guide (WIP)
I finally got my setup working again (by reinstalling everything). Here is a step-by-step guide on how I got it running. I tested all steps on Manjaro, but they should work on other Linux distros. I have no idea how the steps can be transferred to Windows; please leave a comment if you have a solution for Windows.
Step 1: Install dependencies (should be similar to the one click installer except the last step)
If you get an error installing torch try running
Step 2: Fix bitsandbytes
I found the following forks which should work for ROCm but got none of them working. If you find a working version please give some feedback.
Step 3: Install AutoGPTQ
If the installation fails try applying the patch provided by this article.
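For illustration, installing AutoGPTQ from source for ROCm usually amounts to something like the sketch below (the ROCM_VERSION variable and version number are assumptions based on AutoGPTQ's ROCm build instructions of the time):

```bash
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
# Tell the build which ROCm toolchain to compile against (adjust to your installed version).
ROCM_VERSION=5.4.2 pip install -v --no-build-isolation .
```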
Step 4: Exllama
Step 4.5: ExllamaV2
If you get an error running ExllamaV2 try installing the nightly version of torch for ROCm 5.6 (should be released as a stable version soon):
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.6 --force-reinstall
Step 5: llama-cpp-python
You might need to add the
I hope you can get it working with this guide :) I would appreciate some feedback on how this guide worked for you, so we can create a complete and robust setup guide for AMD devices (and maybe even update the one-click installer based on the guide).

Notes on 7xxx AMD GPUs
Remember that you have to change the GFX version for the environment variables. As described by this article, you should make sure to install/set up ROCm without OpenCL, as this might cause problems with HIP. You also need to install the nightly version of torch for ROCm 5.6 instead of ROCm 5.4.2 (should be released as a stable version soon):
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.6 |
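To make Step 5 concrete, here is a hedged sketch of a ROCm build of llama-cpp-python plus the GFX override for 7xxx cards (the LLAMA_HIPBLAS flag and gfx values reflect llama.cpp/ROCm conventions of that period and are assumptions, not taken from the guide verbatim):

```bash
# Build llama-cpp-python against hipBLAS instead of pulling the CPU-only wheel.
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

# On 7xxx (RDNA3) cards, override to gfx1100 instead of the gfx1030 used above:
export HSA_OVERRIDE_GFX_VERSION=11.0.0
export HCC_AMDGPU_TARGET=gfx1100
```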
@RBNXI What model are you using? Which loader are you using? Usually this error means the loader failed to load the model. As explained by my guide above you have to do extra steps for AutoGPTQ and Exllama/Exllama_HF. Also note that with AutoGPTQ you often have to define the |
Awesome guide, thanks, I'll try it when I can. I tried with different --n-gpu-layers values and got the same result. Also, the AutoGPTQ installation failed with
Edit 2: I tried running a GPTQ model anyway, and it starts to load into VRAM, so the GPU is detected, but it fails with:
|
@RBNXI I found this issue in the ROCm repo discussing the RX 6600. According to it, the RX 6600 should work. Usually for all 6xxx cards llama.cpp probably runs on the CPU because the prebuilt Python package is only built with CPU support; this is why you need to install it with the command from my guide. Regarding AutoGPTQ: I think you just copied the last lines, not the real error that broke the installation, so I am not sure what the problem is. Maybe check your ROCm version and change the
I usually run the webui with |
I don't have rocminfo installed, should I? But clinfo does show my GPU. I'll try to reinstall again and see if it works now. I did install rocm-hip-sdk, and I'm on Arch. Also, I'm running it in a miniconda environment, is that a problem? And the ROCm I have installed is from the Arch repository, I think it's 5.6.0, is that a problem? If I change the version in the command (pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2 -> pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6.0) it says ERROR: Could not find a version that satisfies the requirement torchvision (from versions: none) |
I'm trying to install, still errors everywhere. First of all the bitsandbytes installation fails, so I have to use the pip one.
What am I doing wrong? I'm following the guide... this is so frustrating... Could it be that I have to install ROCm 5.4.2 from some rare repository, or compile it myself, or something obscure like that? It says PyTorch is installed without ROCm support, even though I installed it with
Edit: Steps 1 and 2 in the install dependencies section are in the wrong order; if you run pip install requirements_nocuda first, it will install PyTorch without ROCm support... |
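In other words, a sketch of an install order that avoids the problem (file name and index URLs are taken from the comments above; adjust the ROCm version to your system):

```bash
# Install the ROCm build of PyTorch first, so the requirements file cannot pull in a CPU-only torch.
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2
pip install -r requirements_nocuda.txt
# If a CPU-only torch already slipped in, force-reinstall the ROCm build afterwards.
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2 --force-reinstall
```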
Can you confirm that flash_attn is working for you? I have read and tried to replicate your write-up on ROCm 6.0.2 dev, but I am unable to get flash_attn to load through exllamav2. The updated comments inside your repo seem to indicate that you got flash_attn to work. I tried the following on an AMD 7900 XTX (the most minimal setup, to isolate components):
Without flash_attn, the exl2 model from HF (8 GB) uses around 60% of VRAM, so around 14 GB after being loaded. I presume we are still bound by rocm/flash_attn not being updated to upstream; see ROCm/flash-attention#35 |
As the guide does note - it had been giving me warnings when it wasn't there... I found that some versions of Ubuntu didn't like the new card, so I needed to run newer versions that included some support for the 7900s. As such, I have found 23.04 and 23.10 to be functional. [Note: there is a line that links the old packages in through the etc sources.] While I am at it, I'll maybe save folks some time and frustration - as of a few days ago the newest release won't work... I wasn't able to get through the installer for 24.04, and previously, when I did manage to get it installed, the drivers didn't work; the errors it gave looked like kernel issues that weren't supported... That may have changed with the newest ROCm drivers on their site...
Do you mean you used their automatic driver install system?
Do these appear in
Can you share which one(s)? Here are some that work for me, that I've tested -
I tried the same thing with "no_flash_attn" checked, and got the same thing...
Is that what it says in the TGW shell console, like it's not installed? Are you sure that you are running inside the conda environment where it is installed?
source run.sh
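One way to check is to activate the installer's own environment and query pip directly (a sketch assuming the one-click installer's installer_files layout):

```bash
# Activate the conda environment that the one-click installer created.
source installer_files/conda/etc/profile.d/conda.sh
conda activate ./installer_files/env
# See whether flash-attn is actually present in this environment.
pip show flash-attn
```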
I went and tried this, reinstalled, and alas it still says the same thing.
I have made a thread on exllamav2 github about the issue...
It appears it's stuck waiting for AMD folks to do an update... |
@nktice
The ROCm Flash Attention repo is waiting for the next AMD Composable Kernel update before updating their version of Flash Attention, as mentioned here: ROCm/flash-attention#35 (comment). My guess is that without the updated kernel it would be quite hard to create a forked version that bumps up the version. I am hoping it lands soon so I can fully use my current setup of 3x 7900 XTX. |
Could anyone help me out with this?
No matter what type of model I load, I get a fault of some kind unless I load to CPU only. I've been trying to get this working for hours and I have no clue what's going wrong. |
These are my steps for running on Arch (ROCm 6.0)
I'm using a 6700 XT.
Remove the
Also replace the index url for PyTorch in
I'm using exllamav2, and I have had the most success with building from the repository. Prevent exllamav2 from automatically installing:
Run the install script for
Enter the conda environment:
Install exllamav2:
From here I can exit the conda environment and use the program normally. |
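Put together, the flow described above looks roughly like this (a sketch; the installer_files paths and the upstream exllamav2 repo are assumptions about a typical one-click install):

```bash
# Enter the environment the installer created.
source installer_files/conda/etc/profile.d/conda.sh
conda activate ./installer_files/env

# Build exllamav2 from source against the ROCm torch in this environment.
git clone https://github.com/turboderp/exllamav2
cd exllamav2
pip install .

cd .. && conda deactivate
```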
As of a few days ago the ROCm BitsAndBytes passes tests on my machine and no longer spits
I built a wheel with simply:

git clone --recurse https://github.com/ROCm/bitsandbytes --branch rocm_enabled --depth 1
cd bitsandbytes
cmake -DCOMPUTE_BACKEND=hip -S . # hipcc defaults to march=native
make
pip wheel --no-deps .

Which resulted in this wheel that passes all un-skipped tests on my 7900 XTX. A lot of tests are still skipped on ROCm, so it may not work with all models. It's also wicked slow, like 1/4 of FP16 speed. But if you're VRAM starved and can't use a different backend, it works at least. It'd be nice if someone on RDNA 2 could try; I don't know if the blasLT lib will compile on those cards. If it works, maybe oobabooga could set up an action. |
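If the build goes through, installing and sanity-checking the wheel in the webui's environment is just (a sketch; the exact wheel filename depends on your Python/ROCm combination):

```bash
pip install --force-reinstall bitsandbytes-*.whl
# bitsandbytes ships a diagnostic entry point that reports which backend it loaded.
python -m bitsandbytes
```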
Given #5921, which effectively breaks llama.cpp support on AMD/Intel cards, I somehow doubt that @oobabooga is particularly interested in setting up more actions. For those who want a working GPU accelerated llama.cpp, the following commands should sort it on Linux:
(Edited to go to a more stable commit) |
Why so verbose? |
I'm assuming not everyone has the full ROCm stack set up on their computer - the entire point of the project is that it's supposed to be self-contained. |
If you don't have the ROCm SDK you'll run into issues with other libs anyway. It's installable through the native package manager on most distros nowadays. Using your distro's ROCm and the appropriate nightly/RC PyTorch to match completely fixes all the random page faults. |
I think it really depends on what you're doing. Llama.cpp support was pretty darn solid, as far as I could tell, and I imagine that quite a few people would primarily be using that. |
llama.cpp wheels for AMD used to be provided by jllllll. After he stopped updating his wheels, I started running his workflows myself in a fork. At first this didn't work at all due to rate limit errors, but then I added long timeouts of 30 minutes between the sub-jobs of the workflow and it started working reliably as an overnight run of several hours. That changed a week ago when GitHub stopped uploading the compiled wheels for my jobs, possibly because I am using too much storage (with all the python, CUDA, ROCm, AVX, llama-cpp-python combinations, that must have added up to many GB over time). Debugging and maintaining this takes exponentially long due to how unreliable Github Actions are, so I have upstreamed the responsibility to the main llama-cpp-python repository simply due to lack of time. I think that abetlen would probably be open to ROCm workflows if someone wants to be a hero and come up with something reliable to submit in a PR (https://github.com/abetlen/llama-cpp-python). |
How likely is it that upstreaming those workflows causes the same issues to befall abetlen, assuming it is some kind of resource limit? They're already pushing 50 binaries for their releases. |
Worth mentioning that Vulkan is pretty good too. Compiling llama.cpp with |
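For reference, a standalone Vulkan build of llama.cpp typically looks something like this (the LLAMA_VULKAN CMake flag matches llama.cpp trees of that period; newer ones renamed it to GGML_VULKAN):

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DLLAMA_VULKAN=ON         # needs the Vulkan SDK/headers installed
cmake --build build --config Release -j
./build/bin/main -m model.gguf -ngl 99   # offload all layers to the Vulkan device
```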
Likely that'll happen as well. You could work around the compute limits by having one job build multiple wheels sequentially, but that would be annoying to manage and you might still hit the storage limits. Personally I feel that wheels should only be provided for Windows users due to the difficulty in compiling for that platform, Mac and Linux users can build it themselves. It's not hard to do if you have all the dependencies installed. |
Most of the wheels can be built with a single command on Linux assuming torch and nvcc/hipcc are visible so maybe it could be allocated to an install script? |
@oobabooga Following on from @Beinsezii's comment - is there a case for modularising this project into two components? Specifically:
a) The Gradio application
b) The installer / Python distribution tooling around it
I'm specifically thinking that this may be worth doing because if the Gradio app is packaged separately, it could be just another Python wheel published on PyPI. That would potentially make things quite a lot easier on your end, because then you can just follow the same route as |
@netrunnereve I agree that compiling wheels is not very difficult on Linux/Mac, but my previous experience with this project has shown that compiling anything at all is a daunting task for users on all platforms. @dgdguk that's not a bad idea but I think it would limit the project too much for the reason above. I thought about it and reached the conclusion that not providing custom llama.cpp wheels is too much functionality loss, so I took the third route: simply paying GitHub for the excess storage/compute so that I can continue running the jobs. With that I managed to compile a new version successfully again (#5964). |
@oobabooga I think splitting the Gradio app out of the Python distribution stuff might be a good idea regardless - if nothing else, it makes things substantially easier for someone else to step in and provide builds for a different platform. Right now, any such support has to step around your distribution code. For example, we're currently staring down ARM AI laptops, AMD's XDNA accelerators, and a whole bunch of other consumer facing AI accelerators - is it really reasonable for your support matrix to have everything in it? I don't necessarily think it matters where you draw the box around what you want to support, but I think it is worth acknowledging the box exists and that some hardware is likely to be out of what you want to support. At least until some proper cross-vendor APIs come into existence. |
@oobabooga Why not just use the official wheels for everything you can and only compile your own for unsupported platforms? That might avoid the GitHub storage issue. |
I took a look at your releases page and you've got like 4000 😮 wheels on there, with some of the big ones over 50 MB in size. Many of them are outdated, and having a script to prune them would save a lot of space. Also, GitHub Actions per-minute billing gets expensive pretty fast with those 30-minute CUDA Windows builds, and it might be worth looking into a local CI runner for those long builds, relying on free minutes only for the stuff you can't build yourself. Since you don't need to spin up a new VM and install CUDA every time on a local machine, it should complete much faster. |
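As a rough illustration of such a pruning script (hypothetical OWNER/REPO and cutoff date; assumes the gh CLI is available and that whole outdated releases can simply be deleted):

```bash
#!/usr/bin/env bash
# Delete releases published before the cutoff, which also removes their wheel assets.
CUTOFF="2023-12-31"
gh api repos/OWNER/REPO/releases --paginate \
  --jq ".[] | select(.published_at < \"$CUTOFF\") | .tag_name" |
while read -r tag; do
  gh release delete "$tag" --yes --repo OWNER/REPO
done
```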
So it's May 2024; Ollama and even LM Studio (and others) use ROCm on Windows for AMD GPUs, but not Oobabooga, which still needs Linux? |
@oobabooga Is there a particular holdup or lack of capability to update to ROCm 6.0 (as in, it seems like you might not have access to AMD hardware)? I'm asking because I've been playing around with some ROCm 6.0 stuff and it seems that it's a pre-req to closing a lot of the feature gaps. For example, AMDs bitsandbytes version seems to work, as well as LORA training. |
@dgdguk have you tried using the Vulkan or Kompute drivers? I bet they will work with all major ooba features on all AMD GPUs, even the old ones. It's still surprising how they emulate bit-level instructions. |
@userbox020 No, but that's not actually an option for this project. As I mentioned before, this project would likely be more useful if the WebUI were separated from the distribution needed to run it. Right now, I think one of the problems is that the WebUI has a very prescriptive environment, which discourages people from trying things - and I think this'll get worse with NPUs. |
Vulkan is already supported by llama.cpp on Windows; it has both Kompute and Vulkan builds available: https://github.com/ggerganov/llama.cpp/releases/tag/b2979. You just need to overwrite the DLLs in the site-packages for llama_cpp with the Vulkan build. Copy the other DLLs into the env dir, and it works perfectly (once you reduce context and maybe use a smaller model, as memory issues are a problem). There isn't really any reason the Vulkan llama DLLs couldn't be downloaded by the installer automatically and patched into site-packages if the user chooses to do so.

ggml_vulkan: Found 1 Vulkan devices:
Vulkan0: AMD Radeon RX 6700 XT | uma: 0 | fp16: 1 | warp size: 64
llm_load_tensors: ggml ctx size = 0.37 MiB
llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 41/41 layers to GPU
llm_load_tensors: CPU buffer size = 87.89 MiB
llm_load_tensors: Vulkan0 buffer size = 7412.96 MiB

I wouldn't suggest Kompute: while it can load models, it is still extremely buggy and only works on a limited set of models (ggerganov/llama.cpp#5540 (comment)). That could just be my video card, though. I haven't messed with using it for training, just plain generation, but Vulkan definitely works for generation, and it's definitely quicker than CPU for me. |
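Roughly, the manual patch described above can be scripted like this from a shell (a sketch: the exact DLL locations inside the llama_cpp package vary between llama-cpp-python versions, so treat the paths as assumptions):

```bash
# Assumes you have downloaded and extracted a win-vulkan build of llama.cpp
# (e.g. from the b2979 release linked above) into ./llama-vulkan.
SITE=$(python -c "import llama_cpp, os; print(os.path.dirname(llama_cpp.__file__))")
cp ./llama-vulkan/*.dll "$SITE"/   # overwrite the bundled DLLs with the Vulkan build
```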
@druggedhippo While that may be true, I'd point out that anything that starts with "overwrite the DLLs" is necessarily not supported by this project. Of course, it may be worth a feature request: if the Vulkan code path works sufficiently well, then in principle it provides a common target and saves @oobabooga having to target 20+ different configs. That said, I'll point out that "runs faster than CPU" is an extremely low bar to clear. |
I think instead of the word |
@userbox020 Indeed, but the idea of modularizing web-ui has already been quashed by @oobabooga earlier in this thread, so presumably any PR along those lines will be rejected, and unfortunately I don't have the time to maintain a fork to any level of acceptable quality. |
This thread is dedicated to discussing the setup of the webui on AMD GPUs.
You are welcome to ask questions as well as share your experiences, tips, and insights to make the process easier for all AMD users.