Firstly, I'd like to extend my appreciation for your hard work and dedication in developing and maintaining the llama-cpp-python package. It has been an invaluable tool for our projects.
I am writing to request the inclusion of prebuilt CUDA 11.8 wheels in future releases of llama-cpp-python. Although the prebuilt wheels currently require CUDA 12.1 or newer, I have successfully built llama-cpp-python from source with CUDA 11.8 on Windows 7 without any issues.
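For reference, a from-source build like the one described above can be done with a command along these lines. This is a sketch, not an exact recipe: the CMake flag enabling the CUDA backend has changed name across releases (`-DGGML_CUDA=on` in recent versions, `-DLLAMA_CUBLAS=on` in older ones), and it assumes a CUDA 11.8 toolkit and a compatible C++ compiler are already installed and on PATH:

```shell
# Sketch: build llama-cpp-python from source against a local CUDA toolkit.
# Assumes CUDA 11.8 and a matching MSVC (Windows) or GCC toolchain are installed.
# Use -DLLAMA_CUBLAS=on instead on older llama-cpp-python releases.
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --no-cache-dir
```

On Windows the environment variable is set differently, e.g. `set CMAKE_ARGS=-DGGML_CUDA=on` in cmd.exe before running `pip install`.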
Our school PCs are running Windows 7 and cannot be upgraded to Windows 10 due to political and performance reasons. Having prebuilt CUDA 11.8 wheels would significantly ease our deployment process and allow us to continue leveraging the capabilities of llama-cpp-python in our educational environment.
I understand that maintaining compatibility with older CUDA versions may pose additional challenges, but it would greatly benefit those of us who are restricted to using legacy systems. Your consideration in this matter would be highly appreciated.
Thank you for your time and understanding.
Best regards,
i486
Hello @abetlen