CMake file always assumes AVX2 support #1583

Closed
diwu1989 opened this issue May 24, 2023 · 6 comments
Labels
bug (Something isn't working) · build (Compilation issues) · high priority (Very important issue) · stale

Comments

@diwu1989

When running cmake, the default configuration sets AVX2 to ON even when the current CPU does not support it.
AVX vs. AVX2 is handled correctly in the plain Makefile.

For CMake, AVX2 has to be turned off via cmake -DLLAMA_AVX2=off . for the compiled binary to work on an AVX-only system.

Can we make the cmake file smarter about whether to enable or disable AVX2 by looking at the current architecture?
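
One possible direction (just an untested sketch, and only for GCC/Clang-style -mavx2 flags; MSVC and cross-compilation would need separate handling) would be to probe the build host with CheckCXXSourceRuns before picking the default:

```cmake
# Untested sketch: compile AND run a tiny AVX2 program on the build host, and only
# default LLAMA_AVX2 to ON if it actually executes (it will fail with an illegal
# instruction on CPUs without AVX2, so the check reports failure there).
include(CheckCXXSourceRuns)
set(CMAKE_REQUIRED_FLAGS "-mavx2")   # GCC/Clang; MSVC would need /arch:AVX2
check_cxx_source_runs("
    #include <immintrin.h>
    int main() {
        __m256i a = _mm256_set1_epi32(1);
        a = _mm256_add_epi32(a, a);
        return _mm256_extract_epi32(a, 0) == 2 ? 0 : 1;
    }" HOST_SUPPORTS_AVX2)
unset(CMAKE_REQUIRED_FLAGS)

if (HOST_SUPPORTS_AVX2)
    option(LLAMA_AVX2 "llama: enable AVX2" ON)
else()
    option(LLAMA_AVX2 "llama: enable AVX2" OFF)
endif()
```

This would only change the default; an explicit -DLLAMA_AVX2=on/off on the command line would still take precedence over the option's default.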

@howard0su
Collaborator

check this #809

@chen369

chen369 commented May 28, 2023

This is causing a downstream issue in "llama-cpp-python", where we can't build the Python binding on machines without AVX2 support that need cuBLAS.
Please see my workaround here: abetlen/llama-cpp-python#272 (comment)
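
Roughly, the shape of the workaround (flag and variable names may differ; see the linked comment for the exact steps) is to pass the CMake options through when pip builds the binding from source:

```sh
# Assumed example: force a source build of llama-cpp-python with cuBLAS on and AVX2 off.
CMAKE_ARGS="-DLLAMA_CUBLAS=on -DLLAMA_AVX2=off" FORCE_CMAKE=1 pip install llama-cpp-python
```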
Best Regards,

@gjmulder gjmulder added bug Something isn't working high priority Very important issue build Compilation issues labels May 29, 2023
@happysmash27

As per my now-closed issue #1654 (closed by me because I figured out the workaround and wasn't sure whether a default-configuration problem qualified as a "bug"), it assumes a bunch of other extensions as well: AVX, F16C, and FMA. It took me a while to figure out which flags disable them, and then to add them one by one until the build finally worked.
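
For anyone else landing here, the flags in question can all be disabled in one configure call, along these lines (flag names as they appear in the CMake file at the time of writing; keep whichever extensions your CPU actually supports):

```sh
# Configure with every assumed x86 extension turned off explicitly, then build.
cmake .. -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_F16C=off -DLLAMA_FMA=off
cmake --build . --config Release
```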

@JDunn3
Contributor

JDunn3 commented Jun 6, 2023

Confirmed, this basically blocks installing llama-cpp-python on a machine without AVX2 available.

@TFWol

TFWol commented Jul 31, 2023

Does anyone have a straightforward way to get the combo of CUDA + no AVX2 to work?
My head is spinning from trying to follow all these threads.
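
For a plain cmake build, piecing together the flags discussed above (untested combination), it should come down to enabling cuBLAS while disabling whichever SIMD extensions the CPU lacks:

```sh
# Assumed example: cuBLAS on, AVX2 off (add -DLLAMA_AVX=off etc. if those are also unsupported).
cmake .. -DLLAMA_CUBLAS=on -DLLAMA_AVX2=off
cmake --build . --config Release
```

For llama-cpp-python, the same flags can be passed through CMAKE_ARGS as in the earlier comment.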

Contributor

github-actions bot commented Apr 9, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@github-actions github-actions bot closed this as completed Apr 9, 2024