
disable LLAMA_NATIVE by default #3906

Merged: 1 commit merged into master on Nov 2, 2023
Conversation

@slaren (Collaborator) commented Nov 2, 2023

LLAMA_NATIVE doesn't work on MSVC, and it is causing default builds on Windows to no longer include AVX, which is effectively a breaking change. This PR disables it by default to restore the previous behavior.
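For context, a `LLAMA_NATIVE`-style option typically works by passing `-march=native` to the compiler, which is a GCC/Clang flag that MSVC does not understand. The sketch below illustrates the general pattern (option name aside, the exact logic and flags are approximations, not the verbatim upstream `CMakeLists.txt`):

```cmake
# Sketch of the LLAMA_NATIVE pattern; approximate, not the exact upstream code.
# After this PR, the option defaults to OFF.
option(LLAMA_NATIVE "llama: optimize the build for the host CPU" OFF)

if (LLAMA_NATIVE AND NOT MSVC)
    # -march=native enables every instruction set the build machine supports.
    # MSVC has no equivalent flag, which is why enabling this there breaks
    # or silently degrades the build.
    add_compile_options(-march=native)
endif()
```

With the option off by default, users building with GCC or Clang who want host-specific optimizations can opt back in explicitly, e.g. `cmake .. -DLLAMA_NATIVE=ON`, while default Windows/MSVC builds keep their previous explicit AVX configuration.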

@ggerganov ggerganov linked an issue Nov 2, 2023 that may be closed by this pull request
@ggerganov ggerganov merged commit 21958bb into master Nov 2, 2023
32 checks passed
@slaren slaren deleted the disable-cmake-native branch November 2, 2023 12:16
@gsuuon gsuuon mentioned this pull request Nov 2, 2023
olexiyb pushed a commit to Sanctum-AI/llama.cpp that referenced this pull request Nov 23, 2023
Linked issue that merging this pull request may close:

llama : build with AVX support on Windows by default with CMake