
Conversation

@xiaobing318
Contributor


If you compile the ggml project separately on Windows, you may hit source/breakpoint misalignment when debugging a Debug build under a non-English system locale. The Visual Studio compiler (CL.exe) can misjudge the byte length of multibyte characters (such as Chinese characters) when it encounters a UTF-8 file without a BOM in such a locale, so the positions the compiler records for a line of code no longer match what you see in the editor.

I renamed the helper function from disable_msvc_warnings to configure_msvc_target. Since this function now handles encoding (/utf-8) in addition to warnings, the new name better reflects its purpose.
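The renamed helper could look roughly like the sketch below. Only the `/utf-8` part and the name change are described in this PR; the specific warning suppression shown is an illustrative assumption, not the merged code.

```cmake
# Sketch of the renamed helper; warning number is illustrative only.
function(configure_msvc_target target)
    if (NOT MSVC)
        return()
    endif()
    # /utf-8: treat the source and execution character sets as UTF-8 even
    # when a file has no BOM, so CL.exe's column/line accounting matches
    # what the editor shows for non-ASCII characters.
    target_compile_options(${target} PRIVATE "$<$<COMPILE_LANGUAGE:C,CXX>:/utf-8>")
    # The helper previously only disabled warnings (hence the old name
    # disable_msvc_warnings), e.g. C4244 implicit narrowing conversions.
    target_compile_options(${target} PRIVATE "$<$<COMPILE_LANGUAGE:C,CXX>:/wd4244>")
endfunction()
```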

@github-actions github-actions bot added the ggml changes relating to the ggml tensor library for machine learning label Dec 2, 2025
@xiaobing318
Contributor Author

Hi @ggerganov,
I noticed that ggml/CMakeLists.txt isn’t covered by the current CODEOWNERS. If you have time, would you mind taking a look at this PR to help review the changes there? Thanks in advance!

@ggerganov
Member

We already set the compile options for MSVC here:

llama.cpp/CMakeLists.txt

Lines 53 to 60 in a2b0fe8

if (MSVC)
    add_compile_options("$<$<COMPILE_LANGUAGE:C>:/utf-8>")
    add_compile_options("$<$<COMPILE_LANGUAGE:CXX>:/utf-8>")
    add_compile_options("$<$<COMPILE_LANGUAGE:C>:/bigobj>")
    add_compile_options("$<$<COMPILE_LANGUAGE:CXX>:/bigobj>")
endif()

Does this not work?

@xiaobing318
Contributor Author

> We already set the compile options for MSVC here (llama.cpp/CMakeLists.txt, lines 53 to 60 in a2b0fe8). Does this not work?

The UTF-8 compile options set in the llama.cpp project do apply to ggml when it is built as part of llama.cpp. However, when I compile the ggml project separately from llama.cpp, those top-level options are never set and the problem I described occurs. In the CONTRIBUTING.md file of the ggml project, I noticed: "For changes to the core ggml library (including to the CMake build system), please open a PR in https://github.com/ggml-org/llama.cpp. Doing so will make your PR more visible, better tested, and more likely to be reviewed." Therefore, I submitted this PR to the llama.cpp project.
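For a standalone build to get the same behavior, ggml's own top-level CMakeLists.txt needs an equivalent guard. A minimal sketch of the idea (not the exact merged change; the top-level check is an assumption to avoid clashing with a parent project that already sets the flag):

```cmake
# Sketch: set /utf-8 only when ggml is configured as the top-level project,
# since the llama.cpp superproject already sets it globally for MSVC.
if (MSVC AND CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
    add_compile_options("$<$<COMPILE_LANGUAGE:C>:/utf-8>")
    add_compile_options("$<$<COMPILE_LANGUAGE:CXX>:/utf-8>")
endif()
```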

@xiaobing318
Contributor Author

@ggerganov

When you have time, could you please advise whether:

  1. this change should be handled here in llama.cpp, or
  2. additional adjustments are needed before it can be merged?

@ggerganov ggerganov merged commit e251e5e into ggml-org:master Dec 2, 2025
66 of 72 checks passed
