slaren (Member) commented Oct 11, 2024

Use the ggml log system more consistently. Some trace prints that previously appeared unconditionally in debug builds now use GGML_LOG_DEBUG and are shown only when verbose mode (-v) is enabled.
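The filtering described above can be sketched as follows. This is a minimal, self-contained illustration, not the real ggml API: the enum and callback below only mirror the shape of ggml's log levels and log callback, and the verbosity check stands in for what the application-side callback does when -v is absent.

```cpp
#include <string>

// Hypothetical mirror of ggml's log levels; the real ones live in ggml.h,
// but this sketch does not depend on the library.
enum log_level { LOG_LEVEL_DEBUG = 1, LOG_LEVEL_INFO = 2 };

static bool g_verbose = false;   // set when -v is parsed from the command line
static std::string g_captured;   // stands in for stderr in this sketch

// Callback in the shape of a ggml log callback: drop DEBUG-level messages
// unless verbose mode was requested, pass everything else through.
static void log_callback(log_level level, const char * text) {
    if (level == LOG_LEVEL_DEBUG && !g_verbose) {
        return; // GGML_LOG_DEBUG output is suppressed without -v
    }
    g_captured += text;
}
```

With this arrangement, moving a print from an unconditional fprintf to the DEBUG level means it disappears from normal runs but can still be recovered with -v.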

slaren commented Oct 11, 2024

This causes some debug prints in ggml-backend (backend registration messages) to be shown without -v because they are generated during common_params_parse, which runs before the logging callback is set in common_init. Should common_init be called first instead? Or even be called automatically from common_params_parse?

github-actions bot added the ggml label (changes relating to the ggml tensor library for machine learning) Oct 11, 2024
ggerganov (Member) commented
Yes, let's call it automatically in common_params_parse. I was hesitant to do it, and the only reason to call it explicitly after common_params_parse was a very minor/aesthetic one:

Currently on master with log prefix and timestamps the output at the start of the program looks like this:

# env
LLAMA_LOG_COLORS=1
LLAMA_LOG_PREFIX=1
LLAMA_LOG_TIMESTAMPS=1

./llama-cli
0.00.000.192 I build: 3910 (36815404) with Apple clang version 16.0.0 (clang-1600.0.26.3) for arm64-apple-darwin24.0.0
0.00.000.978 I main: llama backend init
0.00.001.901 I main: load the model and apply lora adapter, if any
...

While if common_init is called before common_params_parse it will look like this:

build: 3910 (36815404) with Apple clang version 16.0.0 (clang-1600.0.26.3) for arm64-apple-darwin24.0.0
0.00.001.902 I main: llama backend init
0.00.002.943 I main: load the model and apply lora adapter, if any
...

I.e. the first message with the build info does not have the log prefix, since the environment variables haven't been read yet when it is printed.

Anyway, this is a very minor detail that I don't mind ignoring.

slaren commented Oct 11, 2024

On second thought, calling common_init first won't fix this problem either, for the same reason you mentioned: the command-line arguments are not yet parsed when these prints happen, so the verbose flag has no effect. I have reverted that change so that these messages are shown in debug builds only.
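The ordering problem discussed above can be sketched like this. All names here are hypothetical stand-ins, not the real ggml or llama.cpp API: the point is only that a logger falls back to a default handler until a callback is installed, so anything printed during argument parsing cannot honor a verbosity flag that parsing itself produces.

```cpp
#include <string>

static bool g_callback_set = false;  // becomes true once common_init-style setup runs
static std::string g_stderr_out;     // default handler output (always shown)
static std::string g_cb_out;         // callback output (could honor -v)

// Emit a log message: before the callback is installed, the default
// handler prints unconditionally, regardless of any verbosity setting.
static void emit(const std::string & msg) {
    if (g_callback_set) {
        g_cb_out += msg;
    } else {
        g_stderr_out += msg;  // default path: verbosity flag has no effect here
    }
}
```

Because the backend-registration messages go through the default path no matter which of the two functions is called first, demoting them to debug-build-only output is the pragmatic fix.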

@slaren slaren merged commit 9677640 into master Oct 11, 2024
50 of 53 checks passed
@slaren slaren deleted the sl/backend-debug-prints branch October 11, 2024 13:34
drollings pushed a commit to drollings/llama.cpp that referenced this pull request Oct 18, 2024
* ggml : move more prints to the ggml log system

* show BLAS OpenMP warnings in all builds using debug print
dsx1986 pushed a commit to dsx1986/llama.cpp that referenced this pull request Oct 29, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 15, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 18, 2024