Conversation

@ankurvdev
Contributor

Make sure to read the contributing guidelines before submitting a PR

Add logging hooks to mtmd so that log messages from the library can be redirected to whatever log sinks the client needs.

The main underlying issue is that log messages from mtmd are emitted directly to stdout, which pollutes the output of llama-mtmd-cli, whose stdout is expected to carry only the model's output stream.

This is a rework of an earlier pull request, #17223, which was rejected because mtmd cannot take a dependency on libcommon, the library that would otherwise provide the common logging facilities.
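
For context, here is a minimal client-side sketch of how such a hook could be used to route mtmd log messages to stderr, keeping stdout clean for model output. The callback signature follows the mtmd_log_callback_t typedef proposed in this PR's diff (quoted further below); the exact names were still under review, so treat this as an assumption rather than the final API.

#include <stdarg.h>
#include <stdio.h>
#include "ggml.h"   // for enum ggml_log_level

// Hypothetical client-side sink: forward every mtmd log message to stderr.
static void my_mtmd_log_sink(enum ggml_log_level level, const char * fmt, ...) {
    (void) level;          // could be used to filter by severity
    va_list args;
    va_start(args, fmt);
    vfprintf(stderr, fmt, args);
    va_end(args);
}

// During client initialization (setter name as proposed in this PR):
//     mtmd_set_log_callback(my_mtmd_log_sink);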

Collaborator

@ngxson ngxson left a comment


I don't think we need a whole callback system. Unless you have a specific need, a simple mtmd_helper_set_log_level(ggml_log_level) should do the trick.
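
For illustration, a minimal sketch of what that level-only alternative could look like inside the helper, assuming a module-level threshold that the existing logging sites check before printing (names and details are illustrative only, not part of this PR):

#include <stdio.h>
#include "ggml.h"   // for enum ggml_log_level

// Hypothetical: messages below this threshold are dropped.
static enum ggml_log_level g_mtmd_log_threshold = GGML_LOG_LEVEL_INFO;

void mtmd_helper_set_log_level(enum ggml_log_level level) {
    g_mtmd_log_threshold = level;
}

// Each internal logging site would then gate its output, e.g.:
//     if (level >= g_mtmd_log_threshold) { fprintf(stderr, "%s", msg); }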

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdarg.h>
Collaborator

Why is this needed?

}
static mtmd_log_callback_t log_callback = log_callback_default;

void mtmd_set_log_callback(mtmd_log_callback_t callback) {
Collaborator

Suggested change:
-void mtmd_set_log_callback(mtmd_log_callback_t callback) {
+void mtmd_helper_set_log_callback(mtmd_helper_log_callback_t callback) {

int32_t n_batch,
llama_pos * new_n_past);

typedef void (*mtmd_log_callback_t)(enum ggml_log_level level, const char * fmt, ...);
Collaborator

Same here: prefix everything with mtmd_helper.

@ngxson
Collaborator

ngxson commented Nov 14, 2025

On second thought, it would be cleaner to implement the same pattern as in llama.h, using a callback:

LLAMA_API void llama_log_set(ggml_log_callback log_callback, void * user_data);

Reusing ggml_log_callback instead. I'll make a PR to add such an API to both mtmd and mtmd-helper.
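
For reference, ggml_log_callback (declared in ggml.h) receives the level, the already-formatted text, and a user-data pointer, so an mtmd counterpart of llama_log_set could plausibly be declared and used as sketched below. The function name mtmd_log_set and the use of the MTMD_API export macro are guesses here; the API that was actually added lives in #17268.

#include <stdio.h>
#include "ggml.h"

// Existing typedef in ggml.h:
//     typedef void (*ggml_log_callback)(enum ggml_log_level level, const char * text, void * user_data);

// Hypothetical mtmd counterpart, mirroring llama_log_set:
MTMD_API void mtmd_log_set(ggml_log_callback log_callback, void * user_data);

// Example client callback, sending everything to stderr:
static void my_log_cb(enum ggml_log_level level, const char * text, void * user_data) {
    (void) level;
    (void) user_data;
    fputs(text, stderr);
}

// Registered once at startup:
//     mtmd_log_set(my_log_cb, NULL);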

@ngxson
Collaborator

ngxson commented Nov 14, 2025

Superseded by #17268

@ngxson ngxson closed this Nov 14, 2025
@ankurvdev ankurvdev deleted the logging branch November 14, 2025 21:29

Labels

examples, ggml (changes relating to the ggml tensor library for machine learning)
