
AdminClient doesn't support specifying the logger #1699

Open
watpp opened this issue Jan 2, 2024 · 10 comments
Assignees: pranavrth
Labels: enhancement, usage (Incorrect usage)

Comments
@watpp

watpp commented Jan 2, 2024

Description

The AdminClient in this library doesn't support specifying a logger. It raises:

TypeError: __init__() got an unexpected keyword argument 'logger'

However, the Producer and Consumer clients do. Am I missing something, or is there another way to specify a logger for the AdminClient?

How to reproduce

import logging
import sys

from confluent_kafka.admin import AdminClient

logger = logging.getLogger("kafka_admin")
logger.setLevel(logging.INFO)

handler = logging.StreamHandler(sys.stdout)
# JsonFormatter is a custom formatter (definition omitted); any logging.Formatter works
handler.setFormatter(JsonFormatter("%(message)s"))

logger.addHandler(handler)
logger.propagate = False

config = {"bootstrap.servers": "localhost:9092"}  # placeholder; any valid client config
admin_client = AdminClient(config, logger=logger)  # raises TypeError
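For contrast, the Producer (and Consumer) does accept logger as a keyword argument, as the report says. A minimal sketch; the broker address is an assumption, and the try/except keeps it runnable without confluent-kafka installed:

```python
import logging
import sys

# Set up a standard-library logger; the handler/formatter choice is arbitrary.
logger = logging.getLogger("kafka_producer")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)

try:
    from confluent_kafka import Producer

    # Unlike AdminClient (in v2.2.0), Producer accepts `logger` as a
    # keyword argument; the broker address here is an assumption.
    producer = Producer({"bootstrap.servers": "localhost:9092"}, logger=logger)
except ImportError:
    pass  # confluent-kafka not installed; the logger setup above still stands
```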

Checklist

Please provide the following information:

  • confluent_kafka.version() is ('2.2.0', 33685504)
  • confluent_kafka.libversion() is ('2.2.0', 33685759)
  • OS = ubuntu
@pranavrth
Member

You can use the "logger" property inside the config:

import logging
import sys

from confluent_kafka.admin import AdminClient

logger = logging.getLogger("kafka_admin")
logger.setLevel(logging.INFO)

handler = logging.StreamHandler(sys.stdout)
# JsonFormatter is the custom formatter from the original report
handler.setFormatter(JsonFormatter("%(message)s"))

logger.addHandler(handler)
logger.propagate = False

config["logger"] = logger
admin_client = AdminClient(config)

@pranavrth pranavrth self-assigned this Jan 2, 2024
@watpp
Author

watpp commented Jan 2, 2024

Will the AdminClient code recognize this logger passed as such?

@watpp
Author

watpp commented Jan 15, 2024

@pranavrth Can you comment?

@pranavrth
Member

It should work in the way I have mentioned. Is it not working?

@nhaq-confluent

@watpp did @pranavrth's example solve your issue?

@geoff-va

I've been unable to get this to work as well.

I have Kafka running locally, but it advertises a domain that doesn't exist, so describe_cluster produces an error when I call it. Using the following test code (without setting a logger):

import logging

from confluent_kafka.admin import AdminClient

log = logging.getLogger("test")
log.addHandler(logging.FileHandler("test_log.log"))
log.setLevel("INFO")


if __name__ == "__main__":
    config = {
        "bootstrap.servers": "127.0.0.1:9092",
    }
    log.info("Creating Client")
    client = AdminClient(config)

    future = client.describe_cluster(request_timeout=5)
    future.result()

I get the following:

# stdout/stderr
%3|1710088788.195|FAIL|rdkafka#producer-1| [thrd:kafka:9092/1]: kafka:9092/1: Failed to resolve 'kafka:9092': nodename nor servname provided, or not known (after 2ms in state CONNECT)
%3|1710088789.203|FAIL|rdkafka#producer-1| [thrd:kafka:9092/1]: kafka:9092/1: Failed to resolve 'kafka:9092': nodename nor servname provided, or not known (after 2ms in state CONNECT, 1 identical error(s) suppressed)

# test_log.log
Creating Client

Then when I add the logger into the config:

    config = {
        "bootstrap.servers": "127.0.0.1:9092",
        "logger": log,
    }

I no longer get anything printed to the screen, but the errors are also not written to test_log.log. I've also tried logging.StreamHandler(sys.stdout) as the handler, but only my own log lines are printed to the screen; the Kafka errors never appear.

@pranavrth pranavrth added the bug label Mar 12, 2024
@pranavrth
Member

There is definitely an issue here. I am marking it as a bug to look into further.

@pranavrth
Member

pranavrth commented May 31, 2024

What happens here is that log records are sent to the client's main queue, which is served by poll(). This happens in the Consumer and Producer as well, but those APIs have other important callbacks served from poll() (such as rebalance_cb in the Consumer and delivery_cb in the Producer), so calling poll() is required there anyway, and the examples explain this.

For the admin client, calling poll() is not normally expected, since its main queue mostly serves only the log callback. For the admin client to use the provided logger, call admin_client.poll() whenever queued logs need to be dispatched through it.

This is by design right now. We are considering improving this in the future so that logs are served through a background thread.

TL;DR: Use admin_client.poll to serve the logs with the custom logger.
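The pattern described above can be sketched as follows. The broker address is an assumption, and the try/except keeps the sketch runnable when confluent-kafka is not installed:

```python
import logging
import sys

# Route librdkafka logs through a standard-library logger via the
# "logger" config property, then serve them with admin_client.poll().
logger = logging.getLogger("kafka_admin")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

try:
    from confluent_kafka.admin import AdminClient

    admin_client = AdminClient({
        "bootstrap.servers": "localhost:9092",  # assumed broker address
        "logger": logger,
    })

    # Without this call the queued log callbacks are never dispatched,
    # so nothing reaches `logger` (the behavior reported above).
    admin_client.poll(1.0)
except ImportError:
    pass  # confluent-kafka not installed; sketch only
```

In a long-running service this poll call would need to happen periodically (for example from a small loop or timer thread) for logs to keep flowing.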

@pranavrth pranavrth added the enhancement and usage (Incorrect usage) labels and removed the bug label May 31, 2024
@pranavrth
Member

  1. I am updating the example as well to reflect this.
  2. For now, please use the logger config property instead of passing logger as an argument to the AdminClient. I am also adding logger as an argument to the AdminClient.

@pranavrth
Member

PR for the above changes - #1758
