
When bootstrap.servers is not available, the appender stops the Spring Boot app from running #16

Open
rubesMN opened this issue Nov 6, 2015 · 6 comments

@rubesMN commented Nov 6, 2015

When I start a Spring Boot application with a KafkaAppender and the endpoint is not accessible, then regardless of how I adjust the configuration values, the appender stops my application from running entirely. It just sits, waits for the connection, and retries over and over.

2015-11-06 14:50:16,385 workfront-eureka-server 0 0 [kafka-producer-network-thread | workfront-eureka-server] WARN  org.apache.kafka.common.network.Selector - Error in I/O with /192.168.99.100
java.net.ConnectException: Connection refused

    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_20]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) ~[na:1.8.0_20]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:238) ~[kafka-clients-0.8.2.1.jar!/:na]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [kafka-clients-0.8.2.1.jar!/:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [kafka-clients-0.8.2.1.jar!/:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [kafka-clients-0.8.2.1.jar!/:na]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_20]

My config looks like this:

    <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <!-- This is the default encoder that encodes every log message to a utf8-encoded string -->
        <encoder class="com.github.danielwegener.logback.kafka.encoding.PatternLayoutKafkaMessageEncoder">
            <layout class="ch.qos.logback.classic.PatternLayout">
                <pattern>%date ${myappName} ${parentCallChainID} ${callChainID} [%thread] %.-5level %X{username} %logger - %msg%n</pattern>
            </layout>
        </encoder>
        <topic>logss</topic>
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.ContextNameKeyingStrategy" />
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=192.168.99.100:9094</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- amount of time to wait before attempting to reconnect to a given host after a connection failure; this avoids repeatedly connecting to a host in a tight loop -->
        <producerConfig>reconnect.backoff.ms=490</producerConfig>
        <!-- use gzip to compress each batch of log messages. valid values: none, gzip, snappy -->
        <producerConfig>compression.type=gzip</producerConfig>
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>block.on.buffer.full=false</producerConfig>
        <!-- specify source of request in human readable form -->
        <producerConfig>client.id=${myappName}</producerConfig>

        <!-- there IS a fallback <appender-ref>. -->
        <appender-ref ref="STDOUT"/>
    </appender>

I do not want the logging to stop the system from running. I have other appenders that write to local files. Please tell me I can configure this behavior somehow (i.e., let the system run without the Kafka appender, but of course keep trying to connect).

@danielwegener (Owner)

Interesting :) and a bit surprising. Indeed, with that configuration the appender is not supposed to block anything. I'll try to reproduce that behavior in a test tomorrow, but a minimal (realistic) example would also be very welcome (pom + java main). Thanks for your report!

@danielwegener (Owner)

Oh yeah, I see what you mean. Actually, it does not block the application completely; it lets just one log message through (into the buffer) roughly every 60 seconds, which is the default value of metadata.fetch.timeout.ms.
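
For what it's worth, that timeout can be lowered via a producerConfig; a minimal sketch (the 5000 ms value is just an illustration, not a recommendation from this thread):

    <!-- hypothetical tuning: fail the blocking metadata fetch after 5s instead of the 60s default -->
    <producerConfig>metadata.fetch.timeout.ms=5000</producerConfig>

Note this only shortens the stall: each send() would still block for up to that long while the broker is unreachable.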

This is caused by the fact that kafka-producer 0.8.2 does not support request timeouts for send() calls. This feature will most likely be added in Kafka 0.9 (https://issues.apache.org/jira/browse/KAFKA-2120). I do not see a real workaround here besides a wrapping "async appender" with a bounded queue that keeps one thread hammering on the blocking kafka-producer send method while throwing away new messages if the bounded queue becomes full (an even lossier scenario). This could be achieved by putting the KafkaAppender inside a logback AsyncAppender (http://logback.qos.ch/manual/appenders.html#AsyncAppender), as sketched below.
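
A rough sketch of that wrapper (the property names are from the logback AsyncAppender documentation; the values are illustrative):

    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <!-- bounded in-memory queue in front of the blocking kafka-producer -->
        <queueSize>256</queueSize>
        <!-- newer logback releases can drop messages instead of blocking once the queue is full -->
        <neverBlock>true</neverBlock>
        <appender-ref ref="kafkaAppender" />
    </appender>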

This problem is definitely worth mentioning in the README.

@rubesMN (Author) commented Nov 9, 2015

I just tested this again, and the KafkaAppender definitely stops my Spring Boot app from accepting requests defined in @RestController-annotated classes, and it stops the app from registering with Eureka.

I followed the example in the logback doc you provided and was able to get around my not-accepting-requests issue by wrapping the appender in an AsyncAppender. Thanks for that suggestion.

<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="kafkaAppender" />
</appender>

My only issue now is that my Spring Boot app hangs on shutdown. I'll try fiddling with the maxFlushTime config (sketched below) and see if that helps. ... and it didn't. Not sure what to do about that.
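
For reference, maxFlushTime is set on the wrapping AsyncAppender; a sketch of this kind of tuning (logback's documented default is 1000 ms, so the value below is only illustrative, and it did not resolve the hang in this case):

    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <!-- on shutdown, discard queued events that cannot be flushed within 100 ms -->
        <maxFlushTime>100</maxFlushTime>
        <appender-ref ref="kafkaAppender" />
    </appender>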

@danielwegener (Owner)

I currently have no good idea how to handle that. I think we have to wait for Kafka 0.9 (https://issues.apache.org/jira/browse/KAFKA-2120) to solve this in a nice way.

@danielwegener added this to the 0.1.0 milestone Nov 25, 2015
@tendant commented Dec 23, 2015

I am interested in this issue as well, since Kafka 0.9 has now been released and https://issues.apache.org/jira/browse/KAFKA-2120 has been resolved.

Thanks.

@danielwegener (Owner)

I released logback-kafka-appender 0.0.5 with Kafka 0.9.0 today. Maybe you can try fiddling with the request.timeout.ms setting described in the Kafka producer configuration documentation.
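
A sketch of what that tuning might look like (the values are examples only; max.block.ms is the 0.9 producer setting that bounds how long send() itself may block):

    <!-- fail a request if the broker does not respond within 5 seconds -->
    <producerConfig>request.timeout.ms=5000</producerConfig>
    <!-- bound how long send() may block, e.g. while waiting for metadata -->
    <producerConfig>max.block.ms=1000</producerConfig>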
