
replication-offset-checkpoint.tmp (No such file or directory) #194

Closed
MichalOrlowski opened this issue Oct 13, 2016 · 68 comments · Fixed by #1471

Comments

@MichalOrlowski

MichalOrlowski commented Oct 13, 2016

We are using Embedded Kafka for integration testing.
On Jenkins we sometimes get a FileNotFoundException.
Question: is there a way to turn off the ReplicaManager, or some other workaround?

2016-10-13 07:53:35.161 ERROR 9469 --- [fka-scheduler-1] kafka.server.ReplicaManager              : [Replica Manager on Broker 0]: Error writing to highwatermark file: 

java.io.FileNotFoundException: /tmp/kafka-3266968904825284552/replication-offset-checkpoint.tmp (No such file or directory)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
    at kafka.server.OffsetCheckpoint.write(OffsetCheckpoint.scala:37)
    at kafka.server.ReplicaManager$$anonfun$checkpointHighWatermarks$2.apply(ReplicaManager.scala:874)
    at kafka.server.ReplicaManager$$anonfun$checkpointHighWatermarks$2.apply(ReplicaManager.scala:871)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
    at scala.collection.immutable.Map$Map1.foreach(Map.scala:116)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
    at kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:871)
    at kafka.server.ReplicaManager$$anonfun$1.apply$mcV$sp(ReplicaManager.scala:153)
    at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
@artembilan
Member

Doesn't that mean that you have a low ulimit on that server?
https://easyengine.io/tutorials/linux/increase-open-files-limit/

@MichalOrlowski
Author

No, that's not it. I tried increasing it, but still got the same problem.

@artembilan
Member

Well, please share as much info as possible then: a test case we can run on our side, how many concurrent tests you have, whether there is some process on that server which cleans the /tmp dir, which Kafka version you use, etc.

This looks like an Apache Kafka issue more than a Spring Kafka one...
I'm afraid we can't switch off the ReplicaManager...

@MichalOrlowski
Author

MichalOrlowski commented Mar 8, 2017

I've noticed that this issue occurs when Jenkins builds two microservices simultaneously (both using Kafka Test). I think the first build cleans up /tmp and the second one can no longer access it.
The /tmp directory is global to Jenkins (not per build).

Question: is there a way to customize the /tmp directory name? We need to be able to build services simultaneously.

@artembilan
Member

As I said before: this is a question for Apache Kafka directly.

public EmbeddedZookeeper() {
        this.snapshotDir = kafka.utils.TestUtils$.MODULE$.tempDir();
        this.logDir = kafka.utils.TestUtils$.MODULE$.tempDir();

I'm not familiar with Scala, so I can't debug and interpret what they do there...

@garyrussell
Contributor

garyrussell commented Mar 8, 2017

It seems unlikely to be related to concurrent builds; in my experience, the /kafka-n/ directory gets a different n on each run.

Yeah - it uses Files.createTempDirectory(), which uses random.nextLong() where random is a new SecureRandom, so it's pretty unlikely that there would be a collision.
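For the curious, here is a quick stdlib-only sketch of that point (the class name is mine, not from the thread): each `Files.createTempDirectory` call derives a fresh random suffix, so two concurrent builds get distinct directories.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempDirDemo {
    public static void main(String[] args) throws IOException {
        // Each call appends a fresh SecureRandom-derived long to the prefix,
        // so two concurrent JVMs (or builds) get distinct directories.
        Path dir1 = Files.createTempDirectory("kafka-");
        Path dir2 = Files.createTempDirectory("kafka-");
        System.out.println(dir1.equals(dir2)); // prints "false"
        Files.delete(dir1);
        Files.delete(dir2);
    }
}
```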

@MichalOrlowski
Author

MichalOrlowski commented Mar 8, 2017

Doesn't the ReplicaManager.checkpointHighWatermarks() method checkpoint all available replicas?

I think it's possible for /tmp/kafka-n to be removed by another thread.

@see core/src/main/scala/kafka/server/ReplicaManager.scala

@artembilan
Member

I still don't understand why you ask this question of us, the Spring community, when you see the problem in Apache Kafka directly.
Let's move this discussion to Stack Overflow: there is nothing to fix or improve from the Spring perspective.

If you insist it is the ReplicaManager, please go to the Apache Kafka community and ask there.
That is a level of the integration so low that we are simply not aware of it.

Sorry, but it looks like we (at least I) are of no use to you on this topic, and I don't understand why you spend time with us rather than the Apache Kafka community.

I would be glad to see a cross-link from there to widen knowledge on this Kafka topic.

Thanks

@MichalOrlowski
Author

For now I've just mocked the ReplicaManager.shutdown() method to return false.

Thanks for your comments.

@hoaz

hoaz commented Apr 20, 2017

I found this thread via Google while looking for a solution. I did not find one here, but this is the only place that refers to the problem I encountered.
I did some analysis and want to share it with others. Here is what happens:

  1. Kafka's TestUtils.tempDirectory method is used to create the temporary directory for the embedded Kafka broker. It also registers a shutdown hook which deletes this directory when the JVM exits.
  2. When the unit test finishes execution, it calls System.exit, which in turn executes all registered shutdown hooks.

If the Kafka broker is still running at the end of the unit test, it will attempt to write/read data in a directory that has been deleted, producing various FileNotFound exceptions.

Solution: shut down the embedded Kafka broker at the end of the test, before System.exit is called.

If the KafkaEmbedded rule is used properly, it will call the KafkaEmbedded#after method, which destroys the broker before System#exit is called.

I use the KafkaEmbedded class in a Spring integration test and create it as a bean. Unfortunately the Spring context is destroyed in a shutdown hook as well, and that happens concurrently with the other shutdown hooks, so the Kafka log directory is destroyed before the embedded Kafka broker is down. I have not found a proper solution for this usage scenario yet.
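The race described above can be sketched with plain JDK shutdown hooks (a hypothetical reduction, not the actual Kafka/Spring code): hook A mimics the temp-dir cleanup hook, hook B mimics a broker thread still touching the directory at shutdown. Because hooks run concurrently, B may or may not still see the directory.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ShutdownHookRace {
    public static void main(String[] args) throws IOException {
        Path logDir = Files.createTempDirectory("kafka-");
        // Hook A: mimics Kafka's TestUtils.tempDirectory cleanup hook.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                Files.deleteIfExists(logDir);
            } catch (IOException ignored) {
            }
        }));
        // Hook B: mimics a broker thread still checkpointing at shutdown.
        // All hooks run concurrently, so B may find logDir already gone.
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                System.out.println("log dir present during hook B: " + Files.exists(logDir))));
    }
}
```

The printed value is nondeterministic, which is exactly why the failure only shows up "sometimes" on Jenkins.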

@artembilan
Member

@hoaz ,

thank you for your report.
The question is: why do you use KafkaEmbedded as a bean?
Why is @ClassRule not sufficient for you?

@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, INT_KEY_TOPIC, STRING_KEY_TOPIC);

If kafka broker runs at the end of unit test it will attempt to write/read data in a dir which is deleted and produces different FileNotFound exceptions.

Well, it would be better to stop writing at the end of the test, e.g. call stop() on the endpoints, or verify that the sent message has reached the broker before exiting the test. Right?

@hoaz

hoaz commented Apr 20, 2017

Because I have multiple integration tests which share the same Spring test context. I do not want to bring up and shut down a Kafka broker in each test, but rather delegate that to Spring, which caches the context between tests.
This approach speeds up integration tests with a common Spring test context significantly.

@artembilan
Member

Would you mind sharing the code for how you do that?
According to the @ClassRule JavaDocs we have:

 * For example, here is a test suite that connects to a server once before
 * all the test classes run, and disconnects after they are finished:
 * <pre>
 * @RunWith(Suite.class)
 * @SuiteClasses({A.class, B.class, C.class})
 * public class UsesExternalResource {
 *     public static Server myServer = new Server();
 *
 *     @ClassRule
 *     public static ExternalResource resource = new ExternalResource() {

So placing the KafkaEmbedded rule there, at the suite level, should make us happy. Shouldn't it?

@hoaz

hoaz commented Apr 20, 2017

OK, this is a valid solution; there is only one thing I do not like about it.
Our build uses the default Surefire configuration, which scans and runs all integration tests matching a specific wildcard. So every time a new test is added, the developer needs to make sure it is included in the test suite, and changes to the build configuration have to be made as well.

It would be really good if spring-kafka-test could support embedded Kafka as a bean, especially taking the project name into account :)

Here is how we run embedded kafka broker inside of spring container:

@Configuration
public class EmbeddedKafkaConfiguration {

    @Bean(destroyMethod = "after")
    public static KafkaEmbedded kafkaEmbedded() throws Exception {
        return new KafkaEmbeddedBean(1, true, 1, TestApplication.TOPIC_NAME);
    }

    private static final class KafkaEmbeddedBean extends KafkaEmbedded {

        public KafkaEmbeddedBean(int count, boolean controlledShutdown, int partitions, String... topics) throws Exception {
            super(count, controlledShutdown, partitions, topics);
            before();
            System.setProperty("kafka.broker.list", getBrokersAsString());
        }
        
    }

}

@artembilan
Member

Good, thank you for sharing that and for your ideas!

So, doesn't that mean that destroyMethod = "after" may be called only after this hook has already run:

Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        Utils.delete(file);
    }
});

And all we get then is just some ERROR in the logs?

@hoaz

hoaz commented Apr 20, 2017

Yes, the Spring context is closed in parallel with the other shutdown hooks; they are executed in separate threads (see java.lang.ApplicationShutdownHooks#runHooks). The Spring shutdown hook is rather slow, so in most cases destroyMethod = "after" will be called after the Kafka log directories are already gone.

And yes again, you are right: ERRORs in the logs do not fail the build, they just sit there making me nervous every time I analyze the logs.

@artembilan
Member

OK. Great!
So now, please share the latest ERROR for that case.
I think I have an idea what to do in KafkaEmbedded.after() to avoid such a late shutdown, because it just does not make sense any more.

@artembilan artembilan reopened this Apr 21, 2017
@hoaz

hoaz commented Apr 21, 2017

I have two errors actually.

This one originates from KafkaEmbedded.after():

15:17:24,975 [                      Thread-8] FATAL ReplicaManager:118 - [Replica Manager on Broker 0]: Error writing to highwatermark file: 
java.io.FileNotFoundException: /tmp/kafka-1318430730057043027/replication-offset-checkpoint.tmp (No such file or directory)
	at java.io.FileOutputStream.open(Native Method)
	at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
	at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
	at kafka.server.OffsetCheckpoint.write(OffsetCheckpoint.scala:49)
	at kafka.server.ReplicaManager$$anonfun$checkpointHighWatermarks$2.apply(ReplicaManager.scala:948)
 	at kafka.server.ReplicaManager$$anonfun$checkpointHighWatermarks$2.apply(ReplicaManager.scala:945)
	at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
	at scala.collection.immutable.Map$Map1.foreach(Map.scala:116)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
	at kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:945)
	at kafka.server.ReplicaManager.shutdown(ReplicaManager.scala:964)
	at kafka.server.KafkaServer$$anonfun$shutdown$7.apply$mcV$sp(KafkaServer.scala:590)
	at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:78)
	at kafka.utils.Logging$class.swallowWarn(Logging.scala:94)
	at kafka.utils.CoreUtils$.swallowWarn(CoreUtils.scala:48)
	at kafka.utils.Logging$class.swallow(Logging.scala:96)
	at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:48)
	at kafka.server.KafkaServer.shutdown(KafkaServer.scala:590)
	at org.springframework.kafka.test.rule.KafkaEmbedded.after(KafkaEmbedded.java:173)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.springframework.beans.factory.support.DisposableBeanAdapter.invokeCustomDestroyMethod(DisposableBeanAdapter.java:300)
	at org.springframework.beans.factory.support.DisposableBeanAdapter.destroy(DisposableBeanAdapter.java:226)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:499)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:475)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingletons(DefaultSingletonBeanRegistry.java:443)
	at org.springframework.context.support.AbstractApplicationContext.destroyBeans(AbstractApplicationContext.java:1078)
	at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:1052)
	at org.springframework.context.support.AbstractApplicationContext$1.run(AbstractApplicationContext.java:970)

And this one is from the Kafka LogCleaner; I guess it will be hard to get rid of it:

15:17:24,828 [    kafka-log-cleaner-thread-0] ERROR LogCleaner:105 - [kafka-log-cleaner-thread-0], Error due to 
java.io.FileNotFoundException: /tmp/kafka-1318430730057043027/cleaner-offset-checkpoint (No such file or directory)
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:146)
	at java.io.FileReader.<init>(FileReader.java:72)
	at kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:86)
	at kafka.log.LogCleanerManager$$anonfun$allCleanerCheckpoints$1.apply(LogCleanerManager.scala:81)
	at kafka.log.LogCleanerManager$$anonfun$allCleanerCheckpoints$1.apply(LogCleanerManager.scala:81)
	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
	at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
	at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
	at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
	at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
	at kafka.log.LogCleanerManager.allCleanerCheckpoints(LogCleanerManager.scala:81)
	at kafka.log.LogCleanerManager$$anonfun$grabFilthiestCompactedLog$1.apply(LogCleanerManager.scala:92)
	at kafka.log.LogCleanerManager$$anonfun$grabFilthiestCompactedLog$1.apply(LogCleanerManager.scala:89)
	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
	at kafka.log.LogCleanerManager.grabFilthiestCompactedLog(LogCleanerManager.scala:89)
	at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:234)
	at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:220)
	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

@artembilan
Member

@hoaz ,

you don't need

System.setProperty("kafka.broker.list", getBrokersAsString());

KafkaEmbedded already does the equivalent:

System.setProperty(SPRING_EMBEDDED_KAFKA_BROKERS, getBrokersAsString());

at the end of before().

@hoaz

hoaz commented Apr 21, 2017

Right; long story short, the test @Configuration I provided above is appended to the main application configuration, and that line overrides the non-test property kafka.broker.list, which we use in DEV, UAT, and PROD.

For test-only purposes SPRING_EMBEDDED_KAFKA_BROKERS is sufficient.

artembilan added a commit to artembilan/spring-kafka that referenced this issue Apr 21, 2017
Fixes: spring-projects#194

To avoid concurrent calls of the shutdown hooks, make `KafkaEmbedded`
a `Lifecycle` so that the ApplicationContext calls `stop()` before
destroying itself.

That way the Kafka servers, ZooKeeper, and the working directory
are destroyed and removed before the JVM shutdown hooks run,
and `FileNotFoundException /tmp/kafka-3266968904825284552/replication-offset-checkpoint.tmp`
should disappear.
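The idea behind that fix can be sketched without Spring (the `Lifecycle` interface below is a stand-in for Spring's, and all names are mine, not from the commit): the owner of the resource stops it explicitly before the temp directory is deleted, instead of letting two JVM shutdown hooks race.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OrderedShutdown {
    // Stand-in for Spring's Lifecycle contract.
    interface Lifecycle {
        void stop();
    }

    public static void main(String[] args) throws IOException {
        Path logDir = Files.createTempDirectory("kafka-");
        Lifecycle broker = () ->
                System.out.println("broker stopped, dir present: " + Files.exists(logDir));
        // Explicit stop *before* cleanup: the broker sees its files intact,
        // unlike the shutdown-hook race where the dir may already be gone.
        broker.stop();
        Files.deleteIfExists(logDir);
    }
}
```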
@Rouche

Rouche commented Feb 28, 2020

This creates another error:

KafkaException: Failed to acquire lock on file .lock in C:\tmp\kafka-data\MyService. A Kafka instance in another process or thread is using this directory.

Anyway, you don't need to specify a log dir; Kafka already uses a random one with a seed in its name.

@mhyeon-lee
Contributor

I solved this problem.

As in #194 (comment), the tmp folder is removed at JVM shutdown, and when another shutdown hook is still using Kafka, Embedded Kafka calls Runtime.getRuntime().halt(1).

The following code can prevent halt(1) from ending the test.

Exit.setHaltProcedure((statusCode, message) -> {
    if (statusCode != 1) {
        Runtime.getRuntime().halt(statusCode);
    }
});

@garyrussell
Contributor

Interesting; thanks.

@Rouche

Rouche commented Apr 14, 2020

Where does the Exit object come from?
kafka.utils
or
org.apache.kafka.common.utils?

Since your code tests for an int, I guess the latter?

Anyway, it creates another error, but at least the tests are not failing.

java.nio.file.NoSuchFileException: C:\Users\user\AppData\Local\Temp\kafka-7154222364939470958\.kafka_cleanshutdown
	at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:85)
	at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
	at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:108)
	at java.base/sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:235)
	at java.base/java.nio.file.Files.newByteChannel(Files.java:370)
	at java.base/java.nio.file.Files.createFile(Files.java:647)
	at kafka.log.LogManager.$anonfun$shutdown$17(LogManager.scala:478)
	at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:88)
	at kafka.log.LogManager.$anonfun$shutdown$11(LogManager.scala:478)
	at kafka.log.LogManager.$anonfun$shutdown$11$adapted(LogManager.scala:466)
	at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:877)
	at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
	at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
	at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
	at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
	at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:876)
	at kafka.log.LogManager.shutdown(LogManager.scala:466)
	at kafka.server.KafkaServer.$anonfun$shutdown$17(KafkaServer.scala:626)
	at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:88)
	at kafka.server.KafkaServer.shutdown(KafkaServer.scala:626)
	at org.springframework.kafka.test.EmbeddedKafkaBroker.destroy(EmbeddedKafkaBroker.java:384)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:571)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:543)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingleton(DefaultListableBeanFactory.java:1072)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingletons(DefaultSingletonBeanRegistry.java:504)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingletons(DefaultListableBeanFactory.java:1065)
	at org.springframework.context.support.AbstractApplicationContext.destroyBeans(AbstractApplicationContext.java:1060)
	at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:1029)
	at org.springframework.context.support.AbstractApplicationContext$1.run(AbstractApplicationContext.java:948)

@gustavomaia

I solved this problem.

As in #194 (comment), the tmp folder is removed at JVM shutdown, and when another shutdown hook is still using Kafka, Embedded Kafka calls Runtime.getRuntime().halt(1).

The following code can prevent halt(1) from ending the test.

Exit.setHaltProcedure((statusCode, message) -> {
    if (statusCode != 1) {
        Runtime.getRuntime().halt(statusCode);
    }
});

Hey, I have the same issue and am trying to understand your comment: where do you add this, and how do you use the SecurityManager?

Thanks in advance.

@mhyeon-lee
Contributor

mhyeon-lee commented Apr 14, 2020

@Rouche
org.apache.kafka.common.utils.Exit

While the JVM shutdown hooks are running, the Kafka log file is deleted, and Exit.halt(1) is called when another shutdown hook accesses the Kafka log file at the same time.

Since halt is called here with status 1, I only defend against 1:
https://github.com/a0x8o/kafka/blob/master/core/src/main/scala/kafka/log/LogManager.scala#L193

If you encounter a situation where the test fails with a different status value, you can add more defensive code.

An error log may still occur, but the test will not fail because the command is not propagated to Runtime.halt.

@mhyeon-lee
Contributor

@gustavomaia
Since it is a static setting, you can put it anywhere you want.
However, it must be executed before the test run completes.

class SomeTest {
    static {
        Exit.setHaltProcedure((statusCode, message) -> {
            if (statusCode != 1) {
                Runtime.getRuntime().halt(statusCode);
            }
        });
    }

    @Test
    void test1() {
    }

    @Test
    void test2() {
    }
}

Even with a SecurityManager, the test can fail.
Exit/halt can be prevented by throwing an exception from the SecurityManager's checkExit.
However, an exception thrown in a JVM shutdown hook cannot be caught, so the test fails.

@garyrussell garyrussell reopened this Apr 27, 2020
garyrussell added a commit to garyrussell/spring-kafka that referenced this issue Apr 27, 2020
artembilan pushed a commit that referenced this issue Apr 27, 2020
See #194
See #345
See gradle/gradle#11195

**cherry-pick to 2.4.x, 2.3.x, 2.2.x, 1.3.x**
@garyrussell garyrussell reopened this May 1, 2020
garyrussell added a commit to garyrussell/spring-kafka that referenced this issue May 1, 2020
Resolves spring-projects#194

Create the temporary directory in EKB instead of the broker to avoid
`NoSuchFileException`s during shutdown.

**cherry-pick to 2.4.x, 2.3.x, 2.2.x**
artembilan pushed a commit that referenced this issue May 1, 2020
Resolves #194

Create the temporary directory in EKB instead of the broker to avoid
`NoSuchFileException`s during shutdown.

**cherry-pick to 2.4.x, 2.3.x, 2.2.x**