Set up a dummy Minestom server that starts and stops. The wrapper will not stop: some of its threads are not daemon threads, so the JVM keeps running even though the application has terminated.
```java
// Minimal reproducer: start Minestom, wait, then stop cleanly.
MinecraftServer server = MinecraftServer.init();
server.start(System.getProperty("service.bind.host"), Integer.getInteger("service.bind.port"));
try {
    Thread.sleep(10000);
} catch (InterruptedException e) {
    throw new RuntimeException(e);
}
// Returns normally, but the JVM keeps running afterwards.
MinecraftServer.stopCleanly();
```
A potential fix is to make all threads in the wrapper daemon threads. This should be fine because the node shouldn't care if the wrapper simply stops responding.
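For illustration, here is a minimal sketch of what such a daemon `ThreadFactory` could look like. The class and thread names are hypothetical, chosen to mirror the `Packet-Dispatcher` threads from the dump below; this is not the wrapper's actual code.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class DaemonThreadFactoryDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger id = new AtomicInteger();
        // Every thread created by this factory is a daemon, so a pool
        // built from it can never keep the JVM alive on its own.
        ThreadFactory daemonFactory = runnable -> {
            Thread thread = new Thread(runnable, "Packet-Dispatcher-" + id.getAndIncrement());
            thread.setDaemon(true);
            return thread;
        };
        ExecutorService pool = Executors.newFixedThreadPool(2, daemonFactory);
        pool.submit(() -> System.out.println("daemon=" + Thread.currentThread().isDaemon()));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Passing such a factory to every executor the wrapper creates would let the JVM exit as soon as the last non-daemon (application) thread finishes.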
Issue uniqueness
Yes, this issue is unique. There are no similar issues.
Edit:
Here is a thread dump of all non-daemon threads
"multithreadEventLoopGroup-1-1" #18 prio=10 os_prio=2 cpu=0.00ms elapsed=89.03s tid=0x000001fe547ef9c0 nid=0x5604 runnable [0x00000089a82ff000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.WEPoll.wait(java.base@17.0.4.1/Native Method)
at sun.nio.ch.WEPollSelectorImpl.doSelect(java.base@17.0.4.1/WEPollSelectorImpl.java:111)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base@17.0.4.1/SelectorImpl.java:129)
- locked <0x00000000e0763928> (a io.netty5.channel.nio.SelectedSelectionKeySet)
- locked <0x00000000e07638c8> (a sun.nio.ch.WEPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(java.base@17.0.4.1/SelectorImpl.java:141)
at io.netty5.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
at io.netty5.channel.nio.NioHandler.select(NioHandler.java:578)
at io.netty5.channel.nio.NioHandler.run(NioHandler.java:361)
at io.netty5.channel.SingleThreadEventLoop.runIO(SingleThreadEventLoop.java:192)
at io.netty5.channel.SingleThreadEventLoop.run(SingleThreadEventLoop.java:176)
at io.netty5.util.concurrent.SingleThreadEventExecutor.lambda$doStartThread$4(SingleThreadEventExecutor.java:774)
at io.netty5.util.concurrent.SingleThreadEventExecutor$$Lambda$249/0x0000000800d8d248.run(Unknown Source)
at io.netty5.util.internal.ThreadExecutorMap.lambda$apply$1(ThreadExecutorMap.java:68)
at io.netty5.util.internal.ThreadExecutorMap$$Lambda$250/0x0000000800d8d468.run(Unknown Source)
at io.netty5.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(java.base@17.0.4.1/Thread.java:833)
Locked ownable synchronizers:
- None
"Packet-Dispatcher-0" #22 prio=5 os_prio=0 cpu=0.00ms elapsed=88.94s tid=0x000001fe54b4a920 nid=0x3ed0 waiting on condition [0x00000089a87ff000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@17.0.4.1/Native Method)
- parking to wait for <0x00000000e07656c0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(java.base@17.0.4.1/LockSupport.java:341)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(java.base@17.0.4.1/AbstractQueuedSynchronizer.java:506)
at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.4.1/ForkJoinPool.java:3463)
at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.4.1/ForkJoinPool.java:3434)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@17.0.4.1/AbstractQueuedSynchronizer.java:1623)
at java.util.concurrent.LinkedBlockingQueue.take(java.base@17.0.4.1/LinkedBlockingQueue.java:435)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.4.1/ThreadPoolExecutor.java:1062)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.4.1/ThreadPoolExecutor.java:1122)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.4.1/ThreadPoolExecutor.java:635)
at java.lang.Thread.run(java.base@17.0.4.1/Thread.java:833)
Locked ownable synchronizers:
- None
"Packet-Dispatcher-1" #23 prio=5 os_prio=0 cpu=15.63ms elapsed=88.94s tid=0x000001fe5389dce0 nid=0x6838 waiting on condition [0x00000089a88fe000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@17.0.4.1/Native Method)
- parking to wait for <0x00000000e07656c0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(java.base@17.0.4.1/LockSupport.java:341)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(java.base@17.0.4.1/AbstractQueuedSynchronizer.java:506)
at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.4.1/ForkJoinPool.java:3463)
at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.4.1/ForkJoinPool.java:3434)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@17.0.4.1/AbstractQueuedSynchronizer.java:1623)
at java.util.concurrent.LinkedBlockingQueue.take(java.base@17.0.4.1/LinkedBlockingQueue.java:435)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.4.1/ThreadPoolExecutor.java:1062)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.4.1/ThreadPoolExecutor.java:1122)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.4.1/ThreadPoolExecutor.java:635)
at java.lang.Thread.run(java.base@17.0.4.1/Thread.java:833)
Locked ownable synchronizers:
- None
"Packet-Dispatcher-2" #25 prio=5 os_prio=0 cpu=0.00ms elapsed=88.37s tid=0x000001fe53888b60 nid=0x59e4 waiting on condition [0x00000089a77ff000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@17.0.4.1/Native Method)
- parking to wait for <0x00000000e07656c0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(java.base@17.0.4.1/LockSupport.java:341)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(java.base@17.0.4.1/AbstractQueuedSynchronizer.java:506)
at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.4.1/ForkJoinPool.java:3463)
at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.4.1/ForkJoinPool.java:3434)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@17.0.4.1/AbstractQueuedSynchronizer.java:1623)
at java.util.concurrent.LinkedBlockingQueue.take(java.base@17.0.4.1/LinkedBlockingQueue.java:435)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.4.1/ThreadPoolExecutor.java:1062)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.4.1/ThreadPoolExecutor.java:1122)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.4.1/ThreadPoolExecutor.java:635)
at java.lang.Thread.run(java.base@17.0.4.1/Thread.java:833)
Locked ownable synchronizers:
- None
"DestroyJavaVM" #27 prio=5 os_prio=0 cpu=250.00ms elapsed=88.35s tid=0x000001fe538899d0 nid=0xc4 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Locked ownable synchronizers:
- None
"Packet-Dispatcher-3" #55 prio=5 os_prio=0 cpu=31.25ms elapsed=86.47s tid=0x000001fe547318a0 nid=0x3d94 waiting on condition [0x00000089a8bff000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@17.0.4.1/Native Method)
- parking to wait for <0x00000000e07656c0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(java.base@17.0.4.1/LockSupport.java:341)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionNode.block(java.base@17.0.4.1/AbstractQueuedSynchronizer.java:506)
at java.util.concurrent.ForkJoinPool.unmanagedBlock(java.base@17.0.4.1/ForkJoinPool.java:3463)
at java.util.concurrent.ForkJoinPool.managedBlock(java.base@17.0.4.1/ForkJoinPool.java:3434)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@17.0.4.1/AbstractQueuedSynchronizer.java:1623)
at java.util.concurrent.LinkedBlockingQueue.take(java.base@17.0.4.1/LinkedBlockingQueue.java:435)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@17.0.4.1/ThreadPoolExecutor.java:1062)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@17.0.4.1/ThreadPoolExecutor.java:1122)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@17.0.4.1/ThreadPoolExecutor.java:635)
at java.lang.Thread.run(java.base@17.0.4.1/Thread.java:833)
Locked ownable synchronizers:
- None
The problem is that the Netty threads aren't daemon threads: they only shut down after `System.exit`, not after all threads created by the Minestom server have exited normally. `System.exit` is a workaround.
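The hang and the workaround can be reproduced without Minestom at all; in this sketch a parked non-daemon thread simply stands in for Netty's event loop:

```java
import java.util.concurrent.CountDownLatch;

public class ExitWorkaround {
    public static void main(String[] args) {
        // Simulate a leaked non-daemon thread like Netty's event loop:
        // it parks forever and would keep the JVM alive after main returns.
        Thread leaked = new Thread(() -> {
            try {
                new CountDownLatch(1).await();
            } catch (InterruptedException ignored) {
            }
        }, "fake-event-loop");
        leaked.start();

        System.out.println("shutdown complete, forcing exit");
        // Without this call the process would hang exactly as described above.
        System.exit(0);
    }
}
```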
### Motivation
Minestom is a very lightweight framework for implementing specialized
Minecraft servers.
Especially when the full implementation or behavior of vanilla Minecraft
is not needed, Minestom offers a way to run services with minimal
resources.
One issue that arose in conjunction with CloudNet is that services
implemented using Minestom would not stop properly: Minestom would shut
down and the service would get unregistered from CloudNet, but the
process would keep running.
### Modification
This pull request mimics the behavior of the vanilla Minecraft server
implementations, which all call `System.exit(0);` when their process is
done. This change makes sure that the JVM shuts down properly, even if
other threads would block an ordinary shutdown from happening (i.e.
Netty and other networking-related tasks).
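One point worth noting: `System.exit(0)` does not skip cleanup, as it still runs all registered shutdown hooks before the JVM terminates. A small standalone demonstration:

```java
public class ShutdownHookDemo {
    public static void main(String[] args) {
        // System.exit(0) still executes registered shutdown hooks, so
        // cleanup logic (flushing logs, closing sockets) is not skipped.
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                System.out.println("hook ran")));
        System.exit(0);
    }
}
```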
### Result
Services using the Minestom framework now properly shut down and allow
CloudNet to advance to the next lifecycle steps.
##### Other context
Fixes #1304
---------
Co-authored-by: 0utplay <aldin@sijamhodzic.de>
Stacktrace
No response
CloudNet version
```
[31.08 10:51:24.867] INFO:
[31.08 10:51:24.868] INFO: CloudNet Blizzard 4.0.0-RC9 f6ca4c3
[31.08 10:51:24.868] INFO: Discord: https://discord.cloudnetservice.eu/
[31.08 10:51:24.868] INFO:
[31.08 10:51:24.869] INFO: ClusterId: deebb2f9--41cd--246e55129822
[31.08 10:51:24.869] INFO: NodeId: Node-1
[31.08 10:51:24.869] INFO: Head-NodeId: Node-1
[31.08 10:51:24.870] INFO: CPU usage: (P/S) .36/.34/100%
[31.08 10:51:24.870] INFO: Node services memory allocation (U/R/M): 1536/1536/4096 MB
[31.08 10:51:24.871] INFO: Threads: 55
[31.08 10:51:24.871] INFO: Heap usage: 42/256MB
[31.08 10:51:24.871] INFO: JVM: Amazon.com Inc. 17 (OpenJDK 64-Bit Server VM 17.0.4.1+9-LTS)
[31.08 10:51:24.871] INFO: Update Repo: CloudNetService/launchermeta, Update Branch: beta
[31.08 10:51:24.872] INFO:
```