
[Bug]: too many open files #298

Closed

osc86 opened this issue Mar 31, 2023 · 4 comments

osc86 commented Mar 31, 2023

Controller Version

v5.9.31

Describe the Bug

Every time I start the container, I get the following error and the container changes status to unhealthy.
ERROR [main] [] c.t.s.o.s.t.DeviceConnectorSeverTask(): Failed to start up net server because of
java.lang.IllegalStateException: failed to create a child event loop
Caused by: io.netty.channel.ChannelException: timerfd_create() failed: Too many open files

The controller and the web UI are working and I can adopt devices, but I can't download firmware updates, and this error message is filling up my logs.

Expected Behavior

I don't know whether the issue is with the container image, the Omada software, or the Docker configuration, but it doesn't look right.
I'm using standard settings, just like with every other container, and have never had this problem before.

Steps to Reproduce

  1. Download the image (in my case via Portainer running on Fedora CoreOS)
  2. Add volumes for persistent storage ( /opt/tplink/EAPController/{data,logs} )
  3. Leave everything else at the defaults: no ENV parameters or labels, no changes to Runtime & Resources or Capabilities (a roughly equivalent docker run is sketched below)
  4. Start the container
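
A roughly equivalent plain docker run would be the following (a sketch of my Portainer setup; the image tag and volume names are assumed, not copied from it):

    # sketch of the equivalent docker run (image tag and volume names assumed)
    docker run -d \
      --name omada-controller \
      -v omada-data:/opt/tplink/EAPController/data \
      -v omada-logs:/opt/tplink/EAPController/logs \
      mbentley/omada-controller:5.9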

How You're Launching the Container

"Args": [
            "/usr/bin/java",
            "-server",
            "-Xms128m",
            "-Xmx1024m",
            "-XX:MaxHeapFreeRatio=60",
            "-XX:MinHeapFreeRatio=30",
            "-XX:+HeapDumpOnOutOfMemoryError",
            "-XX:HeapDumpPath=/opt/tplink/EAPController/logs/java_heapdump.hprof",
            "-Djava.awt.headless=true",
            "-cp",
            "/opt/tplink/EAPController/lib/*::/opt/tplink/EAPController/properties:",
            "com.tplink.smb.omada.starter.OmadaLinuxMain"
        ],

Container Logs

/\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.6.6)
03-31-2023 14:40:43.297 INFO [main] [] c.t.s.o.s.OmadaLinuxMain(): Starting OmadaLinuxMain v5.9.31 using Java 17.0.6 on omada with PID 1 (/opt/tplink/EAPController/lib/local-starter-5.9.31.jar started by omada in /opt/tplink/EAPController/lib)
03-31-2023 14:40:43.301 INFO [main] [] c.t.s.o.s.OmadaLinuxMain(): No active profile set, falling back to 1 default profile: "default"
03-31-2023 14:40:50.032 WARN [main] [] o.s.d.c.CustomConversions(): Registering converter from class java.time.LocalDateTime to class org.joda.time.LocalDateTime as reading converter although it doesn't convert from a store-supported type! You might want to check your annotation setup at the converter implementation.
03-31-2023 14:40:52.444 WARN [main] [] o.s.d.c.CustomConversions(): Registering converter from class java.time.LocalDateTime to class org.joda.time.LocalDateTime as reading converter although it doesn't convert from a store-supported type! You might want to check your annotation setup at the converter implementation.
03-31-2023 14:40:55.567 INFO [main] [] c.t.s.l.c.s.d.CacheLogConsumeHandler(): log mq consume task is start...
03-31-2023 14:40:55.569 INFO [main] [] c.t.s.o.s.OmadaLinuxMain(): Started OmadaLinuxMain in 12.801 seconds (JVM running for 15.696)
03-31-2023 14:40:55.661 INFO [main] [] c.t.s.o.c.p.s.e(): Handling event: org.springframework.boot.context.event.ApplicationStartedEvent[source=org.springframework.boot.SpringApplication@58b0ef59]
03-31-2023 14:40:55.673 INFO [main] [] c.t.s.o.m.d.p.b.a(): manager maintenance Handling event: org.springframework.boot.context.event.ApplicationStartedEvent[source=org.springframework.boot.SpringApplication@58b0ef59]
03-31-2023 14:40:55.694 INFO [main] [] c.t.s.o.l.p.c.c.b.b(): Handling event: org.springframework.boot.context.event.ApplicationStartedEvent[source=org.springframework.boot.SpringApplication@58b0ef59]
03-31-2023 14:40:55.737 INFO [main] [] c.t.s.o.s.t.SpringBootStartUpTask(): record: task SpringBootStartupTask finished
03-31-2023 14:40:55.737 INFO [main] [] c.t.s.o.s.t.OmadacInitTask(): record: before OmadacInitTask
03-31-2023 14:40:55.755 INFO [main] [] c.t.s.o.s.t.OmadacInitTask(): record: before init bean
03-31-2023 14:40:56.463 INFO [main] [] c.t.s.e.s.c.c(): start schedule remove expire device... period = 10
03-31-2023 14:40:56.470 INFO [main] [] c.t.s.e.s.c.c(): update rateLimiterCache, permitsPerSecond = 5.0
03-31-2023 14:40:56.594 INFO [main] [] c.t.s.o.m.d.p.t.TransportConfiguration(): manager workGroup core thread  num is 16, max thread num is 16
03-31-2023 14:40:56.646 INFO [main] [] c.t.s.o.m.d.d.m.m.c.DeviceMsgConfig(): setMsgThreadPool thread coreSize is 16, maxSize is 16,queue size is 4500
03-31-2023 14:40:57.995 INFO [main] [] c.t.s.o.m.d.p.t.TransportConfiguration(): upgradeSendReq workGroup core thread num is 1, max thread num is 1
03-31-2023 14:40:58.641 INFO [main] [] c.t.s.o.m.c.d.m.s.w.s.c.WirelessGroupConfig(): ssidWorkerGroup thread size is 16, queue size is 1500
03-31-2023 14:40:59.994 INFO [main] [] c.t.s.o.s.t.OmadacInitTask(): record: after init bean
03-31-2023 14:41:00.075 INFO [main] [] c.t.s.o.s.d.a(): Db unchanged. No need to compatible or init data.
03-31-2023 14:41:00.114 INFO [main] [] c.t.s.o.s.t.OmadacInitTask(): succeed get default omadac OmadacVO(id=27e4d2c243c628e197c25e59998793e5, name=Omada Controller)
03-31-2023 14:41:00.146 INFO [main] [] c.t.s.o.s.t.OmadacInitTask(): record: before init for hwc
03-31-2023 14:41:00.146 INFO [main] [] c.t.s.o.s.t.OmadacInitTask(): record: after init for hwc
03-31-2023 14:41:00.261 INFO [main] [] c.t.s.o.c.p.c.o.s.ActiveSiteCacheImpl(): Scheduled ActiveSitesCache period flush buf task at fixed rate of 30000 millis.
03-31-2023 14:41:00.515 INFO [main] [] c.t.s.o.d.g.c.d.ThreadConfiguration(): device-gateway datatrack workGroup core thread  num is 32, max thread num is 32
03-31-2023 14:41:00.762 INFO [main] [] c.t.s.o.p.p.r.a.c(): init nioEventLoopGroup
03-31-2023 14:41:01.179 INFO [main] [] c.t.s.o.d.g.c.f.b.c(): file download mq consume task is start...
03-31-2023 14:41:01.208 INFO [main] [] c.t.s.o.m.d.p.t.TransportConfiguration(): adopt workGroup core thread num is 8, max thread num is8
03-31-2023 14:41:01.365 INFO [main] [] c.t.s.o.m.d.p.t.TransportConfiguration(): discovery workGroup core thread num is 2, max thread num is 10
03-31-2023 14:41:01.407 INFO [main] [] c.t.s.o.m.l.p.e.LicenseEventCenterProperties(): licenseManagerTopic: omada.cloud.license.prd.topics
03-31-2023 14:41:02.962 INFO [main] [] c.t.s.r.p.b(): register rtty message Topic: omada.rtty.message, groupId: omada-rtty-message, handler: com.tplink.smb.rtty.proxy.a.b@427fb501
03-31-2023 14:41:02.963 INFO [main] [] c.t.s.r.p.b(): register rtty message Topic: omada.rtty.manager, groupId: omada-rtty-manager, handler: com.tplink.smb.rtty.proxy.a.a@57ede6c3
03-31-2023 14:41:03.027 INFO [main] [] c.t.s.o.d.g.c.d.ThreadConfiguration(): device-gateway datatrack time task workGroup core thread  num is 32, max thread num is 32
03-31-2023 14:41:03.972 INFO [main] [] c.t.s.o.p.b.a.c(): DisconnectRequestServer start
03-31-2023 14:41:04.191 INFO [main] [] c.t.s.o.s.d.a(): record: after OmadacInitTask
03-31-2023 14:41:04.191 INFO [main] [] c.t.s.o.s.t.CloudStartUpTask(): record: CloudStartUpTask start
03-31-2023 14:41:04.192 INFO [main] [] c.t.s.o.s.t.CloudStartUpTask(): record: CloudStartUpTask finished
03-31-2023 14:41:04.192 INFO [main] [] c.t.s.o.s.t.OmadacDiscoveryStartUpTask(): record: OmadacDiscoveryStartUpTask start
03-31-2023 14:41:04.193 INFO [main] [] c.t.s.o.s.t.OmadacDiscoveryStartUpTask(): record: OmadacDiscoveryStartUpTask finished
03-31-2023 14:41:04.267 INFO [main] [] c.t.s.o.d.g.a(): startServers, DISCOVERY host:null.
03-31-2023 14:41:04.268 INFO [main] [] c.t.s.o.d.g.a(): discovery Port is 29810.
03-31-2023 14:41:04.268 INFO [main] [] c.t.s.o.d.g.a(): manage    port v1 is 29811.
03-31-2023 14:41:04.268 INFO [main] [] c.t.s.o.d.g.a(): manage    port v2 is 29814.
03-31-2023 14:41:04.268 INFO [main] [] c.t.s.o.d.g.a(): adopt     port v1 is 29812.
03-31-2023 14:41:04.269 INFO [main] [] c.t.s.o.d.g.a(): upgrade   port is 29813.
03-31-2023 14:41:04.269 INFO [main] [] c.t.s.o.d.g.a(): transfer  port is 29815.
03-31-2023 14:41:04.269 INFO [main] [] c.t.s.o.d.g.a(): rtty port is 29816.
03-31-2023 14:41:04.324 INFO [main] [] c.t.s.e.t.a.t.AbstractServer(): Start NettyUdpServer bind /0.0.0.0:29810, export localhost/127.0.0.1:29810
03-31-2023 14:41:04.333 INFO [main] [] c.t.s.e.t.n.NettyTcpServer(): TCP server /0.0.0.0:29812 global traffic shaping, writeLimit: 536870912, readLimit: 0, checkInterval: 1000, max wait time: 15000
03-31-2023 14:41:04.339 INFO [main] [] c.t.s.e.t.a.t.AbstractServer(): Start NettyTcpServer bind /0.0.0.0:29812, export localhost/127.0.0.1:29812
03-31-2023 14:41:04.341 INFO [main] [] c.t.s.e.t.n.NettyTcpServer(): TCP server /0.0.0.0:29811 global traffic shaping, writeLimit: 536870912, readLimit: 0, checkInterval: 1000, max wait time: 15000
03-31-2023 14:41:04.342 INFO [main] [] c.t.s.e.t.a.t.AbstractServer(): Start NettyTcpServer bind /0.0.0.0:29811, export localhost/127.0.0.1:29811
03-31-2023 14:41:04.344 INFO [main] [] c.t.s.e.t.n.NettyTcpServer(): TCP server /0.0.0.0:29813 global traffic shaping, writeLimit: 536870912, readLimit: 0, checkInterval: 1000, max wait time: 15000
03-31-2023 14:41:04.345 INFO [main] [] c.t.s.e.t.a.t.AbstractServer(): Start NettyTcpServer bind /0.0.0.0:29813, export localhost/127.0.0.1:29813
03-31-2023 14:41:04.348 INFO [main] [] c.t.s.e.t.n.NettyTcpServer(): TCP server /0.0.0.0:29814 global traffic shaping, writeLimit: 536870912, readLimit: 0, checkInterval: 1000, max wait time: 15000
03-31-2023 14:41:04.349 INFO [main] [] c.t.s.e.t.a.t.AbstractServer(): Start NettyTcpServer bind /0.0.0.0:29814, export localhost/127.0.0.1:29814
03-31-2023 14:41:06.363 ERROR [main] [] c.t.s.o.s.t.DeviceConnectorSeverTask(): Failed to start up net server because of 
java.lang.IllegalStateException: failed to create a child event loop
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:88) ~[netty-common-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:60) ~[netty-common-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:49) ~[netty-common-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59) ~[netty-transport-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:114) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:101) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:78) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at com.tplink.smb.ecsp.transport.netty.NettyEventLoopFactory.eventLoopGroup(NettyEventLoopFactory.java:37) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.netty.NettyTcpServer.doOpenTcpServer(NettyTcpServer.java:102) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.netty.NettyTcpServer.doOpen(NettyTcpServer.java:94) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.api.transport.AbstractServer.<init>(AbstractServer.java:52) ~[ecsp-transporter-api-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.netty.NettyTcpServer.<init>(NettyTcpServer.java:83) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.netty.NettyTransporter.bind(NettyTransporter.java:48) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.api.Transporters.bind(Transporters.java:54) ~[ecsp-transporter-api-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.server.j.a(SourceFile:60) ~[ecsp-server-core-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.server.j.a(SourceFile:38) ~[ecsp-server-core-1.1.80.jar:1.1.80]
	at java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
	at com.tplink.smb.ecsp.server.j.a(SourceFile:37) ~[ecsp-server-core-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.server.EcspTransporterStarter.start(SourceFile:163) ~[ecsp-server-core-1.1.80.jar:1.1.80]
	at com.tplink.smb.omada.device.gateway.a.a(SourceFile:171) ~[device-gateway-port-local-5.9.31.jar:5.9.31]
	at com.tplink.smb.omada.starter.task.DeviceConnectorSeverTask.execute(SourceFile:28) ~[local-starter-5.9.31.jar:5.9.31]
	at com.tplink.smb.omada.starter.task.c.a(SourceFile:20) ~[local-starter-5.9.31.jar:5.9.31]
	at com.tplink.smb.omada.starter.OmadaBootstrap.f(SourceFile:347) ~[local-starter-5.9.31.jar:5.9.31]
	at com.tplink.smb.omada.starter.OmadaLinuxMain.a(SourceFile:99) ~[local-starter-5.9.31.jar:5.9.31]
	at com.tplink.smb.omada.starter.OmadaLinuxMain.main(SourceFile:42) ~[local-starter-5.9.31.jar:5.9.31]
Caused by: io.netty.channel.ChannelException: timerfd_create() failed: Too many open files
	at io.netty.channel.epoll.Native.timerFd(Native Method) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.Native.newTimerFd(Native.java:145) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoop.<init>(EpollEventLoop.java:114) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.newChild(EpollEventLoopGroup.java:187) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.newChild(EpollEventLoopGroup.java:37) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84) ~[netty-common-4.1.75.Final.jar:4.1.75.Final]
	... 24 more
03-31-2023 14:41:06.364 ERROR [main] [] c.t.s.f.c.FacadeUtils(): facadeMsgEnable is not enable, msg: Device connector server started error :failed to create a child event loop
03-31-2023 14:41:06.364 ERROR [main] [] c.t.s.o.s.OmadaBootstrap(): com.tplink.smb.omada.starter.b.a: java.lang.IllegalStateException: failed to create a child event loop
com.tplink.smb.omada.starter.b.a: java.lang.IllegalStateException: failed to create a child event loop
	at com.tplink.smb.omada.starter.task.DeviceConnectorSeverTask.execute(SourceFile:40) ~[local-starter-5.9.31.jar:5.9.31]
	at com.tplink.smb.omada.starter.task.c.a(SourceFile:20) ~[local-starter-5.9.31.jar:5.9.31]
	at com.tplink.smb.omada.starter.OmadaBootstrap.f(SourceFile:347) ~[local-starter-5.9.31.jar:5.9.31]
	at com.tplink.smb.omada.starter.OmadaLinuxMain.a(SourceFile:99) ~[local-starter-5.9.31.jar:5.9.31]
	at com.tplink.smb.omada.starter.OmadaLinuxMain.main(SourceFile:42) ~[local-starter-5.9.31.jar:5.9.31]
Caused by: java.lang.IllegalStateException: failed to create a child event loop
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:88) ~[netty-common-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:60) ~[netty-common-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:49) ~[netty-common-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59) ~[netty-transport-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:114) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:101) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:78) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at com.tplink.smb.ecsp.transport.netty.NettyEventLoopFactory.eventLoopGroup(NettyEventLoopFactory.java:37) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.netty.NettyTcpServer.doOpenTcpServer(NettyTcpServer.java:102) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.netty.NettyTcpServer.doOpen(NettyTcpServer.java:94) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.api.transport.AbstractServer.<init>(AbstractServer.java:52) ~[ecsp-transporter-api-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.netty.NettyTcpServer.<init>(NettyTcpServer.java:83) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.netty.NettyTransporter.bind(NettyTransporter.java:48) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.api.Transporters.bind(Transporters.java:54) ~[ecsp-transporter-api-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.server.j.a(SourceFile:60) ~[ecsp-server-core-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.server.j.a(SourceFile:38) ~[ecsp-server-core-1.1.80.jar:1.1.80]
	at java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
	at com.tplink.smb.ecsp.server.j.a(SourceFile:37) ~[ecsp-server-core-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.server.EcspTransporterStarter.start(SourceFile:163) ~[ecsp-server-core-1.1.80.jar:1.1.80]
	at com.tplink.smb.omada.device.gateway.a.a(SourceFile:171) ~[device-gateway-port-local-5.9.31.jar:5.9.31]
	at com.tplink.smb.omada.starter.task.DeviceConnectorSeverTask.execute(SourceFile:28) ~[local-starter-5.9.31.jar:5.9.31]
	... 4 more
Caused by: io.netty.channel.ChannelException: timerfd_create() failed: Too many open files
	at io.netty.channel.epoll.Native.timerFd(Native Method) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.Native.newTimerFd(Native.java:145) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoop.<init>(EpollEventLoop.java:114) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.newChild(EpollEventLoopGroup.java:187) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.newChild(EpollEventLoopGroup.java:37) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84) ~[netty-common-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:60) ~[netty-common-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:49) ~[netty-common-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:59) ~[netty-transport-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:114) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:101) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at io.netty.channel.epoll.EpollEventLoopGroup.<init>(EpollEventLoopGroup.java:78) ~[netty-transport-classes-epoll-4.1.75.Final.jar:4.1.75.Final]
	at com.tplink.smb.ecsp.transport.netty.NettyEventLoopFactory.eventLoopGroup(NettyEventLoopFactory.java:37) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.netty.NettyTcpServer.doOpenTcpServer(NettyTcpServer.java:102) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.netty.NettyTcpServer.doOpen(NettyTcpServer.java:94) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.api.transport.AbstractServer.<init>(AbstractServer.java:52) ~[ecsp-transporter-api-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.netty.NettyTcpServer.<init>(NettyTcpServer.java:83) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.netty.NettyTransporter.bind(NettyTransporter.java:48) ~[ecsp-transporter-netty-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.transport.api.Transporters.bind(Transporters.java:54) ~[ecsp-transporter-api-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.server.j.a(SourceFile:60) ~[ecsp-server-core-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.server.j.a(SourceFile:38) ~[ecsp-server-core-1.1.80.jar:1.1.80]
	at java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
	at com.tplink.smb.ecsp.server.j.a(SourceFile:37) ~[ecsp-server-core-1.1.80.jar:1.1.80]
	at com.tplink.smb.ecsp.server.EcspTransporterStarter.start(SourceFile:163) ~[ecsp-server-core-1.1.80.jar:1.1.80]
	at com.tplink.smb.omada.device.gateway.a.a(SourceFile:171) ~[device-gateway-port-local-5.9.31.jar:5.9.31]
	at com.tplink.smb.omada.starter.task.DeviceConnectorSeverTask.execute(SourceFile:28) ~[local-starter-5.9.31.jar:5.9.31]
	... 4 more
03-31-2023 14:41:11.076 INFO [comm-pool-6] [] c.t.s.o.s.t.CloudStartUpTask(): Cloud service is forbidden.

Additional Context

# docker inspect $container

            "Ulimits": [
                {
                    "Name": "nofile",
                    "Hard": 1024,
                    "Soft": 1024
                }
            ],
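
The effective limit can also be confirmed like this (a sketch; substitute the actual container name):

    # check the nofile limit inside the running container
    docker exec <container> sh -c 'ulimit -n'
    # or read it straight from the inspect output
    docker inspect --format '{{ .HostConfig.Ulimits }}' <container>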
@mbentley (Owner)

Yeah, so this will likely need a ulimit change. I'm not sure exactly how you're running the container, but updating the ulimit at runtime would work. I actually didn't remember that I had set this on my own container that I'm running. I'm using --ulimit nofile=4096:8192 and that seems to work. I'll need to figure out where all of that needs to be updated, in places like downstream container marketplaces for some of the NAS devices and whatnot that have the controller published, but I will see if I can get that taken care of.
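
In other words, something along these lines added to however you normally run it (a sketch; keep your own volumes, ports, and image tag):

    # sketch: the relevant flag added to a minimal run command
    docker run -d \
      --name omada-controller \
      --ulimit nofile=4096:8192 \
      -v omada-data:/opt/tplink/EAPController/data \
      -v omada-logs:/opt/tplink/EAPController/logs \
      mbentley/omada-controller:5.9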

mbentley added a commit that referenced this issue Mar 31, 2023
Added ulimits to run examples and compose; fixes #298
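
On the compose side, the equivalent is a ulimits block on the service, roughly like this (a sketch; the service name and image tag are assumed, not copied from the repo's compose file):

    services:
      omada-controller:
        image: mbentley/omada-controller:5.9
        ulimits:
          nofile:
            soft: 4096
            hard: 8192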

osc86 commented Mar 31, 2023

Thank you for the quick reply and for raising the ulimit. I think the Docker default is still only 1024, which apparently isn't enough for today's applications.
I've increased the default ulimit globally in /etc/sysconfig/docker and made the file immutable to avoid problems with other containers in the future.
With the limits you provided, the container starts without errors. Thanks again!
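
For anyone doing the same, the global setting looks roughly like this (a sketch; the exact variable name and flag placement depend on your distro's Docker packaging):

    # /etc/sysconfig/docker (sketch)
    OPTIONS="--default-ulimit nofile=4096:8192"

    # or, via daemon.json instead:
    # {
    #   "default-ulimits": {
    #     "nofile": { "Name": "nofile", "Soft": 4096, "Hard": 8192 }
    #   }
    # }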

mbentley added a commit to mbentley/DockSTARTer that referenced this issue Mar 31, 2023
See mbentley/docker-omada-controller#298 for more info; example of issues

Signed-off-by: Matt Bentley <mbentley@mbentley.net>

benhedrington commented Apr 1, 2023

Updated the ulimit on the Unraid template the user may have been using to match @mbentley's original suggestions in the startup doc.

This is what the Unraid template looks like, if anyone is interested:
https://github.com/benhedrington/hedrington-unraid-docker-templates/blob/main/hedrington-unraid-docker-templates/omada-controller.xml


mbentley commented Apr 1, 2023

Thank you @benhedrington!
