[Bug]: BindException: Address already in use #444

Closed
Bezzaee opened this issue Jul 14, 2024 · 1 comment
Bezzaee commented Jul 14, 2024

Controller Version

5.14.26

Describe the Bug

The container fails to start. The log suggests that ports 8043 & 8843 are already in use, but I used netstat to confirm that this is not the case.

I noticed that while the container is starting, the ports show as listening:
sudo netstat -an | grep 8043
tcp 0 0 0.0.0.0:8043 0.0.0.0:* LISTEN
tcp6 0 0 :::8043 :::* LISTEN
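
For what it's worth, netstat -an does not show which process owns the socket. A more specific check would be something like the commands below (only a sketch, assuming ss from iproute2 and lsof are available on the host); if the owner turns out to be docker-proxy, the LISTEN entries above are just Docker publishing the container's ports rather than an external conflict:

# show the owning process for each listening socket on 8043/8843
sudo ss -tlnp | grep -E ':(8043|8843)'

# or list anything bound to 8043 specifically
sudo lsof -i :8043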

Expected Behavior

Container should start

Steps to Reproduce

Start container

How You're Launching the Container

docker run -d \
  --name omada-controller \
  --stop-timeout 60 \
  --restart unless-stopped \
  --ulimit nofile=4096:8192 \
  -p 8088:8088 \
  -p 8043:8043 \
  -p 8843:8843 \
  -p 27001:27001/udp \
  -p 29810:29810/udp \
  -p 29811-29816:29811-29816 \
  -e MANAGE_HTTP_PORT=8088 \
  -e MANAGE_HTTPS_PORT=8043 \
  -e PGID="508" \
  -e PORTAL_HTTP_PORT=8088 \
  -e PORTAL_HTTPS_PORT=8843 \
  -e PORT_ADOPT_V1=29812 \
  -e PORT_APP_DISCOVERY=27001 \
  -e PORT_DISCOVERY=29810 \
  -e PORT_MANAGER_V1=29811 \
  -e PORT_MANAGER_V2=29814 \
  -e PORT_TRANSFER_V2=29815 \
  -e PORT_RTTY=29816 \
  -e PORT_UPGRADE_V1=29813 \
  -e PUID="508" \
  -e SHOW_SERVER_LOGS=true \
  -e SHOW_MONGODB_LOGS=false \
  -e SSL_CERT_NAME="tls.crt" \
  -e SSL_KEY_NAME="tls.key" \
  -e TZ=Etc/UTC \
  -v omada-data:/opt/tplink/EAPController/data \
  -v omada-logs:/opt/tplink/EAPController/logs \
  mbentley/omada-controller:5.14
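
As an aside, a quick way to rule out another container already holding the published ports would be something like the following (only a sketch, assuming a Docker CLI recent enough to support the publish filter):

# list any container that already publishes 8043
docker ps --filter publish=8043

# after the container is created, show its actual host-to-container port mappings
docker port omada-controller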

Container Logs

at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1311) ~[spring-beans-5.3.31.jar:5.3.31]
        at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.resolveFieldValue(AutowiredAnnotationBeanPostProcessor.java:710) ~[spring-beans-5.3.31.jar:5.3.31]
        ... 25 more
07-14-2024 09:03:16.561 ERROR [main] [] c.t.s.o.s.t.SpringBootStartUpTask(): Cannot retry start up springboot. reson:Error creating bean with name 'com.tplink.smb.omada.identityaccess.domain.model.openapi.c.q': Unsatisfied dependency expressed through field 'e'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'com.tplink.smb.omada.identityaccess.port.common.a.a': Unsatisfied dependency expressed through field 'c'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'com.tplink.smb.omada.identityaccess.port.common.e.a': Unsatisfied dependency expressed through field 'b'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'com.tplink.smb.omada.identityaccess.b.p': Unsatisfied dependency expressed through field 'm'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'com.tplink.smb.omada.identityaccess.domain.model.user.A': Unsatisfied dependency expressed through field 'c'; nested exception is org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'com.tplink.smb.omada.identityaccess.port.mongo.adaptor.persistence.tenant.a': Bean with name 'com.tplink.smb.omada.identityaccess.port.mongo.adaptor.persistence.tenant.a' has been injected into other beans [com.tplink.smb.omada.identityaccess.domain.model.d.q] in its raw version as part of a circular reference, but has eventually been wrapped. This means that said other beans do not use the final version of the bean. This is often the result of over-eager type matching - consider using 'getBeanNamesForType' with the 'allowEagerInit' flag turned off, for example.
07-14-2024 09:03:16.562 ERROR [main] [] c.t.s.o.s.t.FailExitTask(): Failed to start omada controller, going to exit                                                                                                           
07-14-2024 09:03:16.610 INFO [Thread-0] [] c.t.s.o.s.OmadaBootstrap(): Failed to shutdown customThread.                                                                                                                       
java.lang.ExceptionInInitializerError: null                                                                                                                                                                                   
        at com.tplink.smb.omada.common.concurrent.thread.a.a(SourceFile:243) ~[omada-common-5.14.26.1.jar:5.14.26.1]                                                                                                          
        at com.tplink.smb.omada.starter.OmadaBootstrap.b(SourceFile:201) ~[local-starter-5.14.26.1.jar:5.14.26.1]                                                                                                             
        at com.tplink.smb.omada.starter.OmadaLinuxMain.b(SourceFile:87) ~[local-starter-5.14.26.1.jar:5.14.26.1]                                                                                                              
        at java.lang.Thread.run(Thread.java:840) [?:?]                                                                                                                                                                        
Caused by: java.lang.IllegalStateException: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@2c4a939d has not been refreshed yet                                               
        at org.springframework.context.support.AbstractApplicationContext.assertBeanFactoryActive(AbstractApplicationContext.java:1155) ~[spring-context-5.3.31.jar:5.3.31]                                                   
        at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1167) ~[spring-context-5.3.31.jar:5.3.31]                                                                   
        at com.tplink.smb.omada.common.spring.a.a(SourceFile:28) ~[omada-common-5.14.26.1.jar:5.14.26.1]                                                                                                                      
        at com.tplink.smb.omada.common.concurrent.thread.a$a.<clinit>(SourceFile:55) ~[omada-common-5.14.26.1.jar:5.14.26.1]                                                                                                  
        ... 4 more
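
For reference, since SHOW_SERVER_LOGS=true mirrors the server log into the container output, a quick filter like the following pulls out the ERROR lines and any bind or bean-creation failures (just a convenience, not specific to this image):

docker logs omada-controller 2>&1 | grep -E 'ERROR|BindException|BeanCurrentlyInCreation'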

MongoDB Logs

2024-07-14T09:05:32.049+0000 I CONTROL  [main] ***** SERVER RESTARTED *****                                                                                                                                                   
2024-07-14T09:05:32.061+0000 I CONTROL  [initandlisten] MongoDB starting : pid=187 port=27217 dbpath=../data/db 64-bit host=ed1b8ba4614b                                                                                      
2024-07-14T09:05:32.061+0000 I CONTROL  [initandlisten] db version v3.6.8                                                                                                                                                     
2024-07-14T09:05:32.061+0000 I CONTROL  [initandlisten] git version: 8e540c0b6db93ce994cc548f000900bdc740f80a                                                                                                                 
2024-07-14T09:05:32.061+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.1f  31 Mar 2020                                                                                                                          
2024-07-14T09:05:32.061+0000 I CONTROL  [initandlisten] allocator: tcmalloc                                                                                                                                                   
2024-07-14T09:05:32.061+0000 I CONTROL  [initandlisten] modules: none                                                                                                                                                         
2024-07-14T09:05:32.061+0000 I CONTROL  [initandlisten] build environment:                                                                                                                                                    
2024-07-14T09:05:32.061+0000 I CONTROL  [initandlisten]     distarch: aarch64                                                                                                                                                 
2024-07-14T09:05:32.061+0000 I CONTROL  [initandlisten]     target_arch: aarch64                                                                                                                                              
2024-07-14T09:05:32.061+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "127.0.0.1", port: 27217 }, processManagement: { pidFilePath: "../data/mongo.pid" }, storage: { dbPath: "../data/db" }, systemLog: { destination: "file", logAppend: true, path: "../logs/mongod.log" } }
2024-07-14T09:05:32.062+0000 I -        [initandlisten] Detected data files in ../data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.                                   
2024-07-14T09:05:32.062+0000 I STORAGE  [initandlisten]                                                                                                                                                                       
2024-07-14T09:05:32.062+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine                                                                       
2024-07-14T09:05:32.062+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem                                                                                                   
2024-07-14T09:05:32.062+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1382M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),cache_cursors=false,compatibility=(release="3.0",require_max="3.0"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2024-07-14T09:05:32.893+0000 I STORAGE  [initandlisten] WiredTiger message [1720947932:893588][187:0x7fb9bc0040], txn-recover: Main recovery loop: starting at 2337/11136                                                     
2024-07-14T09:05:33.582+0000 I STORAGE  [initandlisten] WiredTiger message [1720947933:582526][187:0x7fb9bc0040], txn-recover: Recovering log 2337 through 2338                                                               
2024-07-14T09:05:33.934+0000 I STORAGE  [initandlisten] WiredTiger message [1720947933:934785][187:0x7fb9bc0040], txn-recover: Recovering log 2338 through 2338                                                               
2024-07-14T09:05:34.294+0000 I STORAGE  [initandlisten] WiredTiger message [1720947934:294627][187:0x7fb9bc0040], txn-recover: Set global recovery timestamp: 0                                                               
2024-07-14T09:05:34.943+0000 I CONTROL  [initandlisten]                                                                                                                                                                       
2024-07-14T09:05:34.944+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.                                                                                                           
2024-07-14T09:05:34.944+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.            
2024-07-14T09:05:34.944+0000 I CONTROL  [initandlisten] 
2024-07-14T09:05:38.471+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '../data/db/diagnostic.data'
2024-07-14T09:05:38.478+0000 I NETWORK  [initandlisten] waiting for connections on port 27217
2024-07-14T09:05:39.725+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:52252 #1 (1 connection now open)
2024-07-14T09:05:39.726+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:52268 #2 (2 connections now open)
2024-07-14T09:05:39.768+0000 I NETWORK  [conn2] received client metadata from 127.0.0.1:52268 conn2: { driver: { name: "mongo-java-driver|sync", version: "4.6.1" }, os: { type: "Linux", name: "Linux", architecture: "aarch64", version: "6.6.31-haos-raspi" }, platform: "Java/Ubuntu/17.0.11+9-Ubuntu-120.04.2" }
2024-07-14T09:05:39.769+0000 I NETWORK  [conn1] received client metadata from 127.0.0.1:52252 conn1: { driver: { name: "mongo-java-driver|sync", version: "4.6.1" }, os: { type: "Linux", name: "Linux", architecture: "aarch64", version: "6.6.31-haos-raspi" }, platform: "Java/Ubuntu/17.0.11+9-Ubuntu-120.04.2" }
2024-07-14T09:05:40.039+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:52282 #3 (3 connections now open)
2024-07-14T09:05:40.041+0000 I NETWORK  [conn3] received client metadata from 127.0.0.1:52282 conn3: { driver: { name: "mongo-java-driver|sync", version: "4.6.1" }, os: { type: "Linux", name: "Linux", architecture: "aarch64", version: "6.6.31-haos-raspi" }, platform: "Java/Ubuntu/17.0.11+9-Ubuntu-120.04.2" }
2024-07-14T09:05:40.147+0000 I NETWORK  [conn3] end connection 127.0.0.1:52282 (2 connections now open)
2024-07-14T09:05:40.149+0000 I NETWORK  [conn1] end connection 127.0.0.1:52252 (1 connection now open)
2024-07-14T09:05:40.150+0000 I NETWORK  [conn2] end connection 127.0.0.1:52268 (0 connections now open)
2024-07-14T09:05:40.257+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:52300 #4 (1 connection now open)
2024-07-14T09:05:40.257+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:52298 #5 (2 connections now open)
2024-07-14T09:05:40.257+0000 I NETWORK  [conn4] received client metadata from 127.0.0.1:52300 conn4: { driver: { name: "mongo-java-driver|sync", version: "4.6.1" }, os: { type: "Linux", name: "Linux", architecture: "aarch64", version: "6.6.31-haos-raspi" }, platform: "Java/Ubuntu/17.0.11+9-Ubuntu-120.04.2" }
2024-07-14T09:05:40.258+0000 I NETWORK  [conn5] received client metadata from 127.0.0.1:52298 conn5: { driver: { name: "mongo-java-driver|sync", version: "4.6.1" }, os: { type: "Linux", name: "Linux", architecture: "aarch64", version: "6.6.31-haos-raspi" }, platform: "Java/Ubuntu/17.0.11+9-Ubuntu-120.04.2" }
2024-07-14T09:05:40.265+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:52324 #6 (3 connections now open)
2024-07-14T09:05:40.266+0000 I NETWORK  [conn6] received client metadata from 127.0.0.1:52324 conn6: { driver: { name: "mongo-java-driver|sync", version: "4.6.1" }, os: { type: "Linux", name: "Linux", architecture: "aarch64", version: "6.6.31-haos-raspi" }, platform: "Java/Ubuntu/17.0.11+9-Ubuntu-120.04.2" }

Additional Context

➜ ~ uname -a
Linux a0d7b954-ssh 6.6.31-haos-raspi #1 SMP PREEMPT Tue Jun 18 15:11:43 UTC 2024 aarch64 Linux
➜ ~ cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.19.1
PRETTY_NAME="Alpine Linux v3.19"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"

@mbentley (Owner)

The logs you shared show that you're experiencing the same issue as #418. You will need to roll back to 5.13. See the first post for a link to instructions I provided. This isn't a port conflict.
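
For anyone landing here, a rough outline of the rollback might look like the following. This is only a sketch; the instructions linked from #418 are authoritative, and it assumes a backup of the data volume taken before the 5.14 upgrade (a database already migrated to 5.14 generally cannot be downgraded in place):

# stop and remove the failed 5.14 container (the named volumes are preserved)
docker stop omada-controller
docker rm omada-controller

# restore the pre-upgrade backup of the omada-data volume per the instructions linked in #418,
# then re-run the same docker run command as above, pinned to the 5.13 tag:
#   mbentley/omada-controller:5.13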
