
goharbor/harbor-log:v1.7.4 is unhealthy #7119

Closed
Jawenba opened this issue Mar 12, 2019 · 7 comments

Jawenba commented Mar 12, 2019

Installed Harbor with the default harbor.cfg. After the installation finished, running `docker ps` shows goharbor/harbor-log:v1.7.4 as unhealthy. How can I fix it?
[image]

Versions:

  • ubuntu version: Ubuntu 18.04.2 LTS
  • harbor version: 1.7.4
  • docker engine version: 18.06.1-ce
  • docker-compose version: 1.17.1, build unknown
  • docker-py version: 2.5.1
  • CPython version: 2.7.15rc1
  • OpenSSL version: OpenSSL 1.1.0g 2 Nov 2017
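A quick way to reproduce what `docker ps` reports is to filter on health status (a sketch using the standard Docker CLI; it assumes the daemon is running and the containers use Docker health checks, as Harbor's compose file does):

```shell
# List only containers whose health check is currently failing.
# "health=unhealthy" is a built-in `docker ps` status filter.
docker ps --filter "health=unhealthy" --format "table {{.Names}}\t{{.Status}}"
```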
@ninjadq
Member

ninjadq commented Mar 12, 2019

Hi, can you upload the log files to help us locate the issue?

@ninjadq ninjadq added the more-info-needed label (the issue author needs to provide more details and context) Mar 12, 2019
@Jawenba
Author

Jawenba commented Mar 13, 2019

@ninjadq thanks for your reply. I can't find the harbor-log file, but I checked: the container goharbor/harbor-log:v1.7.4 is up, except that its health check fails.
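When a container is up but marked unhealthy, the recorded health-check output usually says why. A sketch of how to read it (the container name `harbor-log` matches the default docker-compose service name in 1.7.x installs; adjust if yours differs — in a default install the other components' logs are collected under /var/log/harbor on the host):

```shell
# Dump the health-check state Docker keeps for the container,
# including the output of the last few failed probes.
docker inspect --format '{{json .State.Health}}' harbor-log | python -m json.tool
```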

adminserver.log

Mar 11 08:16:05 172.18.0.1 adminserver[4480]: 2019-03-11T08:16:05Z [INFO] initializing system configurations...
Mar 11 08:16:05 172.18.0.1 adminserver[4480]: 2019-03-11T08:16:05Z [INFO] Registering database: type-PostgreSQL host-postgresql port-5432 databse-registry sslmode-"disable"
Mar 11 08:16:06 172.18.0.1 adminserver[4480]: 2019-03-11T08:16:06Z [ERROR] [utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp 172.18.0.7:5432: i/o timeout
Mar 11 08:16:08 172.18.0.1 adminserver[4480]: 2019-03-11T08:16:08Z [ERROR] [utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp 172.18.0.7:5432: getsockopt: connection refused
Mar 11 08:16:10 172.18.0.1 adminserver[4480]: 2019-03-11T08:16:10Z [ERROR] [utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp 172.18.0.7:5432: getsockopt: connection refused
Mar 11 08:16:12 172.18.0.1 adminserver[4480]: 2019-03-11T08:16:12Z [ERROR] [utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp 172.18.0.7:5432: getsockopt: connection refused
Mar 11 08:16:14 172.18.0.1 adminserver[4480]: 2019-03-11T08:16:14Z [ERROR] [utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp 172.18.0.7:5432: getsockopt: connection refused
Mar 11 08:16:16 172.18.0.1 adminserver[4480]: 2019-03-11T08:16:16Z [INFO] Register database completed
Mar 11 08:16:16 172.18.0.1 adminserver[4480]: 2019-03-11T08:16:16Z [INFO] Upgrading schema for pgsql ...
Mar 11 08:16:16 172.18.0.1 adminserver[4480]: 2019-03-11T08:16:16Z [INFO] the path of json configuration storage: /etc/adminserver/config/config.json
Mar 11 08:16:16 172.18.0.1 adminserver[4480]: 2019-03-11T08:16:16Z [INFO] the path of key used by key provider: /etc/adminserver/key
Mar 11 08:16:16 172.18.0.1 adminserver[4480]: 2019-03-11T08:16:16Z [INFO] system initialization completed
Mar 11 08:16:17 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:16:17 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:16:20 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:16:20 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:16:35 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:16:35 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:16:39 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:16:39 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:17:05 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:17:05 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:17:09 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:17:09 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:17:35 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:17:35 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:17:39 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:17:39 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:18:05 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:18:05 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:18:09 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:18:09 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:18:35 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:18:35 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:18:40 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:18:40 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:19:05 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:19:05 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:19:10 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:19:10 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:19:35 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:19:35 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:19:40 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:19:40 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:20:05 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:20:05 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:20:10 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:20:10 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:20:36 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:20:36 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:20:40 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:20:40 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:21:06 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:21:06 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:21:10 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:21:10 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:21:36 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:21:36 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:21:40 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:21:40 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:22:06 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:22:06 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:22:11 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:22:11 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:22:36 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:22:36 +0000] "GET /api/ping HTTP/1.1" 200 6
Mar 11 08:22:41 172.18.0.1 adminserver[4480]: 172.18.0.8 - - [11/Mar/2019:08:22:41 +0000] "GET /api/configs HTTP/1.1" 200 1802
Mar 11 08:23:06 172.18.0.1 adminserver[4480]: 127.0.0.1 - - [11/Mar/2019:08:23:06 +0000] "GET /api/ping HTTP/1.1" 200 6

core.log

Mar 11 08:16:09 172.18.0.1 core[4480]: 2019-03-11T08:16:09Z [INFO] Config path: /etc/core/app.conf
Mar 11 08:16:09 172.18.0.1 core[4480]: 2019-03-11T08:16:09Z [INFO] initializing configurations...
Mar 11 08:16:09 172.18.0.1 core[4480]: 2019-03-11T08:16:09Z [INFO] key path: /etc/core/key
Mar 11 08:16:09 172.18.0.1 core[4480]: 2019-03-11T08:16:09Z [INFO] initializing client for adminserver http://adminserver:8080 ...
Mar 11 08:16:09 172.18.0.1 core[4480]: 2019-03-11T08:16:09Z [ERROR] [utils.go:101]: failed to connect to tcp://adminserver:8080, retry after 2 seconds :dial tcp 172.18.0.4:8080: getsockopt: connection refused
Mar 11 08:16:11 172.18.0.1 core[4480]: 2019-03-11T08:16:11Z [ERROR] [utils.go:101]: failed to connect to tcp://adminserver:8080, retry after 2 seconds :dial tcp 172.18.0.4:8080: getsockopt: connection refused
Mar 11 08:16:13 172.18.0.1 core[4480]: 2019-03-11T08:16:13Z [ERROR] [utils.go:101]: failed to connect to tcp://adminserver:8080, retry after 2 seconds :dial tcp 172.18.0.4:8080: getsockopt: connection refused
Mar 11 08:16:15 172.18.0.1 core[4480]: 2019-03-11T08:16:15Z [ERROR] [utils.go:101]: failed to connect to tcp://adminserver:8080, retry after 2 seconds :dial tcp 172.18.0.4:8080: getsockopt: connection refused
Mar 11 08:16:17 172.18.0.1 core[4480]: 2019-03-11T08:16:17Z [INFO] initializing the project manager based on local database...
Mar 11 08:16:17 172.18.0.1 core[4480]: 2019-03-11T08:16:17Z [INFO] configurations initialization completed
Mar 11 08:16:17 172.18.0.1 core[4480]: 2019-03-11T08:16:17Z [INFO] Registering database: type-PostgreSQL host-postgresql port-5432 databse-registry sslmode-"disable"
Mar 11 08:16:17 172.18.0.1 core[4480]: 2019-03-11T08:16:17Z [INFO] Register database completed
Mar 11 08:16:17 172.18.0.1 core[4480]: 2019-03-11T08:16:17Z [INFO] User id: 1 updated its encypted password successfully.
Mar 11 08:16:17 172.18.0.1 core[4480]: 2019-03-11T08:16:17Z [INFO] Enable redis cache for chart caching
Mar 11 08:16:17 172.18.0.1 core[4480]: 2019-03-11T08:16:17Z [INFO] API controller for chart repository server is successfully initialized
Mar 11 08:16:17 172.18.0.1 core[4480]: 2019-03-11T08:16:17Z [INFO] initialized clair database
Mar 11 08:16:17 172.18.0.1 core[4480]: 2019-03-11T08:16:17Z [INFO] Because SYNC_REGISTRY set false , no need to sync registry 
Mar 11 08:16:17 172.18.0.1 core[4480]: 2019-03-11T08:16:17Z [INFO] Init proxy
Mar 11 08:16:17 172.18.0.1 core[4480]: 2019/03/11 08:16:17 #033[1;34m[I] [asm_amd64.s:2337] http server Running on http://:8080#033[0m
Mar 11 08:16:20 172.18.0.1 core[4480]: 2019/03/11 08:16:20 #033[1;44m[D] [server.go:2619] |    172.18.0.10|#033[42m 200 #033[0m|   5.808324ms|   match|#033[44m GET     #033[0m /api/configs   r:/api/configs#033[0m
Mar 11 08:16:39 172.18.0.1 core[4480]: 2019/03/11 08:16:39 #033[1;44m[D] [server.go:2619] |      127.0.0.1|#033[42m 200 #033[0m|   3.139014ms|   match|#033[44m GET     #033[0m /api/ping   r:/api/ping#033[0m
Mar 11 08:17:09 172.18.0.1 core[4480]: 2019/03/11 08:17:09 #033[1;44m[D] [server.go:2619] |      127.0.0.1|#033[42m 200 #033[0m|   2.118694ms|   match|#033[44m GET     #033[0m /api/ping   r:/api/ping#033[0m
Mar 11 08:17:39 172.18.0.1 core[4480]: 2019/03/11 08:17:39 #033[1;44m[D] [server.go:2619] |      127.0.0.1|#033[42m 200 #033[0m|   2.826498ms|   match|#033[44m GET     #033[0m /api/ping   r:/api/ping#033[0m
Mar 11 08:18:09 172.18.0.1 core[4480]: 2019/03/11 08:18:09 #033[1;44m[D] [server.go:2619] |      127.0.0.1|#033[42m 200 #033[0m|   2.494232ms|   match|#033[44m GET     #033[0m /api/ping   r:/api/ping#033[0m
Mar 11 08:18:40 172.18.0.1 core[4480]: 2019/03/11 08:18:40 #033[1;44m[D] [server.go:2619] |      127.0.0.1|#033[42m 200 #033[0m|   3.177145ms|   match|#033[44m GET     #033[0m /api/ping   r:/api/ping#033[0m
Mar 11 08:19:10 172.18.0.1 core[4480]: 2019/03/11 08:19:10 #033[1;44m[D] [server.go:2619] |      127.0.0.1|#033[42m 200 #033[0m|   3.184508ms|   match|#033[44m GET     #033[0m /api/ping   r:/api/ping#033[0m
Mar 11 08:19:40 172.18.0.1 core[4480]: 2019/03/11 08:19:40 #033[1;44m[D] [server.go:2619] |      127.0.0.1|#033[42m 200 #033[0m|   2.813407ms|   match|#033[44m GET     #033[0m /api/ping   r:/api/ping#033[0m
Mar 11 08:20:10 172.18.0.1 core[4480]: 2019/03/11 08:20:10 #033[1;44m[D] [server.go:2619] |      127.0.0.1|#033[42m 200 #033[0m|   2.338574ms|   match|#033[44m GET     #033[0m /api/ping   r:/api/ping#033[0m
Mar 11 08:20:40 172.18.0.1 core[4480]: 2019/03/11 08:20:40 #033[1;44m[D] [server.go:2619] |      127.0.0.1|#033[42m 200 #033[0m|    2.37794ms|   match|#033[44m GET     #033[0m /api/ping   r:/api/ping#033[0m
Mar 11 08:21:10 172.18.0.1 core[4480]: 2019/03/11 08:21:10 #033[1;44m[D] [server.go:2619] |      127.0.0.1|#033[42m 200 #033[0m|   2.565745ms|   match|#033[44m GET     #033[0m /api/ping   r:/api/ping#033[0m
Mar 11 08:21:40 172.18.0.1 core[4480]: 2019/03/11 08:21:40 #033[1;44m[D] [server.go:2619] |      127.0.0.1|#033[42m 200 #033[0m|   4.056405ms|   match|#033[44m GET     #033[0m /api/ping   r:/api

chartmuseum.log

Mar 11 08:16:11 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-11T08:16:11.554Z","M":"Starting ChartMuseum","port":9999}
Mar 11 08:31:46 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-11T08:31:46.730Z","M":"[32] Request served","path":"/library/index.yaml","comment":"","latency":"40.267388ms","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"235c8365-9845-4598-9551-09debdc70350"}
Mar 11 08:31:59 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-11T08:31:59.260Z","M":"[33] Request served","path":"/library/index.yaml","comment":"","latency":"540.84µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"74d04520-073c-446a-83ab-571cd07e1e5d"}
Mar 11 08:35:22 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-11T08:35:22.589Z","M":"[41] Request served","path":"/library/index.yaml","comment":"","latency":"16.706029ms","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"65f0dd98-ae88-4e66-a323-54300dab5262"}
Mar 11 08:35:28 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-11T08:35:28.693Z","M":"[42] Request served","path":"/library/index.yaml","comment":"","latency":"670.783µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"280a5d21-d584-4ae8-8d37-aff10bf50298"}
Mar 11 12:25:22 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-11T12:25:22.353Z","M":"[501] Request served","path":"/library/index.yaml","comment":"","latency":"1.265302ms","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"2505de01-5452-49f4-a99d-d6a4ba81bc9c"}
Mar 11 12:25:42 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-11T12:25:42.502Z","M":"[502] Request served","path":"/library/index.yaml","comment":"","latency":"454.63µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"04ebccbd-b1ca-4e0f-8c59-3ebe7041c823"}
Mar 11 12:28:37 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-11T12:28:37.218Z","M":"[509] Request served","path":"/library/index.yaml","comment":"","latency":"661.883µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"4b6a400b-c06f-41c1-947c-a87e63379006"}
Mar 11 12:28:39 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-11T12:28:39.060Z","M":"[510] Request served","path":"/library/index.yaml","comment":"","latency":"712.439µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"7b8e3c9c-b172-4dba-be5f-9f37b2fc8678"}
Mar 11 12:28:42 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-11T12:28:42.468Z","M":"[511] Request served","path":"/api/library/charts","comment":"","latency":"609.181µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"ab621328-7fbf-4a9a-8ef2-9b753b88c879"}
Mar 11 12:28:42 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-11T12:28:42.476Z","M":"[512] Request served","path":"/api/library/charts","comment":"","latency":"339.513µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"6c146a24-6dde-463c-9020-585c66165b09"}
Mar 12 01:03:53 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-12T01:03:53.798Z","M":"[2018] Request served","path":"/library/index.yaml","comment":"","latency":"461.376µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"ea9d0f3d-0c19-4d97-89e7-f542c54542cc"}
Mar 12 01:03:56 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-12T01:03:56.225Z","M":"[2019] Request served","path":"/library/index.yaml","comment":"","latency":"12.742117ms","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"f3f053a5-db93-48b3-9dee-4fdc661916dd"}
Mar 12 01:03:59 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-12T01:03:59.475Z","M":"[2021] Request served","path":"/library/index.yaml","comment":"","latency":"480.114µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"726a0791-8cef-42a1-90c6-efc1cc47d425"}
Mar 12 01:04:14 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-12T01:04:14.966Z","M":"[2022] Request served","path":"/library/index.yaml","comment":"","latency":"476.587µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"bb89b609-db99-4c1f-85fb-0a30d95d9b5f"}
Mar 12 01:05:03 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-12T01:05:03.141Z","M":"[2025] Request served","path":"/library/index.yaml","comment":"","latency":"513.136µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"8d30b781-df7d-4948-9061-38b028533d70"}
Mar 12 01:05:05 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-12T01:05:05.744Z","M":"[2026] Request served","path":"/api/library/charts","comment":"","latency":"391.474µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"a86ce3a1-46ad-44db-a018-725da088c568"}
Mar 12 01:05:05 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-12T01:05:05.788Z","M":"[2027] Request served","path":"/api/library/charts","comment":"","latency":"518.117µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"ead09d9e-fe7f-4bfc-9c4d-b28f506d694c"}
Mar 12 01:05:10 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-12T01:05:10.601Z","M":"[2028] Request served","path":"/api/library/charts","comment":"","latency":"484.069µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"9ae5a4a4-06ea-41f3-bf4a-0799ed2b382a"}
Mar 12 01:05:10 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-12T01:05:10.609Z","M":"[2029] Request served","path":"/api/library/charts","comment":"","latency":"476.007µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"7fead08c-a655-48c4-9edf-232340e2bcf7"}
Mar 12 01:05:22 172.18.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-12T01:05:22.410Z","M":"[2030] Request served","path":"/library/index.yaml","comment":"","latency":"383.861µs","clientIP":"172.19.0.4","method":"GET","statusCode":200,"reqID":"a9398d2a-ab5d-4f08-a640-782fd827208c"}
Mar 12 01:54:40 172.23.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-12T01:54:40.051Z","M":"Starting ChartMuseum","port":9999}
Mar 12 01:55:51 172.23.0.1 chartmuseum[4480]: {"L":"INFO","T":"2019-03-12T01:55:51.538Z","M":"[3] Request served","path":"/library/index.yaml","comment":"","latency":"42.650085ms","clientIP":"172.24.0.3","method":"GET","statusCode":200,"reqID":"0050f1eb-b2fe-4eb3-8a53-f0af0a048e85"}
Mar 12 03:08:45 172.18.0.1 chartmuseum[795]: {"L":"INFO","T":"2019-03-12T03:08:45.448Z","M":"Starting ChartMuseum","port":9999}
Mar 12 03:09:39 172.18.0.1 chartmuseum[795]: {"L":"INFO","T":"2019-03-12T03:09:39.954Z","M":"[2] Request served","path":"/library/index.yaml","comment":"","latency":"43.300913ms","clientIP":"172.24.0.3","method":"GET","statusCode":200,"reqID":"f521086d-b9d0-4a85-b26d-7493a92ad1b7"}

clair.log

Mar 11 08:16:12 172.18.0.1 clair[4480]: {"Event":"pgsql: could not open database: dial tcp 172.21.0.3:5432: connect: connection refused","Level":"fatal","Location":"main.go:96","Time":"2019-03-11 08:16:12.762717"}
Mar 11 08:16:13 172.18.0.1 clair[4480]: {"Event":"pgsql: could not open database: dial tcp 172.21.0.3:5432: connect: connection refused","Level":"fatal","Location":"main.go:96","Time":"2019-03-11 08:16:13.891791"}
Mar 11 08:16:15 172.18.0.1 clair[4480]: {"Event":"pgsql: could not open database: dial tcp 172.21.0.3:5432: connect: connection refused","Level":"fatal","Location":"main.go:96","Time":"2019-03-11 08:16:15.011482"}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"running database migrations","Level":"info","Location":"pgsql.go:216","Time":"2019-03-11 08:16:16.318242"}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"database migration ran successfully","Level":"info","Location":"pgsql.go:223","Time":"2019-03-11 08:16:16.591337"}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"starting main API","Level":"info","Location":"api.go:52","Time":"2019-03-11 08:16:16.591996","port":6060}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"sender configured","Level":"info","Location":"notifier.go:66","Time":"2019-03-11 08:16:16.592008","sender name":"webhook"}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"notifier service started","Level":"info","Location":"notifier.go:82","Time":"2019-03-11 08:16:16.592196","lock identifier":"e6756c32-9028-41c9-ad4c-68c78af989a5"}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"starting health API","Level":"info","Location":"api.go:85","Time":"2019-03-11 08:16:16.592551","port":6061}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"updater service started","Level":"info","Location":"updater.go:81","Time":"2019-03-11 08:16:16.592812","lock identifier":"262c9f70-9074-46b0-b066-ff47c72e3968"}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"updating vulnerabilities","Level":"info","Location":"updater.go:182","Time":"2019-03-11 08:16:16.604028"}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"fetching vulnerability updates","Level":"info","Location":"updater.go:228","Time":"2019-03-11 08:16:16.604083"}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"Start fetching vulnerabilities","Level":"info","Location":"oracle.go:119","Time":"2019-03-11 08:16:16.604440","package":"Oracle Linux"}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"Start fetching vulnerabilities","Level":"info","Location":"ubuntu.go:85","Time":"2019-03-11 08:16:16.604462","package":"Ubuntu"}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"Start fetching vulnerabilities","Level":"info","Location":"alpine.go:52","Time":"2019-03-11 08:16:16.604715","package":"Alpine"}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"Start fetching vulnerabilities","Level":"info","Location":"debian.go:63","Time":"2019-03-11 08:16:16.605106","package":"Debian"}
Mar 11 08:16:16 172.18.0.1 clair[4480]: {"Event":"Start fetching vulnerabilities","Level":"info","Location":"rhel.go:92","Time":"2019-03-11 08:16:16.608468","package":"RHEL"}
Mar 11 08:16:26 172.18.0.1 clair[4480]: {"Event":"could not download Debian's update","Level":"error","Location":"debian.go:68","Time":"2019-03-11 08:16:26.614902","error":"Get https://security-tracker.debian.org/tracker/data/json: dial tcp: lookup security-tracker.debian.org: Temporary failure in name resolution"}
Mar 11 08:16:26 172.18.0.1 clair[4480]: {"Event":"could not download Oracle's update list","Level":"error","Location":"oracle.go:134","Time":"2019-03-11 08:16:26.614902","error":"Get https://linux.oracle.com/oval/: dial tcp: lookup linux.oracle.com: Temporary failure in name resolution"}
Mar 11 08:16:26 172.18.0.1 clair[4480]: {"Event":"an error occured when fetching update","Level":"error","Location":"updater.go:235","Time":"2019-03-11 08:16:26.615042","error":"could not download requested resource","updater name":"debian"}
Mar 11 08:16:26 172.18.0.1 clair[4480]: {"Event":"an error occured when fetching update","Level":"error","Location":"updater.go:235","Time":"2019-03-11 08:16:26.615039","error":"could not download requested resource","updater name":"oracle"}
Mar 11 08:16:26 172.18.0.1 clair[4480]: {"Event":"could not download RHEL's update list","Level":"error","Location":"rhel.go:106","Time":"2019-03-11 08:16:26.621901","error":"Get https://www.redhat.com/security/data/oval/: dial tcp: lookup www.redhat.com: Temporary failure in name resolution"}
Mar 11 08:16:26 172.18.0.1 clair[4480]: {"Event":"an error occured when fetching update","Level":"error","Location":"updater.go:235","Time":"2019-03-11 08:16:26.621963","error":"could not download requested resource","updater name":"rhel"}

jobservice.log

Mar 11 08:16:11 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:11Z [ERROR] [context.go:86]: Job context initialization error: Get http://core:8080/api/configs: dial tcp 172.18.0.8:8080: getsockopt: connection refused
Mar 11 08:16:11 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:11Z [INFO] Retry in 9 seconds
Mar 11 08:16:11 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:11Z [INFO] 0 outdated log entries are sweepped by sweeper *sweeper.FileSweeper
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Registering database: type-PostgreSQL host-postgresql port-5432 databse-registry sslmode-"disable"
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Register database completed
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Register job *impl.DemoJob with name DEMO
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Register job *scan.ClairJob with name IMAGE_SCAN
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Register job *scan.All with name IMAGE_SCAN_ALL
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Register job *replication.Transfer with name IMAGE_TRANSFER
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Register job *replication.Deleter with name IMAGE_DELETE
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Register job *replication.Replicator with name IMAGE_REPLICATE
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Register job *gc.GarbageCollector with name IMAGE_GC
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Server is started at :8080 with http
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] OP commands sweeper is started
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Redis job stats manager is started
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Message server is started
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Subscribe redis channel {harbor_job_service_namespace}:period:policies:notifications
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Redis worker pool is started
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Load 0 periodic job policies
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Periodic enqueuer is started
Mar 11 08:16:20 172.18.0.1 jobservice[4480]: 2019-03-11T08:16:20Z [INFO] Redis scheduler is started
Mar 12 01:54:41 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:41Z [INFO] 0 outdated log entries are sweepped by sweeper *sweeper.FileSweeper
Mar 12 01:54:41 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:41Z [ERROR] [context.go:86]: Job context initialization error: Get http://core:8080/api/configs: dial tcp 172.23.0.8:8080: getsockopt: connection refused
Mar 12 01:54:41 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:41Z [INFO] Retry in 9 seconds
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Registering database: type-PostgreSQL host-postgresql port-5432 databse-registry sslmode-"disable"
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Register database completed
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Register job *impl.DemoJob with name DEMO
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Register job *gc.GarbageCollector with name IMAGE_GC
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Register job *scan.ClairJob with name IMAGE_SCAN
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Register job *scan.All with name IMAGE_SCAN_ALL
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Register job *replication.Transfer with name IMAGE_TRANSFER
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Register job *replication.Deleter with name IMAGE_DELETE
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Register job *replication.Replicator with name IMAGE_REPLICATE
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Server is started at :8080 with http
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] OP commands sweeper is started
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Redis job stats manager is started
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Message server is started
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Subscribe redis channel {harbor_job_service_namespace}:period:policies:notifications
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Load 0 periodic job policies
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Periodic enqueuer is started
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Redis scheduler is started
Mar 12 01:54:50 172.23.0.1 jobservice[4480]: 2019-03-12T01:54:50Z [INFO] Redis worker pool is started
Mar 12 03:01:03 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:03Z [INFO] 0 outdated log entries are sweepped by sweeper *sweeper.FileSweeper
Mar 12 03:01:07 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:07Z [ERROR] [context.go:86]: Job context initialization error: Get http://core:8080/api/configs: dial tcp 172.23.0.7:8080: getsockopt: no route to host
Mar 12 03:01:07 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:07Z [INFO] Retry in 9 seconds
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Registering database: type-PostgreSQL host-postgresql port-5432 databse-registry sslmode-"disable"
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Register database completed
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Register job *impl.DemoJob with name DEMO
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Register job *replication.Deleter with name IMAGE_DELETE
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Register job *replication.Replicator with name IMAGE_REPLICATE
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Register job *gc.GarbageCollector with name IMAGE_GC
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Register job *scan.ClairJob with name IMAGE_SCAN
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Register job *scan.All with name IMAGE_SCAN_ALL
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Register job *replication.Transfer with name IMAGE_TRANSFER
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Message server is started
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] OP commands sweeper is started
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Redis job stats manager is started
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Server is started at :8080 with http
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Subscribe redis channel {harbor_job_service_namespace}:period:policies:notifications
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Load 0 periodic job policies
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Periodic enqueuer is started
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Redis scheduler is started
Mar 12 03:01:16 172.23.0.1 jobservice[22668]: 2019-03-12T03:01:16Z [INFO] Redis worker pool is started
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] 0 outdated log entries are sweepped by sweeper *sweeper.FileSweeper
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Registering database: type-PostgreSQL host-postgresql port-5432 databse-registry sslmode-"disable"
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Register database completed
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Register job *impl.DemoJob with name DEMO
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Register job *replication.Transfer with name IMAGE_TRANSFER
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Register job *replication.Deleter with name IMAGE_DELETE
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Register job *replication.Replicator with name IMAGE_REPLICATE
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Register job *gc.GarbageCollector with name IMAGE_GC
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Register job *scan.ClairJob with name IMAGE_SCAN
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Register job *scan.All with name IMAGE_SCAN_ALL
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Message server is started
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Subscribe redis channel {harbor_job_service_namespace}:period:policies:notifications
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] OP commands sweeper is started
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Redis job stats manager is started
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Server is started at :8080 with http
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Load 0 periodic job policies
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Periodic enqueuer is started
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Redis scheduler is started
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [ERROR] [enqueuer.go:81]: periodic_enqueuer.loop.enqueue:key {harbor_job_service_namespace}:period:lock is already set with value d55962da48b90a84baeba16f
Mar 12 03:08:44 172.18.0.1 jobservice[795]: 2019-03-12T03:08:44Z [INFO] Redis worker pool is started

notary-server.log

Mar 11 08:16:09 172.18.0.1 notary-server[4480]: 2019/03/11 08:16:09 Updating database.
Mar 11 08:16:09 172.18.0.1 notary-server[4480]: 2019/03/11 08:16:09 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:10 172.18.0.1 notary-server[4480]: 2019/03/11 08:16:10 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:11 172.18.0.1 notary-server[4480]: 2019/03/11 08:16:11 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:12 172.18.0.1 notary-server[4480]: 2019/03/11 08:16:12 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:13 172.18.0.1 notary-server[4480]: 2019/03/11 08:16:13 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:14 172.18.0.1 notary-server[4480]: 2019/03/11 08:16:14 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:16 172.18.0.1 notary-server[4480]: 2019/03/11 08:16:16 schema_migrations table does not exist, skip.
Mar 11 08:16:16 172.18.0.1 notary-server[4480]: 1/u initial (34.446097ms)
Mar 11 08:16:16 172.18.0.1 notary-server[4480]: 2/u changefeed (69.904667ms)
Mar 11 08:16:16 172.18.0.1 notary-server[4480]: notaryserver database migrated to latest version
Mar 11 08:16:16 172.18.0.1 notary-server[4480]: {"level":"info","msg":"Version: 0.6.1, Git commit: d6e1431f","time":"2019-03-11T08:16:16Z"}
Mar 11 08:16:16 172.18.0.1 notary-server[4480]: {"level":"info","msg":"Using remote signing service","time":"2019-03-11T08:16:16Z"}
Mar 11 08:16:16 172.18.0.1 notary-server[4480]: {"level":"info","msg":"Using postgres backend","time":"2019-03-11T08:16:16Z"}
Mar 11 08:16:16 172.18.0.1 notary-server[4480]: 2019/03/11 08:16:16 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 172.22.0.2:7899: connect: connection refused"; Reconnecting to {notarysigner:7899 <nil>}
Mar 11 08:16:16 172.18.0.1 notary-server[4480]: {"level":"info","msg":"Starting Server","time":"2019-03-11T08:16:16Z"}
Mar 11 08:16:16 172.18.0.1 notary-server[4480]: {"level":"info","msg":"Starting on :4443","time":"2019-03-11T08:16:16Z"}
Mar 12 01:54:38 172.23.0.1 notary-server[4480]: 2019/03/12 01:54:38 Updating database.
Mar 12 01:54:38 172.23.0.1 notary-server[4480]: 2019/03/12 01:54:38 Failed to Ping DB, sleep for 1 second.
Mar 12 01:54:39 172.23.0.1 notary-server[4480]: 2019/03/12 01:54:39 Failed to Ping DB, sleep for 1 second.
Mar 12 01:54:40 172.23.0.1 notary-server[4480]: 2019/03/12 01:54:40 Failed to Ping DB, sleep for 1 second.
Mar 12 01:54:41 172.23.0.1 notary-server[4480]: 2019/03/12 01:54:41 Failed to Ping DB, sleep for 1 second.
Mar 12 01:54:42 172.23.0.1 notary-server[4480]: 2019/03/12 01:54:42 Failed to Ping DB, sleep for 1 second.
Mar 12 01:54:43 172.23.0.1 notary-server[4480]: 2019/03/12 01:54:43 schema_migrations table does not exist, skip.
Mar 12 01:54:44 172.23.0.1 notary-server[4480]: 1/u initial (39.438461ms)
Mar 12 01:54:44 172.23.0.1 notary-server[4480]: 2/u changefeed (69.660085ms)
Mar 12 01:54:44 172.23.0.1 notary-server[4480]: notaryserver database migrated to latest version
Mar 12 01:54:44 172.23.0.1 notary-server[4480]: {"level":"info","msg":"Version: 0.6.1, Git commit: d6e1431f","time":"2019-03-12T01:54:44Z"}
Mar 12 01:54:44 172.23.0.1 notary-server[4480]: {"level":"info","msg":"Using remote signing service","time":"2019-03-12T01:54:44Z"}
Mar 12 01:54:44 172.23.0.1 notary-server[4480]: {"level":"info","msg":"Using postgres backend","time":"2019-03-12T01:54:44Z"}
Mar 12 01:54:44 172.23.0.1 notary-server[4480]: 2019/03/12 01:54:44 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 172.27.0.2:7899: connect: connection refused"; Reconnecting to {notarysigner:7899 <nil>}
Mar 12 01:54:44 172.23.0.1 notary-server[4480]: {"level":"info","msg":"Starting Server","time":"2019-03-12T01:54:44Z"}
Mar 12 01:54:44 172.23.0.1 notary-server[4480]: {"level":"info","msg":"Starting on :4443","time":"2019-03-12T01:54:44Z"}
Mar 12 03:08:45 172.18.0.1 notary-server[795]: 2019/03/12 03:08:45 Updating database.
Mar 12 03:08:45 172.18.0.1 notary-server[795]: 2019/03/12 03:08:45 schema_migrations table does not require update, skip.
Mar 12 03:08:45 172.18.0.1 notary-server[795]: no change
Mar 12 03:08:46 172.18.0.1 notary-server[795]: notaryserver database migrated to latest version
Mar 12 03:08:46 172.18.0.1 notary-server[795]: {"level":"info","msg":"Version: 0.6.1, Git commit: d6e1431f","time":"2019-03-12T03:08:46Z"}
Mar 12 03:08:46 172.18.0.1 notary-server[795]: {"level":"info","msg":"Using remote signing service","time":"2019-03-12T03:08:46Z"}
Mar 12 03:08:46 172.18.0.1 notary-server[795]: {"level":"info","msg":"Using postgres backend","time":"2019-03-12T03:08:46Z"}
Mar 12 03:08:46 172.18.0.1 notary-server[795]: {"level":"info","msg":"Starting Server","time":"2019-03-12T03:08:46Z"}
Mar 12 03:08:46 172.18.0.1 notary-server[795]: {"level":"info","msg":"Starting on :4443","time":"2019-03-12T03:08:46Z"}

notary-signer.log

Mar 11 08:16:08 172.18.0.1 notary-signer[4480]: 2019/03/11 08:16:08 Updating database.
Mar 11 08:16:08 172.18.0.1 notary-signer[4480]: 2019/03/11 08:16:08 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:09 172.18.0.1 notary-signer[4480]: 2019/03/11 08:16:09 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:10 172.18.0.1 notary-signer[4480]: 2019/03/11 08:16:10 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:11 172.18.0.1 notary-signer[4480]: 2019/03/11 08:16:11 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:12 172.18.0.1 notary-signer[4480]: 2019/03/11 08:16:12 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:13 172.18.0.1 notary-signer[4480]: 2019/03/11 08:16:13 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:14 172.18.0.1 notary-signer[4480]: 2019/03/11 08:16:14 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:15 172.18.0.1 notary-signer[4480]: 2019/03/11 08:16:15 Failed to Ping DB, sleep for 1 second.
Mar 11 08:16:16 172.18.0.1 notary-signer[4480]: 2019/03/11 08:16:16 schema_migrations table does not exist, skip.
Mar 11 08:16:16 172.18.0.1 notary-signer[4480]: 1/u initial (42.675995ms)
Mar 11 08:16:16 172.18.0.1 notary-signer[4480]: notarysigner database migrated to latest version
Mar 11 08:16:16 172.18.0.1 notary-signer[4480]: {"level":"info","msg":"Version: 0.6.1, Git commit: d6e1431f","time":"2019-03-11T08:16:16Z"}
Mar 11 08:16:16 172.18.0.1 notary-signer[4480]: {"level":"debug","msg":"Default Alias: defaultalias","time":"2019-03-11T08:16:16Z"}
Mar 12 01:54:37 172.23.0.1 notary-signer[4480]: 2019/03/12 01:54:37 Updating database.
Mar 12 01:54:37 172.23.0.1 notary-signer[4480]: 2019/03/12 01:54:37 Failed to Ping DB, sleep for 1 second.
Mar 12 01:54:38 172.23.0.1 notary-signer[4480]: 2019/03/12 01:54:38 Failed to Ping DB, sleep for 1 second.
Mar 12 01:54:39 172.23.0.1 notary-signer[4480]: 2019/03/12 01:54:39 Failed to Ping DB, sleep for 1 second.
Mar 12 01:54:40 172.23.0.1 notary-signer[4480]: 2019/03/12 01:54:40 Failed to Ping DB, sleep for 1 second.
Mar 12 01:54:41 172.23.0.1 notary-signer[4480]: 2019/03/12 01:54:41 Failed to Ping DB, sleep for 1 second.
Mar 12 01:54:42 172.23.0.1 notary-signer[4480]: 2019/03/12 01:54:42 Failed to Ping DB, sleep for 1 second.
Mar 12 01:54:43 172.23.0.1 notary-signer[4480]: 2019/03/12 01:54:43 schema_migrations table does not exist, skip.
Mar 12 01:54:44 172.23.0.1 notary-signer[4480]: 1/u initial (27.695073ms)
Mar 12 01:54:44 172.23.0.1 notary-signer[4480]: notarysigner database migrated to latest version
Mar 12 01:54:44 172.23.0.1 notary-signer[4480]: {"level":"info","msg":"Version: 0.6.1, Git commit: d6e1431f","time":"2019-03-12T01:54:44Z"}
Mar 12 01:54:44 172.23.0.1 notary-signer[4480]: {"level":"debug","msg":"Default Alias: defaultalias","time":"2019-03-12T01:54:44Z"}
Mar 12 03:08:44 172.18.0.1 notary-signer[795]: 2019/03/12 03:08:44 Updating database.
Mar 12 03:08:44 172.18.0.1 notary-signer[795]: 2019/03/12 03:08:44 schema_migrations table does not require update, skip.
Mar 12 03:08:44 172.18.0.1 notary-signer[795]: no change
Mar 12 03:08:44 172.18.0.1 notary-signer[795]: notarysigner database migrated to latest version
Mar 12 03:08:44 172.18.0.1 notary-signer[795]: {"level":"info","msg":"Version: 0.6.1, Git commit: d6e1431f","time":"2019-03-12T03:08:44Z"}
Mar 12 03:08:44 172.18.0.1 notary-signer[795]: {"level":"debug","msg":"Default Alias: defaultalias","time":"2019-03-12T03:08:44Z"}

portal.log

Mar 11 08:16:42 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:16:42 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:17:12 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:17:12 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:17:42 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:17:42 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:18:12 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:18:12 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:18:42 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:18:42 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:19:12 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:19:12 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:19:42 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:19:42 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:20:12 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:20:12 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:20:43 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:20:43 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:21:13 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:21:13 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:21:43 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:21:43 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:22:13 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:22:13 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:22:43 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:22:43 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:23:13 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:23:13 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:23:43 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:23:43 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:24:13 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:24:13 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:24:44 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:24:44 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:25:14 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:25:14 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:25:44 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:25:44 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:26:14 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:26:14 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:26:44 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:26:44 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:27:14 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:27:14 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:27:44 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:27:44 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:28:14 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:28:14 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:28:44 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:28:44 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:29:15 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:29:15 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:29:45 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:29:45 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:30:15 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:30:15 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:30:45 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:30:45 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:31:15 172.18.0.1 portal[4480]: 127.0.0.1 - - [11/Mar/2019:08:31:15 +0000] "GET / HTTP/1.1" 200 693 "-" "curl/7.59.0"
Mar 11 08:31:32 172.18.0.1 portal[4480]: 172.18.0.11 - - [11/Mar/2019:08:31:32 +0000] "GET / HTTP/1.1" 200 693 "-" "Mozilla/5.0 (Windows NT 6.1; rv:65.0) Gecko/20100101 Firefox/65.0"

postgresql.log

Mar 11 08:16:07 172.18.0.1 postgresql[4480]: The files belonging to this database system will be owned by user "postgres".
Mar 11 08:16:07 172.18.0.1 postgresql[4480]: This user must also own the server process.
Mar 11 08:16:07 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:07 172.18.0.1 postgresql[4480]: The database cluster will be initialized with locales
Mar 11 08:16:07 172.18.0.1 postgresql[4480]:   COLLATE:  en_US.UTF-8
Mar 11 08:16:07 172.18.0.1 postgresql[4480]:   CTYPE:    en_US.UTF-8
Mar 11 08:16:07 172.18.0.1 postgresql[4480]:   MESSAGES: C
Mar 11 08:16:07 172.18.0.1 postgresql[4480]:   MONETARY: C
Mar 11 08:16:07 172.18.0.1 postgresql[4480]:   NUMERIC:  C
Mar 11 08:16:07 172.18.0.1 postgresql[4480]:   TIME:     C
Mar 11 08:16:07 172.18.0.1 postgresql[4480]: The default text search configuration will be set to "english".
Mar 11 08:16:07 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:07 172.18.0.1 postgresql[4480]: Data page checksums are disabled.
Mar 11 08:16:07 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:07 172.18.0.1 postgresql[4480]: fixing permissions on existing directory /var/lib/postgresql/data ... ok
Mar 11 08:16:07 172.18.0.1 postgresql[4480]: creating subdirectories ... ok
Mar 11 08:16:08 172.18.0.1 postgresql[4480]: selecting default max_connections ... 100
Mar 11 08:16:08 172.18.0.1 postgresql[4480]: selecting default shared_buffers ... 128MB
Mar 11 08:16:08 172.18.0.1 postgresql[4480]: selecting dynamic shared memory implementation ... posix
Mar 11 08:16:10 172.18.0.1 postgresql[4480]: creating configuration files ... ok
Mar 11 08:16:11 172.18.0.1 postgresql[4480]: running bootstrap script ... ok
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: performing post-bootstrap initialization ... ok
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: syncing data to disk ... ok
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: Success. You can now start the database server using:
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:12 172.18.0.1 postgresql[4480]:     pg_ctl -D /var/lib/postgresql/data -l logfile start
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: WARNING: enabling "trust" authentication for local connections
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: You can change this by editing pg_hba.conf or using the option -A, or
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: --auth-local and --auth-host, the next time you run initdb.
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: root
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: waiting for server to start....LOG:  database system was shut down at 2019-03-11 08:16:12 UTC
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: LOG:  MultiXact member wraparound protections are now enabled
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: LOG:  database system is ready to accept connections
Mar 11 08:16:12 172.18.0.1 postgresql[4480]: LOG:  autovacuum launcher started
Mar 11 08:16:13 172.18.0.1 postgresql[4480]:  done
Mar 11 08:16:13 172.18.0.1 postgresql[4480]: server started
Mar 11 08:16:13 172.18.0.1 postgresql[4480]: ALTER ROLE
Mar 11 08:16:13 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:13 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:13 172.18.0.1 postgresql[4480]: /entrypoint.sh: running /docker-entrypoint-initdb.d/initial-notaryserver.sql
Mar 11 08:16:13 172.18.0.1 postgresql[4480]: CREATE DATABASE
Mar 11 08:16:13 172.18.0.1 postgresql[4480]: CREATE ROLE
Mar 11 08:16:13 172.18.0.1 postgresql[4480]: ALTER ROLE
Mar 11 08:16:13 172.18.0.1 postgresql[4480]: GRANT
Mar 11 08:16:13 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:13 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:13 172.18.0.1 postgresql[4480]: /entrypoint.sh: running /docker-entrypoint-initdb.d/initial-notarysigner.sql
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: CREATE DATABASE
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: CREATE ROLE
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: ALTER ROLE
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: GRANT
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: /entrypoint.sh: running /docker-entrypoint-initdb.d/initial-registry.sql
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: CREATE DATABASE
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: You are now connected to database "registry" as user "postgres".
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: CREATE TABLE
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: waiting for server to shut down...LOG:  received fast shutdown request
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: LOG:  aborting any active transactions
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: LOG:  autovacuum launcher shutting down
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: .LOG:  shutting down
Mar 11 08:16:14 172.18.0.1 postgresql[4480]: LOG:  database system is shut down
Mar 11 08:16:15 172.18.0.1 postgresql[4480]:  done
Mar 11 08:16:15 172.18.0.1 postgresql[4480]: server stopped
Mar 11 08:16:15 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:15 172.18.0.1 postgresql[4480]: PostgreSQL init process complete; ready for start up.
Mar 11 08:16:15 172.18.0.1 postgresql[4480]: 
Mar 11 08:16:15 172.18.0.1 postgresql[4480]: LOG:  database system was shut down at 2019-03-11 08:16:14 UTC
Mar 11 08:16:15 172.18.0.1 postgresql[4480]: LOG:  MultiXact member wraparound protections are now enabled
Mar 11 08:16:15 172.18.0.1 postgresql[4480]: LOG:  database system is ready to accept connections
Mar 11 08:16:15 172.18.0.1 postgresql[4480]: LOG:  autovacuum launcher started
Mar 11 08:16:16 172.18.0.1 postgresql[4480]: LOG:  incomplete startup packet
Mar 11 08:16:20 172.18.0.1 postgresql[4480]: message repeated 3 times: [ LOG:  incomplete startup packet]
Mar 12 01:09:30 172.18.0.1 postgresql[4480]: Session terminated, terminating shell... ...killed.
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: The files belonging to this database system will be owned by user "postgres".
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: This user must also own the server process.
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: The database cluster will be initialized with locales
Mar 12 01:54:34 172.23.0.1 postgresql[4480]:   COLLATE:  en_US.UTF-8
Mar 12 01:54:34 172.23.0.1 postgresql[4480]:   CTYPE:    en_US.UTF-8
Mar 12 01:54:34 172.23.0.1 postgresql[4480]:   MESSAGES: C
Mar 12 01:54:34 172.23.0.1 postgresql[4480]:   MONETARY: C
Mar 12 01:54:34 172.23.0.1 postgresql[4480]:   NUMERIC:  C
Mar 12 01:54:34 172.23.0.1 postgresql[4480]:   TIME:     C
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: The default text search configuration will be set to "english".
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: Data page checksums are disabled.
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: fixing permissions on existing directory /var/lib/postgresql/data ... ok
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: creating subdirectories ... ok
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: selecting default max_connections ... 100
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: selecting default shared_buffers ... 128MB
Mar 12 01:54:34 172.23.0.1 postgresql[4480]: selecting dynamic shared memory implementation ... posix
Mar 12 01:54:36 172.23.0.1 postgresql[4480]: creating configuration files ... ok
Mar 12 01:54:38 172.23.0.1 postgresql[4480]: running bootstrap script ... ok
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: performing post-bootstrap initialization ... ok
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: syncing data to disk ... ok
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: Success. You can now start the database server using:
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:39 172.23.0.1 postgresql[4480]:     pg_ctl -D /var/lib/postgresql/data -l logfile start
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: WARNING: enabling "trust" authentication for local connections
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: You can change this by editing pg_hba.conf or using the option -A, or
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: --auth-local and --auth-host, the next time you run initdb.
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: root
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: waiting for server to start....LOG:  database system was shut down at 2019-03-12 01:54:39 UTC
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: LOG:  MultiXact member wraparound protections are now enabled
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: LOG:  database system is ready to accept connections
Mar 12 01:54:39 172.23.0.1 postgresql[4480]: LOG:  autovacuum launcher started
Mar 12 01:54:40 172.23.0.1 postgresql[4480]:  done
Mar 12 01:54:40 172.23.0.1 postgresql[4480]: server started
Mar 12 01:54:40 172.23.0.1 postgresql[4480]: ALTER ROLE
Mar 12 01:54:40 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:40 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:40 172.23.0.1 postgresql[4480]: /entrypoint.sh: running /docker-entrypoint-initdb.d/initial-notaryserver.sql
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: CREATE DATABASE
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: CREATE ROLE
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: ALTER ROLE
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: GRANT
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: /entrypoint.sh: running /docker-entrypoint-initdb.d/initial-notarysigner.sql
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: CREATE DATABASE
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: CREATE ROLE
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: ALTER ROLE
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: GRANT
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: /entrypoint.sh: running /docker-entrypoint-initdb.d/initial-registry.sql
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: CREATE DATABASE
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: You are now connected to database "registry" as user "postgres".
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: CREATE TABLE
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: LOG:  received fast shutdown request
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: LOG:  aborting any active transactions
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: waiting for server to shut down....LOG:  autovacuum launcher shutting down
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: LOG:  shutting down
Mar 12 01:54:41 172.23.0.1 postgresql[4480]: LOG:  database system is shut down
Mar 12 01:54:42 172.23.0.1 postgresql[4480]:  done
Mar 12 01:54:42 172.23.0.1 postgresql[4480]: server stopped
Mar 12 01:54:42 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:42 172.23.0.1 postgresql[4480]: PostgreSQL init process complete; ready for start up.
Mar 12 01:54:42 172.23.0.1 postgresql[4480]: 
Mar 12 01:54:42 172.23.0.1 postgresql[4480]: LOG:  database system was shut down at 2019-03-12 01:54:41 UTC
Mar 12 01:54:42 172.23.0.1 postgresql[4480]: LOG:  MultiXact member wraparound protections are now enabled
Mar 12 01:54:42 172.23.0.1 postgresql[4480]: LOG:  database system is ready to accept connections
Mar 12 01:54:42 172.23.0.1 postgresql[4480]: LOG:  autovacuum launcher started
Mar 12 01:54:43 172.23.0.1 postgresql[4480]: LOG:  incomplete startup packet
Mar 12 01:54:50 172.23.0.1 postgresql[4480]: message repeated 3 times: [ LOG:  incomplete startup packet]
Mar 12 03:00:47 172.23.0.1 postgresql[4480]: 
Mar 12 03:00:49 172.23.0.1 postgresql[4480]: Session terminated, terminating shell... ...killed.
Mar 12 03:01:05 172.23.0.1 postgresql[22668]: LOG:  database system was interrupted; last known up at 2019-03-12 02:59:43 UTC
Mar 12 03:01:05 172.23.0.1 postgresql[22668]: LOG:  database system was not properly shut down; automatic recovery in progress
Mar 12 03:01:05 172.23.0.1 postgresql[22668]: LOG:  redo starts at 0/16F3148
Mar 12 03:01:05 172.23.0.1 postgresql[22668]: LOG:  invalid record length at 0/16F3788: wanted 24, got 0
Mar 12 03:01:05 172.23.0.1 postgresql[22668]: LOG:  redo done at 0/16F3760
Mar 12 03:01:05 172.23.0.1 postgresql[22668]: LOG:  last completed transaction was at log time 2019-03-12 03:00:46.515664+00
Mar 12 03:01:05 172.23.0.1 postgresql[22668]: LOG:  MultiXact member wraparound protections are now enabled
Mar 12 03:01:05 172.23.0.1 postgresql[22668]: LOG:  database system is ready to accept connections
Mar 12 03:01:05 172.23.0.1 postgresql[22668]: LOG:  autovacuum launcher started
Mar 12 03:01:06 172.23.0.1 postgresql[22668]: LOG:  incomplete startup packet
Mar 12 03:01:16 172.23.0.1 postgresql[22668]: message repeated 3 times: [ LOG:  incomplete startup packet]
Mar 12 03:05:02 172.23.0.1 postgresql[22668]: 
Mar 12 03:05:04 172.23.0.1 postgresql[22668]: Session terminated, terminating shell... ...killed.
Mar 12 03:08:43 172.18.0.1 postgresql[795]: LOG:  database system was interrupted; last known up at 2019-03-12 03:01:05 UTC
Mar 12 03:08:43 172.18.0.1 postgresql[795]: LOG:  database system was not properly shut down; automatic recovery in progress
Mar 12 03:08:43 172.18.0.1 postgresql[795]: LOG:  redo starts at 0/16F37F8
Mar 12 03:08:43 172.18.0.1 postgresql[795]: LOG:  invalid record length at 0/16F7958: wanted 24, got 0
Mar 12 03:08:43 172.18.0.1 postgresql[795]: LOG:  redo done at 0/16F7930
Mar 12 03:08:43 172.18.0.1 postgresql[795]: LOG:  last completed transaction was at log time 2019-03-12 03:02:36.117211+00
Mar 12 03:08:43 172.18.0.1 postgresql[795]: LOG:  MultiXact member wraparound protections are now enabled
Mar 12 03:08:43 172.18.0.1 postgresql[795]: LOG:  database system is ready to accept connections
Mar 12 03:08:43 172.18.0.1 postgresql[795]: LOG:  autovacuum launcher started
Mar 12 03:08:44 172.18.0.1 postgresql[795]: LOG:  incomplete startup packet
Mar 12 03:08:44 172.18.0.1 postgresql[795]: message repeated 3 times: [ LOG:  incomplete startup packet]

proxy.log

Mar 11 08:16:43 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:18:13 172.18.0.1 proxy[4480]: message repeated 3 times: [ 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .]
Mar 11 08:18:43 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:19:14 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:19:44 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:20:14 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:20:44 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:21:14 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:21:44 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:22:14 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:22:44 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:23:15 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:23:45 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:24:15 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:24:45 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:25:15 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:25:45 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:26:15 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:26:45 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:27:16 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:27:46 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:28:16 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:28:46 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:29:16 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:29:46 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:30:16 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:30:46 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:31:17 172.18.0.1 proxy[4480]: 127.0.0.1 - "GET / HTTP/1.1" 308 188 "-" "curl/7.59.0" 0.000 - .
Mar 11 08:31:32 172.18.0.1 proxy[4480]: 192.168.188.10 - "GET / HTTP/1.1" 200 693 "

redis.log

Mar 11 08:16:06 172.18.0.1 redis[4480]: 8:C 11 Mar 08:16:06.135 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Mar 11 08:16:06 172.18.0.1 redis[4480]: 8:C 11 Mar 08:16:06.135 # Redis version=4.0.10, bits=64, commit=00000000, modified=0, pid=8, just started
Mar 11 08:16:06 172.18.0.1 redis[4480]: 8:C 11 Mar 08:16:06.135 # Configuration loaded
Mar 11 08:16:06 172.18.0.1 redis[4480]:                 _._                                                  
Mar 11 08:16:06 172.18.0.1 redis[4480]:            _.-``__ ''-._                                             
Mar 11 08:16:06 172.18.0.1 redis[4480]:       _.-``    `.  `_.  ''-._           Redis 4.0.10 (00000000/0) 64 bit
Mar 11 08:16:06 172.18.0.1 redis[4480]:   .-`` .-```.  ```\/    _.,_ ''-._                                   
Mar 11 08:16:06 172.18.0.1 redis[4480]:  (    '      ,       .-`  | `,    )     Running in standalone mode
Mar 11 08:16:06 172.18.0.1 redis[4480]:  |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
Mar 11 08:16:06 172.18.0.1 redis[4480]:  |    `-._   `._    /     _.-'    |     PID: 8
Mar 11 08:16:06 172.18.0.1 redis[4480]:   `-._    `-._  `-./  _.-'    _.-'                                   
Mar 11 08:16:06 172.18.0.1 redis[4480]:  |`-._`-._    `-.__.-'    _.-'_.-'|                                  
Mar 11 08:16:06 172.18.0.1 redis[4480]:  |    `-._`-._        _.-'_.-'    |           http://redis.io        
Mar 11 08:16:06 172.18.0.1 redis[4480]:   `-._    `-._`-.__.-'_.-'    _.-'                                   
Mar 11 08:16:06 172.18.0.1 redis[4480]:  |`-._`-._    `-.__.-'    _.-'_.-'|                                  
Mar 11 08:16:06 172.18.0.1 redis[4480]:  |    `-._`-._        _.-'_.-'    |                                  
Mar 11 08:16:06 172.18.0.1 redis[4480]:   `-._    `-._`-.__.-'_.-'    _.-'                                   
Mar 11 08:16:06 172.18.0.1 redis[4480]:       `-._    `-.__.-'    _.-'                                       
Mar 11 08:16:06 172.18.0.1 redis[4480]:           `-._        _.-'                                           
Mar 11 08:16:06 172.18.0.1 redis[4480]:               `-.__.-'                                               
Mar 11 08:16:06 172.18.0.1 redis[4480]: 
Mar 11 08:16:06 172.18.0.1 redis[4480]: 8:M 11 Mar 08:16:06.137 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
Mar 11 08:16:06 172.18.0.1 redis[4480]: 8:M 11 Mar 08:16:06.137 # Server initialized
Mar 11 08:16:06 172.18.0.1 redis[4480]: 8:M 11 Mar 08:16:06.137 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Mar 11 08:16:06 172.18.0.1 redis[4480]: 8:M 11 Mar 08:16:06.137 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
Mar 11 08:16:06 172.18.0.1 redis[4480]: 8:M 11 Mar 08:16:06.137 * Ready to accept connections
Mar 11 08:21:07 172.18.0.1 redis[4480]: 8:M 11 Mar 08:21:07.027 * 10 changes in 300 seconds. Saving...
Mar 11 08:21:07 172.18.0.1 redis[4480]: 8:M 11 Mar 08:21:07.028 * Background saving started by pid 12
Mar 11 08:21:07 172.18.0.1 redis[4480]: 12:C 11 Mar 08:21:07.030 * DB saved on disk
Mar 11 08:21:07 172.18.0.1 redis[4480]: 12:C 11 Mar 08:21:07.031 * RDB: 0 MB of memory used by copy-on-write
Mar 11 08:21:07 172.18.0.1 redis[4480]: 8:M 11 Mar 08:21:07.129 * Background saving terminated with success

registry.log

Mar 11 08:16:08 172.18.0.1 registry[4480]: time="2019-03-11T08:16:08.293465177Z" level=info msg="debug server listening localhost:5001" 
Mar 11 08:16:08 172.18.0.1 registry[4480]: time="2019-03-11T08:16:08.294934665Z" level=info msg="configuring endpoint harbor (http://core:8080/service/notifications), timeout=3s, headers=map[]" go.version=go1.7.3 instance.id=fb4470e3-f08b-4920-88b5-2cd767440f71 service=registry version=v2.6.2 
Mar 11 08:16:08 172.18.0.1 registry[4480]: time="2019-03-11T08:16:08.335356627Z" level=info msg="using redis blob descriptor cache" go.version=go1.7.3 instance.id=fb4470e3-f08b-4920-88b5-2cd767440f71 service=registry version=v2.6.2 
Mar 11 08:16:08 172.18.0.1 registry[4480]: time="2019-03-11T08:16:08.335834697Z" level=info msg="listening on [::]:5000" go.version=go1.7.3 instance.id=fb4470e3-f08b-4920-88b5-2cd767440f71 service=registry version=v2.6.2 
Mar 11 08:16:38 172.18.0.1 registry[4480]: 127.0.0.1 - - [11/Mar/2019:08:16:38 +0000] "GET / HTTP/1.1" 200 0 "" "curl/7.59.0"
Mar 11 08:17:08 172.18.0.1 registry[4480]: 127.0.0.1 - - [11/Mar/2019:08:17:08 +0000] "GET / HTTP/1.1" 200 0 "" "curl/7.59.0"
Mar 11 08:17:38 172.18.0.1 registry[4480]: 127.0.0.1 - - [11/Mar/2019:08:17:38 +0000] "GET / HTTP/1.1" 200 0 "" "curl/7.59.0"
Mar 11 08:18:08 172.18.0.1 registry[4480]: 127.0.0.1 - - [11/Mar/2019:08:18:08 +0000] "GET / HTTP/1.1" 200 0 "" "curl/7.59.0"

registryctl.log

Mar 11 08:16:35 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:16:35 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:17:05 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:17:05 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:17:36 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:17:36 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:18:06 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:18:06 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:18:36 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:18:36 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:19:06 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:19:06 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:19:36 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:19:36 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:20:06 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:20:06 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:20:36 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:20:36 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:21:06 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:21:06 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:21:37 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:21:37 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:22:07 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:22:07 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:22:37 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:22:37 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:23:07 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:23:07 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:23:37 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:23:37 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:24:07 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:24:07 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:24:37 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:24:37 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:25:07 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:25:07 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:25:38 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:25:38 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:26:08 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:26:08 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:26:38 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:26:38 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:27:08 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:27:08 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:27:38 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:27:38 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:28:08 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:28:08 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:28:38 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:28:38 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:29:08 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:29:08 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:29:39 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:29:39 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:30:09 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:30:09 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:30:39 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:30:39 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:31:09 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:31:09 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:31:39 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:31:39 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:32:09 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:32:09 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:32:39 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:32:39 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:33:09 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:33:09 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:33:40 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:33:40 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:34:10 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:34:10 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:34:40 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:34:40 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:35:10 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:35:10 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:35:40 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:35:40 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:36:10 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:36:10 +0000] "GET /api/health HTTP/1.1" 200 9
Mar 11 08:36:40 172.18.0.1 registryctl[4480]: 127.0.0.1 - - [11/Mar/2019:08:36:40 +0000] "GET /api/health HTTP/1.1" 200 9

@lijianfeng1993

me too

@ninjadq
Member

ninjadq commented Mar 14, 2019

Hi @Jawenba,
Can you provide the inspect info of the harbor-log container?

@Jawenba
Author

Jawenba commented Mar 14, 2019

Hi @ninjadq, here is the inspect info:

$ docker inspect harbor-log
[
    {
        "Id": "f54809f3471a45c7b9ee3b52018290b68cb0a44645537501fc08d1847b38e0eb",
        "Created": "2019-03-13T05:59:13.840844952Z",
        "Path": "/bin/sh",
        "Args": [
            "-c",
            "/usr/local/bin/start.sh"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 22704,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2019-03-13T05:59:18.097558457Z",
            "FinishedAt": "0001-01-01T00:00:00Z",
            "Health": {
                "Status": "unhealthy",
                "FailingStreak": 1401,
                "Log": [
                    {
                        "Start": "2019-03-14T05:16:06.070947528Z",
                        "End": "2019-03-14T05:16:36.071205889Z",
                        "ExitCode": -1,
                        "Output": "Health check exceeded timeout (30s)"
                    },
                    {
                        "Start": "2019-03-14T05:17:06.080257082Z",
                        "End": "2019-03-14T05:17:36.080500626Z",
                        "ExitCode": -1,
                        "Output": "Health check exceeded timeout (30s)"
                    },
                    {
                        "Start": "2019-03-14T05:18:06.088291796Z",
                        "End": "2019-03-14T05:18:36.088675077Z",
                        "ExitCode": -1,
                        "Output": "Health check exceeded timeout (30s)"
                    },
                    {
                        "Start": "2019-03-14T05:19:06.099157248Z",
                        "End": "2019-03-14T05:19:36.099483477Z",
                        "ExitCode": -1,
                        "Output": "Health check exceeded timeout (30s)"
                    },
                    {
                        "Start": "2019-03-14T05:20:06.108429697Z",
                        "End": "2019-03-14T05:20:36.108769163Z",
                        "ExitCode": -1,
                        "Output": "Health check exceeded timeout (30s)"
                    }
                ]
            }
        },
        "Image": "sha256:bf4916eef530bda0f9bb89a046df9fd180de8a624d57513cebefc0a4ac6f4ef5",
        "ResolvConfPath": "/var/lib/docker/containers/f54809f3471a45c7b9ee3b52018290b68cb0a44645537501fc08d1847b38e0eb/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/f54809f3471a45c7b9ee3b52018290b68cb0a44645537501fc08d1847b38e0eb/hostname",
        "HostsPath": "/var/lib/docker/containers/f54809f3471a45c7b9ee3b52018290b68cb0a44645537501fc08d1847b38e0eb/hosts",
        "LogPath": "/var/lib/docker/containers/f54809f3471a45c7b9ee3b52018290b68cb0a44645537501fc08d1847b38e0eb/f54809f3471a45c7b9ee3b52018290b68cb0a44645537501fc08d1847b38e0eb-json.log",
        "Name": "/harbor-log",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "docker-default",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/var/log/harbor:/var/log/docker:z",
                "/home/user/harbor/common/config/log:/etc/logrotate.d:z"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "harbor_harbor",
            "PortBindings": {
                "10514/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "1514"
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "always",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": [],
            "CapAdd": [
                "CHOWN",
                "DAC_OVERRIDE",
                "SETGID",
                "SETUID"
            ],
            "CapDrop": [
                "ALL"
            ],
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": [
                "."
            ],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/asound",
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/b65b0255e40995b5085f05be58183921a982ad0e3d5927e00eb7ac47eb09ab31-init/diff:/var/lib/docker/overlay2/9701c5dc437d1ba16c18be337c0f3b1b95e1d5c5bd541d24c9d6cc8e01f522ae/diff:/var/lib/docker/overlay2/dc9317f215c8ded176c1edd88b3f4b8e5660ab3596b0b4adc9df909d7b8f8181/diff:/var/lib/docker/overlay2/055261d004059619a24b0ac0ed265f88d374b362963683c5797e01baa18f1895/diff:/var/lib/docker/overlay2/6b460f2ef04124e8f53a2107ee64c42d6fcbdb7e832eb58aed7a72a0468efe8c/diff:/var/lib/docker/overlay2/15bed58edb5d0336c94d6d9e39c8762424b72176809b67f45e07657087647e13/diff:/var/lib/docker/overlay2/0aa99e16599099e80f5c18ef034ebf72b91d734944e03c68b517cf3ef8451166/diff:/var/lib/docker/overlay2/fe66274bc03d0de61b51b1df6c6e03542eaeb5d7ca29a7722a2ca96559ae4bc5/diff:/var/lib/docker/overlay2/9ff3f837323f9accd9e17fdaab55f3c33cc25f3f934a8a71eeb4bd054e088e3a/diff",
                "MergedDir": "/var/lib/docker/overlay2/b65b0255e40995b5085f05be58183921a982ad0e3d5927e00eb7ac47eb09ab31/merged",
                "UpperDir": "/var/lib/docker/overlay2/b65b0255e40995b5085f05be58183921a982ad0e3d5927e00eb7ac47eb09ab31/diff",
                "WorkDir": "/var/lib/docker/overlay2/b65b0255e40995b5085f05be58183921a982ad0e3d5927e00eb7ac47eb09ab31/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/var/log/harbor",
                "Destination": "/var/log/docker",
                "Mode": "z",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/home/user/harbor/common/config/log",
                "Destination": "/etc/logrotate.d",
                "Mode": "z",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "volume",
                "Name": "3a7a116c8d07c11bd374d47149738850afa5b8b1a0f6bdda0f50c272ea948da9",
                "Source": "/var/lib/docker/volumes/3a7a116c8d07c11bd374d47149738850afa5b8b1a0f6bdda0f50c272ea948da9/_data",
                "Destination": "/run",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "f54809f3471a",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "10514/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/sh",
                "-c",
                "/usr/local/bin/start.sh"
            ],
            "Healthcheck": {
                "Test": [
                    "CMD-SHELL",
                    "netstat -ltu|grep 10514"
                ]
            },
            "ArgsEscaped": true,
            "Image": "goharbor/harbor-log:v1.7.4",
            "Volumes": {
                "/etc/logrotate.d": {},
                "/etc/logrotate.d/": {},
                "/run/": {},
                "/var/log/docker": {},
                "/var/log/docker/": {}
            },
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {
                "build-date": "20190301",
                "com.docker.compose.config-hash": "54b0b768af810ea2a7474f5c1c227d71079421eee1da242f2d2a12df33f1e301",
                "com.docker.compose.container-number": "1",
                "com.docker.compose.oneoff": "False",
                "com.docker.compose.project": "harbor",
                "com.docker.compose.service": "log",
                "com.docker.compose.version": "1.23.2",
                "name": "Photon OS 2.0 Base Image",
                "vendor": "VMware"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "01eaf814c9857be6ab1280abfadbd2aa65ff3d42902f25d6fe144b21f0ce22ba",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "10514/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": "1514"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/01eaf814c985",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "harbor_harbor": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": [
                        "log",
                        "f54809f3471a"
                    ],
                    "NetworkID": "9d81840bbff1ad0fb0057c2c9f79cc8decb59783a965d96b523782e9086f353f",
                    "EndpointID": "836abaf3468ce2d0734e5588799daf3791fef917a5e66908f329904a465a4dcc",
                    "Gateway": "172.19.0.1",
                    "IPAddress": "172.19.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:13:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]

@ninjadq ninjadq removed the more-info-needed The issue author need to provide more details and context to the issue label Mar 18, 2019
@hlwanghl

I believe this commit can fix the issue.

As a workaround until the fix is released, you can simply override the health check command through docker-compose.yml, as in this PR.
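For reference, a minimal sketch of such an override. This is an assumption-based illustration, not the exact contents of the linked commit or PR: the service name, image tag, and port come from the inspect output above, while the replacement test command and timing values are my own guesses at a reasonable override.

```yaml
# docker-compose.yml (excerpt) -- the healthcheck key requires compose
# file format 2.1 or later; plain '2' will reject it.
version: '2.1'
services:
  log:
    image: goharbor/harbor-log:v1.7.4
    # Override the image's built-in HEALTHCHECK
    # ("netstat -ltu|grep 10514"), which was hanging past the
    # 30s timeout in the inspect output above. The exact replacement
    # command below is illustrative, not the one from the linked PR.
    healthcheck:
      test: ["CMD-SHELL", "netstat -ltun | grep 10514"]
      interval: 30s
      timeout: 10s
      retries: 3
```

After editing the file, running `docker-compose up -d` recreates the container so the new health check takes effect.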

@Jawenba
Author

Jawenba commented Mar 20, 2019

@hlwanghl Well done, thank you!
I modified docker-compose.yml as in the PR, and changed the other docker-compose YAML files to version: '2.1'. The issue is fixed.

docker-compose.yml
[screenshot]

docker ps
[screenshot]

@Jawenba Jawenba closed this as completed Mar 20, 2019