
HA Redis Cluster (cluster-mode enabled) Support : Service Unavailable 503 Error on Harbor #6075

Closed
jakirpatel opened this issue Oct 18, 2018 · 19 comments



jakirpatel commented Oct 18, 2018

I have deployed Harbor. While loading Harbor I sometimes get a 503 Service Unavailable error. This also happens during login or after logout.

The expected behavior is that Harbor returns 200 OK for authentication with LDAP. I am not sure about the cause, but I suspect it is something related to Redis (in terms of cookies/session storage).

While inspecting my home page I observed two API calls:

/api/systeminfo
/api/repositories/top

While logging out, Harbor made requests to these APIs and they returned 503 Service Unavailable. More interestingly, I am not sure whether these APIs rely on cookies at all; I can see no cookies in my browser while they are called.

Steps to reproduce the problem:
1. Deploy Harbor with Notary, Clair, and the chart service
2. Enable LDAP
3. Use external Postgres and Redis for Harbor (not containers)
4. Log in as the admin user (inspect the browser network calls)
5. Log out of the admin user (inspect the browser network calls)

Versions:
Please specify the versions of the following systems.

  • harbor version: v1.6.0-66709daa
  • docker engine version: 18.06.1-ce
  • docker-compose version: 1.23.0-rc1
  • postgres version: 10.2

Logs :

adminserver:

Oct 18 06:39:40 192.168.96.1 adminserver[19643]: 192.168.96.7 - - [18/Oct/2018:06:39:40 +0000] "GET /api/configurations HTTP/1.1" 200 1920
Oct 18 06:39:41 192.168.96.1 adminserver[19643]: 192.168.96.7 - - [18/Oct/2018:06:39:41 +0000] "GET /api/configurations HTTP/1.1" 200 1920
Oct 18 06:39:48 192.168.96.1 adminserver[19643]: 192.168.96.7 - - [18/Oct/2018:06:39:48 +0000] "GET /api/configurations HTTP/1.1" 200 1920
Oct 18 06:39:49 192.168.96.1 adminserver[19643]: 192.168.96.7 - - [18/Oct/2018:06:39:49 +0000] "GET /api/configurations HTTP/1.1" 200 1920
Oct 18 06:39:50 192.168.96.1 adminserver[19643]: 192.168.96.7 - - [18/Oct/2018:06:39:50 +0000] "GET /api/systeminfo/capacity HTTP/1.1" 200 42
Oct 18 06:40:08 192.168.96.1 adminserver[19643]: 127.0.0.1 - - [18/Oct/2018:06:40:08 +0000] "GET /api/ping HTTP/1.1" 200 6

proxy log:

Oct 18 06:44:24 192.168.96.1 proxy[19643]: <ip> - "GET /log_out HTTP/1.1" 200 0 "https://dev-harbor.hnd.local/harbor/projects" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0" 0.002 0.002 .
Oct 18 06:44:24 192.168.96.1 proxy[19643]: <ip>- "GET /api/repositories/top HTTP/1.1" 503 1952 "https://dev-harbor.hnd.local/harbor/sign-in?signout=true" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0" 0.001 0.001 .
Oct 18 06:44:24 192.168.96.1 proxy[19643]: <ip> - "GET /api/systeminfo HTTP/1.1" 503 1952 "https://dev-harbor.hnd.local/harbor/sign-in?signout=true" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0" 0.001 0.001 .
Oct 18 06:44:41 192.168.96.1 proxy[19643]: 127.0.0.1 - "GET / HTTP/1.1" 301 185 "-" "curl/7.59.0" 0.000 - .
Oct 18 06:44:49 192.168.96.1 proxy[19643]: <ip> - "GET /harbor/sign-in?signout=true HTTP/1.1" 503 1952 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0" 0.001 0.001 .
Oct 18 06:44:57 192.168.96.1 proxy[19643]: <ip> - "GET /harbor/sign-in HTTP/1.1" 503 1952 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0" 0.000 0.000 .

Error on UI:

[Screenshot: 503 Service Unavailable error page]

FYI -- I redacted sensitive information in the proxy log; for example, <ip> represents the IP of the machine.

@jakirpatel
Author

I am pretty sure the problem is caused by the external Redis cluster. I tested by deploying a standalone Redis server, and Harbor works fine with a single-instance Redis. So it would be helpful to document the version compatibility and the type of Redis deployment supported by Harbor's Redis client.

@reasonerjt
Contributor

@jakirpatel could you provide complete logs to help us better debug?

@jakirpatel
Author

jakirpatel commented Oct 19, 2018

@reasonerjt
Which component's logs do you want?

Here is the ui.log:

Oct 19 07:03:24 172.18.0.1 ui[19643]: 2018/10/19 07:03:24 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[42m 200 #033[0m|    575.008µs|   match|#033[44m GET     #033[0m /log_out   r:/log_out#033[0m
Oct 19 07:03:24 172.18.0.1 ui[19643]: 2018/10/19 07:03:24 #033[1;31m[E] [server.go:2619] MOVED 11614 192.168.0.2:6379#033[0m
Oct 19 07:03:24 172.18.0.1 ui[19643]: 2018/10/19 07:03:24 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[41m 503 #033[0m|    404.566µs| nomatch|#033[44m GET     #033[0m /api/systeminfo#033[0m
Oct 19 07:03:24 172.18.0.1 ui[19643]: 2018/10/19 07:03:24 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[42m 200 #033[0m|   1.403744ms|   match|#033[44m GET     #033[0m /api/repositories/top   r:/api/repositories/top#033[0m
Oct 19 07:03:24 172.18.0.1 ui[19643]: 2018/10/19 07:03:24 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|      31.01µs|   match|#033[44m GET     #033[0m /static/images/harbor-black-logo.png#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[42m 200 #033[0m|    773.142µs|   match|#033[44m GET     #033[0m /harbor/sign-in   r:/harbor/*#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|  11.264107ms|   match|#033[44m GET     #033[0m /static/clarity-ui.min.css#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|  11.224795ms|   match|#033[44m GET     #033[0m /static/mutationobserver.min.js#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|  12.646368ms|   match|#033[44m GET     #033[0m /static/styles.css#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|  13.132588ms|   match|#033[44m GET     #033[0m /static/clarity-icons.min.css#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|  12.444669ms|   match|#033[44m GET     #033[0m /static/custom-elements.min.js#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|  13.936462ms|   match|#033[44m GET     #033[0m /static/prism-solarizedlight.css#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|  19.646251ms|   match|#033[44m GET     #033[0m /static/clarity-icons.min.js#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|  20.119759ms|   match|#033[44m GET     #033[0m /static/marked.js#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|  21.537023ms|   match|#033[44m GET     #033[0m /static/prism.js#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|  19.654583ms|   match|#033[44m GET     #033[0m /static/prism-yaml.min.js#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m| 153.069212ms|   match|#033[44m GET     #033[0m /static/build.min.js#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|     43.033µs|   match|#033[44m GET     #033[0m /i18n/lang/en-us-lang.json#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|     25.197µs|   match|#033[44m GET     #033[0m /static/setting.json#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[42m 200 #033[0m|   1.922972ms|   match|#033[44m GET     #033[0m /api/systeminfo   r:/api/systeminfo#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018-10-19T07:03:25Z [INFO] unauthorized
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[43m 401 #033[0m|    677.801µs|   match|#033[44m GET     #033[0m /api/users/current   r:/api/users/:id#033[0m
Oct 19 07:03:25 172.18.0.1 ui[19643]: 2018/10/19 07:03:25 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[42m 200 #033[0m|    573.654µs|   match|#033[44m GET     #033[0m /log_out   r:/log_out#033[0m
Oct 19 07:03:26 172.18.0.1 ui[19643]: 2018/10/19 07:03:26 #033[1;31m[E] [server.go:2619] MOVED 243 192.168.0.3:6399#033[0m
Oct 19 07:03:26 172.18.0.1 ui[19643]: 2018/10/19 07:03:26 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[41m 503 #033[0m|    466.101µs| nomatch|#033[44m GET     #033[0m /api/systeminfo#033[0m
Oct 19 07:03:26 172.18.0.1 ui[19643]: 2018/10/19 07:03:26 #033[1;31m[E] [server.go:2619] MOVED 14772 192.168.0.2:6399#033[0m
Oct 19 07:03:26 172.18.0.1 ui[19643]: 2018/10/19 07:03:26 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[41m 503 #033[0m|    349.191µs| nomatch|#033[44m GET     #033[0m /api/repositories/top#033[0m
Oct 19 07:03:26 172.18.0.1 ui[19643]: 2018/10/19 07:03:26 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[47m 304 #033[0m|     31.625µs|   match|#033[44m GET     #033[0m /static/images/harbor-black-logo.png#033[0m
Oct 19 07:03:26 172.18.0.1 ui[19643]: 2018/10/19 07:03:26 #033[1;31m[E] [server.go:2619] MOVED 7953 192.168.0.2:6349#033[0m
Oct 19 07:03:26 172.18.0.1 ui[19643]: 2018/10/19 07:03:26 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[41m 503 #033[0m|    452.848µs| nomatch|#033[44m GET     #033[0m /harbor/sign-in#033[0m
Oct 19 07:03:43 172.18.0.1 ui[19643]: 2018/10/19 07:03:43 #033[1;31m[E] [server.go:2619] MOVED 520 192.168.0.3:6399#033[0m
Oct 19 07:03:43 172.18.0.1 ui[19643]: 2018/10/19 07:03:43 #033[1;44m[D] [server.go:2619] |      127.0.0.1|#033[41m 503 #033[0m|    438.315µs| nomatch|#033[44m GET     #033[0m /api/ping#033[0m

Also, what is the meaning of the following lines?

Oct 19 07:03:26 172.18.0.1 ui[19643]: 2018/10/19 07:03:26 #033[1;31m[E] [server.go:2619] MOVED 7953 192.168.0.2:6349#033[0m
Oct 19 07:03:26 172.18.0.1 ui[19643]: 2018/10/19 07:03:26 #033[1;44m[D] [server.go:2619] |   192.168.0.1|#033[41m 503 #033[0m|    452.848µs| nomatch|#033[44m GET 

@reasonerjt

I am not sure if the problem is connected with this issue: https://serverfault.com/questions/812156/redis-cluster-error-moved

Is the Redis client making the request to the right node? In my case I have a Redis cluster, so a key can live on any node in the cluster. How does Harbor's Redis client make sure it retrieves the key from the correct node?
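For context on the routing question: Redis Cluster assigns every key to one of 16384 hash slots via CRC16, and each node owns a range of slots. A client that is not cluster-aware sends the command to an arbitrary node and, when that node does not own the slot, gets a MOVED reply back instead of the value — which matches the MOVED errors in the ui.log above. A minimal sketch of the slot computation (a standalone illustration of the cluster spec, not Harbor code):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem variant), the checksum Redis Cluster uses for slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to its Redis Cluster hash slot (0..16383).

    If the key contains a non-empty hash tag like {user1}, only the tag is
    hashed, so keys sharing a tag always land on the same node.
    """
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

A cluster-aware client caches the slot-to-node map and sends each command directly to the owning node, following MOVED redirections when slots migrate. A client that lacks this logic (such as Beego's session store here) treats the MOVED reply as a plain error.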

@cd1989
Contributor

cd1989 commented Oct 19, 2018

Regarding the service unavailable error, have you checked the harbor-ui container? Is it healthy?

@jakirpatel
Author

jakirpatel commented Oct 19, 2018

@cd1989

I think the status changes depending on the ping response: sometimes the container becomes healthy, and sometimes it later becomes unhealthy.

d34072929204        goharbor/harbor-ui:v1.6.0                     "/harbor/start.sh"       About an hour ago   Up About an hour (unhealthy)                                                                      harbor-ui

I observed that the jobservice container restarted, and I got the actual error from the jobservice log.

Oct 19 09:03:15 172.18.0.1 jobservice[19643]: 2018-10-19T09:03:15Z [FATAL] [service_logger.go:73]: Failed to load and run worker pool: connect to redis server timeout: ERR SELECT is not allowed in cluster mode
Oct 19 09:03:16 172.18.0.1 jobservice[19643]: 2018-10-19T09:03:16Z [INFO] Registering database: type-PostgreSQL host-dev.hnd.local port-5432 databse-registry sslmode-"disable"
Oct 19 09:03:16 172.18.0.1 jobservice[19643]: 2018-10-19T09:03:16Z [INFO] Register database completed


@jakirpatel
Author

@reasonerjt @cd1989
I think it's related to #4500

@steven-zou
Contributor

steven-zou commented Oct 23, 2018

@jakirpatel
Could you please tell us which Redis cluster you are using?

For the error that occurred in the UI, I think the reason may be the one described in issue #4500: the UI framework Beego does not support Redis Cluster, so you need to point the UI at a single node.

For the error in the job service, I'll take a look at it.

@steven-zou
Contributor

@jakirpatel

I think the ERR SELECT error is caused by the use of multiple Redis databases; Redis Cluster does not support them.

The official doc says:

#From https://redis.io/topics/cluster-spec

Redis Cluster does not support multiple databases like the stand alone version of Redis. There is just database 0 and the SELECT command is not allowed.

Could you please try configuring the db indexes in the harbor.cfg file to all be 0?

redis_db_index = 0,0,0
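For reference, the relevant part of harbor.cfg would then look roughly like this (host/port values are placeholders; key names are from the Harbor 1.6-era harbor.cfg template):

```
# External Redis settings in harbor.cfg
redis_host = redis.example.local
redis_port = 6379
redis_password =
# db indexes for the components sharing this Redis -- all 0 for cluster mode,
# since Redis Cluster only has database 0
redis_db_index = 0,0,0
```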

@steven-zou
Contributor

Related Beego issue: https://github.com/astaxie/beego/issues/1453

@jakirpatel
Author

@steven-zou
Using redis_db_index = 0,0,0 did not work for me; I got different errors.

@jakirpatel
Author

@steven-zou @reasonerjt

For the time being I have switched to standalone Redis and skipped cluster-mode Redis. Let's keep this issue open until the error gets fixed. It would be really helpful if you provided a proper HA solution that includes Redis.

Any thoughts ?

@jakirpatel jakirpatel changed the title Service Unavailable 503 Error on Harbor HA Redis Cluster (cluster-mode enabled) Support : Service Unavailable 503 Error on Harbor Oct 30, 2018
@jakirpatel
Author

@reasonerjt @clouderati any opinion on putting this on the roadmap? Specifically, how will Harbor proceed on HA mode?

@stale

stale bot commented Feb 4, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the Stale label Feb 4, 2019
@stale stale bot closed this as completed Feb 25, 2019
@Danpiel

Danpiel commented Jul 22, 2020

Fresh install of Harbor v2 in Kubernetes, same issue in an HA setup: Redis Cluster is not working because Harbor can't handle MOVED replies in session requests. It also doesn't support Redis with Sentinel, as stated in another issue.

Attaching log if it helps

Appending internal tls trust CA to ca-bundle ...

find: /etc/harbor/ssl: No such file or directory

Internal tls trust CA appending is Done.

2020-07-22T03:55:11Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.oci.image.index.v1+json registered

2020-07-22T03:55:11Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.distribution.manifest.list.v2+json registered

2020-07-22T03:55:11Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.distribution.manifest.v1+prettyjws registered

2020-07-22T03:55:11Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.oci.image.config.v1+json registered

2020-07-22T03:55:11Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.container.image.v1+json registered

2020-07-22T03:55:11Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.cncf.helm.config.v1+json registered

2020-07-22T03:55:11Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.cnab.manifest.v1 registered

2020-07-22T03:55:11Z [INFO] [/replication/adapter/native/adapter.go:36]: the factory for adapter docker-registry registered

2020-07-22T03:55:11Z [INFO] [/replication/adapter/harbor/adaper.go:31]: the factory for adapter harbor registered

2020-07-22T03:55:11Z [INFO] [/replication/adapter/dockerhub/adapter.go:25]: Factory for adapter docker-hub registered

2020-07-22T03:55:11Z [INFO] [/replication/adapter/huawei/huawei_adapter.go:27]: the factory of Huawei adapter was registered

2020-07-22T03:55:11Z [INFO] [/replication/adapter/googlegcr/adapter.go:29]: the factory for adapter google-gcr registered

2020-07-22T03:55:11Z [INFO] [/replication/adapter/awsecr/adapter.go:47]: the factory for adapter aws-ecr registered

2020-07-22T03:55:11Z [INFO] [/replication/adapter/azurecr/adapter.go:15]: Factory for adapter azure-acr registered

2020-07-22T03:55:12Z [INFO] [/replication/adapter/aliacr/adapter.go:31]: the factory for adapter ali-acr registered

2020-07-22T03:55:12Z [INFO] [/replication/adapter/jfrog/adapter.go:30]: the factory of jfrog artifactory adapter was registered

2020-07-22T03:55:12Z [INFO] [/replication/adapter/quayio/adapter.go:38]: the factory of Quay.io adapter was registered

2020-07-22T03:55:12Z [INFO] [/replication/adapter/helmhub/adapter.go:30]: the factory for adapter helm-hub registered

2020-07-22T03:55:12Z [INFO] [/replication/adapter/gitlab/adapter.go:17]: the factory for adapter gitlab registered

2020-07-22T03:55:12Z [INFO] [/core/controllers/base.go:299]: Config path: /etc/core/app.conf

2020-07-22T03:55:12Z [INFO] [/core/main.go:111]: initializing configurations...

2020-07-22T03:55:12Z [INFO] [/core/config/config.go:83]: key path: /etc/core/key

2020-07-22T03:55:12Z [INFO] [/core/config/config.go:60]: init secret store

2020-07-22T03:55:12Z [INFO] [/core/config/config.go:63]: init project manager

2020-07-22T03:55:12Z [INFO] [/core/config/config.go:95]: initializing the project manager based on local database...

2020-07-22T03:55:12Z [INFO] [/core/main.go:113]: configurations initialization completed

2020-07-22T03:55:12Z [INFO] [/common/dao/base.go:84]: Registering database: type-PostgreSQL host-postgresql port-5432 databse-registry sslmode-"disable"

2020-07-22T03:55:12Z [INFO] [/common/dao/base.go:89]: Register database completed

2020-07-22T03:55:12Z [INFO] [/common/dao/pgsql.go:118]: Upgrading schema for pgsql ...

2020-07-22T03:55:12Z [INFO] [/common/dao/pgsql.go:121]: No change in schema, skip.

2020-07-22T03:55:12Z [INFO] [/core/main.go:80]: User id: 1 already has its encrypted password.

2020-07-22T03:55:12Z [INFO] [/chartserver/cache.go:184]: Enable redis cache for chart caching

2020-07-22T03:55:12Z [INFO] [/chartserver/reverse_proxy.go:60]: New chart server traffic proxy with middlewares

2020-07-22T03:55:12Z [INFO] [/core/api/chart_repository.go:613]: API controller for chart repository server is successfully initialized

2020-07-22T03:55:12Z [INFO] [/core/main.go:189]: Registering Trivy scanner

2020-07-22T03:55:12Z [INFO] [/common/dao/base.go:64]: initialized clair database

2020-07-22T03:55:12Z [INFO] [/core/main.go:211]: Registering Clair scanner

2020-07-22T03:55:12Z [INFO] [/pkg/scan/init.go:62]: Scanner registration already exists: http://harbor-harbor-trivy:8080

2020-07-22T03:55:12Z [INFO] [/pkg/scan/init.go:62]: Scanner registration already exists: http://harbor-harbor-clair:8080

2020-07-22T03:55:12Z [INFO] [/core/main.go:229]: Setting Trivy as default scanner

2020-07-22T03:55:12Z [INFO] [/pkg/scan/init.go:77]: Skipped setting Trivy as the default scanner. The default scanner is already set to http://harbor-harbor-trivy:8080

2020-07-22T03:55:12Z [INFO] [/core/main.go:156]: initializing notification...

2020-07-22T03:55:12Z [INFO] [/pkg/notification/notification.go:47]: notification initialization completed

2020-07-22T03:55:12Z [INFO] [/core/main.go:175]: Version: v2.0.1, Git commit: d714b3ea

2020/07/22 03:55:12.535 [I] [asm_amd64.s:1357]  http server Running on http://:8080

2020-07-22T03:55:34Z [ERROR] [/server/middleware/security/session.go:35][requestID="da423985-d821-42a4-bbac-345c53b601f0"]: failed to get the session store for request: MOVED 959 10.244.12.86:6379

2020/07/22 03:55:34.450 [E] [transaction.go:62]  MOVED 1461 10.244.12.86:6379

2020/07/22 03:55:34.451 [D] [transaction.go:62]  |    10.244.25.1| 503 |    808.957µs| nomatch| GET      /api/v2.0/ping

2020-07-22T03:55:44Z [ERROR] [/server/middleware/security/session.go:35][requestID="1f1d59a5-145b-48b4-8a5c-6473ccb53cda"]: failed to get the session store for request: MOVED 10800 10.244.24.247:6379

2020/07/22 03:55:44.448 [E] [transaction.go:62]  MOVED 10664 10.244.24.247:6379

2020/07/22 03:55:44.450 [D] [transaction.go:62]  |    10.244.25.1| 503 |   1.548683ms| nomatch| GET      /api/v2.0/ping

2020-07-22T03:55:54Z [ERROR] [/server/middleware/security/session.go:35][requestID="228c1034-d411-4740-a4a0-f3759b5ac0c1"]: failed to get the session store for request: MOVED 4495 10.244.12.86:6379

2020/07/22 03:55:54.449 [E] [transaction.go:62]  MOVED 5941 10.244.24.247:6379

2020/07/22 03:55:54.449 [D] [transaction.go:62]  |    10.244.25.1| 503 |    529.182µs| nomatch| GET      /api/v2.0/ping

2020-07-22T03:56:04Z [ERROR] [/server/middleware/security/session.go:35][requestID="68b1718c-57da-49df-8bcc-9dc8b65d76a5"]: failed to get the session store for request: MOVED 7373 10.244.24.247:6379

2020/07/22 03:56:04.448 [E] [transaction.go:62]  MOVED 6896 10.244.24.247:6379

2020/07/22 03:56:04.448 [D] [transaction.go:62]  |    10.244.25.1| 503 |    524.279µs| nomatch| GET      /api/v2.0/ping

2020-07-22T03:56:09Z [INFO] [/replication/registry/healthcheck.go:60]: Start regular health check for registries with interval 5m0s

2020-07-22T03:56:14Z [ERROR] [/server/middleware/security/session.go:35][requestID="eb5e899c-7d37-4c82-b949-c179f24350c4"]: failed to get the session store for request: MOVED 15552 10.244.26.165:6379

2020/07/22 03:56:14.448 [E] [transaction.go:62]  MOVED 9010 10.244.24.247:6379

2020/07/22 03:56:14.448 [D] [transaction.go:62]  |    10.244.25.1| 503 |    514.898µs| nomatch| GET      /api/v2.0/ping

2020-07-22T03:56:24Z [ERROR] [/server/middleware/security/session.go:35][requestID="975a4118-ffe6-40ef-a1cf-567a4810ddb2"]: failed to get the session store for request: MOVED 11241 10.244.26.165:6379
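The MOVED errors in this log follow a fixed format: `MOVED <slot> <host>:<port>`, i.e. the contacted node is telling the client which node actually owns the key's hash slot. A cluster-aware client parses the reply and retries the command against that node; Harbor's session middleware instead surfaces it as a 503. A minimal parser sketch (illustration only, not Harbor code):

```python
import re

def parse_moved(reply: str):
    """Parse a Redis Cluster MOVED redirection, e.g.
    'MOVED 959 10.244.12.86:6379' -> (959, '10.244.12.86', 6379).
    Returns None if the reply is not a MOVED error."""
    m = re.fullmatch(r"MOVED (\d+) (.+):(\d+)", reply.strip())
    if m is None:
        return None
    return int(m.group(1)), m.group(2), int(m.group(3))
```

On a MOVED, a proper cluster client would reconnect to the returned host:port, refresh its slot map, and replay the request transparently.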

@kfirfer

kfirfer commented Sep 21, 2020

Same error for me, Harbor version 2.0.2. Can someone reopen the issue?

@misteruly

@Danpiel I have the same problem. How did you solve it?

@smallersoup

+1

@derekcha

+1

@Danpiel

Danpiel commented May 25, 2022

@Danpiel I have the same problem. How did you solve it?

@misteruly I deployed Redis as a single node for now, so there is downtime while Redis is unavailable.
