harbor-core connect postgresql refused #10206

Closed
damozhiying opened this issue Dec 9, 2019 · 20 comments
Labels: doc-impact (Engineering issues that will require a change in user docs), env_issue, Stale

Comments

@damozhiying

If you are reporting a problem, please make sure the following information is provided:

Expected behavior and actual behavior:
A clear and concise description of what you expected to happen and what's the actual behavior. If applicable, add screenshots to help explain your problem.

Steps to reproduce the problem:

docker-compose ps
      Name                     Command                       State                     Ports          
------------------------------------------------------------------------------------------------------
harbor-core         /harbor/harbor_core              Up (health: starting)                            
harbor-db           /docker-entrypoint.sh            Up (healthy)            5432/tcp                 
harbor-jobservice   /harbor/harbor_jobservice  ...   Up (health: starting)                            
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)            127.0.0.1:1514->10514/tcp
harbor-portal       nginx -g daemon off;             Up (healthy)            8080/tcp                 
nginx               nginx -g daemon off;             Restarting                                       
redis               redis-server /etc/redis.conf     Up (healthy)            6379/tcp                 
registry            /entrypoint.sh /etc/regist ...   Up (healthy)            5000/tcp                 
registryctl         /harbor/start.sh                 Up (healthy)                                     
docker-compose exec postgresql sh
sh-4.4$ \l
sh: l: command not found
sh-4.4$ psql
psql (9.6.14)
Type "help" for help.

postgres=# \l
                                   List of databases
     Name     |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
--------------+----------+----------+-------------+-------------+-----------------------
 notaryserver | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +
              |          |          |             |             | postgres=CTc/postgres+
              |          |          |             |             | server=CTc/postgres
 notarysigner | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +
              |          |          |             |             | postgres=CTc/postgres+
              |          |          |             |             | signer=CTc/postgres
 postgres     | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 registry     | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
              |          |          |             |             | postgres=CTc/postgres
 template1    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
              |          |          |             |             | postgres=CTc/postgres
(6 rows)

postgres=# 

Versions:

  • harbor version: v1.9.2
  • docker engine version: 18.06.3-ce
  • docker-compose version: 1.25.0

Additional context:

  • Harbor config files: You can get them by packaging harbor.yml and the files in the same directory, including subdirectories.
cat harbor.yml 
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: harbor.fastify.top

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 7080

# https related config
https:
#   # https port for harbor, default is 443
  port: 7443
#   # The path of cert and key files for nginx
  certificate: /data/harbor/cert/server.crt
  private_key: /data/harbor/cert/server.key

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: Harbor12345

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 50
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 100 for postgres.
  max_open_conns: 100

# The default data volume
data_volume: /data/harbor

# Harbor Storage settings by default is using /data dir on local filesystem
# Uncomment storage_service setting If you want to using external storage
# storage_service:
#   # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
#   # of registry's and chart repository's containers.  This is usually needed when the user hosts a internal storage with self signed certificate.
#   ca_bundle:

#   # storage backend, default is filesystem, options include filesystem, azure, gcs, s3, swift and oss
#   # for more info about this configuration please refer https://docs.docker.com/registry/configuration/
#   filesystem:
#     maxthreads: 100
#   # set disable to true when you want to disable registry redirect
#   redirect:
#     disabled: false

# Clair configuration
clair:
  # The interval of clair updaters, the unit is hour, set to 0 to disable the updaters.
  updaters_interval: 12

jobservice:
  # Maximum number of job workers in job service
  max_job_workers: 10

notification:
  # Maximum retry count for webhook job
  webhook_job_max_retry: 10

chart:
  # Change the value of absolute_url to enabled can enable absolute url in chart
  absolute_url: disabled

# Log configurations
log:
  # options are debug, info, warning, error, fatal
  level: info
  # configs for logs in local storage
  local:
    # Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
    rotate_count: 50
    # Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
    # If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
    # are all valid.
    rotate_size: 200M
    # The directory on your host that store log
    location: /var/log/harbor

  # Uncomment following lines to enable external syslog endpoint.
  # external_endpoint:
  #   # protocol used to transmit log to external endpoint, options is tcp or udp
  #   protocol: tcp
  #   # The host of external endpoint
  #   host: localhost
  #   # Port of external endpoint
  #   port: 5140

#This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 1.9.2

# Uncomment external_database if using external database.
#   clair:
#     host: clair_db_host
#     port: clair_db_port
#     db_name: clair_db_name
#     username: clair_db_username
#     password: clair_db_password
#     ssl_mode: disable
#   notary_signer:
#     host: notary_signer_db_host
#     port: notary_signer_db_port
#     db_name: notary_signer_db_name
#     username: notary_signer_db_username
#     password: notary_signer_db_password
#     ssl_mode: disable
#   notary_server:
#     host: notary_server_db_host
#     port: notary_server_db_port
#     db_name: notary_server_db_name
#     username: notary_server_db_username
#     password: notary_server_db_password
#     ssl_mode: disable

# Uncomment external_redis if using external Redis server
# Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
# uaa:
#   ca_file: /path/to/ca

# Global proxy
# Config http proxy for components, e.g. http://my.proxy.com:3128
# Components doesn't need to connect to each others via http proxy.
# Remove component from `components` array if want disable proxy
# for it. If you want use proxy for replication, MUST enable proxy
# for core and jobservice, and set `http_proxy` and `https_proxy`.
# Add domain to the `no_proxy` field, when you want disable proxy
# for some special registry.
proxy:
  http_proxy:
  https_proxy:
  no_proxy: 127.0.0.1,localhost,.local,.internal,log,db,redis,nginx,core,portal,postgresql,jobservice,registry,registryctl,clair
  components:
    - core
    - jobservice
    - clair
  • Log files: You can get them by packaging the /var/log/harbor/ directory.
 tail -f /var/log/harbor/postgresql.log 
Dec  9 16:04:44 172.22.0.1 postgresql[27065]: LOG:  database system was shut down at 2019-12-09 08:04:43 UTC
Dec  9 16:04:44 172.22.0.1 postgresql[27065]: LOG:  MultiXact member wraparound protections are now enabled
Dec  9 16:04:44 172.22.0.1 postgresql[27065]: LOG:  database system is ready to accept connections
Dec  9 16:04:44 172.22.0.1 postgresql[27065]: LOG:  autovacuum launcher started
Dec  9 16:26:39 172.24.0.1 postgresql[27065]: LOG:  database system was not properly shut down; automatic recovery in progress
Dec  9 16:26:39 172.24.0.1 postgresql[27065]: LOG:  invalid record length at 0/1509B50: wanted 24, got 0
Dec  9 16:26:39 172.24.0.1 postgresql[27065]: LOG:  redo is not required
Dec  9 16:26:39 172.24.0.1 postgresql[27065]: LOG:  MultiXact member wraparound protections are now enabled
Dec  9 16:26:39 172.24.0.1 postgresql[27065]: LOG:  database system is ready to accept connections
Dec  9 16:26:39 172.24.0.1 postgresql[27065]: LOG:  autovacuum launcher started
^C
[root@cdn-k8s-m164 harbor]# tail -f /var/log/harbor/core.log 
Dec  9 16:27:21 172.24.0.1 core[27065]: 2019-12-09T08:27:21Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:33849->127.0.0.11:53: read: connection refused
Dec  9 16:27:23 172.24.0.1 core[27065]: 2019-12-09T08:27:23Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:34031->127.0.0.11:53: read: connection refused
Dec  9 16:27:25 172.24.0.1 core[27065]: 2019-12-09T08:27:25Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:48467->127.0.0.11:53: read: connection refused
Dec  9 16:27:27 172.24.0.1 core[27065]: 2019-12-09T08:27:27Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:53315->127.0.0.11:53: read: connection refused
Dec  9 16:27:29 172.24.0.1 core[27065]: 2019-12-09T08:27:29Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:37904->127.0.0.11:53: read: connection refused
Dec  9 16:27:31 172.24.0.1 core[27065]: 2019-12-09T08:27:31Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:33494->127.0.0.11:53: read: connection refused
Dec  9 16:27:33 172.24.0.1 core[27065]: 2019-12-09T08:27:33Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:44737->127.0.0.11:53: read: connection refused
Dec  9 16:27:35 172.24.0.1 core[27065]: 2019-12-09T08:27:35Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:54347->127.0.0.11:53: read: connection refused
Dec  9 16:27:37 172.24.0.1 core[27065]: 2019-12-09T08:27:37Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:54135->127.0.0.11:53: read: connection refused
Dec  9 16:27:39 172.24.0.1 core[27065]: 2019-12-09T08:27:39Z [FATAL] [/core/main.go:185]: failed to initialize database: failed to connect to tcp:postgresql:5432 after 60 seconds
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [ERROR] [/common/config/manager.go:118]: loadSystemConfigFromEnv failed, config item, key: clair_db_port,  err: strconv.Atoi: parsing "": invalid syntax
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/replication/adapter/native/adapter.go:44]: the factory for adapter docker-registry registered
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/replication/adapter/harbor/adapter.go:42]: the factory for adapter harbor registered
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/replication/adapter/dockerhub/adapter.go:25]: Factory for adapter docker-hub registered
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/replication/adapter/huawei/huawei_adapter.go:27]: the factory of Huawei adapter was registered
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/replication/adapter/googlegcr/adapter.go:31]: the factory for adapter google-gcr registered
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/replication/adapter/awsecr/adapter.go:49]: the factory for adapter aws-ecr registered
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/replication/adapter/azurecr/adapter.go:15]: Factory for adapter azure-acr registered
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/replication/adapter/aliacr/adapter.go:28]: the factory for adapter ali-acr registered
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/replication/adapter/helmhub/adapter.go:31]: the factory for adapter helm-hub registered
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/core/controllers/base.go:290]: Config path: /etc/core/app.conf
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/core/main.go:174]: initializing configurations...
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/core/config/config.go:101]: key path: /etc/core/key
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [ERROR] [/common/config/manager.go:118]: loadSystemConfigFromEnv failed, config item, key: clair_db_port,  err: strconv.Atoi: parsing "": invalid syntax
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/core/config/config.go:74]: init secret store
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/core/config/config.go:77]: init project manager based on deploy mode
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/core/config/config.go:146]: initializing the project manager based on local database...
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/core/main.go:178]: configurations initialization completed
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [INFO] [/common/dao/base.go:84]: Registering database: type-PostgreSQL host-postgresql port-5432 databse-registry sslmode-"disable"
Dec  9 16:27:41 172.24.0.1 core[27065]: 2019-12-09T08:27:41Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:43241->127.0.0.11:53: read: connection refused
Dec  9 16:27:43 172.24.0.1 core[27065]: 2019-12-09T08:27:43Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:49708->127.0.0.11:53: read: connection refused
Dec  9 16:27:45 172.24.0.1 core[27065]: 2019-12-09T08:27:45Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:49450->127.0.0.11:53: read: connection refused
Dec  9 16:27:47 172.24.0.1 core[27065]: 2019-12-09T08:27:47Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:57640->127.0.0.11:53: read: connection refused
Dec  9 16:27:49 172.24.0.1 core[27065]: 2019-12-09T08:27:49Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:56391->127.0.0.11:53: read: connection refused
Dec  9 16:27:51 172.24.0.1 core[27065]: 2019-12-09T08:27:51Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:36840->127.0.0.11:53: read: connection refused
Dec  9 16:27:53 172.24.0.1 core[27065]: 2019-12-09T08:27:53Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:49972->127.0.0.11:53: read: connection refused
Dec  9 16:27:55 172.24.0.1 core[27065]: 2019-12-09T08:27:55Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:33770->127.0.0.11:53: read: connection refused
Dec  9 16:27:57 172.24.0.1 core[27065]: 2019-12-09T08:27:57Z [ERROR] [/common/utils/utils.go:101]: failed to connect to tcp://postgresql:5432, retry after 2 seconds :dial tcp: lookup postgresql on 127.0.0.11:53: read udp 127.0.0.1:33479->127.0.0.11:53: read: connection refused
^C
@heww commented Dec 9, 2019

You may need to check the DNS server at 127.0.0.11:53; it failed to look up the hostname postgresql.

@damozhiying

@heww I guess the DNS server at 127.0.0.11:53 is Docker's embedded virtual DNS server. How can I check it?
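
One way to check is to query the embedded resolver directly from inside an affected container; a minimal sketch, assuming nslookup ships in the image:

docker exec -it harbor-core sh
cat /etc/resolv.conf            # inside the container; should show: nameserver 127.0.0.11
nslookup postgresql 127.0.0.11  # ask the embedded DNS for the service name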

@stonezdj

Are you using http_proxy and https_proxy? If yes, please check that postgresql is in the no_proxy list; a complete no_proxy list should be:

127.0.0.1,localhost,.local,.internal,log,db,redis,nginx,core,portal,postgresql,jobservice,registry,registryctl,clair,chartmuseum,notary-server
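
A quick way to confirm which proxy variables the containers actually received (a sketch using the default core container name):

docker exec harbor-core env | grep -i proxy   # inspect http_proxy, https_proxy and no_proxy as seen by the process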

@damozhiying

@stonezdj
Below is my harbor.yml; postgresql is in the no_proxy list:

cat /opt/harbor/harbor.yml 
# Configuration file of Harbor
# ... (contents identical to the harbor.yml shown above) ...

proxy:
  http_proxy:
  https_proxy:
  no_proxy: 127.0.0.1,localhost,.local,.internal,log,db,redis,nginx,core,portal,postgresql,jobservice,registry,registryctl,clair
  components:
    - core
    - jobservice
    - clair

@heww commented Dec 10, 2019

Maybe there is an issue with Docker's embedded DNS; moby/moby#40294 was opened for this problem, and the Docker version there is 18.06 too.

@qx517971976

I have also encountered this problem. My MySQL data directory was set to /data/database. After modifying the MySQL data directory and deleting /data/database, I finally reinstalled Harbor.

@jacklmjie

The postgresql.conf file has:
listen_addresses = '*'
port = 5432

and the pg_hba.conf file has:
host replication replicator 172.17.0.0/16 md5

and:
firewall-cmd --permanent --add-port=5432/tcp
firewall-cmd --add-port=5432/tcp
firewall-cmd --reload
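
To verify those changes took effect, something like the following should work (172.17.0.1 is a placeholder for the database host, and psql is assumed to be installed on the client):

firewall-cmd --list-ports                              # 5432/tcp should be listed
psql -h 172.17.0.1 -p 5432 -U postgres -c 'SELECT 1'   # should return one row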

@JeanRessouche

Same here on a fresh CentOS 8 install set up from scratch.
Does nobody have a clue about it? That's a pretty big blocker that makes Harbor unusable.

@JeanRessouche

After a few hours trying everything under CentOS 8 (including setting up the OS again from scratch), I reinstalled using Ubuntu Server 19: no problem, working as expected.
So maybe this issue is constrained to RHEL-like distributions.

@josuemotte commented Apr 24, 2020

Got the same issue recently on CentOS 8. I fixed it like this; you can find more details at https://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf and https://stackoverflow.com/questions/40214617/docker-no-route-to-host :

# sysctl net.bridge.bridge-nf-call-arptables=0
# sysctl net.bridge.bridge-nf-call-ip6tables=0
# firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 4 -i docker0 -j ACCEPT
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 source address=172.17.0.0/16 accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 source address=172.18.0.0/16 accept'
# firewall-cmd --reload
# systemctl restart docker
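
Note that sysctl values set this way do not survive a reboot; to persist them, the two keys can go into a sysctl.d snippet (a sketch; the file name is arbitrary):

cat <<'EOF' > /etc/sysctl.d/99-bridge-nf.conf
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
EOF
sysctl --system   # reload all sysctl configuration files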

@xaleeks xaleeks added doc-impact Engineering issues that will require a change in user docs and removed need-document labels Jun 3, 2020
@AntoCanza

@josuemotte on CentOS 8 I had the same issue; after running your commands Harbor still wasn't working.
I ended up with this:

sudo firewall-cmd --zone=public --add-masquerade --permanent
sudo firewall-cmd --permanent --zone=public --change-interface=docker0
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 4 -i docker0 -j ACCEPT
sudo firewall-cmd --permanent --zone=public --add-port=5432/tcp
sudo firewall-cmd --reload
sudo systemctl restart docker

As far as I understand, this has nothing to do with Harbor; it is a Docker/firewalld configuration issue.
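
To confirm the interface and port assignments after the reload, firewalld can list them directly:

firewall-cmd --get-active-zones        # which zones are active and which interfaces they cover
firewall-cmd --zone=public --list-all  # ports, interfaces and rich rules attached to public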

@iuv commented Jun 11, 2020

(quoting @AntoCanza's firewalld commands above)

On CentOS 8.1 this is working.

@AntoCanza commented Jun 11, 2020

To me it seems that the issue is only related to the firewalld configuration. I'm installing Harbor 2.0 on this host:

[antonio@192 ~]$ cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core)

[antonio@192 ~]$ sudo firewall-cmd --version
0.7.0

[antonio@192 ~]$ sudo docker --version
Docker version 19.03.11, build 42e35e61f3

[antonio@192 ~]$ sudo docker-compose --version
docker-compose version 1.26.0, build d4451659

Harbor is installed with ./install.sh --with-clair --with-chartmuseum. After installation we have four network interfaces:
docker0 is the default and the other three are defined inside the .yml files.

With ifconfig I pick up the network interface IDs and then:

sudo firewall-cmd --permanent --zone=trusted --change-interface=docker0
sudo firewall-cmd --permanent --zone=trusted --change-interface=br-d92d56047624
sudo firewall-cmd --permanent --zone=trusted --change-interface=br-f00c9ed64e80
sudo firewall-cmd --permanent --zone=trusted --change-interface=br-f208abd8081b
sudo firewall-cmd --complete-reload
sudo systemctl restart docker

Make sure to run docker-compose up and check that all services are started.

The problem is that if we do a docker-compose down, the network names will be different, and we need to apply the firewalld rules once again, but only for the three networks defined in the yml files.

After that, the nslookup and ping problems inside the containers are gone.
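
Since the br-* names change every time the networks are recreated, one way to avoid redoing this by hand is to trust every Docker bridge in a loop; a sketch, assuming all br-* interfaces on the host belong to Harbor's compose networks:

# add docker0 plus every compose-created bridge to the trusted zone
for iface in docker0 $(ls /sys/class/net | grep '^br-'); do
  sudo firewall-cmd --permanent --zone=trusted --change-interface="$iface"
done
sudo firewall-cmd --complete-reload
sudo systemctl restart docker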

@josuemotte

It seems to be related to the usage of nftables in CentOS 8.

@EKwongChum

(quoting @AntoCanza's comment above)

It seems to be more a matter of the CentOS 8 firewall settings.

I tried these steps:

  1. Stop Harbor with docker-compose down -v
  2. Get the network interfaces with ifconfig.
  3. Run ./install.sh
  4. Get the network interfaces with ifconfig again and compare with step 2; you should find a new interface like br-5b1e59c88510 (starting with br-).
  5. Grant the permissions as @AntoCanza says:
    firewall-cmd --permanent --zone=trusted --change-interface=docker0
    firewall-cmd --permanent --zone=trusted --change-interface=${your_new_interface}
    firewall-cmd --complete-reload
    systemctl restart docker
  6. Run docker-compose up (a verification sketch follows below).
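
After step 6, a quick check that the trusted zone covers the new bridge and that name resolution inside the containers works again (assuming ping is present in the core image):

firewall-cmd --zone=trusted --list-interfaces   # docker0 and the br-* bridge should be listed
docker exec harbor-core ping -c 1 postgresql    # should resolve and get a reply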

stale bot commented Sep 19, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the Stale label Sep 19, 2020
@stale stale bot closed this as completed Oct 11, 2020
@vikas-shaw

I have a GKE cluster with Ubuntu as the base image for the nodes, and I installed Harbor with the Helm chart. I am facing the same issue.

@kunogi commented Dec 2, 2020

(quoting @AntoCanza's firewalld commands above)

Saved my day, bro.

@cuizhaoyue

I have a k8s cluster with CentOS 7.9 and the ovn4nfv CNI plugin, and I installed Harbor with the Helm chart. I am facing the same issue.

@paulliss

In my case, there weren't any privileges on the registry database, like in @damozhiying's case.
