
'ERROR: for snuba-api Container "ee666f7f2cdd" is unhealthy.' during install.sh #1178

Closed
ynotna87 opened this issue Nov 30, 2021 · 57 comments · Fixed by #1241 or #1384

Comments

@ynotna87

Version

21.11.0

Steps to Reproduce

  1. Run ./install.sh

Expected Result

Installation succeeds

Actual Result

▶ Bootstrapping and migrating Snuba ...
Creating sentry_onpremise_clickhouse_1 ...
Creating sentry_onpremise_zookeeper_1 ...
Creating sentry_onpremise_redis_1 ...
Creating sentry_onpremise_redis_1 ... done
Creating sentry_onpremise_zookeeper_1 ... done
Creating sentry_onpremise_clickhouse_1 ... done
Creating sentry_onpremise_kafka_1 ...
Creating sentry_onpremise_kafka_1 ... done

ERROR: for snuba-api Container "ee666f7f2cdd" is unhealthy.
Encountered errors while bringing up the project.
An error occurred, caught SIGERR on line 3
Cleaning up...

Can anyone help me with this?

@chadwhitacre
Member

Is this a clean install @ynotna87? If not, can you try a clean install?

@ynotna87
Author

ynotna87 commented Dec 1, 2021

Is this a clean install @ynotna87? If not, can you try a clean install?

Hi Chad, yes it is a clean install from scratch on a new VM.

OS: Centos 7
Docker Version: 20.10.11
docker-compose Version: 1.29.2
Python Version: 3.6.8
vCPU: 8
RAM: 16G

@aminvakil
Collaborator

Try running docker-compose down -v --remove-orphans && docker volume prune -f && docker-compose up -d again.

Beware, this will effectively remove all your data.

@ynotna87
Author

ynotna87 commented Dec 2, 2021

Try running docker-compose down -v --remove-orphans && docker volume prune -f && docker-compose up -d again.
Beware, this will effectively remove all your data.

Hi Amin,

I tried to run the command, but unfortunately I still get the same error. What could possibly be wrong? Currently I'm trying to request an additional VM to install on, but it has yet to be provisioned.

Update: After trying on another VM, it is still giving the same error:

▶ Bootstrapping and migrating Snuba ...
Creating sentry_onpremise_clickhouse_1 ...
Creating sentry_onpremise_zookeeper_1 ...
Creating sentry_onpremise_redis_1 ...
Creating sentry_onpremise_redis_1 ... done
Creating sentry_onpremise_clickhouse_1 ... done
Creating sentry_onpremise_zookeeper_1 ... done
Creating sentry_onpremise_kafka_1 ...
Creating sentry_onpremise_kafka_1 ... done

ERROR: for snuba-api Container "05c4e3a60327" is unhealthy.
Encountered errors while bringing up the project.
An error occurred, caught SIGERR on line 3
Cleaning up...

@chadwhitacre
Member

Can you run with debugging and paste your full install log in a gist?

DEBUG=1 ./install.sh --no-user-prompt

@AxTheB

AxTheB commented Dec 3, 2021

Had the same issue, rebooting the machine helped.

@chadwhitacre
Member

What is the snuba-api healthcheck? How is it failing? Why?

snuba-api:
<<: *depends_on-default

Curious to me that snuba-api doesn't seem on the surface to have a healthcheck. 🤔
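One way to check from the host which container is actually failing its healthcheck (a sketch, assuming standard docker tooling; note that the container ID in the error can belong to a dependency of snuba-api rather than to snuba-api itself, as a later comment confirms):

docker ps -a --format 'table {{.Names}}\t{{.Status}}'    # look for "(unhealthy)" or a stuck "(health: starting)"
docker inspect --format '{{json .State.Health}}' <container-id>    # dump that container's healthcheck attempts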

@aminvakil
Collaborator

aminvakil commented Dec 4, 2021

Curious to me that snuba-api doesn't seem on the surface to have a healthcheck. 🤔

It does not.

docker ps | grep snuba-api
bab3ff609285   getsentry/snuba:21.9.0                 "./docker_entrypoint…"   2 hours ago   Up 2 hours             1218/tcp                                    sentry_onpremise_snuba-api_1

(It does not have a (healthy) after Up 2 hours.)
I'm more confused now: why does it have one on the first run? 🤔
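One way to confirm whether a healthcheck comes from the image itself rather than from the compose file (a sketch, reusing the tag from the ps output above; this prints null when the image defines none):

docker image inspect --format '{{json .Config.Healthcheck}}' getsentry/snuba:21.9.0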

@markdensen403

I'm also having the same error where snuba-api is failing. In fact, all of my containers are down, and production is offline because I was trying to update our Sentry server. Does anyone have any idea what is happening here?

@AxTheB

AxTheB commented Dec 5, 2021

At that point it will fail when, for example, any of the started containers does not come up. Try starting the zookeeper, clickhouse, redis and kafka containers manually and check their state.
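With plain docker-compose, that could look something like this (a sketch, assuming the stock service names from this repo):

docker-compose up -d zookeeper clickhouse redis kafka
docker-compose ps                        # wait for them to report Up / (healthy)
docker-compose logs --tail=100 kafka     # then read the logs of whichever one fails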

@ynotna87
Author

I am still having the same issue; the snuba-api container is not healthy. Has anyone here with the same issue had any luck resolving it?

@ynotna87
Author

Can you run with debugging and paste your full install log in a gist?

DEBUG=1 ./install.sh --no-user-prompt

https://gist.github.com/ynotna87/96fc2d37aed51ead2f6a915c8cf7f5cb @chadwhitacre

@AxTheB

AxTheB commented Dec 10, 2021

@ynotna87 Yes, I had. Can you get the other containers mentioned in this phase into a healthy/running state? In my case it was the kafka one, stuck in a starting-up state because of network issues.

@ynotna87
Author

@ynotna87 Yes, I had. Can you get the other containers mentioned in this phase into a healthy/running state? In my case it was the kafka one, stuck in a starting-up state because of network issues.

Any guidance on how to do that? Honestly, I am not too proficient with Docker.

@chadwhitacre
Member

@ynotna87 Not to be a jerk but have you considered SaaS if maintaining self-hosted is beyond your comfort level?

@ynotna87
Author

@ynotna87 Not to be a jerk but have you considered SaaS if maintaining self-hosted is beyond your comfort level?

Yes, but in my country there's a regulation that the data must stay on premise, specifically because of our company's nature of business. Unfortunately, we have to comply, as violations carry serious sanctions. As for me, I would be more than happy to use the SaaS, as it would bring more peace of mind for me and the team.

@chadwhitacre
Member

Ah, gotcha. :) Well in that case it sounds like you have a learning journey ahead of you! Good luck! ☺️

@rwky

rwky commented Dec 17, 2021

I had the same issue. The error is actually with clickhouse, just to add to the confusion, and it's this error: #1205

This comment solved it for me #1205 (comment)

@aamarques

I had the same issue with snuba. I did a docker-compose up -d and got many unhealthy errors.
So I did a docker-compose down and docker-compose up -d again, and snuba is healthy, but... now I have a lot of problems:

ERROR: for post-process-forwarder  Container "a5a3823b64ee" is unhealthy.

ERROR: for worker  Container "a5a3823b64ee" is unhealthy.

ERROR: for cron  Container "a5a3823b64ee" is unhealthy.

ERROR: for subscription-consumer-events  Container "a5a3823b64ee" is unhealthy.

ERROR: for sentry-cleanup  Container "a5a3823b64ee" is unhealthy.

ERROR: for web  Container "a5a3823b64ee" is unhealthy.

ERROR: for ingest-consumer  Container "a5a3823b64ee" is unhealthy.

ERROR: for subscription-consumer-transactions  Container "a5a3823b64ee" is unhealthy.
postgres_1                                  |
postgres_1                                  | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1                                  |
postgres_1                                  | LOG:  database system was interrupted while in recovery at 2021-12-20 11:25:50 UTC
postgres_1                                  | HINT:  This probably means that some data is corrupted and you will have to use the last backup for recovery.
postgres_1                                  | LOG:  database system was not properly shut down; automatic recovery in progress
postgres_1                                  | LOG:  redo starts at A08/5AD57638
postgres_1                                  | WARNING:  specified item offset is too large
postgres_1                                  | CONTEXT:  xlog redo at A08/5B7A5700 for Btree/INSERT_LEAF: off 169
postgres_1                                  | PANIC:  btree_insert_redo: failed to add item
postgres_1                                  | CONTEXT:  xlog redo at A08/5B7A5700 for Btree/INSERT_LEAF: off 169
postgres_1                                  | LOG:  startup process (PID 33) was terminated by signal 6: Aborted
postgres_1                                  | LOG:  aborting startup due to startup pro

@aamarques

following #1211

@MarshallEriksen-shaomingyang

Did you forget to run ./install.sh before you ran docker-compose up -d? Hm... I ran into a similar error because I forgot to run it.

@aamarques

aamarques commented Dec 21, 2021

Did you forget to run ./install.sh before you ran docker-compose up -d? Hm... I ran into a similar error because I forgot to run it.

No... I did, but I ran docker-compose up -d after getting this:

ERROR: for snuba-api Container "xxx" is unhealthy.
Encountered errors while bringing up the project.
An error occurred, caught SIGERR on line 3
Cleaning up...

@AxTheB

AxTheB commented Dec 21, 2021

Any guidance on how to do that? Honestly, I am not too proficient with Docker.

I use portainer.io for this; it's clicky and easy.

@nttdocomo

I have the same error. I followed your steps, and this time the error disappears, but postgres does not start.

@aminvakil
Collaborator

For future reference: https://develop.sentry.dev/self-hosted/troubleshooting/#docker-containers-healthcheck
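If a dependency is merely slow to become healthy on underpowered hardware, relaxing the healthcheck timing in an override file is gentler than disabling it. A minimal sketch, assuming clickhouse is the service timing out and that your compose file uses format version 3.4 (both assumptions; adjust to your setup):

cat > docker-compose.override.yml <<'EOF'
version: '3.4'
services:
  clickhouse:
    healthcheck:
      interval: 30s
      timeout: 10s
      retries: 15
EOF
docker-compose up -d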

@tomasnorre

I also had this issue upgrading from 21.6.3 -> 22.1.0; the suggestion from #1178 (comment) helped me move on.

@chadwhitacre
Member

I added a "gotcha" at the top of https://github.com/getsentry/self-hosted/releases/tag/21.12.0; hopefully it helps somebody avoid hitting this in the future.

@chadwhitacre
Member

And actually I'm surprised you hit this with 22.1.0, @tomasnorre, because we made #1241 to avoid this. Were you going straight to 22.1.0, or through 21.12.0 first?

@tomasnorre

I was on 21.1.0, went to 21.6.3, and from there to 22.1.0, as the upgrade guide suggests.
https://develop.sentry.dev/self-hosted/releases/#hard-stops

I got this in the step from 21.6.3 -> 22.1.0.

@chadwhitacre
Member

chadwhitacre commented Jan 20, 2022

Bummer. I guess we'll start getting more reports about this, then, and we'll have to keep digging. :-/

@kai11

kai11 commented Jan 26, 2022

I had this error during an upgrade and fixed it. My findings:

  • the error is not in the snuba-api container - it says "for snuba-api" because the failing container is a dependency of snuba-api
  • this is a clickhouse error, and the clickhouse container DOES have a healthcheck
  • the error essentially means clickhouse is already running somewhere on this server
  • for my 21.10.0 install I had COMPOSE_PROJECT_NAME=sentry_onpremise
  • 22.1.0 defaults to COMPOSE_PROJECT_NAME=sentry-self-hosted in the .env file
  • with the incorrect setting above, docker-compose may not work with the running containers - even docker-compose down may not be enough to stop everything
  • my own fix was to revert to the correct project name; install.sh / docker-compose up were both successful after that

PS. c258a1e#diff-e9cbb0224c4a3d23a6019ba557e0cd568c1ad5e1582ff1e335fb7d99b7a1055d
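A quick way to check which project name your running containers were created under, and to tear the old project down under its old name (a sketch; the two names are the defaults mentioned in the list above):

docker ps --format '{{.Names}}'     # sentry_onpremise_* vs. sentry-self-hosted_*
grep COMPOSE_PROJECT_NAME .env      # what the current checkout will use
COMPOSE_PROJECT_NAME=sentry_onpremise docker-compose down    # stop the legacy project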

@chadwhitacre
Member

even docker-compose down may not be enough to stop everything.

That could explain why @frame took a more drastic route:

Note: If the upgrade got interrupted you need to stop all docker containers before trying again:

docker stop $(docker ps -q) (as root)

This assumes no services other than Sentry are on this machine.

Maybe we should do something similar in turn-things-off.sh? 🤔
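If other services share the host, a narrower variant than stopping everything might be to filter on the old compose name prefix (a sketch, assuming the legacy sentry_onpremise naming):

docker stop $(docker ps -q --filter 'name=sentry_onpremise_')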

@chadwhitacre chadwhitacre reopened this Jan 26, 2022
@jthomaschewski

jthomaschewski commented Jan 26, 2022

I worked around this by calling docker-compose down before upgrading, see
#1178 (comment)
All attempts to stop containers in the install script fail because of the changed COMPOSE_PROJECT_NAME.

I see two possible solutions, something like:

  • document that it's required to manually stop containers before doing major upgrades
  • add something like COMPOSE_PROJECT_NAME=sentry_onpremise docker-compose down to the install.sh script to make sure the old containers are stopped and removed if present (see the sketch below)
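The second option could be a small guard near the top of install.sh. A hypothetical sketch, not the actual patch:

# Hypothetical: clean up containers created under the legacy project name, if any
if docker ps -a --format '{{.Names}}' | grep -q '^sentry_onpremise_'; then
  echo "Stopping containers from the legacy sentry_onpremise project ..."
  COMPOSE_PROJECT_NAME=sentry_onpremise docker-compose down --remove-orphans
fi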

@chadwhitacre
Member

document that its required to manually stop containers before doing major upgrades

The intention of turn-things-off.sh is to avoid having to document this as a manual step.

add something like COMPOSE_PROJECT_NAME=sentry_onpremise docker-compose down into the install.sh

I like this better than the over-broad docker stop $(docker ps -q). Are there other reasons that docker compose down might be insufficient, though?

@kai11

kai11 commented Jan 30, 2022

What about simply refusing to proceed with the install if old containers are found and need to be stopped by the user? With a clear message, of course.

@chadwhitacre
Member

I think since we're already attempting to stop services automatically during install then we should preserve that behavior for old containers as well.

@shaqaruden

I had this same issue. After a failed install I ran docker-compose down, which brought down all the new containers, but I needed to run docker-compose down again, which brought down all the older containers. I ran the same command once more to ensure everything was down, and then ran ./install.sh, which succeeded:

...
ERROR: for snuba-api  Container "faf1ca068692" is unhealthy.
Encountered errors while bringing up the project.
An error occurred, caught SIGERR on line 3
Cleaning up...

[redacted] in sentry at [redacted] on  tags/22.1.0 [!?] on 🐳 v20.10.12 took 57s 
➜ dcd
Removing sentry-onpremise_kafka_1      ... done
Removing sentry-onpremise_clickhouse_1 ... done
Removing sentry-onpremise_zookeeper_1  ... done
Removing sentry-onpremise_redis_1      ... done
Removing network sentry-onpremise_default

[redacted] in sentry at [redacted] on  tags/22.1.0 [!?] on 🐳 v20.10.12 
➜ vim .env

[redacted] in sentry at [redacted] on  tags/22.1.0 [!?] on 🐳 v20.10.12 took 8s 
➜ dcd
Stopping sentry_onpremise_nginx_1                                    ... done
Stopping sentry_onpremise_relay_1                                    ... done
Stopping sentry_onpremise_worker_1                                   ... done
Stopping sentry_onpremise_cron_1                                     ... done
Stopping sentry_onpremise_subscription-consumer-events_1             ... done
Stopping sentry_onpremise_post-process-forwarder_1                   ... done
Stopping sentry_onpremise_sentry-cleanup_1                           ... done
Stopping sentry_onpremise_subscription-consumer-transactions_1       ... done
Stopping sentry_onpremise_ingest-consumer_1                          ... done
Stopping sentry_onpremise_web_1                                      ... done
Stopping sentry_onpremise_snuba-cleanup_1                            ... done
Stopping sentry_onpremise_snuba-transactions-cleanup_1               ... done
Stopping sentry_onpremise_symbolicator-cleanup_1                     ... done
Stopping sentry_onpremise_snuba-replacer_1                           ... done
Stopping sentry_onpremise_snuba-subscription-consumer-transactions_1 ... done
Stopping sentry_onpremise_snuba-sessions-consumer_1                  ... done
Stopping sentry_onpremise_snuba-outcomes-consumer_1                  ... done
Stopping sentry_onpremise_snuba-subscription-consumer-events_1       ... done
Stopping sentry_onpremise_snuba-consumer_1                           ... done
Stopping sentry_onpremise_snuba-api_1                                ... done
Stopping sentry_onpremise_snuba-transactions-consumer_1              ... done
Stopping sentry_onpremise_postgres_1                                 ... done
Stopping sentry_onpremise_smtp_1                                     ... done
Stopping sentry_onpremise_memcached_1                                ... done
Stopping sentry_onpremise_symbolicator_1                             ... done
Stopping sentry_onpremise_kafka_1                                    ... done
Stopping sentry_onpremise_clickhouse_1                               ... done
Stopping sentry_onpremise_zookeeper_1                                ... done
Stopping sentry_onpremise_redis_1                                    ... done
Removing sentry_onpremise_nginx_1                                    ... done
Removing sentry_onpremise_relay_1                                    ... done
Removing sentry_onpremise_worker_1                                   ... done
Removing sentry_onpremise_cron_1                                     ... done
Removing sentry_onpremise_subscription-consumer-events_1             ... done
Removing sentry_onpremise_post-process-forwarder_1                   ... done
Removing sentry_onpremise_sentry-cleanup_1                           ... done
Removing sentry_onpremise_subscription-consumer-transactions_1       ... done
Removing sentry_onpremise_ingest-consumer_1                          ... done
Removing sentry_onpremise_web_1                                      ... done
Removing sentry_onpremise_snuba-cleanup_1                            ... done
Removing sentry_onpremise_snuba-transactions-cleanup_1               ... done
Removing sentry_onpremise_symbolicator-cleanup_1                     ... done
Removing sentry_onpremise_geoipupdate_1                              ... done
Removing sentry_onpremise_snuba-replacer_1                           ... done
Removing sentry_onpremise_snuba-subscription-consumer-transactions_1 ... done
Removing sentry_onpremise_snuba-sessions-consumer_1                  ... done
Removing sentry_onpremise_snuba-outcomes-consumer_1                  ... done
Removing sentry_onpremise_snuba-subscription-consumer-events_1       ... done
Removing sentry_onpremise_snuba-consumer_1                           ... done
Removing sentry_onpremise_snuba-api_1                                ... done
Removing sentry_onpremise_snuba-transactions-consumer_1              ... done
Removing sentry_onpremise_postgres_1                                 ... done
Removing sentry_onpremise_smtp_1                                     ... done
Removing sentry_onpremise_memcached_1                                ... done
Removing sentry_onpremise_symbolicator_1                             ... done
Removing sentry_onpremise_kafka_1                                    ... done
Removing sentry_onpremise_clickhouse_1                               ... done
Removing sentry_onpremise_zookeeper_1                                ... done
Removing sentry_onpremise_redis_1                                    ... done
Removing network sentry_onpremise_default

[redacted] in sentry at [redacted] on  tags/22.1.0 [?] on 🐳 v20.10.12 took 26s 
➜ dcd
Removing network sentry-self-hosted_default
WARNING: Network sentry-self-hosted_default not found.

[redacted] in sentry at [redacted] on  tags/22.1.0 [?] on 🐳 v20.10.12 
➜ ./install.sh 
▶ Parsing command line ...

▶ Initializing Docker Compose ...

▶ Setting up error handling ...

...

-----------------------------------------------------------------

You're all done! Run the following command to get Sentry running:

  docker-compose up -d

-----------------------------------------------------------------

@github-actions

This issue has gone three weeks without activity. In another week, I will close it.

But! If you comment or otherwise update it, I will reset the clock, and if you label it Status: Backlog or Status: In Progress, I will leave it alone ... forever!


"A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀

@ragesoss

ragesoss commented Mar 1, 2022

I ran into this when upgrading today, and had to use docker stop $(docker ps -q) to get the upgrade to work properly. (docker compose down was not sufficient.)

@aminvakil
Collaborator

I ran into this when upgrading today, and had to use docker stop $(docker ps -q) to get the upgrade to work properly. (docker compose down was not sufficient.)

@ragesoss If this happened going from a version < 21.12.0 to a version >= 21.12.0, docker-compose down -v --remove-orphans should have worked. Or executing docker-compose down -v before checking out the new version.

@iburrows

I recently upgraded from 20.12.1 -> 21.6.3 -> 22.2.0, and running docker-compose down -v --remove-orphans did not help going from 21.6.3 -> 22.2.0. The only way I could get it running was to set the clickhouse healthcheck to test: "exit 0", and then it started up. I looked at #1081, as this was the same error I was seeing, but there is nothing different between master and the latest release (currently 22.2.0). I probably broke something by setting test: "exit 0", but it started up. Here are the logs from clickhouse:

$ docker logs 2047e3808568
Processing configuration file '/etc/clickhouse-server/config.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/docker_related_config.xml'.
Merging configuration file '/etc/clickhouse-server/config.d/sentry.xml'.
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
Logging information to /var/log/clickhouse-server/clickhouse-server.log
Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Logging information to console
2022.03.15 14:32:13.546698 [ 1 ] {} <Information> : Starting ClickHouse 20.3.9.70 with revision 54433
2022.03.15 14:32:13.549431 [ 1 ] {} <Information> Application: starting up
Include not found: networks
2022.03.15 14:32:13.565627 [ 1 ] {} <Information> Application: Uncompressed cache size was lowered to 1.90 GiB because the system has low amount of memory
2022.03.15 14:32:13.565955 [ 1 ] {} <Information> Application: Mark cache size was lowered to 1.90 GiB because the system has low amount of memory
2022.03.15 14:32:13.565998 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2022.03.15 14:32:13.567895 [ 1 ] {} <Information> DatabaseOrdinary (system): Total 2 tables and 0 dictionaries.
2022.03.15 14:32:13.571923 [ 44 ] {} <Information> BackgroundProcessingPool: Create BackgroundProcessingPool with 16 threads
2022.03.15 14:32:13.832585 [ 1 ] {} <Information> DatabaseOrdinary (system): Starting up tables.
2022.03.15 14:32:13.843690 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 13 tables and 0 dictionaries.
2022.03.15 14:32:13.944329 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2022.03.15 14:32:13.947507 [ 1 ] {} <Information> BackgroundSchedulePool: Create BackgroundSchedulePool with 16 threads
2022.03.15 14:32:13.948176 [ 1 ] {} <Information> Application: It looks like the process has no CAP_NET_ADMIN capability, 'taskstats' performance statistics will be disabled. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_net_admin=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems. It also doesn't work if you run clickhouse-server inside network namespace as it happens in some containers.
2022.03.15 14:32:13.948212 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_nice' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2022.03.15 14:32:13.950341 [ 1 ] {} <Error> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2022.03.15 14:32:13.950672 [ 1 ] {} <Error> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2022.03.15 14:32:13.950951 [ 1 ] {} <Error> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2022.03.15 14:32:13.951219 [ 1 ] {} <Error> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: -9 (version 20.3.9.70 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2022.03.15 14:32:13.951330 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2022.03.15 14:32:13.951442 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2022.03.15 14:32:13.951534 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2022.03.15 14:32:14.126773 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2022.03.15 14:32:14.127392 [ 1 ] {} <Information> Application: Available RAM: 3.80 GiB; physical cores: 2; logical cores: 2.
2022.03.15 14:32:14.127416 [ 1 ] {} <Information> Application: Ready for connections.
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
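For reference, the test: "exit 0" workaround described above would look roughly like this in an override file (a sketch; it makes the healthcheck always pass, so treat it as a diagnostic aid rather than a fix):

cat > docker-compose.override.yml <<'EOF'
version: '3.4'
services:
  clickhouse:
    healthcheck:
      test: 'exit 0'
EOF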

@jthomaschewski

jthomaschewski commented Mar 15, 2022

I recently upgraded from 20.12.1 -> 21.6.3 -> 22.2.0 and running docker-compose down -v --remove-orphans did not help

I think this needs to be run before checking out the new release branch/tag.
The issue is that --remove-orphans only removes orphan containers of the same docker-compose project. But as the project name changed, it won't discover/clean up running containers of the old project.

So running docker-compose down while still having the old version checked out should stop and remove all containers. Then checking out the new version and running install.sh should work.
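Put concretely, the sequence that avoids orphaning the old project (a sketch, using the tags from the comment above):

git checkout 21.6.3    # the version the running containers were created from
docker-compose down    # the old project name is still in effect, so this finds them
git checkout 22.2.0
./install.sh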

@chadwhitacre
Member

In that case would something like #1384 address this?

@chadwhitacre
Member

Here goes nothin'. ¯\_(ツ)_/¯

@jthomaschewski

In that case would something like #1384 address this?

LGTM, I believe this should fix the issue, thanks!
I haven't tested it, though, as I upgraded my instances a while ago.

@github-actions github-actions bot locked and limited conversation to collaborators Mar 31, 2022