Docker Compose Redis exits immediately #1144

Closed
SiqingYu opened this issue Nov 4, 2018 · 3 comments

SiqingYu commented Nov 4, 2018

Here is the output of docker-compose up:

user@user-XPS-13-9350:~/repos/NewsBlur$ docker-compose up
Starting newsblur_mongo_1         ... done
Starting newsblur_elasticsearch_1 ... done
Starting newsblur_postgres_1      ... done
Recreating newsblur_redis_1       ... done
Recreating newsblur_newsblur_1    ... done
Attaching to newsblur_postgres_1, newsblur_mongo_1, newsblur_elasticsearch_1, newsblur_redis_1, newsblur_newsblur_1
postgres_1       | LOG:  database system was shut down at 2018-11-04 15:39:47 UTC
postgres_1       | LOG:  MultiXact member wraparound protections are now enabled
postgres_1       | LOG:  database system is ready to accept connections
postgres_1       | LOG:  autovacuum launcher started
mongo_1          | 2018-11-04T15:43:14.778+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=5ff0ca4d1710
mongo_1          | 2018-11-04T15:43:14.778+0000 I CONTROL  [initandlisten] db version v3.2.21
mongo_1          | 2018-11-04T15:43:14.778+0000 I CONTROL  [initandlisten] git version: 1ab1010737145ba3761318508ff65ba74dfe8155
mongo_1          | 2018-11-04T15:43:14.778+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1t  3 May 2016
mongo_1          | 2018-11-04T15:43:14.778+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongo_1          | 2018-11-04T15:43:14.778+0000 I CONTROL  [initandlisten] modules: none
mongo_1          | 2018-11-04T15:43:14.778+0000 I CONTROL  [initandlisten] build environment:
mongo_1          | 2018-11-04T15:43:14.778+0000 I CONTROL  [initandlisten]     distmod: debian81
mongo_1          | 2018-11-04T15:43:14.778+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongo_1          | 2018-11-04T15:43:14.778+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongo_1          | 2018-11-04T15:43:14.778+0000 I CONTROL  [initandlisten] options: { storage: { mmapv1: { smallFiles: true } } }
mongo_1          | 2018-11-04T15:43:14.782+0000 I -        [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
mongo_1          | 2018-11-04T15:43:14.783+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=8G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),verbose=(recovery_progress),
mongo_1          | 2018-11-04T15:43:14.922+0000 I STORAGE  [initandlisten] WiredTiger [1541346194:922284][1:0x7fe8d8c7acc0], txn-recover: Main recovery loop: starting at 6/4224
mongo_1          | 2018-11-04T15:43:14.996+0000 I STORAGE  [initandlisten] WiredTiger [1541346194:996871][1:0x7fe8d8c7acc0], txn-recover: Recovering log 6 through 7
mongo_1          | 2018-11-04T15:43:15.000+0000 I STORAGE  [initandlisten] WiredTiger [1541346195:775][1:0x7fe8d8c7acc0], txn-recover: Recovering log 7 through 7
mongo_1          | 2018-11-04T15:43:15.224+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
mongo_1          | 2018-11-04T15:43:15.226+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongo_1          | 2018-11-04T15:43:15.226+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
mongo_1          | 2018-11-04T15:43:15.226+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
elasticsearch_1  | [2018-11-04 15:43:15,365][INFO ][node                     ] [Ramshot] version[1.7.6], pid[1], build[c730b59/2016-11-18T15:21:16Z]
elasticsearch_1  | [2018-11-04 15:43:15,366][INFO ][node                     ] [Ramshot] initializing ...
elasticsearch_1  | [2018-11-04 15:43:15,421][INFO ][plugins                  ] [Ramshot] loaded [], sites []
elasticsearch_1  | [2018-11-04 15:43:15,450][INFO ][env                      ] [Ramshot] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/nvme0n1p2)]], net usable_space [88.8gb], net total_space [233.2gb], types [ext4]
newsblur_redis_1 exited with code 0
newsblur_1       | [2018-11-04 15:43:16 +0000] [1] [INFO] Starting gunicorn 19.7.0
newsblur_1       | [2018-11-04 15:43:16 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
newsblur_1       | [2018-11-04 15:43:16 +0000] [1] [INFO] Using worker: sync
newsblur_1       | [2018-11-04 15:43:16 +0000] [11] [INFO] Booting worker with pid: 11
elasticsearch_1  | [2018-11-04 15:43:17,441][INFO ][node                     ] [Ramshot] initialized
elasticsearch_1  | [2018-11-04 15:43:17,442][INFO ][node                     ] [Ramshot] starting ...
elasticsearch_1  | [2018-11-04 15:43:17,539][INFO ][transport                ] [Ramshot] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/172.20.0.4:9300]}
elasticsearch_1  | [2018-11-04 15:43:17,567][INFO ][discovery                ] [Ramshot] elasticsearch/RBtSRUJ2T1GxhnSAMH9udg
elasticsearch_1  | [2018-11-04 15:43:21,367][INFO ][cluster.service          ] [Ramshot] new_master [Ramshot][RBtSRUJ2T1GxhnSAMH9udg][47dec3105eab][inet[/172.20.0.4:9300]], reason: zen-disco-join (elected_as_master)
elasticsearch_1  | [2018-11-04 15:43:21,417][INFO ][http                     ] [Ramshot] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/172.20.0.4:9200]}
elasticsearch_1  | [2018-11-04 15:43:21,418][INFO ][node                     ] [Ramshot] started
elasticsearch_1  | [2018-11-04 15:43:21,439][INFO ][gateway                  ] [Ramshot] recovered [0] indices into cluster_state
newsblur_1       | [2018-11-04 15:54:50 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:11)
newsblur_1       | [2018-11-04 15:54:50 +0000] [11] [INFO] Worker exiting (pid: 11)
newsblur_1       | [2018-11-04 15:54:51 +0000] [12] [INFO] Booting worker with pid: 12

The Redis container does not seem to leave any logs of the exit:

user@user-XPS-13-9350:~/repos/NewsBlur$ docker-compose logs --follow redis
Attaching to newsblur_redis_1
newsblur_redis_1 exited with code 0
samuelclay (Owner) commented

Thanks @leophys! Any chance you can take a look at why the postgres user isn't being created? See https://forum.newsblur.com/t/setting-up-newsblur-on-docker/6919


leophys commented Dec 28, 2018

> Thanks @leophys! Any chance you can take a look at why the postgres user isn't being created? See https://forum.newsblur.com/t/setting-up-newsblur-on-docker/6919

Dear @samuelclay,

The SQL dump used to initialize the database has two statements at the end of the file (lines 4144-4145) with the user postgres hardcoded:

REVOKE ALL ON SCHEMA public FROM postgres;
GRANT ALL ON SCHEMA public TO postgres;

But the container is configured (in the docker-compose.yml file) to create and manage the database as a different user (POSTGRES_USER=newsblur), so restoring the dump produces:

postgres_1       | ERROR:  role "postgres" does not exist
postgres_1       | STATEMENT:  REVOKE ALL ON SCHEMA public FROM postgres;
postgres_1       | ERROR:  role "postgres" does not exist

The solution would be either to change the user in the SQL dump from postgres to newsblur, or to remove the POSTGRES_USER environment variable so that the database is managed by the container's default postgres user. Since the newsblur user is used directly elsewhere, I suggest the first option. I have already made the change locally, but the file is shipped gzipped, so the edit would not show up in a commit diff. If you are comfortable with that, I can open another PR; otherwise feel free to apply one of these changes yourself 😊
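The first option can be sketched as a small shell pipeline. This is only a sketch: dump.sql.gz is an assumed stand-in name for the actual gzipped dump shipped in the repo, and the sed expressions deliberately target only the two trailing REVOKE/GRANT statements so that any other occurrence of the word postgres in the dump is left untouched:

```shell
# Stand-in for the shipped dump; in the real repo, substitute the
# actual gzipped SQL file (dump.sql.gz is an assumed name).
printf 'REVOKE ALL ON SCHEMA public FROM postgres;\nGRANT ALL ON SCHEMA public TO postgres;\n' \
  | gzip > dump.sql.gz

# Rewrite only the two hardcoded role statements, then recompress.
gunzip -c dump.sql.gz \
  | sed -e 's/FROM postgres;/FROM newsblur;/' \
        -e 's/TO postgres;/TO newsblur;/' \
  | gzip > dump.fixed.sql.gz

# Show the rewritten statements.
gunzip -c dump.fixed.sql.gz
```

Note that because the dump is recompressed byte-for-byte from the edited text, the resulting .gz file still restores cleanly with psql; only the role name in the final two statements changes.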

samuelclay (Owner) commented

Sure, go ahead and send that as a PR. I'll unzip it myself and double-check, but I love having more contributors.
