
check if whisk_local_subjects with CouchDB exists: Status code was -1 and not [200, 404]: Request failed: <urlopen error timed out> #4148

Closed
axiqia opened this issue Nov 30, 2018 · 6 comments



axiqia commented Nov 30, 2018

I got an error when I tried to build OpenWhisk on both of my servers, and the errors are the same. Can someone help me? Thank you very much.

Environment details:

root@ubuntu-s-1vcpu-1gb-sfo2-01:~/openwhisk/ansible# uname -a
Linux ubuntu-s-1vcpu-1gb-sfo2-01 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
root@ubuntu-s-1vcpu-1gb-sfo2-01:~/openwhisk/ansible# docker --version
Docker version 18.09.0, build 4d60db4

Steps to reproduce the issue:

ansible-playbook -i environments/local initdb.yml

Provide the actual results and outputs:

root@ubuntu-s-1vcpu-1gb-sfo2-01:~/openwhisk/ansible# ansible-playbook -i environments/local initdb.yml

PLAY [ansible] ***************************************************************************************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************************************************************************
Friday 30 November 2018  14:23:39 +0000 (0:00:00.176)       0:00:00.176 ******* 
ok: [ansible]

TASK [include_tasks] *********************************************************************************************************************************************************************
Friday 30 November 2018  14:23:40 +0000 (0:00:00.925)       0:00:01.101 ******* 
included: /root/openwhisk/ansible/tasks/initdb.yml for ansible

TASK [include_tasks] *********************************************************************************************************************************************************************
Friday 30 November 2018  14:23:40 +0000 (0:00:00.080)       0:00:01.182 ******* 
included: /root/openwhisk/ansible/tasks/db/recreateDb.yml for ansible

TASK [check if whisk_local_subjects with CouchDB exists] *********************************************************************************************************************************
Friday 30 November 2018  14:23:40 +0000 (0:00:00.096)       0:00:01.279 ******* 
fatal: [ansible]: FAILED! => {"changed": false, "content": "", "msg": "Status code was -1 and not [200, 404]: Request failed: <urlopen error timed out>", "redirected": false, "status": -1, "url": "http://192.168.222.140:5984/whisk_local_subjects"}

Status code was -1 and not [200, 404]: Request failed: <urlopen error timed out>

PLAY RECAP *******************************************************************************************************************************************************************************
ansible                    : ok=3    changed=0    unreachable=0    failed=1   

Friday 30 November 2018  14:24:11 +0000 (0:00:30.494)       0:00:31.773 ******* 
=============================================================================== 
check if whisk_local_subjects with CouchDB exists -------------------------------------------------------------------------------------------------------------------------------- 30.49s
Gathering Facts ------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.93s
include_tasks --------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.10s
include_tasks --------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.08s

rabbah commented Dec 2, 2018

I presume you ran the couchdb playbook first? The logs suggest the container wasn't reachable.
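Two quick checks can narrow this down (a sketch; the URL is taken from the fatal message in the log above, and `--max-time` keeps the probe from hanging like the playbook did):

```shell
# Is the CouchDB container running, and does the endpoint from the error answer?
docker ps --filter name=couchdb 2>/dev/null || echo "docker not available here"
if curl -sf --max-time 5 http://192.168.222.140:5984/ >/dev/null 2>&1; then
  verdict="CouchDB reachable"
else
  verdict="CouchDB not reachable"
fi
echo "$verdict"
```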


axiqia commented Dec 3, 2018

@rabbah Thank you for your reply.
Sorry for leaving out that step.
I had run the couchdb playbook before, and I got:
$ ansible-playbook -i environments/local couchdb.yml


PLAY [db] *******************************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************
Monday 03 December 2018  05:25:11 +0000 (0:00:00.142)       0:00:00.142 ******* 
ok: [172.17.0.1]

TASK [couchdb : set the coordinator to the first node] **********************************************************************************************************************************
Monday 03 December 2018  05:25:12 +0000 (0:00:00.857)       0:00:01.000 ******* 
ok: [172.17.0.1]

TASK [couchdb : Set the volumes] ********************************************************************************************************************************************************
Monday 03 December 2018  05:25:12 +0000 (0:00:00.099)       0:00:01.099 ******* 
ok: [172.17.0.1]

TASK [couchdb : check if db credentials are valid for CouchDB] **************************************************************************************************************************
Monday 03 December 2018  05:25:12 +0000 (0:00:00.095)       0:00:01.195 ******* 
skipping: [172.17.0.1]

TASK [couchdb : check for persistent disk] **********************************************************************************************************************************************
Monday 03 December 2018  05:25:12 +0000 (0:00:00.082)       0:00:01.277 ******* 
skipping: [172.17.0.1]

TASK [couchdb : set the volume_dir] *****************************************************************************************************************************************************
Monday 03 December 2018  05:25:12 +0000 (0:00:00.039)       0:00:01.317 ******* 
skipping: [172.17.0.1]

TASK [couchdb : include_tasks] **********************************************************************************************************************************************************
Monday 03 December 2018  05:25:12 +0000 (0:00:00.040)       0:00:01.357 ******* 
skipping: [172.17.0.1]

TASK [couchdb : set the erlang cookie volume] *******************************************************************************************************************************************
Monday 03 December 2018  05:25:13 +0000 (0:00:00.086)       0:00:01.443 ******* 
skipping: [172.17.0.1]

TASK [couchdb : (re)start CouchDB from 'apache/couchdb:2.1 '] ***************************************************************************************************************************
Monday 03 December 2018  05:25:13 +0000 (0:00:00.090)       0:00:01.534 ******* 
changed: [172.17.0.1]

TASK [couchdb : wait until CouchDB in this host is up and running] **********************************************************************************************************************
Monday 03 December 2018  05:25:15 +0000 (0:00:02.353)       0:00:03.887 ******* 
FAILED - RETRYING: wait until CouchDB in this host is up and running (12 retries left).
ok: [172.17.0.1]

TASK [couchdb : create '_users' database for singleton mode] ****************************************************************************************************************************
Monday 03 December 2018  05:25:21 +0000 (0:00:05.963)       0:00:09.851 ******* 
ok: [172.17.0.1]

TASK [couchdb : enable the cluster setup mode] ******************************************************************************************************************************************
Monday 03 December 2018  05:25:21 +0000 (0:00:00.385)       0:00:10.237 ******* 
skipping: [172.17.0.1]

TASK [couchdb : add remote nodes to the cluster] ****************************************************************************************************************************************
Monday 03 December 2018  05:25:21 +0000 (0:00:00.085)       0:00:10.322 ******* 
skipping: [172.17.0.1]

TASK [couchdb : finish the cluster setup mode] ******************************************************************************************************************************************
Monday 03 December 2018  05:25:22 +0000 (0:00:00.083)       0:00:10.406 ******* 
skipping: [172.17.0.1]

TASK [couchdb : remove CouchDB] *********************************************************************************************************************************************************
Monday 03 December 2018  05:25:22 +0000 (0:00:00.087)       0:00:10.493 ******* 
skipping: [172.17.0.1]

PLAY RECAP ******************************************************************************************************************************************************************************
172.17.0.1                 : ok=6    changed=1    unreachable=0    failed=0   

Monday 03 December 2018  05:25:22 +0000 (0:00:00.021)       0:00:10.515 ******* 
=============================================================================== 
couchdb : wait until CouchDB in this host is up and running ---------------------------------------------------------------------------------------------------------------------- 5.96s
couchdb : (re)start CouchDB from 'apache/couchdb:2.1 '  -------------------------------------------------------------------------------------------------------------------------- 2.35s
Gathering Facts ------------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.86s
couchdb : create '_users' database for singleton mode ---------------------------------------------------------------------------------------------------------------------------- 0.39s
couchdb : set the coordinator to the first node ---------------------------------------------------------------------------------------------------------------------------------- 0.10s
couchdb : Set the volumes -------------------------------------------------------------------------------------------------------------------------------------------------------- 0.10s
couchdb : set the erlang cookie volume ------------------------------------------------------------------------------------------------------------------------------------------- 0.09s
couchdb : finish the cluster setup mode ------------------------------------------------------------------------------------------------------------------------------------------ 0.09s
couchdb : include_tasks ---------------------------------------------------------------------------------------------------------------------------------------------------------- 0.09s
couchdb : enable the cluster setup mode ------------------------------------------------------------------------------------------------------------------------------------------ 0.09s
couchdb : add remote nodes to the cluster ---------------------------------------------------------------------------------------------------------------------------------------- 0.08s
couchdb : check if db credentials are valid for CouchDB -------------------------------------------------------------------------------------------------------------------------- 0.08s
couchdb : set the volume_dir ----------------------------------------------------------------------------------------------------------------------------------------------------- 0.04s
couchdb : check for persistent disk ---------------------------------------------------------------------------------------------------------------------------------------------- 0.04s
couchdb : remove CouchDB --------------------------------------------------------------------------------------------------------------------------------------------------------- 0.02s

And then I hit the same problem I mentioned above:

$ ansible-playbook -i environments/local initdb.yml

PLAY [ansible] **************************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************
Monday 03 December 2018  05:25:59 +0000 (0:00:00.151)       0:00:00.151 ******* 
ok: [ansible]

TASK [include_tasks] ********************************************************************************************************************************************************************
Monday 03 December 2018  05:25:59 +0000 (0:00:00.886)       0:00:01.037 ******* 
included: /root/openwhisk/ansible/tasks/initdb.yml for ansible

TASK [include_tasks] ********************************************************************************************************************************************************************
Monday 03 December 2018  05:25:59 +0000 (0:00:00.082)       0:00:01.119 ******* 
included: /root/openwhisk/ansible/tasks/db/recreateDb.yml for ansible

TASK [check if whisk_local_subjects with CouchDB exists] ********************************************************************************************************************************
Monday 03 December 2018  05:26:00 +0000 (0:00:00.104)       0:00:01.224 ******* 
fatal: [ansible]: FAILED! => {"changed": false, "content": "", "msg": "Status code was -1 and not [200, 404]: Request failed: <urlopen error timed out>", "redirected": false, "status": -1, "url": "http://192.168.222.140:5984/whisk_local_subjects"}

Status code was -1 and not [200, 404]: Request failed: <urlopen error timed out>

PLAY RECAP ******************************************************************************************************************************************************************************
ansible                    : ok=3    changed=0    unreachable=0    failed=1   

Monday 03 December 2018  05:26:30 +0000 (0:00:30.550)       0:00:31.774 ******* 
=============================================================================== 
check if whisk_local_subjects with CouchDB exists ------------------------------------------------------------------------------------------------------------------------------- 30.55s
Gathering Facts ------------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.89s
include_tasks -------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.10s
include_tasks -------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.08s

I'm new to CouchDB and Ansible, so please forgive me if this is a basic question.


rabbah commented Dec 3, 2018

It looks like there's a configuration error in your ansible/db_local.ini file, because the second script is connecting to http://192.168.222.140:5984 instead of 172.17.0.1.

This is what the ansible/db_local.ini file should look like:

> cat ansible/db_local.ini
[db_creds]
db_provider=CouchDB
db_username=whisk_admin
db_password=some_passw0rd
db_protocol=http
db_host=172.17.0.1
db_port=5984

[controller]
db_username=whisk_local_controller0
db_password=some_controller_passw0rd

[invoker]
db_username=whisk_local_invoker0
db_password=some_invoker_passw0rd
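Before rerunning initdb, the configured endpoint can be probed directly. A minimal sketch, assuming the file sits at ansible/db_local.ini as shown above (the fallback values are only for illustration):

```shell
# Read db_host/db_port out of ansible/db_local.ini and probe CouchDB directly.
DB_HOST=$(awk -F= '/^db_host/ {print $2}' ansible/db_local.ini 2>/dev/null)
DB_PORT=$(awk -F= '/^db_port/ {print $2}' ansible/db_local.ini 2>/dev/null)
# Fall back to the values from the sample file above if it is not present here.
: "${DB_HOST:=172.17.0.1}" "${DB_PORT:=5984}"
if curl -sf --max-time 5 "http://${DB_HOST}:${DB_PORT}/" >/dev/null 2>&1; then
  echo "CouchDB reachable at ${DB_HOST}:${DB_PORT}"
else
  echo "CouchDB NOT reachable at ${DB_HOST}:${DB_PORT}"
fi
```

If this prints "NOT reachable", initdb.yml will fail the same way its uri check did above.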


axiqia commented Dec 3, 2018

@rabbah Thank you for your quick reply.
You are right. I changed db_local.ini to db_host=172.17.0.1, and ansible-playbook -i environments/local initdb.yml ran successfully.


axiqia commented Dec 3, 2018

@rabbah I've run into a new problem. I ran:

ansible-playbook -i environments/local couchdb.yml
ansible-playbook -i environments/local initdb.yml
ansible-playbook -i environments/local wipe.yml
ansible-playbook -i environments/local apigateway.yml
ansible-playbook -i environments/local openwhisk.yml

After the last command, I got

$ ansible-playbook -i environments/local openwhisk.yml 

PLAY [zookeepers] ***********************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************
Monday 03 December 2018  06:26:36 +0000 (0:00:00.143)       0:00:00.143 ******* 
ok: [kafka0]

TASK [zookeeper : pull the zookeeper:3.4 image] *****************************************************************************************************************************************
Monday 03 December 2018  06:26:37 +0000 (0:00:00.991)       0:00:01.134 ******* 
changed: [kafka0]

TASK [zookeeper : (re)start zookeeper] **************************************************************************************************************************************************
Monday 03 December 2018  06:26:39 +0000 (0:00:02.062)       0:00:03.197 ******* 
changed: [kafka0]

TASK [zookeeper : wait until the Zookeeper in this host is up and running] **************************************************************************************************************
Monday 03 December 2018  06:26:41 +0000 (0:00:02.283)       0:00:05.480 ******* 
FAILED - RETRYING: wait until the Zookeeper in this host is up and running (36 retries left).
changed: [kafka0]

TASK [zookeeper : remove old zookeeper] *************************************************************************************************************************************************
Monday 03 December 2018  06:26:49 +0000 (0:00:07.677)       0:00:13.158 ******* 
skipping: [kafka0]

TASK [zookeeper : remove zookeeper] *****************************************************************************************************************************************************
Monday 03 December 2018  06:26:49 +0000 (0:00:00.047)       0:00:13.206 ******* 
skipping: [kafka0]

PLAY [kafkas] ***************************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************
Monday 03 December 2018  06:26:49 +0000 (0:00:00.055)       0:00:13.261 ******* 
ok: [kafka0]

TASK [kafka : create kafka certificate directory] ***************************************************************************************************************************************
Monday 03 December 2018  06:26:50 +0000 (0:00:00.733)       0:00:13.995 ******* 
ok: [kafka0]

TASK [kafka : copy keystore] ************************************************************************************************************************************************************
Monday 03 December 2018  06:26:50 +0000 (0:00:00.431)       0:00:14.427 ******* 
skipping: [kafka0]

TASK [kafka : add kafka default env vars] ***********************************************************************************************************************************************
Monday 03 December 2018  06:26:50 +0000 (0:00:00.061)       0:00:14.488 ******* 
ok: [kafka0]

TASK [kafka : add kafka non-ssl vars] ***************************************************************************************************************************************************
Monday 03 December 2018  06:26:51 +0000 (0:00:00.421)       0:00:14.909 ******* 
ok: [kafka0]

TASK [kafka : add kafka ssl env vars] ***************************************************************************************************************************************************
Monday 03 December 2018  06:26:51 +0000 (0:00:00.138)       0:00:15.047 ******* 
skipping: [kafka0]

TASK [kafka : join kafka ssl env vars] **************************************************************************************************************************************************
Monday 03 December 2018  06:26:51 +0000 (0:00:00.060)       0:00:15.108 ******* 
skipping: [kafka0]

TASK [kafka : join kafka non-ssl env vars] **********************************************************************************************************************************************
Monday 03 December 2018  06:26:51 +0000 (0:00:00.059)       0:00:15.168 ******* 
ok: [kafka0]

TASK [kafka : (re)start kafka using 'wurstmeister/kafka:0.11.0.1'] **********************************************************************************************************************
Monday 03 December 2018  06:26:51 +0000 (0:00:00.154)       0:00:15.322 ******* 
changed: [kafka0]

TASK [kafka : wait until the kafka server started up] ***********************************************************************************************************************************
Monday 03 December 2018  06:26:53 +0000 (0:00:02.314)       0:00:17.637 ******* 
FAILED - RETRYING: wait until the kafka server started up (10 retries left).
FAILED - RETRYING: wait until the kafka server started up (9 retries left).
FAILED - RETRYING: wait until the kafka server started up (8 retries left).
FAILED - RETRYING: wait until the kafka server started up (7 retries left).
FAILED - RETRYING: wait until the kafka server started up (6 retries left).
FAILED - RETRYING: wait until the kafka server started up (5 retries left).
FAILED - RETRYING: wait until the kafka server started up (4 retries left).
FAILED - RETRYING: wait until the kafka server started up (3 retries left).
FAILED - RETRYING: wait until the kafka server started up (2 retries left).
FAILED - RETRYING: wait until the kafka server started up (1 retries left).
fatal: [kafka0]: FAILED! => {"attempts": 10, "changed": true, "cmd": "(echo dump; sleep 1) | nc 172.17.0.1 2181 | grep /brokers/ids/0", "delta": "0:00:01.009079", "end": "2018-12-03 06:27:57.766174", "msg": "non-zero return code", "rc": 1, "start": "2018-12-03 06:27:56.757095", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

[FAILED]
> (echo dump; sleep 1) | nc 172.17.0.1 2181 | grep /brokers/ids/0
non-zero return code

PLAY RECAP ******************************************************************************************************************************************************************************
kafka0                     : ok=10   changed=4    unreachable=0    failed=1   

Monday 03 December 2018  06:27:57 +0000 (0:01:03.790)       0:01:21.427 ******* 
=============================================================================== 
kafka : wait until the kafka server started up ---------------------------------------------------------------------------------------------------------------------------------- 63.79s
zookeeper : wait until the Zookeeper in this host is up and running -------------------------------------------------------------------------------------------------------------- 7.68s
kafka : (re)start kafka using 'wurstmeister/kafka:0.11.0.1'  --------------------------------------------------------------------------------------------------------------------- 2.31s
zookeeper : (re)start zookeeper -------------------------------------------------------------------------------------------------------------------------------------------------- 2.28s
zookeeper : pull the zookeeper:3.4 image ----------------------------------------------------------------------------------------------------------------------------------------- 2.06s
Gathering Facts ------------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.99s
Gathering Facts ------------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.73s
kafka : create kafka certificate directory --------------------------------------------------------------------------------------------------------------------------------------- 0.43s
kafka : add kafka default env vars ----------------------------------------------------------------------------------------------------------------------------------------------- 0.42s
kafka : join kafka non-ssl env vars ---------------------------------------------------------------------------------------------------------------------------------------------- 0.15s
kafka : add kafka non-ssl vars --------------------------------------------------------------------------------------------------------------------------------------------------- 0.14s
kafka : copy keystore ------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.06s
kafka : add kafka ssl env vars --------------------------------------------------------------------------------------------------------------------------------------------------- 0.06s
kafka : join kafka ssl env vars -------------------------------------------------------------------------------------------------------------------------------------------------- 0.06s
zookeeper : remove zookeeper ----------------------------------------------------------------------------------------------------------------------------------------------------- 0.06s
zookeeper : remove old zookeeper ------------------------------------------------------------------------------------------------------------------------------------------------- 0.05s

I found that the wurstmeister/kafka:0.11.0.1 container seems to be running already:

$ docker ps
CONTAINER ID        IMAGE                         COMMAND                  CREATED              STATUS                          PORTS                                                                    NAMES
0d96da4cdf9c        wurstmeister/kafka:0.11.0.1   "start-kafka.sh"         About a minute ago   Restarting (1) 27 seconds ago                                                                            kafka0
a4abd9941fdb        zookeeper:3.4                 "/docker-entrypoint.…"   About a minute ago   Up About a minute               0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 0.0.0.0:3888->3888/tcp   zookeeper0
cf286d78c4ae        apache/couchdb:2.1            "tini -- /docker-ent…"   3 minutes ago        Up 3 minutes                    0.0.0.0:4369->4369/tcp, 0.0.0.0:5984->5984/tcp, 0.0.0.0:9100->9100/tcp   couchdb
b0341b9eaeca        openwhisk/apigateway:latest   "/usr/bin/dumb-init …"   23 minutes ago       Up 23 minutes                   80/tcp, 8423/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:9001->8080/tcp         apigateway
2cdefc217be9        redis:4.0                     "docker-entrypoint.s…"   24 minutes ago       Up 24 minutes                   0.0.0.0:6379->6379/tcp                                                   redis


axiqia commented Dec 3, 2018

I tried to start the wurstmeister/kafka:0.11.0.1 container and run start-kafka.sh manually.

$ docker run -it wurstmeister/kafka:0.11.0.1 /bin/bash
start-kafka.sh 

And I got

waiting for kafka to be ready
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# //hs_err_pid10.log

It is due to my VPS memory limit. Maybe I have to buy a larger VPS.
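Before resizing, it's worth confirming how much RAM the host actually has; the JVM above failed to reserve 1073741824 bytes (1 GiB) for the Kafka heap. As a possible alternative to a larger VPS (an assumption, not verified in this thread), Kafka honors the standard KAFKA_HEAP_OPTS environment variable, so the heap could be capped instead:

```shell
# Report the host's total memory; the Kafka JVM above needed a 1 GiB heap.
mem_line="$(awk '/MemTotal/ {printf "%.0f MiB total RAM", $2/1024}' /proc/meminfo 2>/dev/null || true)"
echo "${mem_line:-MemTotal unavailable on this system}"

# Hypothetical workaround (untested here): cap the broker's JVM heap, e.g.
#   docker run -e KAFKA_HEAP_OPTS="-Xmx256M -Xms128M" wurstmeister/kafka:0.11.0.1
```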

@axiqia axiqia closed this as completed Dec 3, 2018