
deploy failed when run ansible-playbook using openwhisk.yml #5477

Closed
re-xmyl opened this issue Apr 20, 2024 · 8 comments
re-xmyl commented Apr 20, 2024

I tried to deploy OpenWhisk on Ubuntu 18.04.
When I run ansible-playbook -i environments/local openwhisk.yml, the deployment fails.
I deployed again but hit the same problem. Could you please help me find what's wrong? Thank you so much!

The errors are as follows:

TASK [controller : warm up activation path] *********************************************************************************************************************************************************************************************
Saturday 20 April 2024  02:58:17 -0700 (0:00:10.731)       0:01:13.617 ******** 
fatal: [controller0]: FAILED! => {"access_control_allow_headers": "Authorization, Origin, X-Requested-With, Content-Type, 
Accept, User-Agent", "access_control_allow_methods": "GET, DELETE, POST, PUT, HEAD", "access_control_allow_origin": "*", 
"changed": false, "connection": "close", "content": "{\"code\":\"FJgvfXLBQbxrTNajxJZ1gWexHd7VN8HI\",\"error\":\"The 
requested resource does not exist.\"}", "content_length": "92", "content_type": "application/json", "date": "Sat, 20 Apr 2024 
09:58:18 GMT", "json": {"code": "FJgvfXLBQbxrTNajxJZ1gWexHd7VN8HI", "error": "The requested resource does not exist."}, 
"msg": "Status code was 404 and not [200]: HTTP Error 404: Not Found", "redirected": false, "server": "akka-http/10.2.4", 
"status": 404, "url": "https://789c46b1-71f6-4ed5-8c54-816aa4f8c502:abczO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP@172.17.0.1:10001/api/v1/namespaces/_/actions/invokerHealthTestAction0?blocking=false&result=false", "x_request_id": "FJgvfXLBQbxrTNajxJZ1gWexHd7VN8HI"}
...ignoring

Status code was 404 and not [200]: HTTP Error 404: Not Found


TASK [schedulers : wait until the Scheduler in this host is up and running] ****
Saturday 20 April 2024  03:17:31 -0700 (0:00:00.812)       0:01:54.732 ******** 
FAILED - RETRYING: wait until the Scheduler in this host is up and running (12 retries left).
FAILED - RETRYING: wait until the Scheduler in this host is up and running (11 retries left).
FAILED - RETRYING: wait until the Scheduler in this host is up and running (10 retries left).
FAILED - RETRYING: wait until the Scheduler in this host is up and running (9 retries left).
FAILED - RETRYING: wait until the Scheduler in this host is up and running (8 retries left).
FAILED - RETRYING: wait until the Scheduler in this host is up and running (7 retries left).
FAILED - RETRYING: wait until the Scheduler in this host is up and running (6 retries left).
FAILED - RETRYING: wait until the Scheduler in this host is up and running (5 retries left).
FAILED - RETRYING: wait until the Scheduler in this host is up and running (4 retries left).
FAILED - RETRYING: wait until the Scheduler in this host is up and running (3 retries left).
FAILED - RETRYING: wait until the Scheduler in this host is up and running (2 retries left).
FAILED - RETRYING: wait until the Scheduler in this host is up and running (1 retries left).
fatal: [scheduler0]: FAILED! => {"attempts": 12, "changed": false, "content": "", "msg": "Status code was -1 and not [200]: 
Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://172.17.0.1:14001/ping"}

Status code was -1 and not [200]: Request failed: <urlopen error [Errno 111] Connection refused>
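Before digging into the container logs, the failing health check can be reproduced by hand. A minimal sketch, assuming the host and port taken from the error message above (the commented `curl`/`docker logs` lines are what you would actually run on the deployment host):

```shell
#!/bin/sh
# Rebuild the health-check URL the playbook polls (values taken from
# the "wait until the Scheduler ... is up" error above) and probe it.
SCHEDULER_HOST=172.17.0.1
SCHEDULER_PORT=14001
PING_URL="http://${SCHEDULER_HOST}:${SCHEDULER_PORT}/ping"
echo "probing ${PING_URL}"
# On the deployment host:
#   curl -sf "$PING_URL" && echo "scheduler is up"
#   docker logs scheduler0 --tail 50   # otherwise inspect the crash
```

If `curl` is refused while `docker ps` shows scheduler0 as Up, the JVM inside the container has usually crashed after start, which is exactly what the logs below show.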

The logs in container scheduler0 are as follows:

[2024-04-20T08:24:39.305Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - Starting up, Akka version [2.6.12] ...
[2024-04-20T08:24:39.336Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - Registered cluster JMX MBean [akka:type=Cluster]
[2024-04-20T08:24:39.336Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - Started up successfully
[2024-04-20T08:24:39.350Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - No downing-provider-class configured, manual cluster downing required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#downing
[2024-04-20T08:24:39.382Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - Node [akka://scheduler-actor-system@172.17.0.1:25520] is JOINING itself (with roles [dc-default], version [0.0.0]) and forming new cluster
[2024-04-20T08:24:39.383Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - is the new leader among reachable nodes (more leaders may exist)
[2024-04-20T08:24:39.389Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - Leader is moving node [akka://scheduler-actor-system@172.17.0.1:25520] to [Up]
[2024-04-20T08:24:39.769Z] [INFO] [#tid_sid_unknown] [Config] environment set value for whisk.scheduler.endpoints.host
[2024-04-20T08:24:39.769Z] [INFO] [#tid_sid_unknown] [Config] environment set value for limits.actions.invokes.concurrent
[2024-04-20T08:24:39.769Z] [INFO] [#tid_sid_unknown] [Config] environment set value for whisk.scheduler.endpoints.rpcPort
[2024-04-20T08:24:39.770Z] [INFO] [#tid_sid_unknown] [Config] environment set value for whisk.scheduler.endpoints.akkaPort
[2024-04-20T08:24:39.770Z] [INFO] [#tid_sid_unknown] [Config] environment set value for runtimes.manifest
[2024-04-20T08:24:39.772Z] [INFO] [#tid_sid_unknown] [Config] environment set value for kafka.hosts
[2024-04-20T08:24:39.772Z] [INFO] [#tid_sid_unknown] [Config] environment set value for port
[2024-04-20T08:24:39.872Z] [WARN] Failed to attach the instrumentation because the Kamon Bundle is not present on the classpath
[2024-04-20T08:24:39.959Z] [INFO] Started the Kamon StatsD reporter
[2024-04-20T08:24:40.571Z] [INFO] [#tid_sid_unknown] [KafkaMessagingProvider] created topic scheduler0
[2024-04-20T08:24:40.690Z] [INFO] [#tid_sid_unknown] [KafkaMessagingProvider] created topic creationAck0
[2024-04-20T08:24:41.809Z] [INFO] [#tid_sid_unknown] [LeaseKeepAliveService] Granted a new lease Lease(7802925036371667984,10)
[2024-04-20T08:24:41.824Z] [INFO] [#tid_sid_unknown] [WatcherService] watch endpoint: WatchEndpoint(whisk/instance/scheduler0/lease,7802925036371667984,false,lease-service,Set(DeleteEvent))
Exception in thread "main" java.lang.ExceptionInInitializerError
	at java.base/java.lang.J9VMInternals.ensureError(J9VMInternals.java:184)
	at java.base/java.lang.J9VMInternals.recordInitializationFailure(J9VMInternals.java:173)
	at org.apache.openwhisk.core.scheduler.queue.ElasticSearchDurationCheckerProvider$.instance(ElasticSearchDurationChecker.scala:121)
	at org.apache.openwhisk.core.scheduler.queue.ElasticSearchDurationCheckerProvider$.instance(ElasticSearchDurationChecker.scala:112)
	at org.apache.openwhisk.core.scheduler.Scheduler.<init>(Scheduler.scala:97)
	at org.apache.openwhisk.core.scheduler.Scheduler$.main(Scheduler.scala:350)
	at org.apache.openwhisk.core.scheduler.Scheduler.main(Scheduler.scala)
Caused by: pureconfig.error.ConfigReaderException: Cannot convert configuration to a org.apache.openwhisk.core.database.elasticsearch.ElasticSearchActivationStoreConfig. Failures are:
  at 'whisk.activation-store.elasticsearch':
    - (jar:file:/scheduler/lib/openwhisk-common-1.0.1-SNAPSHOT.jar!/application.conf:368) Key not found: 'protocol'.
    - (jar:file:/scheduler/lib/openwhisk-common-1.0.1-SNAPSHOT.jar!/application.conf:368) Key not found: 'hosts'.
    - (jar:file:/scheduler/lib/openwhisk-common-1.0.1-SNAPSHOT.jar!/application.conf:368) Key not found: 'index-pattern'.
    - (jar:file:/scheduler/lib/openwhisk-common-1.0.1-SNAPSHOT.jar!/application.conf:368) Key not found: 'username'.
    - (jar:file:/scheduler/lib/openwhisk-common-1.0.1-SNAPSHOT.jar!/application.conf:368) Key not found: 'password'.

	at pureconfig.package$.getResultOrThrow(package.scala:139)
	at pureconfig.package$.loadConfigOrThrow(package.scala:161)
	at org.apache.openwhisk.core.database.elasticsearch.ElasticSearchActivationStore$.<init>(ElasticSearchActivationStore.scala:441)
	at org.apache.openwhisk.core.database.elasticsearch.ElasticSearchActivationStore$.<clinit>(ElasticSearchActivationStore.scala)
	... 5 more
[2024-04-20T08:26:33.563Z] [WARN] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - Scheduled sending of heartbeat was delayed. Previous heartbeat was sent [7139] ms ago, expected interval is [1000] ms. This may cause failure detection to mark members as unreachable. The reason can be thread starvation, e.g. by running blocking tasks on the default dispatcher, CPU overload, or GC.

The docker ps output:

root@ubuntu:/home/re/openwhisk/ansible# docker ps
CONTAINER ID   IMAGE                           COMMAND                   CREATED         STATUS         PORTS                                                                                                                            NAMES
f5ce0ffc7d10   whisk/scheduler:latest          "/bin/sh -c 'exec /i…"   8 minutes ago   Up 8 minutes   0.0.0.0:13001->13001/tcp, 0.0.0.0:21000->21000/tcp, 0.0.0.0:22000->22000/tcp, 0.0.0.0:25520->3551/tcp, 0.0.0.0:14001->8080/tcp   scheduler0
53e1e6089e0f   whisk/controller:latest         "/bin/sh -c 'exec /i…"   8 minutes ago   Up 8 minutes   0.0.0.0:15000->15000/tcp, 0.0.0.0:16000->16000/tcp, 0.0.0.0:8000->2551/tcp, 0.0.0.0:10001->8080/tcp                              controller0
349459c3d6ed   wurstmeister/kafka:2.13-2.7.0   "start-kafka.sh"          8 minutes ago   Up 8 minutes   0.0.0.0:9072->9072/tcp, 0.0.0.0:9093->9093/tcp                                                                                   kafka0
10f1a16afad4   zookeeper:3.4                   "/docker-entrypoint.…"   9 minutes ago   Up 9 minutes   0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 0.0.0.0:3888->3888/tcp                                                           zookeeper0
0dbbe02d025d   quay.io/coreos/etcd:v3.4.0      "/usr/local/bin/etcd…"   9 minutes ago   Up 9 minutes   0.0.0.0:2379->2379/tcp, 0.0.0.0:2480->2480/tcp, 2380/tcp                                                                         etcd0
bdd6e9d4619a   apache/couchdb:2.3              "tini -- /docker-ent…"   2 hours ago     Up 2 hours     0.0.0.0:4369->4369/tcp, 0.0.0.0:5984->5984/tcp, 0.0.0.0:9100->9100/tcp                                                           couchdb
@re-xmyl re-xmyl closed this as completed May 10, 2024
@SCDESPERTATE
Hi there, I ran into the same problem you described. Could you tell me how you fixed this issue? @re-xmyl


style95 commented May 13, 2024

That's generally because the ElasticSearch activation store is configured, but the relevant configuration is missing.
You can add the following configuration:

db_activation_backend: ElasticSearch
elastic_cluster_name: <your elasticsearch cluster name>
elastic_protocol: <your elasticsearch protocol>
elastic_index_pattern: <your elasticsearch index pattern>
elastic_base_volume: <your elasticsearch volume directory>
elastic_username: <your elasticsearch username>
elastic_password: <your elasticsearch password>

https://github.com/apache/openwhisk/tree/master/ansible#optional-enable-elasticsearch-activation-store
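Per the linked README section, these settings would typically go in your environment's group_vars file (e.g. `ansible/environments/local/group_vars/all`) or be passed with `-e` on the `ansible-playbook` command line. A sketch only; every `<...>` value is a placeholder for your own ElasticSearch deployment:

```yaml
# Sketch: ElasticSearch activation store settings for an Ansible
# environment's group_vars file. Replace all <...> placeholders.
db_activation_backend: ElasticSearch
elastic_cluster_name: "<your elasticsearch cluster name>"
elastic_protocol: "<http or https>"
elastic_index_pattern: "<your elasticsearch index pattern>"
elastic_base_volume: "<your elasticsearch volume directory>"
elastic_username: "<your elasticsearch username>"
elastic_password: "<your elasticsearch password>"
```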

SCDESPERTATE commented May 13, 2024

Thanks for @style95's kindness, but I haven't configured ElasticSearch as the value of the db_activation_backend option. After changing the configuration in ansible/group_vars/all as follows

  artifact_store:
    backend: "CouchDB"
    #backend: "{{ db_artifact_backend | default('CouchDB') }}"

or specifying it at deploy time with ansible-playbook -i environments/$ENVIRONMENT openwhisk.yml -e db_activation_backend=CouchDB,
it still ran into the same error, and the address 127.0.0.1:14001 still refused connections...
I suspect the couchdb service may have gone wrong, but here is my docker ps -a result after running into this problem:

CONTAINER ID   IMAGE                        COMMAND                  CREATED              STATUS              PORTS                                                                                                                            NAMES
9ba165dd233f   whisk/scheduler:latest       "/bin/sh -c 'exec /i…"   About a minute ago   Up About a minute   0.0.0.0:13001->13001/tcp, 0.0.0.0:21000->21000/tcp, 0.0.0.0:22000->22000/tcp, 0.0.0.0:25520->3551/tcp, 0.0.0.0:14001->8080/tcp   scheduler0
0a2e38215486   whisk/controller:latest      "/bin/sh -c 'exec /i…"   2 minutes ago        Up 2 minutes        0.0.0.0:15000->15000/tcp, 0.0.0.0:16000->16000/tcp, 0.0.0.0:8000->2551/tcp, 0.0.0.0:10001->8080/tcp                              controller0
778596fd853c   bitnami/kafka:latest         "/opt/bitnami/script…"   3 minutes ago        Up 3 minutes        0.0.0.0:9072->9072/tcp, 0.0.0.0:9093->9093/tcp, 9092/tcp                                                                         kafka0
20fb67579550   zookeeper:3.4                "/docker-entrypoint.…"   3 minutes ago        Up 3 minutes        0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 0.0.0.0:3888->3888/tcp                                                           zookeeper0
88ea54ebab17   quay.io/coreos/etcd:v3.4.0   "/usr/local/bin/etcd…"   3 minutes ago        Up 3 minutes        0.0.0.0:2379->2379/tcp, 0.0.0.0:2480->2480/tcp, 2380/tcp                                                                         etcd0
94000a9abafd   apache/couchdb:2.3           "tini -- /docker-ent…"   12 minutes ago       Up 12 minutes       0.0.0.0:4369->4369/tcp, 0.0.0.0:5984->5984/tcp, 0.0.0.0:9100->9100/tcp


style95 commented May 13, 2024

If you don't want to deploy elasticsearch, could you try with NoopDurationCheckerProvider?
You can replace ElasticSearchDurationCheckerProvider in common/scala/src/main/resources/reference.conf.

https://github.com/apache/openwhisk/tree/master/ansible#configure-service-providers-for-the-scheduler
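For reference, the SPI binding lives in the `whisk.spi` block of that file. A sketch of the change, assuming the key name `DurationCheckerProvider` and the default binding as found in current master:

```hocon
# common/scala/src/main/resources/reference.conf (excerpt, sketch)
whisk.spi {
  # default binding:
  # DurationCheckerProvider = org.apache.openwhisk.core.scheduler.queue.ElasticSearchDurationCheckerProvider
  DurationCheckerProvider = org.apache.openwhisk.core.scheduler.queue.NoopDurationCheckerProvider
}
```

Note that the value must be the provider object (`NoopDurationCheckerProvider`), not the checker itself, since the SPI loader instantiates the provider.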

@SCDESPERTATE
Thanks @style95! Well, I tried it, but the Connection refused error still persists and the log shows different content:

[2024-05-13T08:50:34.531Z] [INFO] Slf4jLogger started
[2024-05-13T08:50:34.985Z] [INFO] Remoting started with transport [Artery tcp]; listening on address [akka://scheduler-actor-system@172.17.0.1:25520] and bound to [akka://scheduler-actor-system@172.17.0.8:3551] with UID [2047018891666814238]
[2024-05-13T08:50:35.010Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - Starting up, Akka version [2.6.12] ...
[2024-05-13T08:50:35.083Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - Registered cluster JMX MBean [akka:type=Cluster]
[2024-05-13T08:50:35.083Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - Started up successfully
[2024-05-13T08:50:35.119Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - No downing-provider-class configured, manual cluster downing required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#downing
[2024-05-13T08:50:35.161Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - Node [akka://scheduler-actor-system@172.17.0.1:25520] is JOINING itself (with roles [dc-default], version [0.0.0]) and forming new cluster
[2024-05-13T08:50:35.163Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - is the new leader among reachable nodes (more leaders may exist)
[2024-05-13T08:50:35.175Z] [INFO] Cluster Node [akka://scheduler-actor-system@172.17.0.1:25520] - Leader is moving node [akka://scheduler-actor-system@172.17.0.1:25520] to [Up]
[2024-05-13T08:50:35.962Z] [INFO] [#tid_sid_unknown] [Config] environment set value for whisk.scheduler.endpoints.host
[2024-05-13T08:50:35.963Z] [INFO] [#tid_sid_unknown] [Config] environment set value for limits.actions.invokes.concurrent
[2024-05-13T08:50:35.963Z] [INFO] [#tid_sid_unknown] [Config] environment set value for whisk.scheduler.endpoints.rpcPort
[2024-05-13T08:50:35.963Z] [INFO] [#tid_sid_unknown] [Config] environment set value for whisk.scheduler.endpoints.akkaPort
[2024-05-13T08:50:35.963Z] [INFO] [#tid_sid_unknown] [Config] environment set value for runtimes.manifest
[2024-05-13T08:50:35.963Z] [INFO] [#tid_sid_unknown] [Config] environment set value for kafka.hosts
[2024-05-13T08:50:35.963Z] [INFO] [#tid_sid_unknown] [Config] environment set value for port
[2024-05-13T08:50:36.157Z] [WARN] Failed to attach the instrumentation because the Kamon Bundle is not present on the classpath
[2024-05-13T08:50:36.355Z] [INFO] Started the Kamon StatsD reporter
[2024-05-13T08:50:37.697Z] [INFO] [#tid_sid_unknown] [KafkaMessagingProvider] created topic scheduler0
[2024-05-13T08:50:38.122Z] [INFO] [#tid_sid_unknown] [KafkaMessagingProvider] created topic creationAck0
[2024-05-13T08:50:40.444Z] [INFO] [#tid_sid_unknown] [LeaseKeepAliveService] Granted a new lease Lease(7802925545492717579,10)
[2024-05-13T08:50:40.457Z] [INFO] [#tid_sid_unknown] [WatcherService] watch endpoint: WatchEndpoint(whisk/instance/scheduler0/lease,7802925545492717579,false,lease-service,Set(DeleteEvent))
Exception in thread "main" java.lang.ClassCastException: org.apache.openwhisk.core.scheduler.queue.NoopDurationChecker$ incompatible with org.apache.openwhisk.spi.Spi
        at org.apache.openwhisk.spi.SpiLoader$.get(SpiLoader.scala:41)
        at org.apache.openwhisk.core.scheduler.Scheduler.<init>(Scheduler.scala:96)
        at org.apache.openwhisk.core.scheduler.Scheduler$.main(Scheduler.scala:350)
        at org.apache.openwhisk.core.scheduler.Scheduler.main(Scheduler.scala)

This seems to be another issue...


style95 commented May 13, 2024

@SCDESPERTATE
How did you configure your reference.conf?


SCDESPERTATE commented May 13, 2024

My apologies, I made a mistake: I mixed up NoopDurationCheckerProvider and NoopDurationChecker.
Sorry for wasting @style95's time. I appreciate your patience!
After replacing ElasticSearchDurationCheckerProvider with NoopDurationCheckerProvider, the setup went well and the result was as follows:

PLAY RECAP ************************************************************************************************************
172.17.0.1                 : ok=17   changed=6    unreachable=0    failed=0    skipped=9    rescued=0    ignored=0   
ansible                    : ok=2    changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
controller0                : ok=29   changed=3    unreachable=0    failed=0    skipped=23   rescued=0    ignored=1   
etcd0                      : ok=5    changed=1    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
invoker0                   : ok=31   changed=7    unreachable=0    failed=0    skipped=40   rescued=0    ignored=0   
kafka0                     : ok=10   changed=4    unreachable=0    failed=0    skipped=7    rescued=0    ignored=0   
scheduler0                 : ok=20   changed=3    unreachable=0    failed=0    skipped=14   rescued=0    ignored=0   

Monday, 13 May 2024  18:13:06 +0800 (0:00:00.167)       0:04:10.413 ****************
=============================================================================== 
invoker : pull runtime action images per manifest ------------------------------------------------------------- 36.14s
controller : wait until the Controller in this host is up and running ----------------------------------------- 22.66s
invoker : wait until Invoker is up and running ---------------------------------------------------------------- 12.54s
schedulers : wait until the Scheduler in this host is up and running ------------------------------------------ 11.96s
kafka : wait until the kafka server started up ----------------------------------------------------------------- 8.00s
zookeeper : wait until the Zookeeper in this host is up and running -------------------------------------------- 7.88s
cli : Unarchive the individual tarballs ------------------------------------------------------------------------ 7.77s
etcd : (re)start etcd ------------------------------------------------------------------------------------------ 6.83s
schedulers : populate environment variables for scheduler ------------------------------------------------------ 5.79s
kafka : (re)start kafka using 'bitnami/kafka:latest'  ---------------------------------------------------------- 5.26s
controller : warm up activation path --------------------------------------------------------------------------- 5.16s
zookeeper : (re)start zookeeper -------------------------------------------------------------------------------- 4.88s
invoker : copy keystore, key and cert -------------------------------------------------------------------------- 4.85s
invoker : populate environment variables for invoker ----------------------------------------------------------- 3.69s
controller : populate environment variables for controller ----------------------------------------------------- 3.58s
controller : copy certificates --------------------------------------------------------------------------------- 3.48s
cli : Unarchive the individual zipfiles into binaries ---------------------------------------------------------- 3.33s
nginx : copy cert files from local to remote in nginx config directory ----------------------------------------- 3.16s
Gathering Facts ------------------------------------------------------------------------------------------------ 3.09s
nginx : pull the nginx:1.21.1 image ---------------------------------------------------------------------------- 2.93s

I have exported the wsk CLI location as an environment variable, and configured the CLI as follows:

wsk property set \
  --apihost 'http://localhost:3233' \
  --auth '23bc46b1-71f6-4ed5-8c54-816aa4f8c502:123zO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP'

It returned as expected:

ok: whisk auth set. Run 'wsk property get --auth' to see the new value.
ok: whisk API host set to http://localhost:3233

But when I tried wsk -i list -d, it still returned an error 😟:

$wsk -i list -d
[-go/whisk.addRouteOptions]:051:[Inf] Adding options &{Limit:0 Skip:0 Docs:false} to route 'actions'
[-go/whisk.addRouteOptions]:076:[Inf] Returning route options 'actions?limit=0&skip=0' from input struct &{Limit:0 Skip:0 Docs:false}
[isk.(*ActionService).List]:189:[Err] Action list route with options: actions
[k.(*Client).NewRequestUrl]:825:[Inf] basepath: http://localhost:3233/api, version/namespace path: v1/namespaces/_, resource path: actions?limit=0&skip=0
[k.(*Client).addAuthHeader]:335:[Inf] Adding basic auth header; using authkey
REQUEST:
[GET]   http://localhost:3233/api/v1/namespaces/_/actions?limit=0&skip=0
Req Headers
{
  "Authorization": [
    "Basic MjNiYzQ2YjEtNzFmNi00ZWQ1LThjNTQtODE2YWE0ZjhjNTAyOjEyM3pPM3haQ0xyTU42djJCS0sxZFhZRnBYbFBrY2NPRnFtMTJDZEFzTWdSVTRWck5aOWx5R1ZDR3VNREdJd1A="
  ],
  "User-Agent": [
    "OpenWhisk-CLI/1.0 (2021-03-26T01:02:38.401+0000) linux amd64"
  ]
}
[ent-go/whisk.(*Client).Do]:389:[Err] HTTP Do() [req http://localhost:3233/api/v1/namespaces/_/actions?limit=0&skip=0] error: Get "http://localhost:3233/api/v1/namespaces/_/actions?limit=0&skip=0": dial tcp 127.0.0.1:3233: connect: connection refused
[isk.(*ActionService).List]:203:[Err] s.client.Do() error - HTTP req http://localhost:3233/api/v1/namespaces/_/actions?limit=0&skip=0; error 'Get "http://localhost:3233/api/v1/namespaces/_/actions?limit=0&skip=0": dial tcp 127.0.0.1:3233: connect: connection refused'
[/commands.entityListError]:038:[Err] Client.Actions.List(default) error: Get "http://localhost:3233/api/v1/namespaces/_/actions?limit=0&skip=0": dial tcp 127.0.0.1:3233: connect: connection refused
[-cli/commands.ExitOnError]:1193:[Inf] err object type: *whisk.WskError
[-cli/commands.ExitOnError]:1204:[Err] Got a *whisk.WskError error: &whisk.WskError{RootErr:(*errors.errorString)(0xc0002843f0), ExitCode:3, DisplayMsg:true, MsgDisplayed:false, DisplayUsage:false, DisplayPrefix:true, ApplicationError:false, TimedOut:false}
error: Unable to obtain the list of entities for namespace 'default': Get "http://localhost:3233/api/v1/namespaces/_/actions?limit=0&skip=0": dial tcp 127.0.0.1:3233: connect: connection refused
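As a side note, the `Authorization` header in the debug output is simply `Basic base64(<auth key>)`, so the credentials were picked up correctly; the failure is purely the connection refused on port 3233. A quick sketch confirming the header, using the guest auth key shown above:

```shell
#!/bin/sh
# Base64-encode the auth key configured via `wsk property set --auth`;
# the result should match the Basic auth header in the debug output.
AUTH='23bc46b1-71f6-4ed5-8c54-816aa4f8c502:123zO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP'
ENCODED=$(printf '%s' "$AUTH" | base64 | tr -d '\n')
echo "Authorization: Basic ${ENCODED}"
```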


style95 commented May 13, 2024

It seems you are using a deployment created with the Ansible playbooks, but your endpoint looks like the one for standalone mode.
If you deploy OpenWhisk with Ansible, the endpoint is generally https://localhost.
You may want to use the -i option, since a self-signed certificate is used for HTTPS by default:

wsk list -i
