This repository has been archived by the owner on Jan 29, 2024. It is now read-only.

reana-cluster env: http://None:80 ?? #73

Closed
mr-c opened this issue Apr 24, 2018 · 7 comments

Comments

@mr-c
Member

mr-c commented Apr 24, 2018

(reana-cluster) root@ci02:~# reana-cluster status
COMPONENT               STATUS 
wdb                     Running
workflow-controller     Running
yadage-atlas-worker     Running
yadage-cms-worker       Running
yadage-alice-worker     Running
server                  Running
job-controller          Running
zeromq-msg-proxy        Running
workflow-monitor        Running
message-broker          Running
yadage-lhcb-worker      Running
yadage-default-worker   Running
cwl-default-worker      Running
REANA cluster is ready.
(reana-cluster) root@ci02:~# reana-cluster env
export REANA_SERVER_URL=http://None:80

This happens on a system running minikube with --vm-driver=none.

@tiborsimko tiborsimko added this to the Internal-Consolidation milestone Apr 24, 2018
@tiborsimko
Member

It seems that reana-cluster does not detect IPs and ports properly when the none driver is used...

Please try something like this:

$ kubectl get pods | grep server
server-1488917770-r6nsc                  1/1       Running   0          47m
$ kubectl describe pod server-1488917770-r6nsc | grep ^Node:
Node:           minikube/192.168.39.238
$ kubectl describe service server | grep ^NodePort:
NodePort:                 http  31509/TCP

which gives the IP address (192.168.39.238) and the port number (31509) of the server service, so that you can construct REANA_SERVER_URL manually; this is the value that reana-cluster env should have printed:

$ reana-cluster env
export REANA_SERVER_URL=http://192.168.39.238:31509

Did this help you find the right values? If not, please attach the full output of the get and describe commands.
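
For scripting, the same lookup can be done in one step with kubectl's jsonpath output. This is only a sketch: the app=server label selector is an assumption about how the server pod is labelled.

# Node IP of the host running the server pod (label selector assumed)
$ kubectl get pod -l app=server -o jsonpath='{.items[0].status.hostIP}'
192.168.39.238
# NodePort assigned to the server service
$ kubectl get service server -o jsonpath='{.spec.ports[0].nodePort}'
31509
$ export REANA_SERVER_URL=http://192.168.39.238:31509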

@mr-c
Member Author

mr-c commented Apr 24, 2018

Yes, it did; thanks!

(reana-cluster) root@ci02:~/common-workflow-language# kubectl get pods | grep server
server-1488917770-2srnc                  1/1       Running   1          1h
(reana-cluster) root@ci02:~/common-workflow-language# kubectl describe pod server-1488917770-2srnc | grep ^Node:
Node:           ci02.commonwl.org/10.211.1.162
(reana-cluster) root@ci02:~/common-workflow-language# kubectl describe service server | grep ^NodePort:
NodePort:                 http  32587/TCP
(reana-cluster) root@ci02:~/common-workflow-language# export REANA_SERVER_URL=http://10.211.1.162:32587
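
As a quick sanity check, the URL can also be probed directly; /api/ping is assumed here to be a lightweight health-check endpoint on the server:

# Any successful HTTP response confirms the server is reachable
$ curl -i "$REANA_SERVER_URL/api/ping"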

@tiborsimko
Member

OK, good. I'll keep this issue open so that we can amend reana-cluster env to work properly with the none driver.

@BenGalewsky

Would it make sense to interrogate the frontend ingress in Kubernetes instead of the service? Is one always available when running in a cluster?

@diegodelemos
Member

I think that would be best if we intended to expose all components outside the cluster. However, we do not: only REANA-Server will be exposed. Initially we thought of this command-line tool as a helper for cluster admins who do not know kubectl. Soon we will deploy REANA with Helm charts instead of this package, and we may rely directly on Kubernetes for these tasks.

@BenGalewsky

I agree that we don't want to expose anything but the REANA server. I think the frontend ingress just forwards to that service. In my installation, only the ingress knows the external DNS name and port; I was thinking this could be a way to fix this specific bug.
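
A sketch of what interrogating the ingress could look like; the ingress name reana-ingress and host reana.example.org below are hypothetical:

# Host under which the server service is published (names hypothetical)
$ kubectl get ingress reana-ingress -o jsonpath='{.spec.rules[0].host}'
reana.example.org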

@diegodelemos
Member

diegodelemos commented Jul 29, 2020

Closing, as REANA now uses Kind for local deployment (v0.7.0 onwards), so there is no need to specify --vm-driver=none: Kind runs on the native Docker installation (no VM needed). The command to get the environment is now reana-dev client-setup-environment.
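
For reference, assuming the new command prints export statements the way reana-cluster env did, it can be consumed like this:

# Sketch: eval the printed export statements into the current shell
$ eval $(reana-dev client-setup-environment)
$ echo $REANA_SERVER_URL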

@diegodelemos diegodelemos removed this from Cluster next in Triage Sep 23, 2020