UnknownHostException #4
Hey @Opalo, thanks for the feedback about the tutorial! In regards to your issue, could you let me know what URL you are hitting when you get this error? Could you also show me the output of `kubectl get svc`? For reference, mine looks like this:

```
NAME               CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes         10.0.0.1                   443/TCP          31d
productcatalogue   10.0.0.37                  8020:31803/TCP   30d
shopfront          10.0.0.216                 8010:31208/TCP   30d
stockmanager       10.0.0.149                 8030:30723/TCP   30d
```
Thanks for the prompt reply! Here's what I'm getting after running it:
Here's also an interesting part:

Why does
Thanks for this, and a nice piece of debugging on your part with the look at the pods. My guess is that minikube has not been given enough RAM to play with, and so the productcatalogue JVM is getting OOM-killed due to the small amount of RAM given to this container, which leads to the pod being restarted. I should have made clearer in the article that you really need to give minikube 3GB, and ideally 4GB+. This can be done when starting minikube:

```
$ minikube start --cpus 2 --memory 4096
```

If you stop and restart minikube with these flags, does this solve your issue?
Thanks! I ran minikube as follows:

Then I applied all the pods. Same result. See the output below:

Any ideas how I can further investigate it?
I think you might have to delete and re-create your minikube VM for this config change to take effect (kubernetes/minikube#567). You can get cluster info by looking at
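For reference, a minimal sequence for re-creating the cluster with more memory might look like this (the 4096MB figure follows the earlier suggestion; adjust for your machine):

```
# Tear down the existing minikube VM - resource flags are only
# honoured when a VM is created, not on a plain restart
$ minikube delete

# Start a fresh cluster with 2 CPUs and 4GB of RAM
$ minikube start --cpus 2 --memory 4096
```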
I did delete minikube several times. It definitely has 8 GB of memory. It still does not work; I have no idea why.

Thank you for your time, once again!
I think I've got it @Opalo - it looks like I configured the healthcheck of the productcatalogue incorrectly. The productcatalogue service is different from the other two in that it is a Dropwizard-based service. This means that the health check is exposed on a different port and endpoint (`productcatalogue:8025/healthcheck`, and not `productcatalogue:8020/health`).

I've now updated the Kubernetes yaml files, so if you `git pull` and re-`kubectl apply -f X` all of the services, you should be good to go! I would like to say a massive thanks for reporting this, and apologies for any confusion caused!

I'm slightly puzzled how this ever worked, although I'm sure it did, as I took the screenshot of the shopfront UI for the article when I was running this in Kubernetes. The only thing I can think of is that I initially built the app on Kubernetes 1.7 (minikube 0.22). I saw in your debug info that you were running 1.8, so I upgraded my local minikube this afternoon before testing everything again.

I've asked around to see if anyone else has this issue with the healthcheck, and will update this issue if I find anything. Thanks again - I'm sure others must have experienced this issue, but no one else reported it!
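For anyone following along, the shape of the fix is roughly the following - a sketch rather than the exact committed diff, and the probe timing values here are illustrative, not from the repo:

```yaml
# Sketch of a corrected liveness probe for productcatalogue.
# Dropwizard serves /healthcheck on the *admin* port (8025),
# not the application port (8020), so the probe must target 8025.
livenessProbe:
  httpGet:
    path: /healthcheck
    port: 8025
  initialDelaySeconds: 30   # illustrative values only
  timeoutSeconds: 1
```

With the probe pointing at `8020/health`, every check returned a 404, so Kubernetes kept killing and restarting an otherwise healthy pod - which matches the restart behaviour reported above.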
As an FYI for anyone interested, I debugged this issue by using Datawire's Telepresence to proxy to the cluster, and then curled all of the endpoints as if I were the shopfront calling them. I got a 404 when curling the productcatalogue health check (`productcatalogue:8020/health`).

I then ran the productcatalogue locally via Docker (mapping the app and admin ports), and after a few curls I realised that the health check endpoint is exposed only on the admin port and under `/healthcheck` (not `/health`). After this I fixed the Kubernetes productcatalogue yaml and tested in minikube - everything looked good :-)
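A rough sketch of that debugging flow, assuming Telepresence v1's `--run-shell` mode and with a placeholder for the local image name:

```
# Proxy a local shell into the cluster so service names resolve
# exactly as they would for the shopfront pod
$ telepresence --run-shell

# In the proxied shell, the endpoint the k8s probe was using 404s
$ curl -i productcatalogue:8020/health

# Run the service locally, mapping both the app port (8020) and
# the Dropwizard admin port (8025); image name is a placeholder
$ docker run -p 8020:8020 -p 8025:8025 <your-productcatalogue-image>

# The health check actually lives on the admin port
$ curl localhost:8025/healthcheck
```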
Thanks @danielbryantuk. It has stopped restarting all the time, but when I hit shopfront in the browser I'm still getting the Whitelabel Error Page, as in the first post here. I've recreated minikube and applied all the yaml files again; before that, all the Docker images were rebuilt and pushed.

Of course, I've synchronized the repo.
Hey @Opalo, I can't seem to recreate the issue. Are you using your own Docker Hub account and pushing your own builds of the containers? If so, I'm assuming that you've updated the k8s yaml files to use your containers?

As an FYI, I started with a clean K8s cluster and then applied everything fresh - the session below shows the result.

You won't be able to curl the healthcheck endpoint on the productcatalogue using the method you've shown, because of the port issue I mentioned in my earlier comment, i.e. we aren't exposing the admin port used by the healthcheck in the k8s Service yaml (therefore minikube can't expose anything via this port).

However, you can exec into the container and curl the healthcheck from there:

```
(master) kubernetes $ kubectl get svc
NAME               CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes         10.0.0.1                   443/TCP          22h
productcatalogue   10.0.0.115                 8020:32608/TCP   8s
shopfront          10.0.0.54                  8010:32734/TCP   15s
stockmanager       10.0.0.207                 8030:32558/TCP   1s

(master) kubernetes $ minikube service shopfront
Opening kubernetes service default/shopfront in default browser...

(master) kubernetes $ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
productcatalogue-brdvk   1/1       Running   0          37s
shopfront-jvlsm          1/1       Running   0          44s
stockmanager-m9tcp       1/1       Running   0          30s

(master) kubernetes $ kubectl exec -it productcatalogue-brdvk -- /bin/bash
root@productcatalogue-brdvk:/# curl localhost:8025/healthcheck
{"deadlocks":{"healthy":true},"template":{"healthy":true,"message":"Ok with version: 1.0-SNAPSHOT"}}
root@productcatalogue-brdvk:/#
```
At the very beginning I created my own Docker images, pushed them to Docker Hub, and altered all the *.yaml files. But now I just use the project as it was provided. I still have this issue:

It looks like
It seems the shopfront pod can't resolve the IP address for productcatalogue. As a workaround,
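To confirm a resolution failure like this, a generic check (not a step from the original thread) is to test the lookup from inside the affected pod and make sure the cluster DNS addon is healthy:

```
# Try resolving the service name from inside the shopfront pod
# (assumes the image ships nslookup; a busybox debug pod works too)
$ kubectl exec -it <shopfront-pod> -- nslookup productcatalogue

# Verify kube-dns is running in the kube-system namespace
$ kubectl get pods -n kube-system -l k8s-app=kube-dns
```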
Hi,

First of all, thanks for the cool tutorial - but I have some problems setting it all up:

I'm getting it after executing:

```
minikube service shopfront
```

What's the problem? All containers run correctly. Am I missing something?
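For readers hitting the same `UnknownHostException`, a reasonable first pass - offered here as a generic sketch, not steps taken in this thread - is to check what the cluster actually sees:

```
# Confirm all three services are registered and note their ports
$ kubectl get svc

# Look for crash-looping pods - a climbing RESTARTS count is the clue
$ kubectl get pods

# Inspect the logs of any restarting pod (name is a placeholder)
$ kubectl logs productcatalogue-<pod-suffix>
```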