
The connection to the server localhost:8080 was refused - did you specify the right host or port? #50295

Closed
AliYmn opened this issue Aug 8, 2017 · 30 comments

@AliYmn AliYmn commented Aug 8, 2017

Hi,

```shell
$ kubectl get pods --all-namespaces | grep dashboard
The connection to the server localhost:8080 was refused - did you specify the right host or port?

$ kubectl create -f https://git.io/kube-dashboard
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

@xiangpengzhao xiangpengzhao commented Aug 8, 2017

Can you check whether your kube-apiserver is running and whether the insecure port 8080 is enabled?
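A quick way to run that check — my sketch, not part of the original comment, assuming a Linux master node where `ss` and `curl` are available (process and unit names may differ per install):

```shell
# Sketch: verify the apiserver process and the (legacy) insecure port 8080.
ps aux | grep '[k]ube-apiserver' \
  || echo "kube-apiserver process not found"     # is the process up at all?
ss -tlnp 2>/dev/null | grep ':8080' \
  || echo "nothing listening on :8080"           # is the insecure port enabled?
curl -sf http://localhost:8080/healthz \
  || echo "apiserver not reachable on :8080"
```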


@AliYmn AliYmn commented Aug 8, 2017

@xiangpengzhao No, it is not running.


@xiangpengzhao xiangpengzhao commented Aug 8, 2017

It should be running. How did you set up your cluster?


@AliYmn AliYmn commented Aug 8, 2017

```shell
root@ubuntu-512mb-nyc3-01:~$ lsof -i
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd       1527 root    3u  IPv4  15779      0t0  TCP *:ssh (LISTEN)
sshd       1527 root    4u  IPv6  15788      0t0  TCP *:ssh (LISTEN)
VBoxHeadl 15644 root   22u  IPv4  37266      0t0  TCP localhost:2222 (LISTEN)
sshd      18809 root    3u  IPv4  42637      0t0  TCP 104.131.172.65:ssh->78.187.60.13.dynamic.ttnet.com.tr:63690 (ESTABLISHED)
redis-ser 25193 root    4u  IPv6  56627      0t0  TCP *:6380 (LISTEN)
redis-ser 25193 root    5u  IPv4  56628      0t0  TCP *:6380 (LISTEN)
kubectl   31904 root    3u  IPv4  89722      0t0  TCP localhost:8001 (LISTEN)
```

@xiangpengzhao xiangpengzhao commented Aug 9, 2017

/sig cluster-lifecycle


@joshualevy2 joshualevy2 commented Nov 10, 2017

I had this problem because there was no admin.conf file and I did not have KUBECONFIG=/root/admin.conf set. The admin.conf file is created in /etc/kubernetes by the `kubeadm init` command, and you need to copy it to all your minion nodes yourself; kubeadm does not do this for you.
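That fix can be sketched as follows — assuming a kubeadm-built control-plane node where `/etc/kubernetes/admin.conf` exists (the kubeadm default path; this thread did not verify it on every setup):

```shell
# Sketch: put the kubeadm-generated kubeconfig where kubectl can find it.
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
export KUBECONFIG="$HOME/.kube/config"
kubectl cluster-info    # should now contact the real apiserver, not localhost:8080
```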


@gkatsanos gkatsanos commented Nov 28, 2017

```shell
~/D/p/i/server (master|✔) $ kubectl create -f wtf.yml
W1128 16:34:09.944864   27487 factory_object_mapping.go:423] Failed to download OpenAPI (Get http://localhost:8080/swagger-2.0.0.pb-v1: dial tcp [::1]:8080: getsockopt: connection refused), falling back to swagger
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

```yaml
# ~/D/p/i/server (master|✔) $ cat wtf.yml
apiVersion: v1
kind: Pod
metadata:
  name: myserver
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: myserver
    image: gkatsanos/server
    env:
    - name: JWT_EXPIRATION_MINUTES
      value: "1140"
    - name: JWT_SECRET
      value: "XXX"
    - name: MONGO_URI
      value: "mongodb://mongodb:27017/isawyou"
    - name: CLIENT_URI
      value: "//localhost:8080/"
    - name: MONGO_URI_TESTS
      value: "mongodb://mongodb:27017/isawyou-test"
    - name: PORT
      value: "3000"
```

```shell
~/D/p/i/server (master|✔) $ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
@ghost ghost referenced this issue Dec 8, 2017

@Didd Didd commented Feb 1, 2018

In my case this was happening due to a failing kubelet service (`service kubelet status`), and I had to run `swapoff -a` to disable paging and swapping, which fixed the problem. You can read about the "why" here.
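A sketch of that fix — kubelet refuses to run with swap enabled by default; the `/etc/fstab` edit is my addition (not in the comment above) so the change survives reboots:

```shell
# Disable swap now, and comment out swap entries so it stays off after reboot.
sudo swapoff -a
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
sudo systemctl restart kubelet && systemctl status kubelet --no-pager
```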


@MulticsYin MulticsYin commented Feb 26, 2018

Maybe you have not set the environment variable; try this:

```shell
export KUBERNETES_MASTER=http://MasterIP:8080
```

where `MasterIP` is your Kubernetes master IP.


@clenk clenk commented Mar 2, 2018

I had this problem because I was running kubectl as the wrong user. I had copied /etc/kubernetes/admin.conf to .kube/config in one user's home directory and needed to run kubectl as that user.


@moqichenle moqichenle commented Mar 27, 2018

Running these commands solved the issue:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

After running them, kubectl works.


@fengerzh fengerzh commented Jun 18, 2018

I don't understand: why must these commands be run by a normal user and not by root?


@Sam-Fireman Sam-Fireman commented Sep 11, 2018

> Running these commands solved the issue:
>
> ```shell
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
> ```
>
> After running them, kubectl works.


@prabhakarsultane prabhakarsultane commented Oct 11, 2018

There is a configuration issue: if you set up Kubernetes as root and try to execute kubectl commands as a different user, this error will occur.
To resolve the issue, simply run the commands below:

```shell
root@devops:~# cp -r .kube/ /home/ubuntu/
root@devops:~# chown -R ubuntu:ubuntu /home/ubuntu/.kube
root@devops:~# su ubuntu
ubuntu@devops:~$ kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE
cron   1/1     Running   0          2h    10.244.0.97   devops
```


@helloworlde helloworlde commented Oct 17, 2018

> Running these commands solved the issue:
>
> ```shell
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
> ```
>
> After running them, kubectl works.

I tried this solution on Ubuntu 18.04, but it still did not work. In the end I found it was caused by swap, so I fixed it by disabling swap like this:

```shell
sudo swapoff -a
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

@neolit123 neolit123 commented Oct 26, 2018

please try tools like kops or kubeadm that will handle all the setup for you.
they also print instructions in the terminal on how to setup admin.conf or pod-network-plugins.

closing this issue.
for similar questions try stackoverflow:
https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#user-support-response-example

/close


@k8s-ci-robot k8s-ci-robot commented Oct 26, 2018

@neolit123: Closing this issue.

In response to this:

> please try tools like kops or kubeadm that will handle all the setup for you.
> they also print instructions in the terminal on how to setup admin.conf or pod-network-plugins.
>
> closing this issue.
> for similar questions try stackoverflow:
> https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#user-support-response-example
>
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


@Oyunbold Oyunbold commented Nov 2, 2018

```shell
kubectl config set-cluster demo-cluster --server=http://localhost:8001
```
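Worth noting (my addition, not part of the comment above): 8001 is the default port of `kubectl proxy` — it is visible in the `lsof` output earlier in this thread — so pointing a cluster entry at it only helps while a proxy is actually running:

```shell
# Assumes some kubeconfig already reaches the cluster; the proxy just
# re-exposes the apiserver on localhost:8001 without authentication.
kubectl proxy --port=8001 &
kubectl config set-cluster demo-cluster --server=http://localhost:8001
```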


@jvleminc jvleminc commented Jan 10, 2019

> Running these commands solved the issue:
>
> ```shell
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
> ```
>
> After running them, kubectl works.

I fixed it through similar commands:
kubernetes-sigs/kubespray#1615 (comment)


@HiMyFriend HiMyFriend commented Apr 4, 2019

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Mission complete!


@azoaib azoaib commented Apr 16, 2019

I am using Docker for Mac and got the same issue; restarting the Docker daemon solved it.


@soromamadou soromamadou commented Apr 19, 2019

Hello, be sure not to run the command as root. You need to use a regular user account.


@avaslev avaslev commented Apr 19, 2019

If, after running

```shell
sudo cp /etc/kubernetes/admin.conf $HOME/ && sudo chown $(id -u):$(id -g) $HOME/admin.conf
```

the command `kubectl config view` displays this:

```yaml
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
```

then running `unset KUBECONFIG` solved it.
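An empty `kubectl config view` like the one above usually means `KUBECONFIG` points at a missing or empty file; a quick check (my sketch, not from the comment):

```shell
# Show where kubectl is currently looking, then fall back to the default.
echo "KUBECONFIG=${KUBECONFIG:-<unset>}"
unset KUBECONFIG                  # kubectl then reads $HOME/.kube/config
kubectl config view | head -n 5   # should no longer be empty if that file is valid
```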


@Tarvinder91 Tarvinder91 commented Apr 28, 2019

> Maybe you have not set the environment variable; try this:
> `export KUBERNETES_MASTER=http://MasterIP:8080`
> where `MasterIP` is your Kubernetes master IP.

Or, in case your master is running on a different port, specify that port instead of 8080 (6443 in my case).
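To find the port your apiserver actually uses, you can read it out of an existing kubeconfig rather than guessing; a sketch assuming the kubeadm default path:

```shell
# The server line records the real apiserver URL (commonly https://<ip>:6443).
sudo grep 'server:' /etc/kubernetes/admin.conf
# Or from whatever kubeconfig kubectl is currently using:
kubectl config view -o jsonpath='{.clusters[0].cluster.server}'
```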


@subeeshvasu subeeshvasu commented Sep 3, 2019

> Running these commands solved the issue:
>
> ```shell
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
> ```
>
> After running them, kubectl works.

For me kubectl did not work with the above commands alone. However, I could make it work after running the following export command in addition:

```shell
export KUBECONFIG=$HOME/.kube/config
```

Just to be clear, what worked for me is the following sequence:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
```


@bogere bogere commented Sep 4, 2019

Sometimes, especially on macOS, just enable Kubernetes in Docker Desktop for Mac and ensure that it is running; that is what I did to resolve the above error.


@Anuradha677 Anuradha677 commented Oct 3, 2019

I ran into the same issue and resolved it by running the command below:

```shell
gcloud container clusters get-credentials micro-cluster --zone us-central1-a
```


@p8ul p8ul commented Oct 11, 2019

I experienced this error after switching between projects and logins. I solved it by running this command:

```shell
gcloud container clusters get-credentials --region your-region gke-us-east1-01
```

REF


@aescobar-icc aescobar-icc commented Oct 12, 2019

thanks @p8ul, That solved my issue.
