
Provide an easy way to bootstrap a cluster on Mac OS #3244

Closed
pires opened this issue Jan 6, 2015 · 24 comments
Labels
priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@pires
Contributor

pires commented Jan 6, 2015

I've tried bootstrapping a cluster with boot2docker and Vagrant but without luck. As a developer using Mac OS X, this is quite frustrating. Also, going with a cloud provider like GCE comes at a cost.

Is it possible to provide a simple way of bootstrapping a cluster (even one with only a single node)?

@proppy
Contributor

proppy commented Jan 6, 2015

This could be fixed by building kubernetes-ready VM images using something like https://packer.io/.

That's something @kelseyhightower mentioned before.

This could also build on the work @jbeda is doing in #2303
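
For illustration, a minimal Packer template along those lines might look like this (a sketch only; the ISO URL, checksum, and provisioning script are placeholders, not a real image definition):

{
  "builders": [{
    "type": "virtualbox-iso",
    "iso_url": "https://example.com/base-linux.iso",
    "iso_checksum_type": "sha256",
    "iso_checksum": "<checksum>",
    "ssh_username": "core"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "install-kubernetes.sh"
  }]
}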

@brendandburns
Contributor

Vagrant should work. I know that we have run it successfully on OS X in the past. What errors are you seeing?

--brendan


@arun-gupta
Contributor

@brendandburns Are there instructions on how you run Kubernetes with Vagrant on OS X?

@brendandburns
Contributor

The standard Vagrant instructions should work:

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md
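
From memory, that guide boils down to roughly the following from a kubernetes checkout (a sketch; see the linked doc for the authoritative steps):

# Select the Vagrant provider and bring the cluster up
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh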

--brendan


@goltermann goltermann added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Jan 7, 2015
@pires
Contributor Author

pires commented Jan 8, 2015

I'm working on this with Vagrant + CoreOS and am almost there! I'm just trying to understand why my containers can't read from http://$KUBERNETES_RO_SERVICE_HOST:$KUBERNETES_RO_SERVICE_PORT

java.net.SocketException: SocketException invoking http://10.244.61.139:80/api/v1beta1/pods: Unexpected end of file from server

Can anyone help me figure this out? I'm running Kubernetes 0.8.0.
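
For anyone reproducing this, the equivalent request from inside the container, using the injected service environment variables, would be something like this (a sketch; the API path is taken from the error above):

# Same read-only API call the Java client is attempting, via curl
curl "http://${KUBERNETES_RO_SERVICE_HOST}:${KUBERNETES_RO_SERVICE_PORT}/api/v1beta1/pods"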

@pires
Contributor Author

pires commented Jan 8, 2015

In case it helps, here's the minion output:

core@node-02 ~ $ docker inspect e1e5179cbdcf
[{
    "AppArmorProfile": "",
    "Args": [
        "-c",
        "java -jar bootstrapper.jar"
    ],
    "Config": {
        "AttachStderr": false,
        "AttachStdin": false,
        "AttachStdout": false,
        "Cmd": [
            "/bin/sh",
            "-c",
            "java -jar bootstrapper.jar"
        ],
        "CpuShares": 0,
        "Cpuset": "",
        "Domainname": "",
        "Entrypoint": null,
        "Env": [
            "KUBERNETES_RO_SERVICE_HOST=10.244.61.139",
            "KUBERNETES_RO_SERVICE_PORT=80",
            "KUBERNETES_RO_PORT=tcp://10.244.61.139:80",
            "KUBERNETES_RO_PORT_80_TCP=tcp://10.244.61.139:80",
            "KUBERNETES_RO_PORT_80_TCP_PROTO=tcp",
            "KUBERNETES_RO_PORT_80_TCP_PORT=80",
            "KUBERNETES_RO_PORT_80_TCP_ADDR=10.244.61.139",
            "KUBERNETES_SERVICE_HOST=10.244.113.105",
            "KUBERNETES_SERVICE_PORT=443",
            "KUBERNETES_PORT=tcp://10.244.113.105:443",
            "KUBERNETES_PORT_443_TCP=tcp://10.244.113.105:443",
            "KUBERNETES_PORT_443_TCP_PROTO=tcp",
            "KUBERNETES_PORT_443_TCP_PORT=443",
            "KUBERNETES_PORT_443_TCP_ADDR=10.244.113.105",
            "HAZELCAST_SERVICE_HOST=10.244.157.151",
            "HAZELCAST_SERVICE_PORT=5701",
            "HAZELCAST_PORT=tcp://10.244.157.151:5701",
            "HAZELCAST_PORT_5701_TCP=tcp://10.244.157.151:5701",
            "HAZELCAST_PORT_5701_TCP_PROTO=tcp",
            "HAZELCAST_PORT_5701_TCP_PORT=5701",
            "HAZELCAST_PORT_5701_TCP_ADDR=10.244.157.151",
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8"
        ],
        "ExposedPorts": {
            "5701/tcp": {}
        },
        "Hostname": "055d2eec-9742-11e4-b781-0800273f776f",
        "Image": "pires/hazelcast-k8s",
        "MacAddress": "",
        "Memory": 0,
        "MemorySwap": 0,
        "NetworkDisabled": false,
        "OnBuild": null,
        "OpenStdin": false,
        "PortSpecs": null,
        "StdinOnce": false,
        "Tty": false,
        "User": "",
        "Volumes": null,
        "WorkingDir": "/opt/hazelcast-k8s"
    },
    "Created": "2015-01-08T14:38:14.550239572Z",
    "Driver": "btrfs",
    "ExecDriver": "native-0.2",
    "HostConfig": {
        "Binds": null,
        "CapAdd": null,
        "CapDrop": null,
        "ContainerIDFile": "",
        "Devices": null,
        "Dns": null,
        "DnsSearch": null,
        "ExtraHosts": null,
        "IpcMode": "",
        "Links": null,
        "LxcConf": null,
        "NetworkMode": "container:acea3ab5a34bc9a26e06b13e7f542adce0fd37482c7a266ad3ce5b33e5926fc1",
        "PortBindings": null,
        "Privileged": false,
        "PublishAllPorts": false,
        "RestartPolicy": {
            "MaximumRetryCount": 0,
            "Name": ""
        },
        "SecurityOpt": null,
        "VolumesFrom": null
    },
    "HostnamePath": "",
    "HostsPath": "/var/lib/docker/containers/acea3ab5a34bc9a26e06b13e7f542adce0fd37482c7a266ad3ce5b33e5926fc1/hosts",
    "Id": "e1e5179cbdcf80d7f18c00b8e32b68b71bf1815d1898cec40ac069fb2b3ec418",
    "Image": "cb1a377c9bfdbcee35e3252aff6bd66bcf09f09a2e570c1824f3df2cc3b3b8e7",
    "MountLabel": "",
    "Name": "/k8s_hazelcast.3bfd6b04_055d2eec-9742-11e4-b781-0800273f776f.default.etcd_055d2eec-9742-11e4-b781-0800273f776f_3c0267d3",
    "NetworkSettings": {
        "Bridge": "",
        "Gateway": "",
        "IPAddress": "",
        "IPPrefixLen": 0,
        "MacAddress": "",
        "PortMapping": null,
        "Ports": null
    },
    "Path": "/bin/sh",
    "ProcessLabel": "",
    "ResolvConfPath": "/var/lib/docker/containers/acea3ab5a34bc9a26e06b13e7f542adce0fd37482c7a266ad3ce5b33e5926fc1/resolv.conf",
    "State": {
        "Error": "",
        "ExitCode": 1,
        "FinishedAt": "2015-01-08T14:38:24.109933118Z",
        "OOMKilled": false,
        "Paused": false,
        "Pid": 0,
        "Restarting": false,
        "Running": false,
        "StartedAt": "2015-01-08T14:38:14.913602291Z"
    },
    "Volumes": {},
    "VolumesRW": {}
}
]

@pires
Contributor Author

pires commented Jan 8, 2015

Does this happen because I'm using http instead of tcp, which seems to be the protocol set in the container's environment variables? I'm confused about this.

Also, here is what happens if, inside a container, I try to access both the insecure and secure endpoints:

root@81221ff331dc:/# curl http://10.244.61.139
curl: (52) Empty reply from server
root@81221ff331dc:/# curl http://10.244.61.139:81
^C
root@81221ff331dc:/# curl https://10.244.61.139
^C
root@81221ff331dc:/# curl https://10.244.113.105
curl: (35) Unknown SSL protocol error in connection to 10.244.113.105:443
root@81221ff331dc:/# curl http://10.244.113.105:443
curl: (52) Empty reply from server

Note that accessing other ports hangs curl, which eventually times out.
Pinging both IPs doesn't work either.
This suggests that the minion can somehow reach 10.244.61.139:80 and 10.244.113.105:443, but the master may be unable to route the response back.
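
One way to see where the packets stop is to watch the service IP on the minion while re-running the curl above (a sketch; assumes tcpdump is available on the host):

# On the minion host, outside any container
sudo tcpdump -n -i any host 10.244.61.139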

@pires
Contributor Author

pires commented Jan 8, 2015

If anyone wants to help debug this, please use https://github.com/pires/kubernetes-vagrant-coreos-cluster. All instructions included.

@brendandburns
Contributor

No, http should work. It looks like the service proxy on the minion can't talk to the master. Can you ssh onto the minion and try to connect directly from there, without a container?
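
Concretely, that check might look like this (a sketch; "minion-1" stands in for whatever the Vagrantfile names the node):

vagrant ssh minion-1
curl http://10.244.61.139
curl -k https://10.244.113.105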

Brendan

@pires
Contributor Author

pires commented Jan 8, 2015

You mean I should try to access the master at the VM level, right?

$ curl http://172.17.8.101:8080
<html><body>Welcome to Kubernetes</body></html>

It works.

@arun-gupta
Contributor

@brendandburns I'm seeing #3326 when running on OS X 10.10.1. Any suggestions?

@pires
Contributor Author

pires commented Jan 8, 2015

Testing with flannel IPs doesn't work, same issue:

$ curl https://10.244.113.105
curl: (35) Unknown SSL protocol error in connection to 10.244.113.105:443
$ curl http://10.244.61.139
curl: (52) Empty reply from server

Could it be a routing issue? iptables on the master and minions accepts everything by default.

@pires
Contributor Author

pires commented Jan 8, 2015

In case it helps

$ sudo iptables -t nat -L

(...)

Chain KUBE-PORTALS-HOST (1 references)
target     prot opt source               destination
DNAT       tcp  --  anywhere             10.244.113.105       /* kubernetes */ tcp dpt:https to:10.0.2.15:42774
DNAT       tcp  --  anywhere             10.244.61.139        /* kubernetes-ro */ tcp dpt:http to:10.0.2.15:59682
$ sudo netstat -atnp |grep 42774
tcp6       0      0 :::42774                :::*                    LISTEN      1033/kube-proxy

$ sudo netstat -atnp |grep 59682
tcp6       0      0 :::59682                :::*                    LISTEN      1033/kube-proxy

So, kube-proxy is running, but is it actually proxying the k8s API?

$ sudo netstat -atnp |grep proxy
tcp        0      0 172.17.8.102:49065      172.17.8.101:4001       ESTABLISHED 1033/kube-proxy
tcp        0      0 172.17.8.102:49064      172.17.8.101:4001       ESTABLISHED 1033/kube-proxy
tcp6       0      0 :::43425                :::*                    LISTEN      1033/kube-proxy
tcp6       0      0 :::59682                :::*                    LISTEN      1033/kube-proxy
tcp6       0      0 :::10249                :::*                    LISTEN      1033/kube-proxy
tcp6       0      0 :::42774                :::*                    LISTEN      1033/kube-proxy

etcd is clearly being proxied. Shouldn't we expect a different result for the ports that are supposedly proxying the API (42774 and 59682)?
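
One way to answer that is to hit the kube-proxy listener that the DNAT rule points at, directly from the minion (a sketch using the port from the iptables output above):

# If kube-proxy can reach the apiserver, this should return pod JSON
curl http://10.0.2.15:59682/api/v1beta1/pods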

@pires
Contributor Author

pires commented Jan 8, 2015

Routing table

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG    1024   0        0 eth0
10.0.2.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.0.2.2        0.0.0.0         255.255.255.255 UH    1024   0        0 eth0
10.244.0.0      0.0.0.0         255.255.0.0     U     0      0        0 flannel.1
10.244.2.0      0.0.0.0         255.255.255.0   U     0      0        0 docker0
172.17.8.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1

@derekwaynecarr
Member

@pires if you are running a version of Vagrant >= 1.7.1, there was a change to how provisioners are set up that is causing the bug you are probably seeing. I think most of us on the project are still on a 1.6.x version, so we did not encounter it recently.

As I said on #3326, I can fix this tomorrow AM to work on the latest version of Vagrant, and then you should be set to go.

For now, if you run Vagrant 1.6.x, our current solution will work fine on Mac OS X.
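
To check which side of that cutoff you are on (a sketch):

# Anything >= 1.7.1 is affected until the fix lands
vagrant --version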

@pires
Contributor Author

pires commented Jan 9, 2015

@derekwaynecarr you're a life-saver, thanks. I'll be waiting on your commit then.

@pires
Contributor Author

pires commented Jan 9, 2015

@derekwaynecarr tried with Vagrant 1.6.5. Same issue :(

@derekwaynecarr
Member

I am looking into this this morning with a colleague on a Mac; we will get this addressed. There should not be a need to define an alternate setup. This has been relatively stable for a long time and is in wide use, but it's possible there was a regression.

@pires
Contributor Author

pires commented Jan 9, 2015

So, it seems I only had to pass --public_address_override=172.17.8.101 to kube-apiserver.
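
For reference, the flag as it would sit on the apiserver command line (a sketch; the other flags are assumptions about a typical 0.8-era invocation, and --public_address_override is the actual fix):

kube-apiserver \
  --address=0.0.0.0 \
  --etcd_servers=http://127.0.0.1:4001 \
  --public_address_override=172.17.8.101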

@derekwaynecarr
Member

@csrwng is trying to reproduce this issue on a Mac from HEAD. I upgraded to Vagrant 1.7.2 on Linux to see if that was the cause, but everything appeared to work fine there too.

@csrwng
Contributor

csrwng commented Jan 9, 2015

So my builds were failing with boot2docker v1.2. I upgraded to boot2docker 1.4.1 (the latest), installed gnu-tar (brew install gnu-tar), and now 'make release' succeeds.
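
The same recovery steps, spelled out (a sketch; assumes Homebrew and the boot2docker CLI are installed):

boot2docker stop
boot2docker upgrade    # fetch the latest boot2docker ISO
boot2docker up
brew install gnu-tar   # the build scripts want GNU tar, not BSD tar
make release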

@pires
Contributor Author

pires commented Jan 9, 2015

@csrwng care to comment on #2378?

@csrwng
Contributor

csrwng commented Jan 9, 2015

@pires - I don't think it's the same issue. I was running 'make release' and everything would run, including e2e tests, up to the point of syncing the binaries back to the host. And then it would time out. The boot2docker update got me past that.

@pires
Contributor Author

pires commented Jan 9, 2015

Ah, my version works now. It's really easy for anyone working with Vagrant and VirtualBox, and you can set how many minions you want. The networking issues are fixed :-)

I'm closing this now. See kubernetes-vagrant-coreos-cluster and, if you're interested, I'd be more than glad to make a PR.
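
For reference, bringing up a cluster from that repo looks roughly like this (a sketch; NUM_INSTANCES is an assumption about the repo's knob for the minion count, so check its README):

git clone https://github.com/pires/kubernetes-vagrant-coreos-cluster
cd kubernetes-vagrant-coreos-cluster
NUM_INSTANCES=2 vagrant up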

@pires pires closed this as completed Jan 9, 2015