
Using alpine linux image causes deployment issues to kubernetes #474

Closed
gheibia opened this issue Mar 23, 2017 · 12 comments

@gheibia commented Mar 23, 2017

This is a BUG report.

I'm using version 0.74. After building the Docker image and deploying it to Kubernetes, I noticed that the volume service fails to find its master.

$ kubectl get po
NAME                                    READY     STATUS    RESTARTS   AGE
weedmasterdeployment-2281143875-skrzl   1/1       Running   0          22m
weedvolumedeployment-659146161-bdm51    1/1       Running   0          22m


$ kubectl get svc
NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
weedmasterdeployment   10.0.0.144   <nodes>       9333:30064/TCP   19m
weedvolumedeployment   10.0.0.247   <nodes>       8080:30065/TCP   19m


$ minikube ip
192.168.42.174


$ curl -v 192.168.42.174:30064/dir/assign
*   Trying 192.168.42.174...
* Connected to 192.168.42.174 (192.168.42.174) port 30064 (#0)
> GET /dir/assign HTTP/1.1
> Host: 192.168.42.174:30064
> User-Agent: curl/7.47.0
> Accept: */*
> 
< HTTP/1.1 404 Not Found
< Content-Type: application/json
< Date: Thu, 23 Mar 2017 04:04:59 GMT
< Content-Length: 33
< 
* Connection #0 to host 192.168.42.174 left intact
{"error":"No free volumes left!"}


$ kubectl exec -ti weedvolumedeployment-659146161-bdm51 -- nslookup weedmasterdeployment
nslookup: can't resolve '(null)': Name does not resolve

Name:      weedmasterdeployment
Address 1: 10.0.0.144 weedmasterdeployment.default.svc.cluster.local


$ kubectl log weedvolumedeployment-659146161-bdm51
W0322 21:07:22.119836   10502 cmd.go:325] log is DEPRECATED and will be removed in a future version. Use logs instead.
I0323 03:44:17     1 file_util.go:20] Folder /var/containerdata/weedData Permission: -rwxr-xr-x
I0323 03:44:17     1 disk_location.go:97] Store started on dir: /var/containerdata/weedData with 0 volumes max 7
I0323 03:44:17     1 volume.go:141] Start Seaweed volume server 0.74 at 0.0.0.0:8080
I0323 03:44:17     1 volume_grpc_client.go:17] Volume server bootstraps with master weedmasterdeployment.default.svc.cluster.local:9333
2017/03/23 03:44:17 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp [::1]:9333: getsockopt: connection refused"; Reconnecting to {localhost:9333 <nil>}

As can be seen from the logs, the volume server is struggling to find its master. Searching for the nslookup error ("can't resolve '(null)'") I came across this:

https://github.com/gliderlabs/docker-alpine/blob/master/docs/caveats.md#dns
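
As I understand the caveat, Alpine's musl libc resolver handles /etc/resolv.conf differently from glibc (search domains in particular), which can break the short service names Kubernetes relies on. A quick way to see what the Pod's resolver is working with is to inspect resolv.conf and try the fully qualified name (pod name taken from the output above; substitute your own):

$ kubectl exec -ti weedvolumedeployment-659146161-bdm51 -- cat /etc/resolv.conf
$ kubectl exec -ti weedvolumedeployment-659146161-bdm51 -- nslookup weedmasterdeployment.default.svc.cluster.local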

@chrislusf (Collaborator)

I actually do not know anything about Kubernetes. I need some help here.

Also, it would be good to know how you set everything up, so others can try to reproduce it.

@gheibia (Author) commented Jun 27, 2017

Well, "what is Kubernetes" could end up being a big question. Quoting "https://kubernetes.io/":

Kubernetesis an open-source system for automating deployment, scaling, and management of containerized applications.

Something similar to docker swarm, if that help.

Now, developers normally don't setup Kubernetes locally. There is a lot of networking and operational detail to learn which could be overwhelming if one would want to stay productive and focused. Instead, Google built Minikube, which is a single node Kubernetes cluster running inside a VM. So you'll need a virtualisation infrastructure in your local dev environment. I personally use KVM on Linux which allows me to preserve the same IP for my VM between restarts.

This is a really good guide on how to set up minikube: https://thenewstack.io/tutorial-configuring-ultimate-development-environment-kubernetes/

As for this particular bug, I basically rebuilt the docker image from "scratch" and solved the problem that way.

Here are some details:

Dockerfile:


FROM scratch
ADD weed /
ENTRYPOINT ["/weed"]

The script that uses the Dockerfile to build my image:


#!/bin/sh

# Fetch the SeaweedFS source into $GOPATH
go get github.com/chrislusf/seaweedfs/weed/...
# Build a fully static Linux binary (CGO disabled) so it can run in a "scratch" image
CGO_ENABLED=0 GOOS=linux go build github.com/chrislusf/seaweedfs/weed
docker build -t weed:latest -f ./dockerfiles/weed .
rm -f weed
# Clean up any dangling intermediate images
docker rmi $(docker images -qa -f 'dangling=true') 2>/dev/null
exit 0
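
Before building the image, a quick sanity check is to confirm the binary really is statically linked; a dynamically linked one would fail inside a "scratch" image, which ships no libc:

$ file weed   # should report "statically linked"
$ ldd weed    # should say it is not a dynamic executable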

In order to deploy this image to minikube in the form of "Pods" and "Services", we need YAML files. Quickly explaining what a Pod and a Service are:

  • A Pod is a group of containers that are deployed together on the same host. By default a Pod has one container. In this case, I have two Pods, one each for the master and volume containers (created through "deployments"), each with one container.
  • A Service is a grouping of Pods running on the cluster. In this case, I'm creating one Service for both the master and volume Pods.

The YAML file (weed.yml) that creates the master Pod:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: weedmasterdeployment
spec:
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: haystack
    spec:
      containers:
      - name: weedmaster
        image: 192.168.42.23:80/weed:latest
        args: ["-log_dir", "/var/containerdata/logs", "master", "-port", "9333", "-mdir", "/var/containerdata/haystack/master", "-ip", "haystackservice"]
        ports:
        - containerPort: 9333
        volumeMounts:
        - mountPath: /var/containerdata
          name: vlm
      volumes:
      - name: vlm
        hostPath:
          path: '/data/vlm'

The YAML file (weed2.yml) that creates the volume Pod:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: weedvolumedeployment
spec:
  template:
    metadata:
      labels:
        app: haystack
    spec:
      containers:
      - name: weedvol
        image: 192.168.42.23:80/weed:latest
        args: ["-log_dir", "/var/containerdata/logs", "volume", "-port", "8080", "-mserver", "haystackservice:9333", "-dir", "/var/containerdata/haystack/volume", "-ip", "haystackservice"]
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /var/containerdata
          name: vlm
      volumes:
      - name: vlm
        hostPath:
          path: '/data/vlm'

And the YAML file (weed3.yml) that creates a Service exposing both Pods outside the cluster:

apiVersion: v1
kind: Service
metadata:
  name: haystackservice
spec:
  selector:
    app: haystack
  ports:
  - name: mport
    protocol: TCP
    port: 9333
    nodePort: 30069
  - name: vport
    protocol: TCP
    port: 8080
    nodePort: 30070
  type: NodePort

You can then create them using the kubectl CLI tool (installed when you install minikube).

$ kubectl create -f weed.yml
deployment "weedmasterdeployment" created
$ kubectl create -f weed2.yml
deployment "weedvolumedeployment" created
$ kubectl create -f weed3.yml
service "haystackservice" created

You can then try calling the APIs:

http://minikubecluster:30069/dir/assign
curl -F file=@/home/amir/Downloads/hardhat.png http://minikubecluster:30070/2,0333d4fea4
http://minikubecluster:30070/2,0333d4fea4
http://minikubecluster:30070/ui/index.html

minikubecluster in my environment resolves to the IP address of the minikube VM, which you can get with the minikube ip command.
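
If your environment has no such hostname, one convenience (my own assumption, not something this setup requires) is to map it yourself in /etc/hosts:

$ echo "$(minikube ip) minikubecluster" | sudo tee -a /etc/hosts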

One more thing to note is the directories referenced in the YAML files (e.g. /var/containerdata/haystack/volume). You'll need to create them manually before deploying. You can do so by SSHing into the minikube VM (using the minikube ssh command) and creating them:

$ minikube ssh
$ sudo mkdir -p /data/vlm/logs && \
  sudo mkdir -p /data/vlm/haystack/master && \
  sudo mkdir  /data/vlm/haystack/volume && \
  sudo chown -R docker:root /mnt/sda1/data/

/data is a softlink to /mnt/sda1/data, hence the use of the full path in the last command.

@chrislusf (Collaborator)

Thanks for the detailed steps!

I myself do not really understand the details, but I feel this would be helpful to some Kubernetes users. Could you please turn this into some sort of tutorial?

@gheibia (Author) commented Jun 28, 2017

Where do you intend to keep the tutorial?

@chrislusf (Collaborator)

https://github.com/chrislusf/seaweedfs/wiki

Not sure whether you have permission. If not, just post it here and I can copy it to the wiki.

@kartojal commented Jul 6, 2017

@gheibia did you find any workaround? Could an Ubuntu base image work? I'm having the same issue; I tried updating the Alpine image to 3.6 and got the same error: DNS discovery does not work.

EDIT: I didn't see that you used a different Dockerfile. Doing that, and splitting the deployment into two separate deployments, worked for me. Thanks!

@gheibia (Author) commented Jul 6, 2017

@kartojal I never tried Ubuntu for this application. There were quite a few things that needed changing, and I didn't really need anything but the application itself (hence building it from "scratch"). But I have deployed Ubuntu-based containers to Kubernetes in general, so I don't see why it wouldn't work.

@kartojal commented Jul 6, 2017

I tried with a minidebian image and got the same results. I still cannot use the Service IP inside the volume Pod, but outside the Pod I can reach the master server with the Service IP.

What changes did you make to get it working? 😄

@gheibia (Author) commented Jul 6, 2017

To the application, none. I just used the "scratch" image and compiled the application as a self-contained binary. See my steps and explanation above.

@gheibia (Author) commented Jul 6, 2017

Please note, though, that I'm pulling the latest revision of the code. You might want to pull the latest stable release instead.
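
If you want a release build instead, something along these lines should slot into the build script above (the tag is a placeholder; use whatever the latest stable release is):

$ cd $GOPATH/src/github.com/chrislusf/seaweedfs
$ git checkout <latest-release-tag>
$ CGO_ENABLED=0 GOOS=linux go build github.com/chrislusf/seaweedfs/weed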

@gheibia closed this as completed Jul 27, 2017
@fengyuad

Hi there, I used the latest Docker image and deployed it on Kubernetes. My master servers always re-elect a leader whenever one of them is shut down or rebooted. I'm trying to achieve high availability, so I'm doing some testing right now. I don't have this issue when I run SeaweedFS on localhost. Have you run into this problem? Do you have any clue about what might cause it?
