
[WIP] Run in a container #558

Conversation

@errordeveloper
Contributor

errordeveloper commented May 16, 2018

I've had a look at deploy/skaffold/Dockerfile: it uses an Ubuntu base image, and I couldn't actually build it locally. It also seems that this container image is meant for integration tests.

I'd like to see an official slim container image that I can use in CI. I'm not sure what the preferred way to go about it is, so I did the simplest thing first: a multi-stage build based on Alpine (a rough sketch of the idea follows the list below). Please take a look and let me know.

I suppose we will need to consider:

  • location of the Dockerfile (it's in the repo root at the moment)
  • distroless instead of Alpine
  • image location in GCR and tagging
  • use of GCB (at the moment I'm inclined towards a plain Dockerfile, to keep it simple for contributors to build locally)
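
For illustration, a minimal multi-stage Dockerfile along these lines could look roughly like this (a sketch only: the base images, build invocation, and paths are assumptions for the sake of the example, not the exact contents of this PR):

# Build stage: compile skaffold from source
FROM golang:1.10-alpine AS builder
RUN apk add --no-cache git make
WORKDIR /go/src/github.com/GoogleContainerTools/skaffold
COPY . .
# Assumes the default make target puts the binary in out/skaffold
RUN make

# Runtime stage: small image with just the binary and its runtime dependencies
FROM alpine:3.8
RUN apk add --no-cache ca-certificates git
COPY --from=builder /go/src/github.com/GoogleContainerTools/skaffold/out/skaffold /usr/local/bin/skaffold
ENTRYPOINT ["skaffold"]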

@errordeveloper
Contributor Author

I've tested this on Docker for Mac:

docker run -ti --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $HOME/.kube:/root/.kube \
  -v $(pwd):$(pwd) -w $(pwd) \
  errordeveloper/skaffold:1c894265d81c618c782500cde2d788fb77f3b416 \
     skaffold dev

The build worked, but the deploy hits:

Error: starting logger: Get https://localhost:6443/api/v1/pods?includeUninitialized=true&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused

I suppose I could fix it by running in a pod, but I'm not so sure if it makes much sense, as the user experience is kind of convoluted.

My primary use-case was to make it easy to run in Docker-enabled CI, without having to download the binary.

I was able to use skaffold build on Docker for Mac this way:

docker run -ti --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $HOME/.kube:/root/.kube \
  -v $(pwd):$(pwd) -w $(pwd) \
  errordeveloper/skaffold:1c894265d81c618c782500cde2d788fb77f3b416 \
     skaffold build

This broadly satisfies my needs, although I've noticed that removing -v $HOME/.kube:/root/.kube results in an error; it'd be good to eliminate that dependency.

Error: getting skaffold config: getting k8s client: Error creating kubeConfig: invalid configuration: no configuration has been provided

I'll try this image in CI now and report back.

@errordeveloper
Contributor Author

errordeveloper commented May 16, 2018

I've tried using this in CircleCI with the following config:

# .circleci/config.yml 
version: 2
jobs:
  build:
    docker:
      - image: errordeveloper/skaffold:1c894265d81c618c782500cde2d788fb77f3b416
    steps:
      - checkout
      - run: skaffold build 
workflows:
  version: 2
  build_and_test:
    jobs:
      - build

I got the same error; I didn't expect anything different. The config is quite nice and simple.

I think I could actually download the binary instead, but that won't make any difference. I'll try to stub out the kubeconfig for now and see if that works.

I've opened #559 to discuss the general issue of kubeconfig dependency.

@errordeveloper
Contributor Author

errordeveloper commented May 16, 2018

With a dummy kubeconfig added to the image, I've been able to use this config:

# .circleci/config.yml 
version: 2
jobs:
  build:
    docker:
      - image: errordeveloper/skaffold:66cc263ef18f107adce245b8fc622a8ea46385f2
    steps:
      - checkout
      - setup_remote_docker: {docker_layer_caching: true}
      - run: skaffold build 
workflows:
  version: 2
  build_and_test:
    jobs:
      - build

I still need to add registry auth, but so far I think it's a rather nice and simple config, and it demonstrates how easy it is to use Skaffold in a Docker-native CI. In Travis CI I'd have to call docker run, but that's still better than having to download all the binaries.
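
As for the dummy kubeconfig mentioned above, something along these lines is enough to satisfy the client configuration check when no real cluster is involved (a sketch; the server address and names are placeholders, not necessarily what was baked into the image):

# /root/.kube/config (dummy)
apiVersion: v1
kind: Config
clusters:
- name: dummy
  cluster:
    server: http://127.0.0.1:8080
contexts:
- name: dummy
  context:
    cluster: dummy
    user: dummy
current-context: dummy
users:
- name: dummy
  user: {}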

Let's figure out how to fix #559 properly and then review how to build the container image. Also #550 would be handy at some point.

@r2d4
Contributor

r2d4 commented May 23, 2018

Sorry for the delayed response - I just got back from vacation after KubeCon :)

location of the Dockerfile (it's in the repo root at the moment)

root SGTM

distroless instead of Alpine

I'm inclined to use distroless, but no strong opinions here.

image location in GCR and tagging
use of GCB (at the moment I'm inclined towards a plain Dockerfile, to keep it simple for contributors to build locally)

we can follow the example and set GCB as a profile if users want

I'm also inclined to either build the integration test image on top of this one or have them be the same. There's currently nothing extra in the integration test image and theoretically they should really be the same.

The real issue is shipping support for all the builders and deployers. The image is a bit large now because it needs to include the binaries for each builder and deployer (and it's actually incorrect right now, since it doesn't include bazel). I think anything we publish needs to have all of them; otherwise we'll have to maintain many images.

@r2d4 r2d4 added the wip label May 24, 2018
@errordeveloper
Contributor Author

errordeveloper commented May 25, 2018

I'm also inclined to either build the integration test image on top of this one or have them be the same. There's currently nothing extra in the integration test image and theoretically they should really be the same.

Yes, that'd be a good idea. I just didn't know whether the way the other Dockerfile is structured is important in any way, so I decided not to do any refactoring there yet.

The real issue is shipping support for all the builders and deployers. The image is a bit large now because it needs to include the binaries for each builder and deployer (and it's actually incorrect right now, since it doesn't include bazel). I think anything we publish needs to have all of them; otherwise we'll have to maintain many images.

I see, good point; I hadn't realised that. I suppose it'd have to come before we refactor the integration image.

@errordeveloper
Contributor Author

@r2d4 I've just rebased this and updated it to include the latest versions of all builders.

RUN curl --silent --location "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${DOCKER_CREDENTIAL_GCR_VERSION}/docker-credential-gcr_linux_amd64-${DOCKER_CREDENTIAL_GCR_VERSION}.tar.gz" \
| tar xz ./docker-credential-gcr \
&& mv docker-credential-gcr usr/local/bin/docker-credential-gcr
# TODO: docker-credential-gcr configure-docker
Contributor Author

Is this setup step required? Do we know what it actually does?

Contributor

docker-credential-gcr configure-docker sets up the "gcr.io" repository patterns to use the docker-credential-gcr credential helper for the docker daemon.
Skaffold needs the gcr credential helper to be able to push to gcr.io repos (https://github.com/GoogleContainerTools/skaffold/blob/master/pkg/skaffold/docker/auth.go), but as far as I understand we don't depend on the docker daemon to push, so this line can probably go away. @r2d4?

Contributor

No, we follow the same auth flow as the CLI to push: we read the .docker/config to figure out which credential helpers we need to call.
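
For context, that credential-helper mapping lives in ~/.docker/config.json; for GCR it typically looks something like the following (an illustrative sketch; the exact set of registries that configure-docker writes may differ):

{
  "credHelpers": {
    "gcr.io": "gcr",
    "us.gcr.io": "gcr",
    "eu.gcr.io": "gcr",
    "asia.gcr.io": "gcr"
  }
}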


RUN ln -s /lib /lib64

ENV KUBECTL_VERSION v1.10.6
Contributor Author

It seems reasonable to support the latest 1.10 release here; do folks agree?

RUN curl --silent --location "https://dl.k8s.io/${KUBECTL_VERSION}/bin/linux/amd64/kubectl" --output usr/local/bin/kubectl \
&& chmod +x usr/local/bin/kubectl

ENV DOCKER_VERSION 18.03.0
Contributor Author

One constraint I have here is that the Docker version has to support multi-stage builds (i.e. 17.05 or newer). I'm happy to include the latest version available (perhaps an LTS one), but I don't know whether we have other constraints...

@codecov-io

Codecov Report

Merging #558 into master will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master     #558   +/-   ##
=======================================
  Coverage   38.27%   38.27%           
=======================================
  Files          56       56           
  Lines        2576     2576           
=======================================
  Hits          986      986           
  Misses       1476     1476           
  Partials      114      114


@errordeveloper
Contributor Author

Also, here is how the integration image could be built on top of this one:

FROM skaffold as distribution
FROM golang:1.10-alpine AS integration

COPY --from=distribution  / /

RUN apk add --update \
      make \
      && true

ENV SKAFFOLD $GOPATH/src/github.com/GoogleContainerTools/skaffold
RUN mkdir -p "$(dirname ${SKAFFOLD})"
COPY . $SKAFFOLD

WORKDIR $SKAFFOLD

RUN make integration


ENV BAZEL_VERSION 0.16.1
RUN curl --silent --location "https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/bazel-${BAZEL_VERSION}-linux-x86_64" --output usr/local/bin/bazel \
&& chmod +x usr/local/bin/bazel
Contributor Author

Looks like it needs glibc... 😿

dfc0b8b9e9f8:/go# ldd /usr/local/bin/bazel
        /lib64/ld-linux-x86-64.so.2 (0x7f2384c48000)
        librt.so.1 => /lib64/ld-linux-x86-64.so.2 (0x7f2384c48000)
        libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f2384c48000)
        libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f2384c48000)
        libm.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f2384c48000)
        libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x7f23848f6000)
        libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x7f23846e4000)
        libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f2384c48000)
Error relocating /usr/local/bin/bazel: __realpath_chk: symbol not found
Error relocating /usr/local/bin/bazel: __memcpy_chk: symbol not found
Error relocating /usr/local/bin/bazel: __sprintf_chk: symbol not found
dfc0b8b9e9f8:/go# bazel
Error relocating /usr/local/bin/bazel: __realpath_chk: symbol not found
Error relocating /usr/local/bin/bazel: __memcpy_chk: symbol not found
Error relocating /usr/local/bin/bazel: __sprintf_chk: symbol not found
dfc0b8b9e9f8:/go#

We have libc6-compat already, and I tried using the instructions from mongodb-js/mongodb-prebuilt#35 without any luck (the missing __*_chk symbols are glibc fortify functions that musl's compat layer doesn't provide). I'll have another go later on.

Contributor Author

Also filed bazelbuild/bazel#5891.

Contributor Author

For now I'm going to compile bazel, as it's relatively easy to do. However, it'd be good to compare how much we gain (in terms of image size) from using Alpine vs e.g. Ubuntu.

Contributor Author

Also filed bazelbuild/bazel#5909.

@errordeveloper
Contributor Author

errordeveloper commented Aug 22, 2018

As discussed on the call today, I'll look at the existing Dockerfile and see if it works for this use-case.

@balopat
Contributor

balopat commented Oct 8, 2018

@errordeveloper so, I'd rename this PR to "make skaffold Docker image smaller", what do you think? It seems that the current gcr.io/k8s-skaffold/skaffold image should be good enough for CI, just a bit on the larger side due to the Ubuntu base image?

@errordeveloper
Contributor Author

errordeveloper commented Oct 8, 2018 via email

@dgageot
Contributor

dgageot commented Oct 9, 2018

@errordeveloper Are you ok if I close it, then?

@errordeveloper
Contributor Author

errordeveloper commented Oct 9, 2018 via email

@dgageot dgageot closed this Oct 9, 2018