Skaffold/Kaniko with ECR #731

Closed
jicowan opened this issue Jun 21, 2018 · 28 comments · Fixed by #2975 or #3389
Labels: area/build, build/kaniko, good first issue, help wanted, kind/feature-request, priority/p1

Comments

@jicowan

jicowan commented Jun 21, 2018

This is a feature request. I'd like to be able to use Skaffold to create a Kaniko container/pod on a kOps cluster for building container images and then pushing them to an ECR registry. I'd also like to be able to use an S3 bucket for the build context.

@dlorenc
Contributor

dlorenc commented Jun 21, 2018

Using S3 for the build context needs to be implemented in kaniko first. I think there's a feature request over in that repo detailing more flexible build-context locations.

@bhack

bhack commented Jun 21, 2018

@dgageot changed the title from "Skaffold with ECR" to "Skaffold/Kaniko with ECR" on Jun 22, 2018
@priyawadhwa
Contributor

Pushing to ECR with kaniko is tricky because the docker config has to name the specific ECR registry, which looks like this:

{
	"credHelpers": {
		"aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
	}
}

where the user has to specify their own AWS account ID and region. For this to work in Skaffold, a user would need some way of mounting in their own docker config.
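
One way to do that (a sketch only; the secret name is illustrative and the registry entry uses the same placeholders as above) is to wrap the config in a Kubernetes secret and mount it at /kaniko/.docker/ in the kaniko pod:

apiVersion: v1
kind: Secret
metadata:
  name: docker-kaniko-secret
stringData:
  config.json: |
    {
      "credHelpers": {
        "aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
      }
    }

With that file in place as /kaniko/.docker/config.json, kaniko should call docker-credential-ecr-login for that registry.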

@jicowan
Author

jicowan commented Jun 26, 2018

If Kaniko is running on an EC2 instance, you can assign the instance an IAM role that grants it access to ECR. You shouldn't need to use the credential helper in that case.

@bhack

bhack commented Jun 26, 2018

On EC2 this is also solvable with https://github.com/jtblin/kube2iam

@dlorenc
Contributor

dlorenc commented Jun 26, 2018

Do you have any pointers to how docker actually fetches access tokens in an environment like that?

GCR works the same way from GCE, but the access token fetching happens inside the credential helper.

Basically auth works like this currently:

  • docker or kaniko parse the registry out of the image name
  • they look up the credential helper for that registry from the docker config.json
  • they call the credential helper to get an access token
  • the credential helper determines it is running on GCE and gets an access token using the IAM roles the VM's service account is bound to

This all works easily with GCR because there is a small set of valid registry URLs (us.gcr.io, eu.gcr.io, etc.) that we can hardcode inside the config.json, instructing the calling tool to use the gcr credential helper.
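
For comparison, the hardcoded GCR config looks roughly like this (illustrative; the real file is generated by the gcr credential helper and may list more registries):

{
    "credHelpers": {
        "gcr.io": "gcr",
        "us.gcr.io": "gcr",
        "eu.gcr.io": "gcr"
    }
}

Because the registry hostnames are fixed, no per-user account ID or region needs to be filled in, which is exactly what makes the ECR case harder.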

@bhack

bhack commented Jun 26, 2018

You simply need to give the permissions listed in https://kubernetes.io/docs/concepts/containers/images/#using-aws-ec2-container-registry to the pod with kube2iam
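
With kube2iam the role is attached per pod via an annotation on the kaniko pod, roughly like this (role ARN and account ID are placeholders):

metadata:
  annotations:
    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/kaniko-ecr-push

kube2iam then intercepts the pod's calls to the EC2 metadata endpoint and hands out credentials for that role.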

@bhack

bhack commented Jun 26, 2018

Of course, those are the read permissions; you need to add the write ones for pushing.
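
As a rough sketch (action names from the ECR docs; scope the resources down as needed), an IAM policy covering both pull and push could look like:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:PutImage"
            ],
            "Resource": "*"
        }
    ]
}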

@dlorenc
Contributor

dlorenc commented Jun 26, 2018

In this case it looks like the kubelet has direct support for ECR built in:

The kubelet will fetch and periodically refresh ECR credentials. It needs the following permissions to do this:

We would need to do something similar in https://github.com/google/go-containerregistry to make this work.

@bhack

bhack commented Jun 26, 2018

@dlorenc https://github.com/kubernetes/kubernetes/blob/master/pkg/credentialprovider/aws/aws_credentials.go#L76.
But I think that if you have the AWS ECR credential helper and you authorize, e.g. with kube2iam, it can work with the basic auth pipeline that you described.

@dlorenc
Contributor

dlorenc commented Jul 31, 2018

Kaniko has the ECR credential helper now: https://github.com/GoogleContainerTools/kaniko/blob/master/deploy/Dockerfile

Do you think we need to do anything else now?

@dlorenc
Contributor

dlorenc commented Aug 11, 2018

@bhack does this work for you now?

@bhack

bhack commented Aug 23, 2018

Sorry, I'm not working on this currently, so I don't have the setup ready to test this immediately. Can you test it in the meantime?

@nkubala
Contributor

nkubala commented Nov 29, 2018

I haven't tested it personally, but Kaniko's ECR support should fix this issue. Going to close for now; please reopen if anyone sees any issues with it.

@nkubala closed this as completed Nov 29, 2018
@azaiter

azaiter commented Feb 25, 2019

Although Kaniko has ECR support, the pod template is made for Google Container Registry, as in

Value: "/secret/kaniko-secret",
The kaniko secret should be adjusted to a volume mount under /root/.aws/aws-secret instead of Google's GOOGLE_APPLICATION_CREDENTIALS environment variable (maybe AWS also has a similar env var? If so, this should be an easy ternary to choose which env var to set.)
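
Roughly, the generated pod would then need something like this instead of the GCP-specific env var (a fragment only; volumeMounts goes on the kaniko container, volumes on the pod spec, and the secret name follows the aws-secret convention used later in this thread):

volumeMounts:
  - name: aws-secret
    mountPath: /root/.aws/
volumes:
  - name: aws-secret
    secret:
      secretName: aws-secret

For the env-var route, AWS SDKs honor AWS_SHARED_CREDENTIALS_FILE, which is the rough analogue of GOOGLE_APPLICATION_CREDENTIALS.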

@azaiter

azaiter commented Feb 25, 2019

@nkubala could you kindly re-open this issue? thanks!

@priyawadhwa
Contributor

For this to work, we could add a PullSecretMountPath field to ClusterDetails and then mount that path in the generated pod here.

Would anyone be interested in submitting a PR?
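
In skaffold.yaml this would surface roughly as follows (field name per this proposal; secret name and path are illustrative):

build:
  cluster:
    pullSecretName: aws-secret
    pullSecretMountPath: /root/.aws/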

@priyawadhwa added the "good first issue" and "help wanted" labels on Mar 19, 2019
@jonjohnsonjr
Contributor

IIUC, kaniko is using k8schain on-cluster, so this getting fixed (soon) should help: google/go-containerregistry#355

@tejal29
Member

tejal29 commented Apr 8, 2019

@azaiter Going to link this issue to #1892 and #1906 which will solve this problem.

@balopat added the "priority/p0" label on Sep 20, 2019
@balopat
Contributor

balopat commented Sep 20, 2019

@priyawadhwa can we add this to your Kaniko registry revamp todo list? :)

@priyawadhwa
Contributor

@balopat for sure!

@tejal29 added the "priority/p1" label and removed "priority/p0" on Sep 23, 2019
priyawadhwa pushed a commit to priyawadhwa/skaffold-1 that referenced this issue on Oct 2, 2019:

This will give users the option to specify where the pull secret should be mounted within the container. This should fix GoogleContainerTools#731 and enable ECR support.
@priyawadhwa
Contributor

Hey @azaiter, I added PullSecretMountPath as an option to ClusterDetails in #2975. Please comment here if this doesn't resolve your issue and we can reopen this one.

@azaiter

azaiter commented Oct 4, 2019

Hey @priyawadhwa, the PR looks good and should do the job. I've been using the build from this PR for a long time, and the only difference is the environment variable setting for the ECR region (which I think defaults to us-east-1, but I'm not sure). Are there any plans to support setting environment variables on the Kaniko pod? I believe we discussed that #1892 and #1906 were part of the original proposal for generalizing container registry config. @tejal29 what do you think?

I'll pull the latest build and test it with our setup. If it turns out that the environment variable piece is necessary for ECR, I'll report back and re-open the issue. Thanks for your efforts!

@priyawadhwa
Contributor

Sounds good, thanks @azaiter! If the env variable is necessary, I think it makes sense to support that in the pod.

@cyrildiagne
Contributor

Thanks a lot @priyawadhwa for adding the PullSecretMountPath property.

I'm still facing an issue when trying to build with Kaniko on an EKS cluster and push the image to ECR. I can see the image building in the Kaniko pod's logs, but as soon as the build is complete, skaffold returns an error: getting image: unsupported status code 401; body: Not Authorized.

I don't see any specific reference to the AWS_REGION env var, but I wonder if that could be the problem since it's the only thing that seems different between the failing skaffold config and the working kaniko config.

Edited skaffold dev log:

Listing files to watch...
 - XXXXXXXXXX.dkr.ecr.eu-west-1.amazonaws.com/hello-world
List generated in 4.0196944s
Generating tags...
 - XXXXXXXXXX.dkr.ecr.eu-west-1.amazonaws.com/hello-world -> XXXXXXXXXX.dkr.ecr.eu-west-1.amazonaws.com/hello-world:2019-10-21_19-04-20.994_UTC
Tags generated in 240.2µs
Starting build...
Creating kaniko secret [default/aws-secret]...
Creating docker config secret [docker-kaniko-secret]...
Building [XXXXXXXXXX.dkr.ecr.eu-west-1.amazonaws.com/hello-world]...
Storing build context at /tmp/context-c730a796c18102623c402918e34b137a.tar.gz

[ ... DOCKER BUILD LOG ... ]

Pruning images...
Image prune complete in 31.4µs
FATA[0057] exiting dev mode because first build failed: build failed: building [XXXXXXXXXX.dkr.ecr.eu-west-1.amazonaws.com/hello-world]: kaniko build for [XXXXXXXXXX.dkr.ecr.eu-west-1.amazonaws.com/hello-world]: getting image: unsupported status code 401; body: Not Authorized 

My skaffold config:

apiVersion: skaffold/v1beta16
kind: Config
build:
  artifacts:
    - image: XXXXXXXXXX.dkr.ecr.eu-west-1.amazonaws.com/hello-world
      kaniko:
        buildContext:
          localDir: {}
  cluster:
    pullSecretName: aws-secret
    pullSecretMountPath: /root/.aws/
    dockerConfig: 
      secretName: docker-kaniko-secret
    namespace: default
  tagPolicy:
    dateTime: {}
deploy:
  kubectl:
    manifests:
      - k8.yaml

The following kaniko yaml builds successfully when applied directly:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=Dockerfile",
            "--context=s3://elasticbeanstalk-eu-west-1-XXXXXXXXXX/test.tar.gz",
            "--destination=XXXXXXXXXX.dkr.ecr.eu-west-1.amazonaws.com/hello-world:test-1"]
    volumeMounts:
      - name: aws-secret
        mountPath: /root/.aws/
      - name: docker-config
        mountPath: /kaniko/.docker/
    env:
      - name: AWS_REGION
        value: eu-west-1
  restartPolicy: Never
  volumes:
    - name: aws-secret
      secret:
        secretName: aws-secret
    - name: docker-config
      configMap:
        name: docker-config

@priyawadhwa
Contributor

Hey @cyrildiagne, we probably need to add configuration for additional env vars in the skaffold config for the kaniko pod.

We could add an Env field in the kaniko config here and then add it to the pod template here.

Would you be interested in opening a PR for this?
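
A hypothetical sketch of what that could look like once added (field name illustrative; it was not in the schema at the time of this comment):

build:
  artifacts:
    - image: XXXXXXXXXX.dkr.ecr.eu-west-1.amazonaws.com/hello-world
      kaniko:
        env:
          - name: AWS_REGION
            value: eu-west-1

The entries would be copied verbatim onto the generated kaniko pod's container spec.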

@priyawadhwa reopened this Oct 21, 2019
@cyrildiagne
Contributor

Hi @priyawadhwa, thanks a lot for the rapid feedback and pointers.
Yes, I'll give it a try.

@destpat

destpat commented Nov 3, 2019
