
Self Hosting of Resources #715

Closed
chrislovecnm opened this issue Oct 21, 2016 · 23 comments
@chrislovecnm
Contributor

chrislovecnm commented Oct 21, 2016

During the DNS attack today on github.com and other internet endpoints, we were not able to deploy new K8s clusters. We need to address the capability of self-hosting Docker containers and the other components that are downloaded. For instance, with channels:

"https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable": error fetching

Larger companies will want to self-host binaries, Docker images, and metadata.


Keywords:

  • Private S3 bucket
  • No public IPs
  • Isolated standalone cluster
@krisnova
Contributor

This should be a global discussion about all dependency management in the project. We could/should offer a way to easily override some of these parameters...

My gut makes me think YAML with Viper...

@justinsb
Member

Agreed - this was really just an error/shortcut I made - I didn't think it through.

I'd say a "base directory" for all our resources would be helpful. And we should pull from there:

  • protokube (moving it to a tar file)
  • nodeup
  • dns-controller docker image (moving it to a tar file)
  • the "stable" channel metadata (though this should only be on kops upgrade, which is really just a shortcut over existing edit functionality)

And then you could repoint your base directory to your private builds / whatever.

We already can preload docker images over HTTP and then docker load them - we use that for e2e. So we're close.
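The preload-over-HTTP pattern described here can be sketched in a few lines. The mirror URL and tarball name below are placeholders, not real kops endpoints:

```shell
# Fetch a saved image tarball from a mirror you control, then load it
# into the local Docker daemon instead of pulling from a public registry.
IMAGE_TAR_URL="https://artifacts.example.internal/images/dns-controller.tar"
IMAGE_TAR="/tmp/dns-controller.tar"

# Guarded: the sketch is a no-op when the (placeholder) mirror is unreachable.
if curl -fsSL "$IMAGE_TAR_URL" -o "$IMAGE_TAR" 2>/dev/null; then
  docker load -i "$IMAGE_TAR"
fi
```

Repointing the proposed base directory would make every such fetch go through a URL the operator controls.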

I think it's mostly "just" a matter of improving our build process. I saw @mikedanese's super cool work on getting Bazel into the core, and I think it would be great to leverage that once it's in (though we will still want an easy "make" for building kops, the CLI tool, itself).

@chrislovecnm
Contributor Author

I would say that we can define a Docker registry for:

  • all Docker images like protokube
  • all Docker images like etcd

And an HTTP/S3 repo for all artifacts like nodeup.

@chrislovecnm
Contributor Author

#730 is another one.

@krisnova
Contributor

I1028 16:17:44.574437 78106 channel.go:68] Loading channel from "https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable"

Will start logging these as I notice them

@chrislovecnm
Contributor Author

Thanks ... nodeup as well. All of the Docker images :)

@chrislovecnm
Contributor Author

@robertojrojas this is what I was talking about. This is a tip-of-the-iceberg problem. Are you interested in assisting?

@robertojrojas
Contributor

@chrislovecnm sure! So, there are deps needed at the time kops is executing and deps needed within the cloud provider (with or without internet access), right?

@chrislovecnm
Contributor Author

We have

  • k8s binaries
  • containers: etcd, dns-controller, etc.
  • CNI binaries
  • nodeup and kops binaries <- already dynamic

What is the best way to communicate this to you?

@chrislovecnm
Contributor Author

Oh, and thanks. This is a huge need for the community, btw. For example, DNS attacks have stopped deployments. Aka not good.

@vendrov
Contributor

vendrov commented Dec 23, 2016

We should support K8s internal containers such as pause-amd64; for that we should pass the flag --pod-infra-container-image to the kubelet.
kubernetes/kubernetes#4896

@justinsb justinsb added this to the 1.5.1 milestone Dec 28, 2016
@sstarcher
Contributor

External dependencies

CNI

Can be specified using environment variable CNI_VERSION_URL
The current source is storage.googleapis.com

defaultCNIAsset = "https://storage.googleapis.com/kubernetes-release/network-plugins/cni-07a8a28637e97b22eb8dfe710eeae1344f69d16e.tar.gz"
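For example, repointing the CNI download at an internal mirror (the mirror host below is hypothetical; the tarball name is the upstream default):

```shell
# Override where kops downloads the CNI plugin tarball from; the default
# points at storage.googleapis.com. Only the host below is made up.
export CNI_VERSION_URL="https://mirror.example.internal/network-plugins/cni-07a8a28637e97b22eb8dfe710eeae1344f69d16e.tar.gz"
```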

Channel

Can be specified on the command line via --channel
The current source is github.com

const DefaultChannelBase = "https://raw.githubusercontent.com/kubernetes/kops/master/channels/"
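A self-hosted copy of the channels/ directory could then be selected per invocation. The mirror URL and cluster name here are placeholders:

```shell
# Point kops at a channel file you host yourself instead of
# raw.githubusercontent.com. Guarded so the sketch is a no-op
# where kops is not on the PATH.
CHANNEL_URL="https://mirror.example.internal/kops/channels/stable"
command -v kops >/dev/null 2>&1 &&
  kops create cluster --name my.cluster.example.com --channel "$CHANNEL_URL" ||
  echo "skipped: kops not available; channel URL would be $CHANNEL_URL"
```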

NodeUp / Protokube

The base url can be changed via KOPS_BASE_URL
The specific urls can be changed via NODEUP_URL and PROTOKUBE_IMAGE
The current source is s3

Protokube -

// Either a docker name (e.g. gcr.io/protokube:1.4), or a URL (https://...) in which case we download

NodeUp -

nodeUpLocation = os.Getenv("NODEUP_URL")
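Putting these overrides together, a fully self-hosted setup might export the following (all hosts and paths hypothetical):

```shell
# Self-hosted locations for kops' own runtime assets.
export KOPS_BASE_URL="https://artifacts.example.internal/kops/1.5.1/"
export NODEUP_URL="${KOPS_BASE_URL}linux/amd64/nodeup"
export PROTOKUBE_IMAGE="${KOPS_BASE_URL}images/protokube.tar.gz"
```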

Images

If c.Cluster.Spec.KubernetesVersion is a URL, the following images are loaded from that URL.

  • kube-proxy
  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

imagePath := baseURL + "/bin/linux/amd64/" + component + ".tar"

Individual images can be specified for each of the above items via the config - https://github.com/kubernetes/kops/blob/97afdf9f97f56ab5a369b444d2c39621e8e6ba73/pkg/apis/kops/v1alpha2/componentconfig.go
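Given the concatenation in the imagePath line above, pointing the version at a hypothetical mirrored release yields URLs like this:

```shell
# Mirror of a Kubernetes release; kops appends /bin/linux/amd64/<component>.tar
baseURL="https://mirror.example.internal/kubernetes-release/release/v1.5.1"
component="kube-proxy"
imagePath="${baseURL}/bin/linux/amd64/${component}.tar"
echo "$imagePath"
# → https://mirror.example.internal/kubernetes-release/release/v1.5.1/bin/linux/amd64/kube-proxy.tar
```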

Pause container

Can be specified on the kubelet via --pod-infra-container-image
The current source is gcr
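Rendered as a kubelet argument, with a placeholder private registry and image tag, that looks like:

```shell
# Build the kubelet flag for a self-hosted pause image.
PAUSE_IMAGE="registry.example.internal/google_containers/pause-amd64:3.0"
KUBELET_FLAG="--pod-infra-container-image=${PAUSE_IMAGE}"
echo "$KUBELET_FLAG"
# → --pod-infra-container-image=registry.example.internal/google_containers/pause-amd64:3.0
```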

Containers referenced on gcr

gcr.io/google_containers/hyperkube-amd64
gcr.io/google_containers/etcd:2.2.1
gcr.io/google_containers/exechealthz-amd64:1.2
gcr.io/google_containers/cluster-proportional-autoscaler-{{Arch}}:1.0.0
gcr.io/google_containers/kubedns-{{Arch}}:1.9
gcr.io/google_containers/kube-dnsmasq-{{Arch}}:1.4
gcr.io/google_containers/dnsmasq-metrics-{{Arch}}:1.0
gcr.io/google_containers/exechealthz-{{Arch}}:1.2

Currently these containers depend on gcr.io and cannot be pre-loaded.

@chrislovecnm
Contributor Author

Networking providers such as Weave or Calico as well...

@raghu67

raghu67 commented Apr 5, 2017

The list above looks quite comprehensive, but I don't see the kubelet. Where does that come from?

@chrislovecnm
Contributor Author

Implementation

  1. create kops toolbox bill-of-materials, which will generate a list of the items that kops installs
  2. determine which components do not have configurations in the API
  3. chunk up the parts and start implementing

Extended List

@sstarcher has a great list, but here are a few more.

  1. kubelet
  2. custom kernel
  3. packages installed by kops that come from an external repo, such as Docker

@chrislovecnm
Contributor Author

#2419 provides a list of inventory items.

#2571 will provide a tool to stage those items.

The final PR will implement API values that allow the staging area to be set dynamically. The staging area for assets will be a Docker repo and VFS.
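Staging an upstream image into a private Docker repo could look like this sketch. The target registry is hypothetical; the source image is one from the gcr.io list earlier in the thread:

```shell
# Mirror one upstream image into a registry you control.
SRC="gcr.io/google_containers/etcd:2.2.1"
DST="registry.example.internal/google_containers/etcd:2.2.1"

# Guarded chain: falls back to a message where docker or the registry
# is unavailable, so the sketch is safe to run as-is.
command -v docker >/dev/null 2>&1 &&
  docker pull "$SRC" &&
  docker tag "$SRC" "$DST" &&
  docker push "$DST" ||
  echo "skipped: docker unavailable or registry unreachable"
```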

@chrislovecnm chrislovecnm mentioned this issue May 15, 2017
@DerekV
Contributor

DerekV commented Jun 19, 2017

> Larger companies will want to self-host binaries, Docker images, and metadata.

And also the security-paranoid. In our case, it would potentially simplify some things.
I'm already using my own nodeup bucket, and I freeze/promote my nodeup testing environment with a small s3 sync, but many things still pull from gcr.io outside my direct control.
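The freeze/promote pattern described here might be sketched as a sync between two buckets (both bucket names are placeholders):

```shell
# Promote a tested set of nodeup assets from a staging bucket to the
# bucket clusters actually pull from. Guarded: no-op without the aws CLI.
STAGING_BUCKET="s3://kops-assets-staging/nodeup/"
PROD_BUCKET="s3://kops-assets-prod/nodeup/"
command -v aws >/dev/null 2>&1 &&
  aws s3 sync "$STAGING_BUCKET" "$PROD_BUCKET" ||
  echo "skipped: aws CLI unavailable"
```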

@chrislovecnm
Contributor Author

Still a work in progress

/assign

@chrislovecnm
Contributor Author

/close

as this is implemented

@s1rc0

s1rc0 commented Jul 6, 2018

> as this is implemented

@chrislovecnm, done? Where can I read the documentation on how to use this?

@smartlin5228

Is there a document we can refer to?

@michaelajr

@chrislovecnm Done? Is there a link on how to get started?

@ReillyTevera
Contributor

ReillyTevera commented Dec 12, 2018

I would also like to see documentation on this. Our use case is that we would like to push a Docker config to all of our nodes requiring that all images come from our private registry and be signed, which would obviously break cluster components without this feature.
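Not kops-specific, but as a sketch of the client-side half of that use case: Docker Content Trust rejects unsigned images on pull. Daemon-wide enforcement and restricting pulls to a single registry need additional tooling; the registry and image below are placeholders.

```shell
# Require signed images for docker CLI pulls (client-side only).
export DOCKER_CONTENT_TRUST=1
PRIVATE_IMAGE="registry.example.internal/google_containers/etcd:2.2.1"
command -v docker >/dev/null 2>&1 &&
  docker pull "$PRIVATE_IMAGE" ||
  echo "skipped: docker unavailable or image unsigned/unreachable"
```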
