Convert to builds, image streams and deployment configs.
Deploying will now create a BuildConfig, an ImageStream, and a
DeploymentConfig. However, until you push images there is nothing for the
DeploymentConfig to run.

You can push images in two ways: either run start-build to build the
latest git master, or use a make target to push your locally built images
(development).
dgoodwin committed Apr 27, 2018
1 parent ddbd8ee commit 8dfd4bf
Showing 8 changed files with 222 additions and 50 deletions.
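The deploy-on-push behavior described in the commit message comes from an ImageChange trigger on the DeploymentConfig: when the watched ImageStreamTag is updated (by a build or by a push to the integrated registry), OpenShift automatically rolls out a new deployment. A minimal sketch of such a trigger, assuming the container is named cluster-operator (the full template is part of this commit but not shown in this view):

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: cluster-operator-controller-manager
  namespace: openshift-cluster-operator
spec:
  replicas: 1
  triggers:
  # Redeploy whenever the cluster-operator:latest ImageStreamTag changes:
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - cluster-operator   # assumed container name
      from:
        kind: ImageStreamTag
        name: cluster-operator:latest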
29 changes: 29 additions & 0 deletions Dockerfile
@@ -0,0 +1,29 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Dockerfile to perform a full cluster-operator build from source.
#
# This Dockerfile is intended for performing in-cluster OpenShift builds. It differs
# from those in build/ in that it does not expect the binary to be compiled externally
# and added to the container; everything is built internally during docker build.
FROM golang:1.9

ENV PATH=/go/bin:$PATH GOPATH=/go

ADD . /go/src/github.com/openshift/cluster-operator

WORKDIR /go/src/github.com/openshift/cluster-operator
RUN NO_DOCKER=1 make build
ENTRYPOINT ["bin/cluster-operator"]
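The BuildConfig that consumes this Dockerfile is created by the deploy template (one of the changed files, truncated in this view). A hedged sketch of its likely shape — a Docker-strategy build of the git repo, with output to the cluster-operator ImageStream (exact fields assumed, not copied from the template):

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: cluster-operator
  namespace: openshift-cluster-operator
spec:
  source:
    type: Git
    git:
      uri: https://github.com/openshift/cluster-operator   # GIT_REPO template parameter
      ref: master                                          # GIT_REF template parameter
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: cluster-operator:latest

`oc start-build cluster-operator` (used in the README below) instantiates a build from a config of this shape.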

11 changes: 11 additions & 0 deletions Makefile
@@ -390,6 +390,17 @@ aws-machine-controller-image: build/aws-machine-controller/Dockerfile $(BINDIR)/
	docker tag $(AWS_MACHINE_CONTROLLER_IMAGE) $(AWS_MACHINE_CONTROLLER_MUTABLE_IMAGE)
	docker tag $(AWS_MACHINE_CONTROLLER_IMAGE) $(AWS_MACHINE_CONTROLLER_PUBLIC_IMAGE)

# Push our Docker images to the integrated registry:
INTEGRATED_REGISTRY ?= 172.30.1.1
integrated-registry-push:
	# WARNING: this will fail if you are logged in as system:admin; see the README
	# for creating a separate "admin" account that will work here:
	$(eval OPENSHIFT_TOKEN := $(shell oc whoami -t))
	docker login -u admin -p $(OPENSHIFT_TOKEN) $(INTEGRATED_REGISTRY):5000
	# NOTE: the in-cluster ImageStream tag we use is latest:
	docker tag cluster-operator:$(MUTABLE_TAG) $(INTEGRATED_REGISTRY):5000/openshift-cluster-operator/cluster-operator:latest
	docker push $(INTEGRATED_REGISTRY):5000/openshift-cluster-operator/cluster-operator:latest


# Push our Docker Images to a registry
######################################
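The push target above lands images in a cluster-operator ImageStream in the openshift-cluster-operator namespace: pushing to <registry>:5000/<namespace>/<name>:<tag> in the integrated registry creates or updates the corresponding ImageStreamTag. A minimal sketch of the stream (assumed shape; the actual definition lives in the deploy template, not shown here):

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: cluster-operator
  namespace: openshift-cluster-operator
# No spec is needed: tags are created on push. Updating the latest tag fires
# any ImageChange triggers (builds or deployments) watching it.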
28 changes: 18 additions & 10 deletions README.md
@@ -30,25 +30,33 @@
* Fedora: `oc cluster up --image="docker.io/openshift/origin"`
* Mac OSX: Follow the Minishift [Getting Started Guide](https://docs.openshift.org/latest/minishift/getting-started/index.html)
  * Note: Startup output will contain the URL to the web console for your openshift cluster; save this for later
* Login to the OpenShift cluster as system:admin:
  * `oc login -u system:admin`
* Create an "admin" account with the cluster-admin role, which you can use to login to the [WebUI](https://localhost:8443) or with oc:
  * `oc adm policy add-cluster-role-to-user cluster-admin admin`
* Login to the OpenShift cluster as a normal admin account:
  * `oc login -u admin -p password`
* Ensure the following files are available on your local machine:
  * `$HOME/.aws/credentials` - your AWS credentials; the default section will be used, but it can be overridden with vars when running the create cluster playbook.
  * `$HOME/.ssh/libra.pem` - the SSH private key to use for AWS


## Deploy / Re-deploy Cluster Operator

* Deploy cluster operator to the OpenShift cluster you are currently logged into (see above for oc login instructions):
  * `ansible-playbook contrib/ansible/deploy-devel-playbook.yaml`
  * This creates an OpenShift BuildConfig and ImageStream for the cluster-operator image (which does not yet exist).
* Compile and push an image:
  * If you would just like to deploy Cluster Operator from the latest code in git:
    * `oc start-build cluster-operator`
  * If you are a developer and would like to quickly compile code locally and deploy to your cluster:
    * Mac OSX only: `eval $(minishift docker-env)`
    * `NO_DOCKER=1 make images`
      * This will compile the Go code locally and build both the cluster-operator and cluster-operator-ansible images.
    * `make integrated-registry-push`
      * This will fetch your current OpenShift whoami token, log in to the integrated cluster registry, and push your local images.
* Re-run these steps to deploy new code as often as you like; once a push completes, the ImageStream will trigger a new deployment.

## Creating a Test Cluster

11 changes: 11 additions & 0 deletions contrib/ansible/create-cluster-playbook.yaml
@@ -48,6 +48,17 @@
  - import_role:
      name: kubectl-ansible

  - name: process cluster versions template
    oc_process:
      template_file: "{{ playbook_dir }}/../examples/cluster-versions-template.yaml"
      parameters:
        CLUSTER_VERSION_NS: "{{ cluster_version_namespace }}"
    register: cluster_versions_reg

  - name: create/update cluster versions
    kubectl_apply:
      definition: "{{ cluster_versions_reg.result | to_json }}"

  # If no cluster_namespace was defined on the CLI, we want to create the cluster in the current:
  - name: lookup current namespace if none defined
    command: "oc project -q"
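The oc_process + kubectl_apply pair above is the standard pattern for applying an OpenShift template from Ansible: process the template into concrete objects, then apply them. The cluster-versions template itself is not part of this diff; a hypothetical minimal shape, assuming a ClusterVersion object parameterized by namespace:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: cluster-versions
parameters:
- name: CLUSTER_VERSION_NS
  required: true
objects:
# Hypothetical example object; the real template's contents are not shown in this commit view.
- apiVersion: clusteroperator.openshift.io/v1alpha1
  kind: ClusterVersion
  metadata:
    name: origin-latest
    namespace: ${CLUSTER_VERSION_NS}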
16 changes: 2 additions & 14 deletions contrib/ansible/deploy-devel-playbook.yml
@@ -20,21 +20,9 @@
  - import_role:
      name: kubectl-ansible

  - name: create/update playbook mock deployment
    kubectl_apply:
      namespace: "{{ cluster_operator_namespace }}"
      src: "{{ playbook_dir }}/../examples/deploy-playbook-mock.yaml"

  - debug: msg="Deployment complete, you may need to push images to the registry or start a build before the Cluster Operator will deploy. (see README.md)"
5 changes: 5 additions & 0 deletions contrib/ansible/deploy-playbook.yaml
@@ -27,6 +27,9 @@
    # This CA will be generated once and then re-used for all certs. Will be created
    # in the cert dir defined above.
    apiserver_ca_name: ca
    # Cluster operator git repository to build:
    cluster_operator_git_repo: "https://github.com/openshift/cluster-operator"
    cluster_operator_git_ref: master

  tasks:
  - import_role:
@@ -130,6 +133,8 @@
      parameters:
        CLUSTER_OPERATOR_NAMESPACE: "{{ cluster_operator_namespace }}"
        SERVING_CA: "{{ l_serving_ca }}"
        GIT_REPO: "{{ cluster_operator_git_repo }}"
        GIT_REF: "{{ cluster_operator_git_ref }}"
    register: cluster_app_reg

  - name: deploy cluster operator
67 changes: 67 additions & 0 deletions contrib/examples/cluster-operator-roles-template.yaml
@@ -53,6 +53,62 @@ objects:
    name: cluster-operator-apiserver
    namespace: ${CLUSTER_OPERATOR_NAMESPACE}

# We have to create our openshift-cluster-operator namespace outside of "oc new-project"
# due to restrictions on project names starting with "openshift-". This results in the
# default service accounts (namely "deployer") being unable to view ReplicationControllers
# in the namespace, and thus unable to actually deploy.
#
# NOTE: this may not be required in OpenShift 3.10, where a controller was added to
# maintain these bindings automatically.
#
# Until then we manually bind the deployer/builder service accounts to the appropriate roles:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: "system:deployers"
    namespace: ${CLUSTER_OPERATOR_NAMESPACE}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:deployer
  subjects:
  - apiGroup: ""
    kind: ServiceAccount
    name: deployer
    namespace: ${CLUSTER_OPERATOR_NAMESPACE}
  #userNames:
  #- system:serviceaccount:${CLUSTER_OPERATOR_NAMESPACE}:deployer
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: "system:image-builders"
    namespace: ${CLUSTER_OPERATOR_NAMESPACE}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:image-builder
  subjects:
  - apiGroup: ""
    kind: ServiceAccount
    name: builder
    namespace: ${CLUSTER_OPERATOR_NAMESPACE}
  #userNames:
  #- system:serviceaccount:${CLUSTER_OPERATOR_NAMESPACE}:builder
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: "system:image-pullers"
    namespace: ${CLUSTER_OPERATOR_NAMESPACE}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:image-puller
  subjects:
  - apiGroup: ""
    kind: Group
    name: system:serviceaccounts:${CLUSTER_OPERATOR_NAMESPACE}
  #userNames:
  #- system:serviceaccount:${CLUSTER_OPERATOR_NAMESPACE}:deployer

# API Server gets the ability to read authentication. This allows it to
# read the specific configmap that has the requestheader-* entries to
# enable api aggregation
@@ -121,3 +177,14 @@
    name: cluster-operator-controller-manager
    namespace: ${CLUSTER_OPERATOR_NAMESPACE}

# Create a role for the master installer to save a kubeconfig
# from a managed cluster
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: "clusteroperator.openshift.io:master-controller"
  rules:
  # CRUD secrets with kubeconfig data
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "delete", "get", "list", "update"]