Contributing to OpenShift

The OpenShift architecture builds upon the flexibility and scalability of Docker and Kubernetes to deliver a powerful new Platform-as-a-Service system. This article explains how to set up a development environment and get involved with this latest version of OpenShift. Kubernetes is included in this repo for ease of development, and the version we include is periodically updated.

To get started, you can download a prebuilt release (see Download from GitHub below).

Or, if you are interested in development, start with the OpenShift Development section below.

Download from GitHub

The OpenShift team periodically publishes binaries to GitHub on the Releases page. These are 64-bit Linux, Windows, and Mac OS X binaries (note that the Mac and Windows binaries are client-only). You’ll need Docker installed on your local system (see the installation page if you’ve never installed Docker before).

The tar file for each platform contains a single binary, openshift, which is the all-in-one OpenShift installation.

  • Use sudo openshift start to launch the server. Root access is required to create services due to the need to modify iptables. See issue: kubernetes/kubernetes#1859.

  • Use oc login <server> … to connect to an OpenShift server

  • Use openshift help to see more about the commands in the binary
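
For example, a minimal sketch of getting started on Linux (the archive name below is a placeholder; the actual name varies by release and platform):

# download the archive for your platform from the Releases page, then:
$ tar -xzf openshift-origin-server-<version>-linux-64bit.tar.gz   # placeholder filename
# the openshift binary may land in a subdirectory; cd there if so
$ sudo ./openshift start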

OpenShift Development

To get started, fork the origin repo.

Develop locally on your host

You can develop OpenShift 3 on Windows, Mac, or Linux, but you’ll need Docker installed on Linux to actually launch containers.

  • For OpenShift 3 development, install the Go programming language

  • To launch containers, install the Docker platform

Here’s how to get set up:

  1. For Go, Git, and (optionally) Docker, follow the links below for installation information on each tool:

    • Installing Go. Currently, OpenShift supports building with Go 1.6.x. Do NOT use $HOME/go as the Go installation directory; reserve it for the Go workspace below.

    • Installing Git

    • Installing Docker

      Note
      As of now, OpenShift requires Docker 1.9 or higher. The exact version requirement is documented here.
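      You can quickly check the daemon version (assuming Docker is already running):

        $ docker version --format '{{.Server.Version}}'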
  2. Next, create a Go workspace directory:

    $ mkdir $HOME/go
  3. In your .bashrc file or .bash_profile file, set a GOPATH and update your PATH:

    export GOPATH=$HOME/go
    export PATH=$PATH:$GOPATH/bin
    export OS_OUTPUT_GOPATH=1
  4. Open up a new terminal or source the changes in your current terminal. Then clone this repo:

    $ mkdir -p $GOPATH/src/github.com/openshift
    $ cd $GOPATH/src/github.com/openshift
    $ git clone git://github.com/<forkid>/origin  # Replace <forkid> with your GitHub ID
    $ cd origin
    $ git remote add upstream git://github.com/openshift/origin
  5. From here, you can generate the OpenShift binaries by running:

    $ make clean build
  6. Next, assuming you have not changed the Kubernetes/OpenShift service subnet from its default value of 172.30.0.0/16, you need to instruct the Docker daemon to trust any Docker registry on that subnet. If you are running Docker as a service via systemd, add the --insecure-registry 172.30.0.0/16 argument to the options value in /etc/sysconfig/docker and restart the Docker daemon. For more details on controlling Docker options with systemd, refer to the Docker documentation. Otherwise, add "--insecure-registry 172.30.0.0/16" to the Docker daemon invocation, for example:

    $ docker daemon --insecure-registry 172.30.0.0/16
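    With systemd, the edit might look like the following sketch (preserve any options already present in your /etc/sysconfig/docker):

    # /etc/sysconfig/docker (excerpt)
    OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16'

    $ sudo systemctl restart docker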
  7. The OpenShift firewalld rules are still a work in progress, so for now it is easiest to disable firewalld altogether:

    $ sudo systemctl stop firewalld
  8. Firewalld will start again on your next reboot, but you can manually restart it with this command when you are done running OpenShift:

    $ sudo systemctl start firewalld
  9. Now change into the directory with the OpenShift binaries, and start the OpenShift server:

    $ cd _output/local/bin/linux/amd64
    $ sudo ./openshift start
    Note
    Replace "linux/amd64" with the appropriate value for your platform/architecture.
  10. Launch another terminal, change into the same directory from which you started OpenShift, and deploy the private Docker registry within OpenShift with the following commands:

    $ sudo chmod +r openshift.local.config/master/admin.kubeconfig
    $ ./oadm registry -n default --config=openshift.local.config/master/admin.kubeconfig
  11. If it is not there already, add the current directory to your $PATH so you can use the OpenShift commands from anywhere, for example:
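    # one way to add it, assuming a bash-like shell; run from the binaries directory
    $ export PATH=$PATH:$(pwd)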

  12. You are now ready to edit the source, rebuild and restart OpenShift to test your changes.

  13. NOTE: to properly stop OpenShift and clean up, so that you can start a fresh instance of OpenShift, execute:

    # shut down openshift
    $ sudo pkill -x openshift
    # stop the docker containers started by openshift
    $ docker ps | awk 'index($NF,"k8s_")==1 { print $1 }' | xargs -l -r docker stop
    # unmount any volume directories still in use
    $ mount | grep "openshift.local.volumes" | awk '{ print $3}' | xargs -l -r sudo umount
    # delete the internal config files, etcd data, etc.
    $ cd <to the dir you ran openshift start> ; sudo rm -rf openshift.local.*

Develop on virtual machine using Vagrant

To facilitate rapid development we’ve put together a Vagrantfile you can use to stand up a development environment.

  1. Install Vagrant

  2. Install VirtualBox (e.g., yum install VirtualBox from the RPM Fusion repository)

  3. In your .bashrc file or .bash_profile file, set a GOPATH:

    export GOPATH=$HOME/go
  4. Clone the project and change into the directory:

    $ mkdir -p $GOPATH/src/github.com/openshift
    $ cd $GOPATH/src/github.com/openshift
    $ git clone git://github.com/<forkid>/origin  # Replace <forkid> with your GitHub ID
    $ cd origin
    $ git remote add upstream git://github.com/openshift/origin
  5. Bring up the VM. If you are new to Vagrant, consult the Vagrant Docs for help with items like provider selection, and make sure your hardware’s virtualization extensions are enabled (on RHEL, for example). Also note that the make clean build in step 7 requires a sufficient amount of memory to be allocated to the VM; that much memory is not needed if you are only running openshift rather than compiling, and hence is not set as the default:

    $ export OPENSHIFT_MEMORY=4192
    $ vagrant up
    Tip
    To ensure you get the latest image, first run vagrant box remove fedora_inst. If you later employ a dev cluster, additionally run vagrant box remove fedora_deps.
  6. You are now ready to edit the source, rebuild and restart OpenShift to test your changes. SSH in:

    $ vagrant ssh
  7. Run a build:

    $ cd /data/src/github.com/openshift/origin
    $ make clean build
  8. Now start the OpenShift server:

    $ sudo systemctl start openshift
    Or:
    # must cd / to use prepopulated $KUBECONFIG
    $ cd /
    # redirect the logs to /home/vagrant/openshift.log for easier debugging
    $ sudo `which openshift` start --public-master=localhost &> $HOME/openshift.log &
    Note
    This will generate three directories in / (openshift.local.config, openshift.local.etcd, openshift.local.volumes) as well as create the /home/vagrant/openshift.log file.
    Note
    By default your origin directory (on your host machine) will be mounted as a vagrant synced folder into /data/src/github.com/openshift/origin.
  9. Deploy the private docker registry within OpenShift with the following command:

    $ oadm registry
  10. At this point it may be helpful to load some image streams and templates. These commands will make use of fixtures from the openshift/origin/examples dir:

    # load image streams
    $ oc create -f /data/src/github.com/openshift/origin/examples/image-streams/image-streams-centos7.json -n openshift
    # load templates
    $ oc create -f /data/src/github.com/openshift/origin/examples/sample-app/application-template-stibuild.json -n openshift
    $ oc create -f /data/src/github.com/openshift/origin/examples/db-templates -n openshift
  11. At this point you can open a browser on your host system and navigate to https://localhost:8443/console to view the web console. You can log in with any username and password combination.

  12. NOTE: to properly stop OpenShift and clean up, so that you can start a fresh instance of OpenShift, execute:

    # shut down openshift
    $ sudo pkill openshift
    # stop the docker containers
    $ docker ps | awk 'index($NF,"k8s_")==1 { print $1 }' | xargs -l -r docker stop
    # delete all the internal config files, etcd data, etc., so openshift starts fresh
    $ sudo rm -rf openshift.local.*
    # if you used the --volume-dir=/home/vagrant/volumes flag, then run these
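    # a sketch following the unmount pattern from the host cleanup above,
    # assuming volumes were mounted under /home/vagrant/volumes
    $ mount | grep "/home/vagrant/volumes" | awk '{ print $3}' | xargs -l -r sudo umount
    $ sudo rm -rf /home/vagrant/volumes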
Tip
See https://github.com/openshift/vagrant-openshift for more advanced options.

Ensure VirtualBox interfaces are not managed by NetworkManager

If you are developing on a Linux host, you need to ensure that NetworkManager ignores the VirtualBox interfaces; otherwise they cause issues with multi-VM networking.

Follow these steps to ensure that the VirtualBox interfaces are unmanaged:

  1. Check the status of NetworkManager devices:

    $ nmcli d
  2. If any devices whose names start with vboxnet are not unmanaged, they need to be added to the NetworkManager configuration so that they are ignored:

    $ cat /etc/NetworkManager/NetworkManager.conf
    [keyfile]
    unmanaged-devices=mac:0a:00:27:00:00:00;mac:0a:00:27:00:00:01;mac:0a:00:27:00:00:02
  3. You can use the following command to help generate the configuration:

    $ ip link list | grep vboxnet  -A 1 | grep link/ether | awk '{print "mac:" $2}' |  paste -sd ";" -
  4. Reload the Network Manager configuration:

    $ sudo nmcli con reload
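
The steps above can be combined into a short shell sketch. This assumes NetworkManager.conf already contains a [keyfile] section with an unmanaged-devices line, as in the example above; back up the file first:

$ MACS=$(ip link list | grep vboxnet -A 1 | grep link/ether | awk '{print "mac:" $2}' | paste -sd ";" -)
$ sudo sed -i "s|^unmanaged-devices=.*|unmanaged-devices=${MACS}|" /etc/NetworkManager/NetworkManager.conf
$ sudo nmcli con reload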

Develop and test using a docker-in-docker cluster

It’s possible to run an OpenShift multinode cluster on a single host thanks to docker-in-docker (dind). Cluster creation is cheaper since each node is a container instead of a VM. This was initially implemented to support multinode network testing, but has proven useful for development as well.

Prerequisites:

  1. A host running Docker with SELinux disabled.

  2. A host on which it is acceptable to load kernel modules (overlay, openvswitch, and br_netfilter).

  3. A host on which it is acceptable to set net.bridge.bridge-nf-call-iptables to 0.

  4. An environment with the tools necessary to build origin.

  5. A clone of the origin repo.

From the root of the origin repo, run the following command to launch a new cluster:

# -b to build origin, -i to build images
$ hack/dind-cluster.sh start -b -i

Once the cluster is up, source the cluster’s rc file to configure the environment to use it:

$ . dind-openshift.rc

Now the 'oc' command can be used to interact with the cluster:

$ oc get nodes

It’s also possible to log in to the participating containers (openshift-master, openshift-node-1, openshift-node-2, etc.) via docker exec:

$ docker exec -ti openshift-master bash

While it is possible to manage the OpenShift daemon in the containers, dind cluster management is fast enough that the suggested approach is to manage at the cluster level instead.

Invoking the dind-cluster.sh script without arguments will provide a usage message:

Usage: hack/dind-cluster.sh {start|stop|restart|...}

Additional documentation of how a dind cluster is managed can be found at the top of the dind-cluster.sh script.

Attempting to start a cluster when one is already running will result in an error message from docker indicating that the named containers already exist. To redeploy a cluster, use the 'start' command with the '-r' flag to remove the existing cluster first, as shown below.
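
For example, to remove the existing cluster and redeploy (add -b and -i to rebuild origin and its images, as above):

$ hack/dind-cluster.sh start -r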

Testing networking with docker-in-docker

It is possible to run networking tests against a running docker-in-docker cluster (i.e. after 'hack/dind-cluster.sh start' has been invoked):

$ OPENSHIFT_CONFIG_ROOT=dind test/extended/networking.sh

Since a cluster can only be configured with a single network plugin at a time, this method of invoking the networking tests will only validate the active plugin. To exercise all plugins, invoke the same script in 'ci mode' by leaving the config root unset:

$ test/extended/networking.sh

In ci mode, for each networking plugin, networking.sh will create a new dind cluster, run the tests against that cluster, and tear down the cluster. The test dind clusters are isolated from any user-created clusters, and test output and artifacts of the most recent test run are retained in /tmp/openshift-extended-tests/networking.

It’s possible to override the default test regexes via the NETWORKING_E2E_FOCUS and NETWORKING_E2E_SKIP environment variables. These variables set the '-focus' and '-skip' arguments supplied to the ginkgo test runner.
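
For example (the focus regex below is hypothetical; substitute a pattern matching the tests you care about):

$ NETWORKING_E2E_FOCUS='pod communication' test/extended/networking.sh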

To debug a test run with delve, make sure the dlv executable is installed in your path and run the tests with DLV_DEBUG set:

$ DLV_DEBUG=1 test/extended/networking.sh

Running networking tests against any cluster

It’s possible to run networking tests against any cluster. To target the default vm dev cluster:

$ OPENSHIFT_CONFIG_ROOT=dev test/extended/networking.sh

To target an arbitrary cluster, the config root (parent of openshift.local.config) can be supplied instead:

$ OPENSHIFT_CONFIG_ROOT=[cluster config root] test/extended/networking.sh

It’s also possible to supply the path to a kubeconfig file:

$ OPENSHIFT_TEST_KUBECONFIG=./admin.kubeconfig test/extended/networking.sh

See the script’s inline documentation for further details.

Running Kubernetes e2e tests

It’s possible to target the Kubernetes e2e tests against a running OpenShift cluster. From the root of an origin repo:

$ pushd ..
$ git clone http://github.com/kubernetes/kubernetes/
$ pushd kubernetes/build
$ ./run hack/build-go.sh
$ popd && popd
$ export KUBE_ROOT=../kubernetes
$ hack/test-kube-e2e.sh --ginkgo.focus="[regex]"

The previous sequence of commands will target a vagrant-based OpenShift cluster whose configuration is stored in the default location in the origin repo. To target a dind cluster, an additional environment variable needs to be set before invoking test-kube-e2e.sh:

$ export OS_CONF_ROOT=/tmp/openshift-dind-cluster/openshift

Development: What’s on the Menu?

Right now you can see what’s happening with OpenShift development on the project roadmap (see The Roadmap below) and in the GitHub issue tracker.

Ready to play with some code? Hop down and read up on our roadmap for ideas on where you can contribute. You can also take a stab at any issue tagged with the help-wanted label.

If you are interested in contributing to Kubernetes directly:
Join the Kubernetes community and check out the contributing guide.

Troubleshooting

If you run into difficulties running OpenShift, start by reading through the troubleshooting guide.

The Roadmap

The OpenShift project roadmap lives on Trello. A summary of the roadmap, releases, and other info can be found here.

Stay in Touch

Reach out to the OpenShift team and other community contributors through IRC and our mailing list.