diff --git a/README.md b/README.md
index 93e494c..3216d66 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,5 @@
 # Openshift Python Client
+
@@ -8,73 +9,82 @@
 - [Overview](#overview)
 - [Reader Prerequisites](#reader-prerequisites)
 - [Setup](#setup)
-  - [Prerequisites](#prerequisites)
-  - [Installation Instructions](#installation-instructions)
-    - [Using PIP](#using-pip)
-    - [For development](#for-development)
+    - [Prerequisites](#prerequisites)
+    - [Installation Instructions](#installation-instructions)
+        - [Using PIP](#using-pip)
+        - [For development](#for-development)
 - [Usage](#usage)
-  - [Quickstart](#quickstart)
-  - [Selectors](#selectors)
-  - [APIObjects](#apiobjects)
-  - [Making changes to APIObjects](#making-changes-to-apiobjects)
-  - [Running within a Pod](#running-within-a-pod)
-  - [Tracking oc invocations](#tracking-oc-invocations)
-  - [Time limits](#time-limits)
-  - [Advanced contexts](#advanced-contexts)
-  - [Something missing?](#something-missing)
-  - [Running oc on a bastion host](#running-oc-on-a-bastion-host)
-  - [Gathering reports and logs with selectors](#gathering-reports-and-logs-with-selectors)
-  - [Advanced verbs:](#advanced-verbs)
+    - [Quickstart](#quickstart)
+    - [Selectors](#selectors)
+    - [APIObjects](#apiobjects)
+    - [Making changes to APIObjects](#making-changes-to-apiobjects)
+    - [Running within a Pod](#running-within-a-pod)
+    - [Tracking oc invocations](#tracking-oc-invocations)
+    - [Time limits](#time-limits)
+    - [Advanced contexts](#advanced-contexts)
+    - [Something missing?](#something-missing)
+    - [Running oc on a bastion host](#running-oc-on-a-bastion-host)
+    - [Gathering reports and logs with selectors](#gathering-reports-and-logs-with-selectors)
+    - [Advanced verbs:](#advanced-verbs)
 - [Examples](#examples)
 - [Environment Variables](#environment-variables)
-  - [Defaults when invoking `oc`](#defaults-when-invoking-oc)
-  - [Master timeout](#master-timeout)
-  - [SSH Client Host](#ssh-client-host)
+    - [Defaults when invoking `oc`](#defaults-when-invoking-oc)
+    - [Master timeout](#master-timeout)
+    - [SSH Client Host](#ssh-client-host)
 
 ## Overview
+
 The [openshift-client-python](https://www.github.com/openshift/openshift-client-python) library aims to provide a readable, concise, comprehensive, and fluent
 API for rich interactions with an [OpenShift](https://www.openshift.com) cluster. Unlike other clients, this library exclusively uses the command
 line tool (oc) to achieve the interactions. This approach comes with important benefits and disadvantages when compared
 to other client libraries.
 
 Pros:
+
 - No additional software needs to be installed on the cluster. If a system with python support can (1) invoke `oc`
-locally OR (2) ssh to a host and invoke `oc`, you can use the library.
+  locally OR (2) ssh to a host and invoke `oc`, you can use the library.
 - Portable. If you have python and `oc` working, you don't need to worry about OpenShift versions or machine architectures.
 - Custom resources are supported and treated just like any other resource. There is no need to generate code to support them.
 - Quick to learn. If you understand the `oc` command line interface, you can use this library.
 
 Cons:
+
 - This API is not intended to implement something as complex as a controller. For example, it does not implement
-watch functionality. If you can't imagine accomplishing your use case through CLI interactions, this API is probably
-not the right starting point for it.
+  watch functionality. If you can't imagine accomplishing your use case through CLI interactions, this API is probably
+  not the right starting point for it.
 - If you care about whether a REST API returns a particular error code, this API is probably not for you. Since it
-is based on the CLI, high level return codes are used to determine success or failure.
+  is based on the CLI, high level return codes are used to determine success or failure.
 
 ## Reader Prerequisites
-* Familiarity with OpenShift [command line interface](https://docs.openshift.org/latest/cli_reference/basic_cli_operations.html)
-is highly encouraged before exploring the API's features. The API leverages the [oc](https://docs.openshift.org/latest/cli_reference/index.html)
-binary and, in many cases, passes method arguments directly on to the command line. This document cannot, therefore,
-provide a complete description of all possible OpenShift interactions -- the user may need to reference
-the CLI documentation to find the pass-through arguments a given interaction requires.
+
+* Familiarity with OpenShift [command line interface](https://docs.okd.io/latest/cli_reference/index.html)
+  is highly encouraged before exploring the API's features. The API leverages
+  the [oc](https://docs.okd.io/latest/cli_reference/openshift_cli/getting-started-cli.html#cli-getting-started)
+  binary and, in many cases, passes method arguments directly on to the command line. This document cannot, therefore,
+  provide a complete description of all possible OpenShift interactions -- the user may need to reference
+  the CLI documentation to find the pass-through arguments a given interaction requires.
 * A familiarity with Python is assumed.
 
 ## Setup
+
 ### Prerequisites
+
 1. Download and install the OpenShift [command-line Tools](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/) needed to access your OpenShift cluster.
 
 ### Installation Instructions
 
 #### Using PIP
+
 1. Install the `openshift-client` module from PyPI.
    ```bash
   sudo pip install openshift-client
   ```
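+
+To confirm the module is importable, a quick smoke test (this is only a suggested check; it does not contact a
+cluster or run `oc`) is:
+
+```python
+# Importing the package is enough to verify the installation itself; no cluster
+# connection or oc binary is exercised by this check.
+import openshift_client as oc
+
+print('openshift-client imported from: {}'.format(oc.__file__))
+```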
 
 #### For development
+
 1. Git clone https://github.com/openshift/openshift-client-python.git (or your fork).
 2. Add required libraries
    ```bash
@@ -86,6 +96,7 @@ the CLI documentation to find the pass-through arguments a given interaction req
 ## Usage
 
 ### Quickstart
+
 Any standard Python application should be able to use the API if it imports the openshift package. The simplest
 possible way to begin using the API is to log in to your target cluster before running your first application.
 
@@ -99,20 +110,20 @@ print('OpenShift client version: {}'.format(oc.get_client_version()))
 print('OpenShift server version: {}'.format(oc.get_server_version()))
 
 # Set a project context for all inner `oc` invocations and limit execution to 10 minutes
-with oc.project('openshift-infra'), oc.timeout(10*60):
+with oc.project('openshift-infra'), oc.timeout(10 * 60):
     # Print the list of qualified pod names (e.g. ['pod/xyz', 'pod/abc', ...]) in the current project
     print('Found the following pods in {}: {}'.format(oc.get_project_name(), oc.selector('pods').qnames()))
-    
+
     # Read in the current state of the pod resources and represent them as python objects
     for pod_obj in oc.selector('pods').objects():
-        
+
         # The APIObject class exposes several convenience methods for interacting with objects
         print('Analyzing pod: {}'.format(pod_obj.name()))
         pod_obj.print_logs(timestamps=True, tail=15)
-        
+
         # If you need access to the underlying resource definition, get a Model instance for the resource
         pod_model = pod_obj.model
-        
+
         # Model objects enable dot notation and allow you to navigate through resources
         # to an arbitrary depth without checking if any ancestor elements exist.
         # In the following example, there is no need for boilerplate like:
@@ -127,6 +138,7 @@ with oc.project('openshift-infra'), oc.timeout(10*60):
 ```
 
 ### Selectors
+
 Selectors are a central concept used by the API to interact with collections of OpenShift resources. As the
 name implies, a "selector" selects zero or more resources on a server which satisfy user-specified criteria. An apt
@@ -148,10 +160,10 @@ sa_selector = oc.selector(["serviceaccount/deployer", "serviceaccount/builder"])
 
 # Performing an operation will act on all selected resources. In this case,
 # both serviceaccounts are labeled.
-sa_selector.label({"mylabel" : "myvalue"})
+sa_selector.label({"mylabel": "myvalue"})
 
 # Selectors can also select based on kind and labels.
-sa_label_selector = oc.selector("sa", labels={"mylabel":"myvalue"})
+sa_label_selector = oc.selector("sa", labels={"mylabel": "myvalue"})
 
 # We should find the service accounts we just labeled.
 print("Found labeled serviceaccounts: " + str(sa_label_selector.names()))
@@ -187,7 +199,7 @@ project = projects[0]
 
 # The APIObject exposes methods providing simple access to metadata and common operations.
 print('The project is: {}/{}'.format(project.kind(), project.name()))
-project.label({ 'mylabel': 'myvalue' })
+project.label({'mylabel': 'myvalue'})
 
 # And the APIObject allows you to interact with an object's data via the 'model' attribute.
 # The Model is similar to a standard dict, but also allows dot notation to access elements
@@ -224,6 +236,7 @@ oc.selector('node/alpha').object().model.status.conditions.can_match(
     }
 )
+
 # can_match can also ensure nested objects and lists are present within a resource. Several
 # of these types of checks are already implemented in the openshift.status module.
 def is_route_admitted(apiobj):
     return apiobj.model.status.ingress.can_match({
@@ -241,8 +254,8 @@ def is_route_admitted(apiobj):
     })
 ```
 
-
 ### Making changes to APIObjects
+
 ```python
 # APIObject exposes simple interfaces to delete and patch the resource it represents.
 # But, more interestingly, you can make detailed changes to the model and apply those
@@ -251,6 +264,7 @@ def is_route_admitted(apiobj):
 project.model.metadata.labels['my_label'] = 'myvalue'
 project.apply()
+
 
 # If modifying the underlying API resources could be contentious, use the more robust
 # modify_and_apply method which can retry the operation multiple times -- refreshing
 # with the current object state between failures.
@@ -260,6 +274,7 @@ def make_model_change(apiobj):
     apiobj.model.data['somefile.yaml'] = 'wyxz'
     return True
+
 
 # modify_and_apply will call the function and attempt to apply its changes to the model
 # if it returns True. If the apply is rejected by the API, the function will pull
 # the latest object content, call make_model_change again, and try the apply again
@@ -288,18 +303,20 @@ def set_unmanaged_in_cvo(apiobj):
     apiobj.model.spec.overrides.append(desired_entry)
     return True
+
 
 result, changed = oc.selector('clusterversion.config.openshift.io/version').object().modify_and_apply(set_unmanaged_in_cvo)
 
 if changed:
     context.report_change('Instructed CVO to ignore openshift-samples operator')
 ```
 
-
 ### Running within a Pod
+
 It is simple to use the API within a Pod. The `oc` binary automatically detects it is running within a container
 and uses the Pod's serviceaccount token/cacert.
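+
+As a minimal sketch (assuming the Pod's serviceaccount has been granted permission to list pods in its own
+namespace), an in-pod script looks like any other use of the API -- no explicit login or kubeconfig handling is
+required:
+
+```python
+import openshift_client as oc
+
+# No oc.login() or kubeconfig setup is needed here; inside a container, `oc`
+# authenticates with the Pod's serviceaccount token and CA certificate.
+with oc.timeout(60):
+    print('Running as: {}'.format(oc.whoami()))
+    print('Pods in this namespace: {}'.format(oc.selector('pods').qnames()))
+```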
 
 ### Tracking oc invocations
+
 It is good practice to set up at least one tracking context within your application so that you will be able
 to easily analyze what `oc` invocations were made on your behalf and the result of those operations. *Note
 that details about all `oc` invocations performed within the context will
@@ -315,61 +332,63 @@ with oc.tracking() as tracker:
         print('Current user: {}'.format(oc.whoami()))
     except:
         print('Error acquiring current username')
-    
+
     # Print out details about the invocations made within this context.
     print(tracker.get_result())
 ```
 In this case, the tracking output would look something like:
+
 ```json
 {
-    "status": 0,
-    "operation": "tracking",
-    "actions": [
-        {
-            "status": 0,
-            "verb": "project",
-            "references": {},
-            "in": null,
-            "out": "aos-cd\n",
-            "err": "",
-            "cmd": [
-                "oc",
-                "project",
-                "-q"
-            ],
-            "elapsed_time": 0.15344810485839844,
-            "internal": false,
-            "timeout": false,
-            "last_attempt": true
-        },
-        {
-            "status": 0,
-            "verb": "whoami",
-            "references": {},
-            "in": null,
-            "out": "aos-ci-jenkins\n",
-            "err": "",
-            "cmd": [
-                "oc",
-                "whoami"
-            ],
-            "elapsed_time": 0.6328380107879639,
-            "internal": false,
-            "timeout": false,
-            "last_attempt": true
-        }
-    ]
+  "status": 0,
+  "operation": "tracking",
+  "actions": [
+    {
+      "status": 0,
+      "verb": "project",
+      "references": {},
+      "in": null,
+      "out": "aos-cd\n",
+      "err": "",
+      "cmd": [
+        "oc",
+        "project",
+        "-q"
+      ],
+      "elapsed_time": 0.15344810485839844,
+      "internal": false,
+      "timeout": false,
+      "last_attempt": true
+    },
+    {
+      "status": 0,
+      "verb": "whoami",
+      "references": {},
+      "in": null,
+      "out": "aos-ci-jenkins\n",
+      "err": "",
+      "cmd": [
+        "oc",
+        "whoami"
+      ],
+      "elapsed_time": 0.6328380107879639,
+      "internal": false,
+      "timeout": false,
+      "last_attempt": true
+    }
+  ]
 }
 ```
 
-Alternatively, you can record actions yourself by passing an action_handler to the tracking 
+Alternatively, you can record actions yourself by passing an action_handler to the tracking
 contextmanager. Your action handler will be invoked each time an `oc` invocation completes.
 
 ```python
 def print_action(action):
     print('Performed: {} - status={}'.format(action.cmd, action.status))
+
 
 with oc.tracking(action_handler=print_action):
     try:
         print('Current project: {}'.format(oc.get_project_name()))
@@ -380,14 +399,16 @@ with oc.tracking(action_handler=print_action):
 ```
 
 ### Time limits
+
 Have a script you want to ensure succeeds or fails within a specific period of time? Use
-a `timeout` context. Timeout contexts can be nested - if any timeout context expires, 
-the current oc invocation will be killed. 
+a `timeout` context. Timeout contexts can be nested - if any timeout context expires,
+the current oc invocation will be killed.
 
 ```python
 #!/usr/bin/python
 import openshift_client as oc
+
 
 def node_is_ready(node):
     ready = node.model.status.conditions.can_match({
         'type': 'Ready',
@@ -406,38 +427,42 @@ You will be able to see in `tracking` context results that a timeout occurred fo
 invocation. The `timeout` field will be set to `True`.
 
 ### Advanced contexts
-If you are unable to use a KUBECONFIG environment variable or need fine grained control over the 
-server/credentials you communicate with for each invocation, use openshift-client-python contexts. 
-Contexts can be nested and cause oc invocations within them to use the most recently established 
+
+If you are unable to use a KUBECONFIG environment variable or need fine-grained control over the
+server/credentials you communicate with for each invocation, use openshift-client-python contexts.
+Contexts can be nested and cause oc invocations within them to use the most recently established
 context information.
 
 ```python
 with oc.api_server('https:///....'):  # use the specified api server for nested oc invocations.
-    
+
     with oc.token('abc..'):  # --server=... --token=abc... will be included in inner oc invocations.
         print("Current project: " + oc.get_project_name())
-        
+
         with oc.token('def..'):  # --server=... --token=def... will be included in inner oc invocations.
            print("Current project: " + oc.get_project_name())
 ```
 
-You can control the loglevel specified for `oc` invocations. 
+You can control the loglevel specified for `oc` invocations.
+
 ```python
 with oc.loglevel(6):
-   # all oc invocations within this context will be invoked with --loglevel=6
+    # all oc invocations within this context will be invoked with --loglevel=6
    oc...
 ```
 
 You can ask `oc` to skip TLS verification if necessary.
+
 ```python
 with oc.tls_verify(enable=False):
-   # all oc invocations within this context will be invoked with --insecure-skip-tls-verify
+    # all oc invocations within this context will be invoked with --insecure-skip-tls-verify
    oc...
 ```
 
 ### Something missing?
-Most common API iterations have abstractions, but if there is no openshift-client-python API 
-exposing the `oc` function you want to run, you can always use `oc.invoke` to directly pass arguments to 
+
+Most common API interactions have abstractions, but if there is no openshift-client-python API
+exposing the `oc` function you want to run, you can always use `oc.invoke` to directly pass arguments to
 an `oc` invocation on your host.
 
 ```python
@@ -463,6 +488,7 @@ occur on the remote host.
 ### Gathering reports and logs with selectors
 
 Various objects within OpenShift have logs associated with them:
+
 - pods
 - deployments
 - daemonsets
@@ -481,6 +507,7 @@ with oc.project('openshift-monitoring'):
 ```
 
 The above example would output something like:
+
 ```
 Container: openshift-monitoring:pod/node-exporter-hw5r5(node-exporter)
 time="2018-10-22T21:07:36Z" level=info msg="Starting node_exporter (version=0.16.0, branch=, revision=)" source="node_exporter.go:82"
@@ -489,19 +516,21 @@ time="2018-10-22T21:07:36Z" level=info msg=" - arp" source="node_exporter.go:97"
 ...
 ```
-Note that these logs are held in memory. Use tail or other available method parameters to ensure 
+Note that these logs are held in memory. Use tail or other available method parameters to ensure
 predictable and efficient results.
 
 To simplify even further, you can ask the library to pretty-print the logs for you:
+
 ```python
 oc.selector(['daemonset', 'deployment']).print_logs()
 ```
 
-And to quickly pull together significant diagnostic data on selected objects, use `report()` or `print_report()`. 
+And to quickly pull together significant diagnostic data on selected objects, use `report()` or `print_report()`.
 A report includes the following information for each selected object, if available:
+
 - `object` - The current state of the object.
 - `describe` - The output of describe on the object.
-- `logs` - If applicable, a map of logs -- one of each container associated with the object. 
+- `logs` - If applicable, a map of logs -- one for each container associated with the object.
 
 ```python
 # Pretty-print a detailed set of data about all deploymentconfigs, builds, and configmaps in the
@@ -512,23 +541,26 @@ oc.selector(['dc', 'build', 'configmap']).print_report()
 ### Advanced verbs:
 
 Running oc exec on a pod.
+
 ```python
 result = oc.selector('pod/alertmanager-main-0').object().execute(['cat'],
                                                                   container_name='alertmanager',
                                                                   stdin='stdin for cat')
- print(result.out())
+print(result.out())
 ```
 
 Finding all pods running on a node:
+
 ```python
 with oc.client_host():
     for node_name in oc.selector('nodes').qnames():
         print('Pods running on node: {}'.format(node_name))
-        for pod_obj in oc.get_pods_by_node(node_name):
-        print('  {}'.format(pod_obj.fqname()))
+        for pod_obj in oc.get_pods_by_node(node_name):
+            print('  {}'.format(pod_obj.fqname()))
 ```
 
 Example output:
+
 ```
 ...
 Pods running on node: node/ip-172-31-18-183.ca-central-1.compute.internal
@@ -544,11 +576,14 @@ Pods running on node: node/ip-172-31-18-183.ca-central-1.compute.internal
 - [Some unit tests](examples/cluster_tests.py)
 
 ## Environment Variables
-To allow openshift-client-python applications to be portable between environments without needing to be modified, 
-you can specify many default contexts in the environment. 
+
+To allow openshift-client-python applications to be portable between environments without needing to be modified,
+you can specify many default contexts in the environment.
 
 ### Defaults when invoking `oc`
+
 Establishing explicit contexts within an application will override these environment defaults.
+
 - `OPENSHIFT_CLIENT_PYTHON_DEFAULT_OC_PATH` - default path to use when invoking `oc`
 - `OPENSHIFT_CLIENT_PYTHON_DEFAULT_CONFIG_PATH` - default `--kubeconfig` argument
 - `OPENSHIFT_CLIENT_PYTHON_DEFAULT_API_SERVER` - default `--server` argument
@@ -558,20 +593,22 @@ Establishing explicit contexts within an application will override these environ
 - `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SKIP_TLS_VERIFY` - default `--insecure-skip-tls-verify`
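+
+As a rough sketch (the paths below are placeholders, and this assumes the defaults are read from the process
+environment when `oc` invocations are constructed), the defaults can be seeded before the library is imported:
+
+```python
+import os
+
+# Hypothetical values -- substitute whatever is appropriate for your environment.
+# Setting them before importing openshift_client keeps the defaults in place
+# regardless of exactly when the library consults the environment.
+os.environ.setdefault('OPENSHIFT_CLIENT_PYTHON_DEFAULT_OC_PATH', '/usr/local/bin/oc')
+os.environ.setdefault('OPENSHIFT_CLIENT_PYTHON_DEFAULT_CONFIG_PATH', os.path.expanduser('~/.kube/config'))
+
+import openshift_client as oc
+
+print('Client version reported by the default oc: {}'.format(oc.get_client_version()))
+```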
 
 ### Master timeout
+
 Defines an implicit outer timeout(..) context for the entire application. This allows you to ensure that an
 application terminates within a reasonable time, even if the author of the application has not included explicit
 timeout contexts. Like any `timeout` context, this value is not overridden by subsequent `timeout` contexts within
 the application. It provides an upper bound for the entire application's oc interactions.
 
-- `OPENSHIFT_CLIENT_PYTHON_MASTER_TIMEOUT` 
+- `OPENSHIFT_CLIENT_PYTHON_MASTER_TIMEOUT`
 
 ### SSH Client Host
-In some cases, it is desirable to run an openshift-client-python application using a local `oc` binary and 
+
+In some cases, it is desirable to run an openshift-client-python application using a local `oc` binary and
 in other cases, the `oc` binary resides on a remote client. Encoding this decision in the application itself is unnecessary.
-Simply wrap you application in a `client_host` context without arguments. This will try to pull 
+Simply wrap your application in a `client_host` context without arguments. This will try to pull
 client host information from environment variables if they are present. If they are not present,
 the application will execute on the local host.
 
@@ -580,7 +617,7 @@ in the environment. Otherwise, `oc` interactions will be executed on the host ru
 ```python
 with oc.client_host():
     # if OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME is not defined in the environment, this is a no-op
-    print( 'Found nodes: {}'.format(oc.selector('nodes').qnames()) )
+    print('Found nodes: {}'.format(oc.selector('nodes').qnames()))
 ```
 
 - `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME` - The hostname on which the `oc` binary resides