diff --git a/admin_guide/building_dependency_trees.adoc b/admin_guide/building_dependency_trees.adoc index 35a9c961a0b1..51de29fbc936 100644 --- a/admin_guide/building_dependency_trees.adoc +++ b/admin_guide/building_dependency_trees.adoc @@ -15,7 +15,7 @@ toc::[] {product-title} uses xref:../dev_guide/builds/triggering_builds.adoc#image-change-triggers[image change triggers] in a `BuildConfig` to detect when an xref:../architecture/core_concepts/builds_and_image_streams.adoc#image-stream-tag[image -stream tag] has been updated. You can use the `oadm build-chain` command to +stream tag] has been updated. You can use the `oc adm build-chain` command to build a dependency tree that identifies which xref:../architecture/core_concepts/containers_and_images.adoc#docker-images[images] would be affected by updating an image in a specified @@ -47,13 +47,13 @@ The following table describes common `build-chain` usage and general syntax: |Build the dependency tree for the *latest* tag in ``. |---- -$ oadm build-chain +$ oc adm build-chain ---- |Build the dependency tree for the *v2* tag in DOT format, and visualize it using the DOT utility. |---- -$ oadm build-chain :v2 \ +$ oc adm build-chain :v2 \ -o dot \ \| dot -T svg -o deps.svg ---- @@ -61,7 +61,7 @@ $ oadm build-chain :v2 \ |Build the dependency tree across all projects for the specified image stream tag found the *test* project. |---- -$ oadm build-chain :v1 \ +$ oc adm build-chain :v1 \ -n test --all ---- |=== diff --git a/admin_guide/cluster_capacity.adoc b/admin_guide/cluster_capacity.adoc index f7474531ee78..eaa57dd1383a 100644 --- a/admin_guide/cluster_capacity.adoc +++ b/admin_guide/cluster_capacity.adoc @@ -138,7 +138,7 @@ $ oc create sa cluster-capacity-sa . Add the role to the service account: + ---- -$ oadm policy add-cluster-role-to-user cluster-capacity-role \ +$ oc adm policy add-cluster-role-to-user cluster-capacity-role \ system:serviceaccount:default:cluster-capacity-sa ---- diff --git a/admin_guide/manage_nodes.adoc b/admin_guide/manage_nodes.adoc index 7a6cb381edc1..45df6062edb1 100644 --- a/admin_guide/manage_nodes.adoc +++ b/admin_guide/manage_nodes.adoc @@ -181,14 +181,14 @@ $ oc label -h To list all or selected pods on one or more nodes: ---- -$ oadm manage-node \ +$ oc adm manage-node \ --list-pods [--pod-selector=] [-o json|yaml] ---- To list all or selected pods on selected nodes: ---- -$ oadm manage-node --selector= \ +$ oc adm manage-node --selector= \ --list-pods [--pod-selector=] [-o json|yaml] ---- @@ -203,7 +203,7 @@ scheduled on the node. Existing pods on the node are not affected. 
To mark a node or nodes as unschedulable: ---- -$ oadm manage-node --schedulable=false +$ oc adm manage-node --schedulable=false ---- For example: @@ -211,7 +211,7 @@ For example: ==== [options="nowrap"] ---- -$ oadm manage-node node1.example.com --schedulable=false +$ oc adm manage-node node1.example.com --schedulable=false NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled ---- @@ -220,7 +220,7 @@ node1.example.com kubernetes.io/hostname=node1.example.com Ready,Schedul To mark a currently unschedulable node or nodes as schedulable: ---- -$ oadm manage-node --schedulable +$ oc adm manage-node --schedulable ---- Alternatively, instead of specifying specific node names (e.g., ` @@ -245,21 +245,21 @@ To list pods that will be migrated without actually performing the evacuation, use the `--dry-run` option: ---- -$ oadm manage-node \ +$ oc adm manage-node \ --evacuate --dry-run [--pod-selector=] ---- To actually evacuate all or selected pods on one or more nodes: ---- -$ oadm manage-node \ +$ oc adm manage-node \ --evacuate [--pod-selector=] ---- You can force deletion of bare pods by using the `--force` option: ---- -$ oadm manage-node \ +$ oc adm manage-node \ --evacuate --force [--pod-selector=] ---- @@ -517,7 +517,7 @@ On the node where you want to restart Docker storage: . Run the following command to mark the node as unschedulable: + ---- -$ oadm manage-node --schedulable=false +$ oc adm manage-node --schedulable=false ---- . Run the following command to shut down Docker and the *atomic-openshift-node* service: @@ -568,7 +568,7 @@ $ systemctl start docker atomic-openshift-node . Run the following command to mark the node as schedulable: + ---- -$ oadm manage-node --schedulable=true +$ oc adm manage-node --schedulable=true ---- [[manage-node-change-node-traffic-interface]] diff --git a/admin_guide/manage_scc.adoc b/admin_guide/manage_scc.adoc index c7a4f0b2b3b7..fefc0f422103 100644 --- a/admin_guide/manage_scc.adoc +++ b/admin_guide/manage_scc.adoc @@ -339,7 +339,7 @@ It is recommended that xref:../architecture/additional_concepts/storage.adoc#architecture-additional-concepts-storage[persistent storage] using `*PersistentVolume*` and `*PersistentVolumeClaim*` objects be used for xref:../install_config/registry/index.adoc#install-config-registry-overview[registry deployments]. If -you are testing and would like to instead use the `oadm registry` command with +you are testing and would like to instead use the `oc adm registry` command with the `--mount-host` option, you must first create a new xref:service_accounts.adoc#admin-guide-service-accounts[service account] for the registry and add it to the *privileged* SCC. 
See the diff --git a/admin_guide/managing_networking.adoc b/admin_guide/managing_networking.adoc index 6b7674087ab1..4d52b1110dbc 100644 --- a/admin_guide/managing_networking.adoc +++ b/admin_guide/managing_networking.adoc @@ -40,7 +40,7 @@ To join projects to an existing project network: [source, bash] ---- -$ oadm pod-network join-projects --to= +$ oc adm pod-network join-projects --to= ---- In the above example, all the pods and services in `` and `` @@ -59,7 +59,7 @@ To isolate the project network in the cluster and vice versa, run: [source, bash] ---- -$ oadm pod-network isolate-projects +$ oc adm pod-network isolate-projects ---- In the above example, all of the pods and services in `` and @@ -76,7 +76,7 @@ To allow projects to access all pods and services in the cluster and vice versa: [source, bash] ---- -$ oadm pod-network make-projects-global +$ oc adm pod-network make-projects-global ---- In the above example, all the pods and services in `` and `` @@ -209,9 +209,9 @@ edit the ones you create in their project. There are also several other restrictions on where `EgressNetworkPolicy` can be created: * The `default` project (and any other project that has been made global via -`oadm pod-network make-projects-global`) cannot have egress policy. +`oc adm pod-network make-projects-global`) cannot have egress policy. -* If you merge two projects together (via `oadm pod-network join-projects`), +* If you merge two projects together (via `oc adm pod-network join-projects`), then you cannot use egress policy in _any_ of the joined projects. * No project may have more than one egress policy object. diff --git a/admin_guide/managing_pods.adoc b/admin_guide/managing_pods.adoc index ade13f7d0fab..4df6a4786af2 100644 --- a/admin_guide/managing_pods.adoc +++ b/admin_guide/managing_pods.adoc @@ -216,9 +216,9 @@ edit the ones you create in their project. There are also several other restrictions on where `EgressNetworkPolicy` can be created: . The `default` project (and any other project that has been made global via -`oadm pod-network make-projects-global`) cannot have egress policy. +`oc adm pod-network make-projects-global`) cannot have egress policy. -. If you merge two projects together (via `oadm pod-network join-projects`), +. If you merge two projects together (via `oc adm pod-network join-projects`), then you cannot use egress policy in _any_ of the joined projects. . No project may have more than one egress policy object. diff --git a/admin_guide/managing_projects.adoc b/admin_guide/managing_projects.adoc index d29327b04056..95224a376a0f 100644 --- a/admin_guide/managing_projects.adoc +++ b/admin_guide/managing_projects.adoc @@ -41,7 +41,7 @@ To create your own custom project template: . Start with the current default project template: + ---- -$ oadm create-bootstrap-project-template -o yaml > template.yaml +$ oc adm create-bootstrap-project-template -o yaml > template.yaml ---- . Use a text editor to modify the *_template.yaml_* file by adding objects or modifying existing objects. @@ -95,7 +95,7 @@ xref:../architecture/additional_concepts/authorization.adoc#roles[cluster role] from authenticated user groups will deny permissions for self-provisioning any new projects. 
---- -$ oadm policy remove-cluster-role-from-group self-provisioner system:authenticated system:authenticated:oauth +$ oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated system:authenticated:oauth ---- When disabling self-provisioning, set the `projectRequestMessage` parameter in the @@ -172,7 +172,7 @@ The following creates a new project named `myproject` and dictates that pods be deployed onto nodes labeled `user-node` and `east`: ---- -$ oadm new-project myproject \ +$ oc adm new-project myproject \ --node-selector='type=user-node,region=east' ---- @@ -181,14 +181,14 @@ all pods contained in the specified project. [NOTE] ==== -While the `new-project` subcommand is available for both `oadm` and `oc`, the +While the `new-project` subcommand is available for both `oc adm` and `oc`, the cluster administrator and developer commands respectively, creating a new -project with a node selector is only available with the `oadm` command. The +project with a node selector is only available with the `oc adm` command. The `new-project` subcommand is not available to project developers when self-provisioning projects. ==== -Using the `oadm new-project` command adds an `annotation` section to the +Using the `oc adm new-project` command adds an `annotation` section to the project. You can edit a project, and change the `openshift.io/node-selector` value to override the default: @@ -200,7 +200,7 @@ metadata: ... ---- -If `openshift.io/node-selector` is set to an empty string (`oadm new-project +If `openshift.io/node-selector` is set to an empty string (`oc adm new-project --node-selector=""`), the project will not have an adminstrator-set node selector, even if the cluster-wide default has been set. This means that, as a cluster administrator, you can set a default to restrict developer projects to a diff --git a/admin_guide/monitoring_images.adoc b/admin_guide/monitoring_images.adoc index 7c24fe898a2d..f34aa80cf4bd 100644 --- a/admin_guide/monitoring_images.adoc +++ b/admin_guide/monitoring_images.adoc @@ -30,7 +30,7 @@ To view the usage statistics: ==== ---- -$ oadm top images +$ oc adm top images NAME IMAGESTREAMTAG PARENTS USAGE METADATA STORAGE sha256:80c985739a78b openshift/python (3.5) yes 303.12MiB sha256:64461b5111fc7 openshift/ruby (2.2) yes 234.33MiB @@ -59,7 +59,7 @@ To view the usage statistics: ==== ---- -$ oadm top imagestreams +$ oc adm top imagestreams NAME STORAGE IMAGES LAYERS openshift/python 1.21GiB 4 36 openshift/ruby 717.76MiB 3 27 diff --git a/admin_guide/pruning_resources.adoc b/admin_guide/pruning_resources.adoc index 8c9ec1b527ca..27cf766dc693 100644 --- a/admin_guide/pruning_resources.adoc +++ b/admin_guide/pruning_resources.adoc @@ -28,7 +28,7 @@ still taking up disk space. The CLI groups prune operations under a common parent command. ---- -$ oadm prune +$ oc adm prune ---- This specifies: @@ -45,7 +45,7 @@ In order to prune deployments that are no longer required by the system due to age and status, administrators may run the following command: ---- -$ oadm prune deployments [] +$ oc adm prune deployments [] ---- .Prune Deployments CLI Configuration Options @@ -79,14 +79,14 @@ and hours (`h`). 
To see what a pruning operation would delete: ---- -$ oadm prune deployments --orphans --keep-complete=5 --keep-failed=1 \ +$ oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m ---- To actually perform the prune operation: ---- -$ oadm prune deployments --orphans --keep-complete=5 --keep-failed=1 \ +$ oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m --confirm ---- @@ -98,7 +98,7 @@ In order to prune builds that are no longer required by the system due to age and status, administrators may run the following command: ---- -$ oadm prune builds [] +$ oc adm prune builds [] ---- .Prune Builds CLI Configuration Options @@ -130,14 +130,14 @@ current time. (default `60m`) To see what a pruning operation would delete: ---- -$ oadm prune builds --orphans --keep-complete=5 --keep-failed=1 \ +$ oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m ---- To actually perform the prune operation: ---- -$ oadm prune builds --orphans --keep-complete=5 --keep-failed=1 \ +$ oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m --confirm ---- @@ -155,7 +155,7 @@ In order to prune images that are no longer required by the system due to age, status, or exceed limits, administrators may run the following command: ---- -$ oadm prune images [] +$ oc adm prune images [] ---- [NOTE] @@ -285,7 +285,7 @@ streams and pods) younger than sixty minutes: + ==== ---- -$ oadm prune images --keep-tag-revisions=3 --keep-younger-than=60m +$ oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m ---- ==== @@ -293,7 +293,7 @@ $ oadm prune images --keep-tag-revisions=3 --keep-younger-than=60m + ==== ---- -$ oadm prune images --prune-over-size-limit +$ oc adm prune images --prune-over-size-limit ---- ==== @@ -301,9 +301,9 @@ To actually perform the prune operation for the previously mentioned options accordingly: ---- -$ oadm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm +$ oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm -$ oadm prune images --prune-over-size-limit --confirm +$ oc adm prune images --prune-over-size-limit --confirm ---- [[pruning-images-secure-or-insecure]] @@ -369,7 +369,7 @@ When default options are used, the image will not ever be pruned because it occurs at position `0` in a history of `myapp:v2.1-may-2016` tag. For an image to be considered for pruning, the administrator must either: -. Specify `--keep-tag-revisions=0` with the `oadm prune images` command. +. Specify `--keep-tag-revisions=0` with the `oc adm prune images` command. + [CAUTION] ==== @@ -398,8 +398,8 @@ naming.] [[using-secure-connection-against-insecure-registry]] ==== Using a Secure Connection Against Insecure Registry -If you see a message similar to the following in the output of the `oadm prune -images`, then your registry is not secured and the `oadm prune images` client +If you see a message similar to the following in the output of the `oc adm prune +images`, then your registry is not secured and the `oc adm prune images` client will attempt to use secure connection: ---- @@ -415,9 +415,9 @@ recommended)*. 
[[using-insecure-connection-against-secured-registry]] ==== Using an Insecure Connection Against a Secured Registry -If you see one of the following errors in the output of the `oadm prune images` +If you see one of the following errors in the output of the `oc adm prune images` command, it means that your registry is secured using a certificate signed by a -certificate authority other than the one used by `oadm prune images` client for +certificate authority other than the one used by `oc adm prune images` client for connection verification. ---- @@ -544,7 +544,7 @@ $ service_account=$(oc get -n default \ .. Add the *system:image-pruner* cluster role to the service account: + ---- -$ oadm policy add-cluster-role-to-user \ +$ oc adm policy add-cluster-role-to-user \ system:image-pruner \ ${service_account} ---- diff --git a/admin_guide/router.adoc b/admin_guide/router.adoc index 01970f55a21a..46f525823851 100644 --- a/admin_guide/router.adoc +++ b/admin_guide/router.adoc @@ -32,14 +32,14 @@ specify *0* as the stats port number. ifdef::openshift-enterprise[] ==== ---- -$ oadm router hap --service-account=router --stats-port=0 +$ oc adm router hap --service-account=router --stats-port=0 ---- ==== endif::[] ifdef::openshift-origin[] ==== ---- -$ oadm router hap --service-account=router --stats-port=0 +$ oc adm router hap --service-account=router --stats-port=0 ---- ==== endif::[] diff --git a/admin_guide/scheduling/taints_tolerations.adoc b/admin_guide/scheduling/taints_tolerations.adoc index 76fcf96557c9..0af13a0360e6 100644 --- a/admin_guide/scheduling/taints_tolerations.adoc +++ b/admin_guide/scheduling/taints_tolerations.adoc @@ -98,9 +98,9 @@ For example: * The node has the following taints: + ---- -$ oadm taint nodes node1 key1=value1:NoSchedule -$ oadm taint nodes node1 key1=value1:NoExecute -$ oadm taint nodes node1 key2=value2:NoSchedule +$ oc adm taint nodes node1 key1=value1:NoSchedule +$ oc adm taint nodes node1 key1=value1:NoExecute +$ oc adm taint nodes node1 key2=value2:NoSchedule ---- * The pod has the following tolerations: @@ -125,16 +125,16 @@ one of the three that is not tolerated by the pod. [[admin-guide-taints-add]] == Adding a Taint to an Existing Node -You add a taint to a node using the `oadm taint` command with the parameters described in the xref:taint-components-table[Taint and toleration components] table: +You add a taint to a node using the `oc adm taint` command with the parameters described in the xref:taint-components-table[Taint and toleration components] table: ---- -$ oadm taint nodes =: +$ oc adm taint nodes =: ---- For example: ---- -$ oadm taint nodes node1 key1=value1:NoSchedule +$ oc adm taint nodes node1 key1=value1:NoSchedule ---- The example places a taint on `node1` that has key `key1`, value `value1`, and taint effect `NoSchedule`. @@ -169,7 +169,7 @@ tolerations: tolerationSeconds: 3600 ---- -Both of these tolerations match the xref:admin-guide-taints-add[taint created by the `oadm taint` command above]. A pod with either toleration would be able to schedule onto `node1`. +Both of these tolerations match the xref:admin-guide-taints-add[taint created by the `oc adm taint` command above]. A pod with either toleration would be able to schedule onto `node1`. [[admin-guide-taints-tolerationSeconds]] === Using Toleration Seconds to Delay Pod Evictions @@ -352,7 +352,7 @@ To specify dedicated nodes: For example: + ---- -$ oadm taint nodes node1 dedicated=groupName:NoSchedule +$ oc adm taint nodes node1 dedicated=groupName:NoSchedule ---- . 
Add a corresponding toleration to the pods by writing a custom xref:../../install_config/master_node_configuration.adoc#master-config-admission-control-config[admission controller]. @@ -371,7 +371,7 @@ To configure a node so that users can use only that node: For example: + ---- -$ oadm taint nodes node1 dedicated=groupName:NoSchedule +$ oc adm taint nodes node1 dedicated=groupName:NoSchedule ---- . Add a corresponding toleration to the pods by writing a custom xref:../../install_config/master_node_configuration.adoc#master-config-admission-control-config[admission controller]. @@ -391,8 +391,8 @@ To ensure pods are blocked from the specialized hardware: . Taint the nodes that have the specialized hardware using one of the following commands: + ---- -$ oadm taint nodes disktype=ssd:NoSchedule -$ oadm taint nodes disktype=ssd:PreferNoSchedule +$ oc adm taint nodes disktype=ssd:NoSchedule +$ oc adm taint nodes disktype=ssd:PreferNoSchedule ---- . Adding a corresponding toleration to pods that use the special hardware using an xref:../../install_config/master_node_configuration.adoc#master-config-admission-control-config[admission controller]. diff --git a/admin_guide/sdn_troubleshooting.adoc b/admin_guide/sdn_troubleshooting.adoc index 620ccc90e92f..92baace30129 100644 --- a/admin_guide/sdn_troubleshooting.adoc +++ b/admin_guide/sdn_troubleshooting.adoc @@ -581,7 +581,7 @@ As a cluster administrator, run the diagnostics tool to diagnose common network issues: ---- -# oadm diagnostics NetworkCheck +# oc adm diagnostics NetworkCheck ---- The diagnostics tool runs a series of checks for error conditions for the @@ -598,7 +598,7 @@ on the master (or from another machine with access to the master) to generate useful debugging information. However, this script is unsupported. ==== -By default, `oadm diagnostics NetworkCheck` logs errors into *_/tmp/openshift/_*. This can be configured with the `--network-logdir` option: +By default, `oc adm diagnostics NetworkCheck` logs errors into *_/tmp/openshift/_*. 
This can be configured with the `--network-logdir` option: ---- # oc adm diagnostics NetworkCheck --network-logdir= diff --git a/admin_guide/securing_builds.adoc b/admin_guide/securing_builds.adoc index bd1abca0ef7d..e0a6a7aac06a 100644 --- a/admin_guide/securing_builds.adoc +++ b/admin_guide/securing_builds.adoc @@ -69,10 +69,10 @@ xref:../architecture/additional_concepts/authorization.adoc#roles[*cluster-admin privileges and remove the corresponding role from the *system:authenticated* group: ---- -$ oadm policy remove-cluster-role-from-group system:build-strategy-custom system:authenticated -$ oadm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated -$ oadm policy remove-cluster-role-from-group system:build-strategy-source system:authenticated -$ oadm policy remove-cluster-role-from-group system:build-strategy-jenkinspipeline system:authenticated +$ oc adm policy remove-cluster-role-from-group system:build-strategy-custom system:authenticated +$ oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated +$ oc adm policy remove-cluster-role-from-group system:build-strategy-source system:authenticated +$ oc adm policy remove-cluster-role-from-group system:build-strategy-jenkinspipeline system:authenticated ---- ifdef::openshift-origin[] @@ -122,7 +122,7 @@ For example, to add the *system:build-strategy-docker* cluster role to the user + ==== ---- -$ oadm policy add-cluster-role-to-user system:build-strategy-docker devuser +$ oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser ---- ==== @@ -148,6 +148,6 @@ For example, to add the *system:build-strategy-docker* role within the project * + ==== ---- -$ oadm policy add-role-to-user system:build-strategy-docker devuser -n devproject +$ oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject ---- ==== diff --git a/admin_guide/tcp_ingress_external_ports.adoc b/admin_guide/tcp_ingress_external_ports.adoc index ad5e0a848b46..b17fb8e06271 100644 --- a/admin_guide/tcp_ingress_external_ports.adoc +++ b/admin_guide/tcp_ingress_external_ports.adoc @@ -64,7 +64,7 @@ pool in the master configuration file. In this case, {product-title} implements [CAUTION] ==== You must ensure that the IP address pool you assign terminates at one or more nodes in your cluster. You can use the existing -xref:../admin_guide/high_availability.adoc#configuring-ip-failover[`*oadm ipfailover*`] to ensure that the external IPs are highly available. +xref:../admin_guide/high_availability.adoc#configuring-ip-failover[`*oc adm ipfailover*`] to ensure that the external IPs are highly available. ==== For manually-configured external IPs, potential port clashes are handled on a first-come, first-served basis. If you request a port, it is only available if it has not yet been assigned for that IP address. 
For example: diff --git a/admin_solutions/user_role_mgmt.adoc b/admin_solutions/user_role_mgmt.adoc index 6bf8be0008ad..2158251488e0 100644 --- a/admin_solutions/user_role_mgmt.adoc +++ b/admin_solutions/user_role_mgmt.adoc @@ -287,7 +287,7 @@ For example, to bind a role to the `system:authenticated` group for all projects in the cluster: ---- -$ oadm policy add-cluster-role-to-group system:authenticated +$ oc adm policy add-cluster-role-to-group system:authenticated ---- Currently, by default the `system:authenticated` and `sytem:authenticated:oauth` @@ -389,7 +389,7 @@ For a complete list of all available roles: [source, bash] ---- -$ oadm policy +$ oc adm policy ---- The following section includes examples of some common operations related to @@ -403,7 +403,7 @@ To bind a role to a user for the current project: [source, bash] ---- -$ oadm policy add-role-to-user +$ oc adm policy add-role-to-user ---- You can specify a project with the `-n` flag. @@ -414,7 +414,7 @@ To remove a role from a user for the current project: [source, bash] ---- -$ oadm policy remove-role-from-user +$ oc adm policy remove-role-from-user ---- You can specify a project with the `-n` flag. @@ -427,7 +427,7 @@ to a user for all projects: [source, bash] ---- -$ oadm policy add-cluster-role-to-user +$ oc adm policy add-cluster-role-to-user ---- === Removing a Cluster Role from a User for All Projects @@ -438,7 +438,7 @@ from a user for all projects: [source, bash] ---- -$ oadm policy remove-cluster-role-from-user +$ oc adm policy remove-cluster-role-from-user ---- === Adding a Role to a Group @@ -447,7 +447,7 @@ To bind a role to a specified group in the current project: [source, bash] ---- -$ oadm policy add-role-to-group +$ oc adm policy add-role-to-group ---- You can specify a project with the `-n` flag. @@ -458,7 +458,7 @@ To remove a role from a specified group in the current project: [source, bash] ---- -$ oadm policy remove-role-from-group +$ oc adm policy remove-role-from-group ---- You can specify a project with the `-n` flag. @@ -469,7 +469,7 @@ To bind a role to a specified group for all projects in the cluster: [source, bash] ---- -$ oadm policy add-cluster-role-to-group +$ oc adm policy add-cluster-role-to-group ---- === Removing a Cluster Role from a Group for All Projects @@ -478,7 +478,7 @@ To remove a role from a specified group for all projects in the cluster: [source, bash] ---- -$ oadm policy remove-cluster-role-from-group +$ oc adm policy remove-cluster-role-from-group ---- [[role-binding-restriction]] @@ -576,9 +576,9 @@ that is not matched by some `RoleBindingRestriction` in the namespace: .Example of RoleBindingRestriction Enforcement [source, bash] ---- -$ oadm policy add-role-to-user view joe -n group1 +$ oc adm policy add-role-to-user view joe -n group1 Error from server: rolebindings "view" is forbidden: rolebindings to User "joe" are not allowed in project "group1" -$ oadm policy add-role-to-user view john jane -n group1 +$ oc adm policy add-role-to-user view john jane -n group1 $ oc get rolebindings/view -n group1 NAME ROLE USERS GROUPS SERVICE ACCOUNTS SUBJECTS view /view john, jane @@ -686,7 +686,7 @@ To create a basic administrator role within a project: [source, bash] ---- -$ oadm policy add-role-to-user admin -n +$ oc adm policy add-role-to-user admin -n ---- === Creating a Cluster Administrator @@ -705,5 +705,5 @@ if not used carefully. 
[source, bash] ---- -$ oadm policy add-cluster-role-to-user cluster-admin +$ oc adm policy add-cluster-role-to-user cluster-admin ---- diff --git a/architecture/additional_concepts/authentication.adoc b/architecture/additional_concepts/authentication.adoc index 126ef3fee289..814b4b84294d 100644 --- a/architecture/additional_concepts/authentication.adoc +++ b/architecture/additional_concepts/authentication.adoc @@ -117,7 +117,7 @@ impersonate *system:admin*, which in turn has cluster administrator permissions. This grants some protection against typos (but not security) for someone administering the cluster. For example, `oc delete nodes --all` would be forbidden, but `oc delete nodes --all --as=system:admin` would be allowed. You -can add a user to that group using `oadm policy add-cluster-role-to-user sudoer +can add a user to that group using `oc adm policy add-cluster-role-to-user sudoer `. [[oauth]] diff --git a/architecture/topics/f5_big_ip.adoc b/architecture/topics/f5_big_ip.adoc index 7b66df399b6e..28056c9bbd42 100644 --- a/architecture/topics/f5_big_ip.adoc +++ b/architecture/topics/f5_big_ip.adoc @@ -207,7 +207,7 @@ access to the `nodes` resource in the cluster, which you can accomplish by giving the service account appropriate privileges. Use the following command: ---- -$ oadm policy add-cluster-role-to-user system:sdn-reader system:serviceaccount:default:router +$ oc adm policy add-cluster-role-to-user system:sdn-reader system:serviceaccount:default:router ---- [discrete] diff --git a/dev_guide/builds/advanced_build_operations.adoc b/dev_guide/builds/advanced_build_operations.adoc index cdd9bf9a34c8..485b62a119d8 100644 --- a/dev_guide/builds/advanced_build_operations.adoc +++ b/dev_guide/builds/advanced_build_operations.adoc @@ -298,6 +298,6 @@ Builds are sorted by their creation timestamp with the oldest builds being prune ifdef::openshift-enterprise,openshift-origin[] [NOTE] ==== -Administrators can manually prune builds using the xref:../../admin_guide/pruning_resources.adoc#pruning-builds['oadm' object pruning command]. +Administrators can manually prune builds using the xref:../../admin_guide/pruning_resources.adoc#pruning-builds['oc adm' object pruning command]. ==== endif::[] diff --git a/dev_guide/expose_service/expose_internal_ip_service.adoc b/dev_guide/expose_service/expose_internal_ip_service.adoc index 77808ee46b5b..921193ebf2d5 100644 --- a/dev_guide/expose_service/expose_internal_ip_service.adoc +++ b/dev_guide/expose_service/expose_internal_ip_service.adoc @@ -64,7 +64,7 @@ request to reach the IP address. * Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: + ---- -oadm policy add-cluster-role-to-user cluster-admin username +oc adm policy add-cluster-role-to-user cluster-admin username ---- * Have an {product-title} cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. @@ -480,19 +480,19 @@ To configure IP failover: . On the master, make sure the `ipfailover` service account has sufficient security privileges: + ---- -oadm policy add-scc-to-user privileged -z ipfailover +oc adm policy add-scc-to-user privileged -z ipfailover ---- . 
Run the following command to create the IP failover: + ---- -oadm ipfailover --virtual-ips= --watch-port= --replicas= --create +oc adm ipfailover --virtual-ips= --watch-port= --replicas= --create ---- + For example: + ---- -oadm ipfailover --virtual-ips="172.30.233.169" --watch-port=32315 --replicas=4 --create +oc adm ipfailover --virtual-ips="172.30.233.169" --watch-port=32315 --replicas=4 --create --> Creating IP failover ipfailover ... serviceaccount "ipfailover" created deploymentconfig "ipfailover" created diff --git a/getting_started/administrators.adoc b/getting_started/administrators.adoc index e5f4c0f08ddb..aca1109db5fa 100644 --- a/getting_started/administrators.adoc +++ b/getting_started/administrators.adoc @@ -271,7 +271,7 @@ $ oc project default . Set up an integrated Docker registry for the {product-title} cluster: + ---- -$ oadm registry +$ oc adm registry ---- + It will take a few minutes for the registry image to download and start - use diff --git a/getting_started/configure_openshift.adoc b/getting_started/configure_openshift.adoc index 3061f37a9658..4e887a9ddc0c 100644 --- a/getting_started/configure_openshift.adoc +++ b/getting_started/configure_openshift.adoc @@ -82,7 +82,7 @@ systemctl restart atomic-openshift-master everything. + ---- -oadm policy add-cluster-role-to-user cluster-admin admin +oc adm policy add-cluster-role-to-user cluster-admin admin ---- . You can use this username/password combination to log in via the web @@ -124,7 +124,7 @@ oc delete all -l router=router . Create a new default router. + ---- -$ oadm router --replicas=1 --service-account=router +$ oc adm router --replicas=1 --service-account=router ---- The OpenShift documentation contains detailed information on @@ -150,7 +150,7 @@ oc delete all -l docker-registry=default *registry* service account. + ---- -$ oadm registry +$ oc adm registry ---- [[create-persistent-storage-for-registry]] diff --git a/getting_started/dedicated_administrators.adoc b/getting_started/dedicated_administrators.adoc index 75309f9b337a..9bd809a00692 100644 --- a/getting_started/dedicated_administrators.adoc +++ b/getting_started/dedicated_administrators.adoc @@ -17,7 +17,7 @@ xref:../cli_reference/basic_cli_operations.adoc#cli-reference-basic-cli-operatio command) allows you increased visibility and management capabilities over objects across projects, while the xref:../cli_reference/admin_cli_operations.adoc#cli-reference-admin-cli-operations[administrator CLI] (commands -under the `oc adm` command, and formerly the `oadm` command) open up additional +under the `oc adm` command) open up additional operations. [NOTE] diff --git a/getting_started/install_openshift.adoc b/getting_started/install_openshift.adoc index a01374bf3755..be06e115d28b 100644 --- a/getting_started/install_openshift.adoc +++ b/getting_started/install_openshift.adoc @@ -194,9 +194,9 @@ basic authentication, user access, and routes. {product-title} provides two command line utilities to interact with it. * `oc`: for normal project and application management -* `oadm`: for administrative tasks +* `oc adm`: for administrative tasks -Use `oc --help` and `oadm --help` to view all available options. +Use `oc --help` and `oc adm --help` to view all available options. In addition, you can use the web console to manage projects and applications. The web console is available at `https://:8443/console`. 
In the diff --git a/install_config/advanced_ldap_configuration/sssd_for_ldap_failover.adoc b/install_config/advanced_ldap_configuration/sssd_for_ldap_failover.adoc index c0684e134c69..45770e29deb4 100644 --- a/install_config/advanced_ldap_configuration/sssd_for_ldap_failover.adoc +++ b/install_config/advanced_ldap_configuration/sssd_for_ldap_failover.adoc @@ -67,14 +67,6 @@ in this topic they are kept separate. [[sssd-phase-1-certificate-generation]] == Phase 1: Certificate Generation -[NOTE] -==== -This phase generates certificate files that are valid for two years (or five -years for the certification authority (CA) certificate). This can be altered -with the `--expire-days` and `--signer-expire-days` options, but for security -reasons, it is recommended to not make them greater than these values. -==== - . To ensure that communication between the authenticating proxy and {product-title} is trustworthy, create a set of Transport Layer Security (TLS) certificates to use during the other phases of this setup. In the @@ -95,7 +87,7 @@ this signing certificate to generate keys to use on the authenticating proxy. ==== ---- # mkdir -p /etc/origin/proxy/ -# oadm ca create-server-cert \ +# oc adm ca create-server-cert \ --cert='/etc/origin/proxy/proxy.example.com.crt' \ --key='/etc/origin/proxy/proxy.example.com.key' \ --hostnames=proxy.example.com \ @@ -115,7 +107,7 @@ proxy are listed. Otherwise, the HTTPS connection will fail. + ==== ---- -# oadm ca create-signer-cert \ +# oc adm ca create-signer-cert \ --cert='/etc/origin/proxy/proxyca.crt' \ --key='/etc/origin/proxy/proxyca.key' \ --name='openshift-proxy-signer@UNIQUESTRING' \ <1> @@ -129,7 +121,7 @@ to prove its identity to {product-title}. + ==== ---- -# oadm create-api-client-config \ +# oc adm create-api-client-config \ --certificate-authority='/etc/origin/proxy/proxyca.crt' \ --client-dir='/etc/origin/proxy' \ --signer-cert='/etc/origin/proxy/proxyca.crt' \ diff --git a/install_config/aggregate_logging.adoc b/install_config/aggregate_logging.adoc index 0d891bd7470b..44efc71a87cd 100644 --- a/install_config/aggregate_logging.adoc +++ b/install_config/aggregate_logging.adoc @@ -77,7 +77,7 @@ to specify a node-selector on it. Otherwise, the `openshift-logging` role will create a project. + ---- -$ oadm new-project logging --node-selector="" +$ oc adm new-project logging --node-selector="" $ oc project logging ---- + diff --git a/install_config/aggregate_logging_sizing.adoc b/install_config/aggregate_logging_sizing.adoc index fa75504935af..8af41e22eae2 100644 --- a/install_config/aggregate_logging_sizing.adoc +++ b/install_config/aggregate_logging_sizing.adoc @@ -40,7 +40,7 @@ xref:../admin_guide/managing_projects.adoc#using-node-selectors[node selector] should be used. ---- -$ oadm new-project logging --node-selector="" +$ oc adm new-project logging --node-selector="" ---- In conjunction with node labeling, which is done later, this controls pod diff --git a/install_config/cluster_metrics.adoc b/install_config/cluster_metrics.adoc index 82f2ee452b38..44b63dfcc80d 100644 --- a/install_config/cluster_metrics.adoc +++ b/install_config/cluster_metrics.adoc @@ -637,7 +637,7 @@ The are some diagnostics for metrics to assist in evaluating the state of the metrics stack. 
To execute diagnostics for metrics: ---- -$ oadm diagnostics MetricsApiProxy +$ oc adm diagnostics MetricsApiProxy ---- [[install-setting-the-metrics-public-url]] @@ -928,7 +928,7 @@ $ ansible-playbook -vvv -i ${INVENTORY_FILE} playbooks/byo/openshift-cluster/ope Label the node on which you want to deploy Prometheus: ---- -# oadm label node/$NODE ${KEY}=${VALUE} +# oc adm label node/$NODE ${KEY}=${VALUE} ---- Deploy Prometheus with Ansible and container resources: diff --git a/install_config/configuring_authentication.adoc b/install_config/configuring_authentication.adoc index 845b433cce46..f3237ac43681 100644 --- a/install_config/configuring_authentication.adoc +++ b/install_config/configuring_authentication.adoc @@ -852,7 +852,7 @@ should be used as the file name for `*clientCA*` in the xref:requestheader-master-ca-config[master's identity provider configuration]. ---- -# oadm ca create-signer-cert \ +# oc adm ca create-signer-cert \ --cert='/etc/origin/master/proxyca.crt' \ --key='/etc/origin/master/proxyca.key' \ --name='openshift-proxy-signer@1432232228' \ @@ -860,16 +860,16 @@ xref:requestheader-master-ca-config[master's identity provider configuration]. ---- [NOTE] -The `oadm ca create-signer-cert` command generates a certificate that is valid +The `oc adm ca create-signer-cert` command generates a certificate that is valid for five years. This can be altered with the `--expire-days` option, but for security reasons, it is recommended to not make it greater than this value. Generate a client certificate for the proxy. This can be done using any x509 -certificate tooling. For convenience, the `oadm` CLI can be used: +certificate tooling. For convenience, the `oc adm` CLI can be used: ---- -# oadm create-api-client-config \ +# oc adm create-api-client-config \ --certificate-authority='/etc/origin/master/proxyca.crt' \ --client-dir='/etc/origin/master/proxy' \ --signer-cert='/etc/origin/master/proxyca.crt' \ @@ -894,10 +894,10 @@ instead of using the default master certificate as shown above. The value for `*masterPublicURL*` in the *_/etc/origin/master/master-config.yaml_* file must be included in the `X509v3 Subject Alternative Name` in the certificate that is specified for `*SSLCertificateFile*`. If a new certificate needs to be -created, the `oadm ca create-server-cert` command can be used. +created, the `oc adm ca create-server-cert` command can be used. [NOTE] -The `oadm create-api-client-config` command generates a certificate that is +The `oc adm create-api-client-config` command generates a certificate that is valid for two years. This can be altered with the `--expire-days` option, but for security reasons, it is recommended to not make it greater than this value. diff --git a/install_config/install/disconnected_install.adoc b/install_config/install/disconnected_install.adoc index a05fce024016..b5e2b32a3cdd 100644 --- a/install_config/install/disconnected_install.adoc +++ b/install_config/install/disconnected_install.adoc @@ -684,7 +684,7 @@ administrative user: + [source, bash] ---- -$ oadm policy add-role-to-user system:image-builder +$ oc adm policy add-role-to-user system:image-builder ---- . Next, add the administrative role to the user in the *openshift* project. 
This @@ -693,7 +693,7 @@ case, push the container images: + [source, bash] ---- -$ oadm policy add-role-to-user admin -n openshift +$ oc adm policy add-role-to-user admin -n openshift ---- [[disconnected-editing-the-image-stream-definitions]] diff --git a/install_config/install/docker_registry.adoc b/install_config/install/docker_registry.adoc index 81fca416419d..31dd70d88892 100644 --- a/install_config/install/docker_registry.adoc +++ b/install_config/install/docker_registry.adoc @@ -35,12 +35,12 @@ information. endif::[] ifdef::openshift-origin[] -To deploy the integrated Docker registry, use the `oadm registry` command from +To deploy the integrated Docker registry, use the `oc adm registry` command from the *_admin.kubeconfig_* file's location, as a user with cluster administrator privileges: ---- -$ oadm registry --config=admin.kubeconfig \//<1> +$ oc adm registry --config=admin.kubeconfig \//<1> --service-account=registry <2> ---- endif::[] @@ -60,11 +60,11 @@ Or, Or, - You deleted the registry and need to deploy it again. -To deploy the integrated Docker registry, use the `oadm registry` command as a +To deploy the integrated Docker registry, use the `oc adm registry` command as a user with cluster administrator privileges. For example: ---- -$ oadm registry --config=/etc/origin/master/admin.kubeconfig \//<1> +$ oc adm registry --config=/etc/origin/master/admin.kubeconfig \//<1> --service-account=registry \//<2> --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \//<3> --selector='region=infra' <4> @@ -97,13 +97,13 @@ similar to *docker-registry-1-cpty9*. To see a full list of options that you can specify when creating the registry: ---- -$ oadm registry --help +$ oc adm registry --help ---- endif::[] === Deploying the Registry as a DaemonSet -Use the `oadm registry` command to deploy the registry as a DaemonSet with the +Use the `oc adm registry` command to deploy the registry as a DaemonSet with the `--daemonset` option. Daemonsets ensure that when nodes are created, they contain copies of a @@ -280,21 +280,21 @@ $ oc create serviceaccount registry -n default to the list of users allowed to run privileged containers: + ---- -$ oadm policy add-scc-to-user privileged system:serviceaccount:default:registry +$ oc adm policy add-scc-to-user privileged system:serviceaccount:default:registry ---- . Create the registry and specify that it use the new *registry* service account: + ---- -$ oadm registry --service-account=registry \ +$ oc adm registry --service-account=registry \ --config=admin.kubeconfig \ --mount-host= ---- endif::[] ifdef::openshift-enterprise[] ---- -$ oadm registry --service-account=registry \ +$ oc adm registry --service-account=registry \ --config=/etc/origin/master/admin.kubeconfig \ --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \ --mount-host= @@ -347,7 +347,7 @@ with, for example, those used in step 3 of the instructions in the xref:registry-non-production-use[Non-Production Use] section: + ---- -$ oadm registry -o yaml > registry.yaml +$ oc adm registry -o yaml > registry.yaml ---- . Edit *_registry.yaml_*, find the `Service` there, @@ -522,21 +522,21 @@ using the following command: - The user must have the *system:registry* role. To add this role: + ---- -# oadm policy add-role-to-user system:registry +# oc adm policy add-role-to-user system:registry ---- - Have the *admin* role for the project associated with the Docker operation. 
For example, if accessing images in the global *openshift* project: + ---- - $ oadm policy add-role-to-user admin -n openshift + $ oc adm policy add-role-to-user admin -n openshift ---- - For writing or pushing images, for example when using the `docker push` command, the user must have the *system:image-builder* role. To add this role: + ---- -$ oadm policy add-role-to-user system:image-builder +$ oc adm policy add-role-to-user system:image-builder ---- For more information on user permissions, see @@ -679,7 +679,7 @@ create a server certificate for the registry service IP and the *docker-registry.default.svc.cluster.local* host name: + ---- -$ oadm ca create-server-cert \ +$ oc adm ca create-server-cert \ --signer-cert=/etc/origin/master/ca.crt \ --signer-key=/etc/origin/master/ca.key \ --signer-serial=/etc/origin/master/ca.serial.txt \ diff --git a/install_config/install/rpm_vs_containerized.adoc b/install_config/install/rpm_vs_containerized.adoc index 2a56968ce635..d296fa97d37f 100644 --- a/install_config/install/rpm_vs_containerized.adoc +++ b/install_config/install/rpm_vs_containerized.adoc @@ -130,7 +130,7 @@ also provided to ease administrative tasks: |*_/usr/local/bin/oc_* |Developer CLI -|*_/usr/local/bin/oadm_* +|*_/usr/local/bin/oc adm_* |Administrative CLI |*_/usr/local/bin/kubectl_* @@ -146,7 +146,7 @@ The wrapper scripts mount a limited subset of paths: - *_/etc/origin/_* - *_/tmp/_* -Be mindful of this when passing in files to be processed by the `oc` or `oadm` +Be mindful of this when passing in files to be processed by the `oc` or `oc adm` commands. You may find it easier to redirect the input, for example: ---- diff --git a/install_config/master_node_configuration.adoc b/install_config/master_node_configuration.adoc index ca05a181bc08..aee630778fae 100644 --- a/install_config/master_node_configuration.adoc +++ b/install_config/master_node_configuration.adoc @@ -1095,7 +1095,7 @@ To create the encrypted file and key file for the above example: [options="nowrap"] ---- -$ oadm ca encrypt --genkey=bindPassword.key --out=bindPassword.encrypted +$ oc adm ca encrypt --genkey=bindPassword.key --out=bindPassword.encrypted > Data to encrypt: B1ndPass0rd! ---- @@ -1114,7 +1114,7 @@ new configuration files. For master host configuration files, use the `openshift start` command with the `--write-config` option to write the configuration files. For node hosts, use -the `oadm create-node-config` command to write the configuration files. +the `oc adm create-node-config` command to write the configuration files. The following commands write the relevant launch configuration file(s), certificate files, and any other necessary files to the specified @@ -1146,7 +1146,7 @@ related files in the specified directory: [options="nowrap"] ---- -$ oadm create-node-config \ +$ oc adm create-node-config \ --node-dir=/openshift.local.config/node- \ --node= \ --hostnames=, \ diff --git a/install_config/registry/accessing_registry.adoc b/install_config/registry/accessing_registry.adoc index d2e4a4998841..9d96a401b91a 100644 --- a/install_config/registry/accessing_registry.adoc +++ b/install_config/registry/accessing_registry.adoc @@ -149,21 +149,21 @@ using the following command: - The user must have the *system:registry* role. To add this role: + ---- -# oadm policy add-role-to-user system:registry +# oc adm policy add-role-to-user system:registry ---- - Have the *admin* role for the project associated with the Docker operation. 
For example, if accessing images in the global *openshift* project: + ---- - $ oadm policy add-role-to-user admin -n openshift + $ oc adm policy add-role-to-user admin -n openshift ---- - For writing or pushing images, for example when using the `docker push` command, the user must have the *system:image-builder* role. To add this role: + ---- -$ oadm policy add-role-to-user system:image-builder +$ oc adm policy add-role-to-user system:image-builder ---- For more information on user permissions, see diff --git a/install_config/registry/deploy_registry_existing_clusters.adoc b/install_config/registry/deploy_registry_existing_clusters.adoc index 3341ef00ce01..f8ca4ba19b0a 100644 --- a/install_config/registry/deploy_registry_existing_clusters.adoc +++ b/install_config/registry/deploy_registry_existing_clusters.adoc @@ -39,21 +39,21 @@ information. endif::[] ifdef::openshift-origin[] -To deploy the integrated Docker registry, use the `oadm registry` command from +To deploy the integrated Docker registry, use the `oc adm registry` command from the *_admin.kubeconfig_* file's location, as a user with cluster administrator privileges: ---- -$ oadm registry --config=admin.kubeconfig \//<1> +$ oc adm registry --config=admin.kubeconfig \//<1> --service-account=registry <2> ---- endif::[] ifdef::openshift-enterprise[] -To deploy the integrated Docker registry, use the `oadm registry` command as a +To deploy the integrated Docker registry, use the `oc adm registry` command as a user with cluster administrator privileges. For example: ---- -$ oadm registry --config=/etc/origin/master/admin.kubeconfig \//<1> +$ oc adm registry --config=/etc/origin/master/admin.kubeconfig \//<1> --service-account=registry \//<2> --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' <3> ---- @@ -77,14 +77,14 @@ similar to *docker-registry-1-cpty9*. To see a full list of options that you can specify when creating the registry: ---- -$ oadm registry --help +$ oc adm registry --help ---- endif::[] [[registry-daemonset]] == Deploying the Registry as a DaemonSet -Use the `oadm registry` command to deploy the registry as a `DaemonSet` with the +Use the `oc adm registry` command to deploy the registry as a `DaemonSet` with the `--daemonset` option. Daemonsets ensure that when nodes are created, they contain copies of a @@ -255,14 +255,14 @@ $ oc adm policy add-scc-to-user privileged system:serviceaccount:default:registr account: + ---- -$ oadm registry --service-account=registry \ +$ oc adm registry --service-account=registry \ --config=admin.kubeconfig \ --mount-host= ---- endif::[] ifdef::openshift-enterprise[] ---- -$ oadm registry --service-account=registry \ +$ oc adm registry --service-account=registry \ --config=/etc/origin/master/admin.kubeconfig \ --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \ --mount-host= diff --git a/install_config/registry/extended_registry_configuration.adoc b/install_config/registry/extended_registry_configuration.adoc index 30112da2c04d..c1705dcc371c 100644 --- a/install_config/registry/extended_registry_configuration.adoc +++ b/install_config/registry/extended_registry_configuration.adoc @@ -46,7 +46,7 @@ with, for example, those used in step 3 of the instructions in the xref:deploy_registry_existing_clusters.adoc#registry-non-production-use[Non-Production Use] section: + ---- -$ oadm registry -o yaml > registry.yaml +$ oc adm registry -o yaml > registry.yaml ---- . 
Edit *_registry.yaml_*, find the `Service` there, diff --git a/install_config/registry/securing_and_exposing_registry.adoc b/install_config/registry/securing_and_exposing_registry.adoc index 2a7bed883096..429810c744ef 100644 --- a/install_config/registry/securing_and_exposing_registry.adoc +++ b/install_config/registry/securing_and_exposing_registry.adoc @@ -53,7 +53,7 @@ create a server certificate for the registry service IP and the *docker-registry.default.svc.cluster.local* host name: + ---- -$ oadm ca create-server-cert \ +$ oc adm ca create-server-cert \ --signer-cert=/etc/origin/master/ca.crt \ --signer-key=/etc/origin/master/ca.key \ --signer-serial=/etc/origin/master/ca.serial.txt \ @@ -76,7 +76,7 @@ default certificate so that the route is externally accessible. + [NOTE] ==== -The `oadm ca create-server-cert` command generates a certificate that is valid +The `oc adm ca create-server-cert` command generates a certificate that is valid for two years. This can be altered with the `--expire-days` option, but for security reasons, it is recommended to not make it greater than this value. ==== diff --git a/install_config/router/customized_haproxy_router.adoc b/install_config/router/customized_haproxy_router.adoc index d94ad8f37da3..ec5fb095a3e7 100644 --- a/install_config/router/customized_haproxy_router.adoc +++ b/install_config/router/customized_haproxy_router.adoc @@ -668,7 +668,7 @@ login` to the repository first. To use the new router, edit the router deployment configuration either by changing the *image:* string or by adding the `--images=/:` -flag to the `oadm router` command. +flag to the `oc adm router` command. When debugging the changes, it is helpful to set `imagePullPolicy: Always` in the deployment configuration to force an image pull on each pod creation. When diff --git a/install_config/router/f5_router.adoc b/install_config/router/f5_router.adoc index 3c27ab68c8e2..fb4047a85de7 100644 --- a/install_config/router/f5_router.adoc +++ b/install_config/router/f5_router.adoc @@ -115,7 +115,7 @@ $ oc adm policy add-scc-to-user privileged -z router ---- ==== -Deploy the F5 router with the `oadm router` command, but provide additional +Deploy the F5 router with the `oc adm router` command, but provide additional flags (or environment variables) specifying the following parameters for the *F5 BIG-IP®* host: @@ -164,7 +164,7 @@ For example: ifdef::openshift-enterprise[] ==== ---- -$ oadm router \ +$ oc adm router \ --type=f5-router \ --external-host=10.0.0.2 \ --external-host-username=admin \ @@ -180,7 +180,7 @@ endif::[] ifdef::openshift-origin[] ==== ---- -$ oadm router \ +$ oc adm router \ --type=f5-router \ --external-host=10.0.0.2 \ --external-host-username=admin \ @@ -194,7 +194,7 @@ $ oadm router \ ==== endif::[] -As with the HAProxy router, the `oadm router` command creates the service and +As with the HAProxy router, the `oc adm router` command creates the service and deployment configuration objects, and thus the replication controllers and pod(s) in which the F5 router itself runs. The replication controller restarts the F5 router in case of crashes. Because the F5 router is watching routes, @@ -231,7 +231,7 @@ custom partition. . Deploy the F5 router using the `--external-host-partition-path` flag to specify a partition path: + ---- -$ oadm router --external-host-partition-path=/OpenShift/zone1 ... +$ oc adm router --external-host-partition-path=/OpenShift/zone1 ... ---- @@ -325,11 +325,11 @@ use the two new additional options for VXLAN native integration. 
+ ---- $ # Add policy to allow router to access nodes using the sdn-reader role -$ oadm policy add-cluster-role-to-user system:sdn-reader system:serviceaccount:default:router +$ oc adm policy add-cluster-role-to-user system:sdn-reader system:serviceaccount:default:router $ # Launch the router pod with vxlan-gw and F5's internal IP as extra arguments $ #--external-host-internal-ip=10.3.89.213 $ #--external-host-vxlan-gw=10.131.0.5/14 -$ oadm router \ +$ oc adm router \ --type=f5-router \ --external-host=10.3.89.90 \ --external-host-username=admin \ diff --git a/install_config/router/index.adoc b/install_config/router/index.adoc index 5cb81422716a..9779d19cab37 100644 --- a/install_config/router/index.adoc +++ b/install_config/router/index.adoc @@ -112,7 +112,7 @@ xref:../../install_config/router/default_haproxy_router.adoc#using-router-shards the service account for the router must have `cluster-reader` permission. ---- -$ oadm policy add-cluster-role-to-user \ +$ oc adm policy add-cluster-role-to-user \ cluster-reader \ system:serviceaccount:default:router ---- diff --git a/install_config/routing_from_edge_lb.adoc b/install_config/routing_from_edge_lb.adoc index 029292ebff86..c026cea0b81e 100644 --- a/install_config/routing_from_edge_lb.adoc +++ b/install_config/routing_from_edge_lb.adoc @@ -38,7 +38,7 @@ node] so that no pods end up on the load balancer itself: [options="nowrap"] ---- -$ oadm manage-node --schedulable=false +$ oc adm manage-node --schedulable=false ---- If the load balancer comes packaged as a container, then it is even easier to @@ -206,7 +206,7 @@ node. ==== [options="nowrap"] ---- -$ oadm manage-node --schedulable=false +$ oc adm manage-node --schedulable=false ---- ==== @@ -270,7 +270,7 @@ the *f5rampnode* label you set earlier: ---- # RAMP_IP=10.20.30.4 # IFNAME=eth0 <1> -# oadm ipfailover \ +# oc adm ipfailover \ --virtual-ips=$RAMP_IP \ --interface=$IFNAME \ --watch-port=0 \ diff --git a/install_config/syncing_groups_with_ldap.adoc b/install_config/syncing_groups_with_ldap.adoc index 6f7034edca51..5d1739ebf051 100644 --- a/install_config/syncing_groups_with_ldap.adoc +++ b/install_config/syncing_groups_with_ldap.adoc @@ -168,14 +168,14 @@ the `--confirm` flag on the `sync-groups` command in order to make changes to To sync all groups from the LDAP server with {product-title}: ---- -$ oadm groups sync --sync-config=config.yaml --confirm +$ oc adm groups sync --sync-config=config.yaml --confirm ---- To sync all Groups already in {product-title} that correspond to groups in the LDAP server specified in the configuration file: ---- -$ oadm groups sync --type=openshift --sync-config=config.yaml --confirm +$ oc adm groups sync --type=openshift --sync-config=config.yaml --confirm ---- To sync a subset of LDAP groups with {product-title}, you can use whitelist files, @@ -190,21 +190,21 @@ applies to groups found on LDAP servers, as well as Groups already present in ==== ---- -$ oadm groups sync --whitelist= \ +$ oc adm groups sync --whitelist= \ --sync-config=config.yaml \ --confirm -$ oadm groups sync --blacklist= \ +$ oc adm groups sync --blacklist= \ --sync-config=config.yaml \ --confirm -$ oadm groups sync \ +$ oc adm groups sync \ --sync-config=config.yaml \ --confirm -$ oadm groups sync \ +$ oc adm groups sync \ --whitelist= \ --blacklist= \ --sync-config=config.yaml \ --confirm -$ oadm groups sync --type=openshift \ +$ oc adm groups sync --type=openshift \ --whitelist= \ --sync-config=config.yaml \ --confirm @@ -225,7 +225,7 @@ corresponded to the deleted groups in 
LDAP and then remove them from {product-title}: ---- -$ oadm groups prune --sync-config=config.yaml --confirm +$ oc adm groups prune --sync-config=config.yaml --confirm ---- [[sync-examples]] @@ -365,7 +365,7 @@ xref:../install_config/syncing_groups_with_ldap.adoc#running-ldap-sync[whitelist To run sync with the *_rfc2307_config.yaml_* file: ---- -$ oadm groups sync --sync-config=rfc2307_config.yaml --confirm +$ oc adm groups sync --sync-config=rfc2307_config.yaml --confirm ---- {product-title} creates the following Group record as a result of the above sync @@ -446,7 +446,7 @@ xref:../install_config/syncing_groups_with_ldap.adoc#running-ldap-sync[whitelist To run sync with the *_rfc2307_config_user_defined.yaml_* file: ---- -$ oadm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm +$ oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm ---- {product-title} creates the following Group record as a result of the above sync @@ -597,7 +597,7 @@ xref:../install_config/syncing_groups_with_ldap.adoc#running-ldap-sync[whitelist To run sync with the *_rfc2307_config_tolerating.yaml_* file: ---- -$ oadm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm +$ oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm ---- {product-title} creates the following group record as a result of the above sync @@ -703,7 +703,7 @@ activeDirectory: To run sync with the *_active_directory_config.yaml_* file: ---- -$ oadm groups sync --sync-config=active_directory_config.yaml --confirm +$ oc adm groups sync --sync-config=active_directory_config.yaml --confirm ---- {product-title} creates the following Group record as a result of the above sync @@ -836,7 +836,7 @@ xref:../install_config/syncing_groups_with_ldap.adoc#running-ldap-sync[whitelist To run sync with the *_augmented_active_directory_config.yaml_* file: ---- -$ oadm groups sync --sync-config=augmented_active_directory_config.yaml --confirm +$ oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm ---- {product-title} creates the following Group record as a result of the above sync @@ -949,7 +949,7 @@ definition for both user entries and group entries, as well as the attributes with which to represent them in the internal {product-title} Group records. Furthermore, certain changes are required in this configuration: -- The `oadm groups sync` command must explicitly +- The `oc adm groups sync` command must explicitly xref:../install_config/syncing_groups_with_ldap.adoc#running-ldap-sync[whitelist] groups. - The user's `groupMembershipAttributes` must include @@ -1002,7 +1002,7 @@ of https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx[`LDAP_MATCHIN To run sync with the *_augmented_active_directory_config_nested.yaml_* file: ---- -$ oadm groups sync \ +$ oc adm groups sync \ 'cn=admins,ou=groups,dc=example,dc=com' \ --sync-config=augmented_active_directory_config_nested.yaml \ --confirm diff --git a/install_config/upgrading/blue_green_deployments.adoc b/install_config/upgrading/blue_green_deployments.adoc index 34d6832831d8..88cf504b5e34 100644 --- a/install_config/upgrading/blue_green_deployments.adoc +++ b/install_config/upgrading/blue_green_deployments.adoc @@ -199,7 +199,7 @@ ip-172-31-49-10.ec2.internal Ready 4d beta.kubern . 
Perform a diagnostic check for the cluster: + ---- -$ oadm diagnostics +$ oc adm diagnostics [Note] Determining if client configuration exists for client/cluster diagnostics Info: Successfully read a client config file at '/root/.kube/config' @@ -276,13 +276,13 @@ warm the green nodes: unschedulable: + ---- -$ oadm manage-node --schedulable=false --selector=color=blue +$ oc adm manage-node --schedulable=false --selector=color=blue ---- . Set the green nodes to schedulable so that new pods only land on them: + ---- -$ oadm manage-node --schedulable=true --selector=color=green +$ oc adm manage-node --schedulable=true --selector=color=green ---- . Update the default image streams and templates as described in @@ -335,7 +335,7 @@ When you are ready to continue with the upgrade process, follow these steps. following commands: + ---- -$ oadm manage-node --selector=color=blue --evacuate +$ oc adm manage-node --selector=color=blue --evacuate $ oc delete node --selector=color=blue ---- diff --git a/install_config/upgrading/manual_upgrades.adoc b/install_config/upgrading/manual_upgrades.adoc index 15aa9695c337..11597dc19e10 100644 --- a/install_config/upgrading/manual_upgrades.adoc +++ b/install_config/upgrading/manual_upgrades.adoc @@ -238,7 +238,7 @@ ifdef::openshift-origin[] + ---- # cd /etc/origin/master/ -# oadm ca create-master-certs --cert-dir=/etc/origin/master/ \ +# oc adm ca create-master-certs --cert-dir=/etc/origin/master/ \ --master=https://:8443 \ --public-master=https://:8443 \ --hostnames=,,localhost,127.0.0.1,,kubernetes.default.local \ @@ -451,7 +451,7 @@ cluster roles] are automatically updated. To check if all defaults are set as recommended for your environment, run: ---- -# oadm policy reconcile-cluster-roles +# oc adm policy reconcile-cluster-roles ---- [WARNING] @@ -468,7 +468,7 @@ This command outputs a list of roles that are out of date and their new proposed values. For example: ---- -# oadm policy reconcile-cluster-roles +# oc adm policy reconcile-cluster-roles apiVersion: v1 items: - apiVersion: v1 @@ -495,7 +495,7 @@ made, or you can automatically apply the new policy using the following process: . Reconcile the cluster roles: + ---- -# oadm policy reconcile-cluster-roles \ +# oc adm policy reconcile-cluster-roles \ --additive-only=true \ --confirm ---- @@ -503,7 +503,7 @@ made, or you can automatically apply the new policy using the following process: . Reconcile the cluster role bindings: + ---- -# oadm policy reconcile-cluster-role-bindings \ +# oc adm policy reconcile-cluster-role-bindings \ --exclude-groups=system:authenticated \ --exclude-groups=system:authenticated:oauth \ --exclude-groups=system:unauthenticated \ @@ -515,7 +515,7 @@ made, or you can automatically apply the new policy using the following process: Also run: + ---- -# oadm policy reconcile-cluster-role-bindings \ +# oc adm policy reconcile-cluster-role-bindings \ system:build-strategy-jenkinspipeline \ --confirm \ -o name @@ -524,7 +524,7 @@ Also run: . Reconcile security context constraints: + ---- -# oadm policy reconcile-sccs \ +# oc adm policy reconcile-sccs \ --additive-only=true \ --confirm ---- @@ -566,7 +566,7 @@ packages from the list of yum excludes on the host: . As a user with *cluster-admin* privileges, disable scheduling for the node: + ---- -# oadm manage-node --schedulable=false +# oc adm manage-node --schedulable=false ---- . Evacuate pods on the node to other nodes: @@ -578,7 +578,7 @@ controller. 
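One way to spot such bare pods ahead of time is to list everything currently scheduled on the node before evacuating it. The command below is only a rough sketch, using a placeholder node name and the generic `oc get` output; adjust it for your environment:

----
# List every pod currently scheduled on the node (<node> is a placeholder).
oc get pods --all-namespaces -o wide | grep <node>
----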
==== + ---- -# oadm drain --force --delete-local-data --ignore-daemonsets +# oc adm drain --force --delete-local-data --ignore-daemonsets ---- ifdef::openshift-origin[] @@ -695,7 +695,7 @@ and ensure the *docker* service starts successfully: . Re-enable scheduling for the node: + ---- -# oadm manage-node --schedulable +# oc adm manage-node --schedulable ---- . Run the following command on each node to add the *atomic-openshift* packages @@ -1152,7 +1152,7 @@ then copied into place on node systems. . Generate the new certificate: + ---- -# oadm ca create-server-cert --cert=server.crt \ +# oc adm ca create-server-cert --cert=server.crt \ --key=server.key $signing_opts \ --hostnames=, ---- @@ -1162,7 +1162,7 @@ For example, if the Subject Alternative Names from before were *mynode*, you would need to run the following command: + ---- -# oadm ca create-server-cert --cert=server.crt \ +# oc adm ca create-server-cert --cert=server.crt \ --key=server.key $signing_opts \ --hostnames=mynode,mynode.mydomain.com,1.2.3.4,10.10.10.1 ---- @@ -1297,7 +1297,7 @@ The *kubernetes* service IP can be obtained with: + ==== ---- -# oadm ca create-master-certs \ +# oc adm ca create-master-certs \ --hostnames=,,,$service_names \ <1> <2> <3> --master= \ <4> --public-master= \ <5> @@ -1624,14 +1624,14 @@ With deprecation of the `extensions/v1beta1.Job` resource, you must migrate all be migrated, run: ---- -$ oadm migrate storage --include=jobs +$ oc adm migrate storage --include=jobs ---- You can also increase the log level using the `--loglevel` flag. When you are ready to perform the actual migration, add the `--confirm` option: ---- -$ oadm migrate storage --include=jobs --confirm +$ oc adm migrate storage --include=jobs --confirm ---- endif::[] @@ -1705,7 +1705,7 @@ endif::[] . Use the diagnostics tool on the master to look for common issues: + ---- -# oadm diagnostics +# oc adm diagnostics ... [Note] Summary of diagnostics execution: [Note] Completed with no errors or warnings seen. 
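Individual checks can also be run by name if the full diagnostic pass is more than you need, for example the pod network connectivity check, `NetworkCheck`. The invocation below is a sketch; confirm the available check names against the built-in help for your version:

----
# Show the available diagnostic checks and options.
oc adm diagnostics --help

# Run a single named check, for example the pod network connectivity test.
oc adm diagnostics NetworkCheck
----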
diff --git a/install_config/upgrading/migrating_etcd.adoc b/install_config/upgrading/migrating_etcd.adoc index cb76fd9896ec..4223bfbbf274 100644 --- a/install_config/upgrading/migrating_etcd.adoc +++ b/install_config/upgrading/migrating_etcd.adoc @@ -215,31 +215,31 @@ After your etcd cluster is back online with all members, re-introduce the TTL information by running the following on the first master: + ---- -$ oadm migrate etcd-ttl --etcd-address=https://:2379 \ +$ oc adm migrate etcd-ttl --etcd-address=https://:2379 \ --cacert=/etc/origin/master/master.etcd-ca.crt \ --cert=/etc/origin/master/master.etcd-client.crt \ --key=/etc/origin/master/master.etcd-client.key \ --ttl-keys-prefix '/kubernetes.io/events' \ --lease-duration 1h -$ oadm migrate etcd-ttl --etcd-address=https://:2379 \ +$ oc adm migrate etcd-ttl --etcd-address=https://:2379 \ --cacert=/etc/origin/master/master.etcd-ca.crt \ --cert=/etc/origin/master/master.etcd-client.crt \ --key=/etc/origin/master/master.etcd-client.key \ --ttl-keys-prefix '/kubernetes.io/masterleases' \ --lease-duration 10s -$ oadm migrate etcd-ttl --etcd-address=https://:2379 \ +$ oc adm migrate etcd-ttl --etcd-address=https://:2379 \ --cacert=/etc/origin/master/master.etcd-ca.crt \ --cert=/etc/origin/master/master.etcd-client.crt \ --key=/etc/origin/master/master.etcd-client.key \ --ttl-keys-prefix '/openshift.io/oauth/accesstokens' \ --lease-duration 86400s -$ oadm migrate etcd-ttl --etcd-address=https://:2379 \ +$ oc adm migrate etcd-ttl --etcd-address=https://:2379 \ --cacert=/etc/origin/master/master.etcd-ca.crt \ --cert=/etc/origin/master/master.etcd-client.crt \ --key=/etc/origin/master/master.etcd-client.key \ --ttl-keys-prefix '/openshift.io/oauth/authorizetokens' \ --lease-duration 500s -$ oadm migrate etcd-ttl --etcd-address=https://:2379 \ +$ oc adm migrate etcd-ttl --etcd-address=https://:2379 \ --cacert=/etc/origin/master/master.etcd-ca.crt \ --cert=/etc/origin/master/master.etcd-client.crt \ --key=/etc/origin/master/master.etcd-client.key \ diff --git a/install_config/upgrading/os_upgrades.adoc b/install_config/upgrading/os_upgrades.adoc index d4d98482d127..50469496ebd2 100644 --- a/install_config/upgrading/os_upgrades.adoc +++ b/install_config/upgrading/os_upgrades.adoc @@ -19,13 +19,13 @@ Use the following to safely upgrade the OS on a host: . Ensure the host is unschedulable, meaning that no new pods will be placed onto the host: + ---- -$ oadm manage-node --schedulable=false +$ oc adm manage-node --schedulable=false ---- . Migrate the pods from the host: + ---- -$ oadm drain --force --delete-local-data --ignore-daemonsets +$ oc adm drain --force --delete-local-data --ignore-daemonsets ---- . Update or upgrade the host packages, and reboot the host. A reboot ensures @@ -42,7 +42,7 @@ the {product-title} node software will fix the flow rules. . Configure the host to be schedulable again: + ---- -$ oadm manage-node --schedulable=true +$ oc adm manage-node --schedulable=true ---- diff --git a/install_config/web_console_customization.adoc b/install_config/web_console_customization.adoc index ce1b45c31862..ab7076410490 100644 --- a/install_config/web_console_customization.adoc +++ b/install_config/web_console_customization.adoc @@ -1126,8 +1126,8 @@ You can also change the login page, and the login provider selection page for the web console. 
Run the following commands to create templates you can modify: ---- -$ oadm create-login-template > login-template.html -$ oadm create-provider-selection-template > provider-selection-template.html +$ oc adm create-login-template > login-template.html +$ oc adm create-provider-selection-template > provider-selection-template.html ---- Edit the file to change the styles or add content, but be careful not to remove @@ -1170,7 +1170,7 @@ When errors occur during authentication, you can change the page shown. . Run the following command to create a template you can modify: + ---- -$ oadm create-error-template > error-template.html +$ oc adm create-error-template > error-template.html ---- . Edit the file to change the styles or add content. diff --git a/registry_quickstart/administrators/system_configuration.adoc b/registry_quickstart/administrators/system_configuration.adoc index 501026bf461d..ae389d487141 100644 --- a/registry_quickstart/administrators/system_configuration.adoc +++ b/registry_quickstart/administrators/system_configuration.adoc @@ -130,7 +130,7 @@ container. + ==== ---- -$ oadm ca create-server-cert \ +$ oc adm ca create-server-cert \ --signer-cert=ca.crt \ --signer-key=ca.key \ --signer-serial=ca.serial.txt \ @@ -143,7 +143,7 @@ $ exit + [NOTE] ==== -The `oadm ca create-server-cert` command generates a certificate that is valid +The `oc adm ca create-server-cert` command generates a certificate that is valid for two years. This can be altered with the `--expire-days` option, but for security reasons, it is recommended to not make it greater than this value. ==== diff --git a/release_notes/ocp_3_3_release_notes.adoc b/release_notes/ocp_3_3_release_notes.adoc index 434b5d8118f4..2534c12d7956 100644 --- a/release_notes/ocp_3_3_release_notes.adoc +++ b/release_notes/ocp_3_3_release_notes.adoc @@ -1387,7 +1387,7 @@ https://bugzilla.redhat.com/show_bug.cgi?id=1380544[*BZ#1380544*]:: Binaries compiled with Golang versions prior to 1.7 will segfault most of the time in macOS Sierra (10.12) given incompatibilities between the Go syscall wrappers and Darwin. Users of the OpenShift Container Platform (OCP) -command-line tools (`oc`, `oadm`, and others) in macOS Sierra (10.12) get a +command-line tools (`oc`, `oc adm`, and others) in macOS Sierra (10.12) get a stack trace in the attempt of running commands. The Go 1.7 fix was backported by the go-tools team to Go 1.6, which was then used to compile OCP's command-line tools in this release. As a result, users of the OCP command-line tools can use @@ -1590,9 +1590,9 @@ The `*MONGODB_VERSION*` parameter has been added to the MongoDB templates, allowing users to choose which version of MongoDB to deploy. https://bugzilla.redhat.com/show_bug.cgi?id=1382636[*BZ#1382636*]:: -The `oadm` symlink incorrectly pointing to `oc` rather than the `openshift` +The `oc adm` symlink incorrectly pointing to `oc` rather than the `openshift` binary on containerized master hosts. This caused upgrades to fail, complaining -about missing `oadm` functionality. This bug fix transitions to using `oc adm` +about missing `oc adm` functionality. This bug fix transitions to using `oc adm` throughout the playbooks. As a result, the upgrade will now pass in these environments. diff --git a/release_notes/ocp_3_4_release_notes.adoc b/release_notes/ocp_3_4_release_notes.adoc index 162309d05502..8a3a07934a0c 100644 --- a/release_notes/ocp_3_4_release_notes.adoc +++ b/release_notes/ocp_3_4_release_notes.adoc @@ -1233,7 +1233,7 @@ a user that does. 
You can also run the following command as a user with `cluster-admin` permissions to give another user enough permission: ---- -$ oadm policy add-cluster-role-to-user registry-viewer +$ oc adm policy add-cluster-role-to-user registry-viewer ---- The script does not apply any changes unless the `-a` option is included. Run diff --git a/release_notes/ocp_3_5_release_notes.adoc b/release_notes/ocp_3_5_release_notes.adoc index 216ba4418965..d178fa778b23 100644 --- a/release_notes/ocp_3_5_release_notes.adoc +++ b/release_notes/ocp_3_5_release_notes.adoc @@ -287,10 +287,10 @@ after two to five years. There is now an `oc` command to change this expiry to be end-user configurable. This has not been implemented in the Ansible installer yet. -Use the `oadm ca` command, specifying a validity period greater than two years: +Use the `oc adm ca` command, specifying a validity period greater than two years: ---- -# oadm ca create-master-certs --hostnames=example.org --signer-expire-days=$[365*2+1]` +# oc adm ca create-master-certs --hostnames=example.org --signer-expire-days=$[365*2+1]` ---- See @@ -1013,7 +1013,7 @@ changes the behavior to display a warning instead of exiting with an Error. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1428978[*BZ#1428978*]) * The race condition is seen when updating a batch of nodes in the cluster using -`oadm manage-node` to be schedulable or unschedulable. Therefore, several nodes +`oc adm manage-node` to be schedulable or unschedulable. Therefore, several nodes could not be updated with the "object has been modified" error. Use a patch on the `unschedulable` field of the node object instead of a full update. With this bug fix, all nodes can be properly updated as schedulable or unschedulable. @@ -1038,9 +1038,9 @@ there is an error retrieving resources from the server. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1393289[*BZ#1393289*]) * The new responsive terminal would wrap long lines in the output of CLI commands. -The `oadm diagnostics` indentation did not work well, and no longer had color in -its output. This bug fix bypasses the responsive terminal in `oadm diagnostics` -(currently only being used in CLI help output). As a result, `oadm diagnostics` +The `oc adm diagnostics` indentation did not work well, and no longer had color in +its output. This bug fix bypasses the responsive terminal in `oc adm diagnostics` +(currently only being used in CLI help output). As a result, `oc adm diagnostics` now has proper indentation and colorized output. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1397995[*BZ#1397995*]) @@ -1063,7 +1063,7 @@ to create a new project, or to contact their administrator to have one created for them. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1405636[*BZ#1405636*]) -* After running ` oadm drain -h`, the user would try to open the provided link +* After running ` oc adm drain -h`, the user would try to open the provided link `\http://kubernetes.io/images/docs/kubectl _drain.svg`, but would receive a “404 page not found” error. This bug fix corrects an extra space in the link path and the link now works as expected. @@ -1098,7 +1098,7 @@ successfully verified during the login sequence (an exact match including the port was required). Therefore, the user was prompted with the warning "The server uses a certificate signed by an unknown authority" every time they attempted to log in using an {product-title} installation completed through -`openshift-ansible`. 
With this bug fix, the command `oadm create-kubeconfig` +`openshift-ansible`. With this bug fix, the command `oc adm create-kubeconfig` (used by the `openshift-ansible` playbook) was patched to normalize the server URL so that it included the port with the server URL in the generated *_.kubeconfig_* file every time. As a result, the user no longer sees the @@ -1265,7 +1265,7 @@ to end tests. *Logging* -* The Diagnostic Tool (`oadm diagnostics`) now correctly reports the presence of +* The Diagnostic Tool (`oc adm diagnostics`) now correctly reports the presence of the `logging-curator-ops` pod. The `logging-curator-ops` was not in the list of pods to investigate, resulting in an error that indicated the pod was missing. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1394716[*BZ#1394716*]) diff --git a/release_notes/ocp_3_6_release_notes.adoc b/release_notes/ocp_3_6_release_notes.adoc index f1e4f692b6b4..0de7c098ee6c 100644 --- a/release_notes/ocp_3_6_release_notes.adoc +++ b/release_notes/ocp_3_6_release_notes.adoc @@ -478,7 +478,7 @@ Image Signatures: Status: Unverified # Verify the image and save the result back to image stream -$ oadm verify-image-signature sha256:c13060b74c0348577cbe07dedcdb698f7d893ea6f74847154e5ef3c8c9369b2c \ +$ oc adm verify-image-signature sha256:c13060b74c0348577cbe07dedcdb698f7d893ea6f74847154e5ef3c8c9369b2c \ --expected-identity=172.30.204.70:5000/test/origin-pod:latest --save --as=system:admin sha256:c13060b74c0348577cbe07dedcdb698f7d893ea6f74847154e5ef3c8c9369b2c signature 0 is verified (signed by key: "172B61E538AAC0EE") @@ -929,7 +929,7 @@ IPv4 should be unaffected and continue to work, even if IPv6 is disabled. [NOTE] ==== HAProxy can only terminate IPv6 traffic when the router uses the network stack -of the host (default). When using the container network stack (`oadm router +of the host (default). When using the container network stack (`oc adm router --service-account=router --host-network=false`), there is no global IPv6 address for the pod. ==== @@ -1520,7 +1520,7 @@ all queries will originate from the local network. The `ClusterPolicy`, `Policy`, `ClusterPolicyBinding` and `PolicyBinding` API types are deprecated. Users will need to switch any interactions with these types to instead use `ClusterRole`, `Role`, `ClusterRoleBinding`, or -`RoleBinding` as appropriate. The following `oadm policy` commands can be used +`RoleBinding` as appropriate. The following `oc adm policy` commands can be used to help with this process: ---- @@ -1778,7 +1778,7 @@ This bug fix enhances the `new-app` circular dependency code to account for the *Command Line Interface* * Previously, pod headers were only being printed once for all sets of pods when -listing pods from multiple nodes. Executing `oadm manage-node ... +listing pods from multiple nodes. Executing `oc adm manage-node ... --evacuate --dry-run` with multiple nodes would print the same output multiple times (once per each specified node). Therefore, users would see inconsistent or duplicate pod information. This bug fix resolves the issue. @@ -1798,7 +1798,7 @@ time is now reduced by about four seconds on macOS. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1435261[*BZ#1435261*]) * When the master configuration specified a default `nodeSelector` for the -cluster, test projects created by `oadm diagnostics` NetworkCheck got this +cluster, test projects created by `oc adm diagnostics` NetworkCheck got this `nodeSelector` and, therefore, the test pods were also confined to this `nodeSelector`. 
NetworkCheck test pods could only be scheduled on a subset of nodes, preventing the diagnostic covering the entire cluster; in some clusters, @@ -2101,7 +2101,7 @@ changed so to avoid the edge cases around pod creation or deletion failures. This meant that the VNID tracking does not fail, so traffic flows. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1454948[*BZ#1454948*]) -* Previously, running `oadm diagnostics NetworkCheck` would result in a timeout +* Previously, running `oc adm diagnostics NetworkCheck` would result in a timeout error. Changing the script to run from the pod definition fixed the issue. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1421643[*BZ#1421643*]) @@ -2247,7 +2247,7 @@ file will correctly populate. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1449022[*BZ#1449022*]) * Version 3.6 router introduced a new port named `router-stats`. This bug created -an option for `oadm router` command to allow a user to specify customized a +an option for `oc adm router` command to allow a user to specify customized a router-stats port, such as `--stats-port=1936`, so that user could easily create an customized router. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1452019[*BZ#1452019*]) diff --git a/release_notes/ocp_3_7_release_notes.adoc b/release_notes/ocp_3_7_release_notes.adoc index a8145112754f..25a437c2598c 100644 --- a/release_notes/ocp_3_7_release_notes.adoc +++ b/release_notes/ocp_3_7_release_notes.adoc @@ -235,7 +235,7 @@ place due to the `system:nodes` RBAC being granted from OCP 3.6. To turn the enforcements on, run: ---- -# oadm policy remove-cluster-role-from-group system:node system:nodes +# oc adm policy remove-cluster-role-from-group system:node system:nodes ---- [[ocp-37-advanced-auditing]] @@ -1405,8 +1405,8 @@ operation to create native RBAC objects instead. The following commands do not work against a 3.7 server: ---- -$ oadm overwrite-policy -$ oadm migrate authorization +$ oc adm overwrite-policy +$ oc adm migrate authorization $ oc create policybinding ---- @@ -1763,7 +1763,7 @@ routine to hang. This caused many builds to hang during the registry push.This bug fix corrects the regulator. As a result, concurrent pushes no longer hang. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1436841[*BZ#1436841*]) -* Previously, the `oadm prune images` command would print confusing errors (such +* Previously, the `oc adm prune images` command would print confusing errors (such as operation timeout). This bug fix enables errors to be printed with hints. As a result, users are able to prune images, including images outside of the OpenShift cluster. @@ -2393,7 +2393,7 @@ used with {product-title}. The periodic resync of Kubernetes into {product-title} included the required changes. vSphere now works correctly. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1433236[*BZ#1433236*]) -* Because of changes with upstream Kubernetes, the `oadm join-projects`, `oadm +* Because of changes with upstream Kubernetes, the `oc adm join-projects`, `oc adm isolate-projects` and other commands that depend on the pod update operation will not work. The code was changed to fetch some required elements from the Container Runtime Interface (CRI) directly. As a result, the pod update @@ -2569,7 +2569,7 @@ are visible in a web browser. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1467257[*BZ#1467257*]) * Previously, the IP failover keepalived image did not support IPV6 addresses or -ranges, as well as IP address validation. 
Adding IPV6 addresses to the `oadm +ranges, as well as IP address validation. Adding IPV6 addresses to the `oc adm ipfailover` command resulted in a new vrrp section pertaining to the wrong address. The code has been updated, and inputting invalid IPV4 and IPV6 addresses now return an error as expected. diff --git a/release_notes/ose_3_1_release_notes.adoc b/release_notes/ose_3_1_release_notes.adoc index 1d91304b745b..4237d9ce9b7a 100644 --- a/release_notes/ose_3_1_release_notes.adoc +++ b/release_notes/ose_3_1_release_notes.adoc @@ -126,7 +126,7 @@ The `oc rsync` command is now available, which can copy local directories into a remote pod. Project Binding Command:: -Isolated projects can now be bound together using `oadm pod-network +Isolated projects can now be bound together using `oc adm pod-network join-project`. Host Configuration Validation Commands:: @@ -431,7 +431,7 @@ user provided TLS certificates. Miscellaneous:: * The integrated Docker registry has been updated to version 2.2.1. * The LDAP group prune and sync commands have been promoted out of experimental -and into `oadm groups`. +and into `oc adm groups`. * More tests and configuration warnings have been added to `openshift ex diagnostics`. * Builds are now updated with the Git commit used in a build after the build diff --git a/release_notes/ose_3_2_release_notes.adoc b/release_notes/ose_3_2_release_notes.adoc index bf3fd483a1c2..17c268689ba6 100644 --- a/release_notes/ose_3_2_release_notes.adoc +++ b/release_notes/ose_3_2_release_notes.adoc @@ -174,9 +174,9 @@ on the same node. See xref:../admin_guide/scheduler.adoc#admin-guide-scheduler[S ==== Administrator CLI - The administrative commands are now exposed via `oc adm` so you have access to -them in a client context. The `oadm` commands will still work, but will be a +them in a client context. The `oc adm` commands will still work, but will be a symlink to the `openshift` binary. -- The help output of the `oadm policy` command has been improved. +- The help output of the `oc adm policy` command has been improved. - Service accounts are now supported for the router and registry: ** The router can now be created without specifying `--credentials` and it will use the router service account in the current project. @@ -690,7 +690,7 @@ process is correctly running before continuing. rapidly, reducing the time before the old deployment is scaled back up. - Persistent volume claims (PVCs) are no longer blocked by the default SCC policy for users. -- Continue to support host ports on the `oadm router` command. Administrators can +- Continue to support host ports on the `oc adm router` command. Administrators can disable them with `--host-ports=false` when `--host-network=false` is also set. - Events are now emitted when the cancellation of a deployment fails. - When invoking a binary build, retry if the input image stream tag does not exist diff --git a/security/network_security.adoc b/security/network_security.adoc index 5a025cfd96ed..fa4280a068ae 100644 --- a/security/network_security.adoc +++ b/security/network_security.adoc @@ -48,7 +48,7 @@ environments. 
For example, to isolate a project network in the cluster and vice versa, run: ---- -$ oadm pod-network isolate-projects +$ oc adm pod-network isolate-projects ---- In the above example, all of the pods and services in `` and diff --git a/whats_new/ose_3_0_release_notes.adoc b/whats_new/ose_3_0_release_notes.adoc index 283e09434977..ebf8461cabf9 100644 --- a/whats_new/ose_3_0_release_notes.adoc +++ b/whats_new/ose_3_0_release_notes.adoc @@ -172,7 +172,7 @@ attempt to retrieve build pods by label will no longer work. .For Administrators * Kubernetes was updated to v1.0.0. * To make it easier to xref:../install_config/upgrading/index.adoc#install-config-upgrading-index[upgrade your cluster], -the `oadm reconcile-cluster-roles` command has been added to +the `oc adm reconcile-cluster-roles` command has been added to xref:../install_config/upgrading/index.adoc#install-config-upgrading-index[update your cluster roles] to match the internal default. Use this command to verify the cluster infrastructure users have the appropriate permissions. @@ -248,7 +248,7 @@ timer loop. * The router and internal registry now default to using the `*RollingUpdate*` strategy deployment. Red Hat recommends updating any existing router or registry installations if you plan on scaling them up to multiple pods. -* The `oadm policy who-can` command now shows additional information. +* The `oc adm policy who-can` command now shows additional information. * Master startup no longer has a chance to generate certificates with duplicate serial numbers, which previously rendered them unusable. @@ -436,7 +436,7 @@ annotation to *"true"* on a particular service account to require that check. .Platform * xref:../architecture/additional_concepts/other_api_objects.adoc#group[Groups] -of users are now supported. Cluster administrators can use the `oadm groups` +of users are now supported. Cluster administrators can use the `oc adm groups` command to manage them. * Service accounts are now more easily bound to roles through the new `*subjects*` field, as described in *Backwards Compatibility* above.
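Project isolation applied with `oc adm pod-network isolate-projects`, as shown above, can later be reversed or adjusted with the other `oc adm pod-network` subcommands. The following is a short sketch that assumes the multitenant SDN plug-in and placeholder project names; confirm the exact subcommand set for your version with `oc adm pod-network --help`:

----
# Join two previously isolated projects so their pods can reach each other again.
oc adm pod-network join-projects --to=<project1> <project2>

# Make one project (for example, one hosting shared services) reachable from all projects.
oc adm pod-network make-projects-global <project1>

# With the multitenant plug-in, review the resulting network IDs; projects that
# share an ID share a pod network.
oc get netnamespaces
----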