diff --git a/modules/proc_deploy-quay-openshift-operator.adoc b/modules/proc_deploy-quay-openshift-operator.adoc
index dafb42467..6314ae2bf 100644
--- a/modules/proc_deploy-quay-openshift-operator.adoc
+++ b/modules/proc_deploy-quay-openshift-operator.adoc
@@ -819,7 +819,7 @@
 During the development process, you may want to test the provisioning and
 setup of your {productname} server. By default, the operator will use the
 internal service to communicate with the configuration pod. However, when
 running external to the cluster,
-you will need to specify the hostname location for which the setup process
+you will need to specify the hostname location that the setup process
 can use. Specify the configHostname as shown below:
@@ -851,8 +851,7 @@ SSL certificates can be provided and used instead of having the operator generat
 The secret containing custom certificates must define the following keys:

-* **tls.cert**: All of the certificates (root, intermediate, certificate) concatinated into a single file
-
+* **tls.cert**: All of the certificates (root, intermediate, certificate) concatenated into a single file
 * **tls.key**: Private key as for the SSL certificate

 Create a secret containing the certificate and private key:
@@ -881,9 +880,10 @@ spec:
 == TLS Termination

 {productname} can be configured to protect connections using SSL certificates.
-By default, SSL communicated is termminated within {productname}. There are
+By default, SSL communication is terminated within {productname}. There are
 several different ways that SSL termination can be configured including
-omitting the use of certificates altogeter. TLS termination is determined by
+omitting the use of certificates altogether. TLS termination is determined by
 the termination property as shown below:

 ```
 externalAccess:
   tls:
     termination: passthrough
 ```
@@ -906,7 +906,7 @@ Alternate options are available as described below:
 |=======
 | TLS Termination Type |Description |Notes
 | passthrough |SSL communication is terminated at Quay |Default configuration
-| edge |SSL commmunication is terminated prior to reaching Quay. Traffic reaching quay is not encrypted (HTTP) |
+| edge |SSL communication is terminated prior to reaching Quay. Traffic reaching Quay is not encrypted (HTTP) |
 | none | All communication is unencrypted |
 |=======
@@ -1310,31 +1310,34 @@
 $ operator-sdk up local --namespace=quay-enterprise
 ```

 = Upgrading {productname}
-The Quay Operator v3.3.0 has many changes from v1.0.2. The most notable which
+
+The Quay Operator {productminv} has many changes from v1.0.2. The most notable which
 affects the upgrade process is the backwards-incompatible change to the CRD.
 Ultimately, the CR (Custom Resource) used to deploy {productname} using the
 operator may have to be modified accordingly.

 == Upgrade Prerequisites
-Ensure that your deployment is using a supported persistance layer and
+
+Ensure that your deployment is using a supported persistence layer and
 database. A production {productname} deployment run by the Operator should
 *not* be relying on the Postgres instance or a OpenShift volume that has been
 created by the Operator.

 If you are using a Postgres instance or OpenShift volume that was created
-by the Operator, the upgrade path is not suported as the removal of the old
+by the Operator, the upgrade path is not supported as the removal of the old
 Operator will cascade the deletion of your database and volume. It may be
 possible to manually migrate your data to supported storage mechanisms but
 this is not within the scope of the typical, or supported, upgrade path.
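+
+For example, one way to check what the current Operator has provisioned in
+your project is to list the deployments and persistent volume claims and look
+for an Operator-created Postgres instance or volume (the exact resource names
+vary by deployment):
+
+```
+$ oc get deployment
+$ oc get pvc
+```
+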
-Please read through the entire guide before following any steps as this upgrade
+Please read through the entire guide before following any steps, as this upgrade
 path is potentially destructive and there is no guaranteed roll-back mechanism.

 == Upgrade Process Summary
 Here are the basic steps for upgrading the {productname} cluster
 you originally deployed from the v1.0.2 Quay Setup Operator to
-the v3.3 Quay Operator:
+the {productminv} Quay Operator:

 . Document all configuration related to your current deployment.
 . Copy your CR and modify any configuration values as needed.
@@ -1342,10 +1345,10 @@
 . Ensure that only one quay pod will be started, as this Pod will perform any
 database migrations needed before scaling up the entire cluster.
 . Uninstall the old Quay Operator (v1.0.2 or older)
-. Install the latest Quay Operator (v3.3.0)
+. Install the latest Quay Operator ({productminv})
 . Create your CR by issuing the command `oc create -f new_deployment.yaml`
-. Watch the logs of your Quay Pod until all migrations have finished.
-. At this point, it is safe to scale up your Quay cluster if desired.
+. Watch the logs of your Quay pod until all migrations have finished.
+. At this point, it is safe to scale up your {productname} cluster if desired.

 === Document the existing {productname} deployment
@@ -1356,17 +1359,18 @@
 information will aid them with the details needed to bring your cluster back
 to its original state. At minimum, the following information should be
 gathered:

-. The Custom Resource used to create your current Quay deployment.
+. The Custom Resource used to create your current {productname} deployment.
 . The output of running `oc get QuayEcosystem -o yaml > quayecosystem.yaml` in
 your Project or Namespace.
 . The hostnames currently used to access Quay, Clair, Quay's Config App,
 Postgres, Redis, and Clair's Postgres instance. This can be achieved by
-executing: `oc get routes -o yaml > old_routes.yaml`
+executing: `oc get routes -o yaml > old_routes.yaml` or (if you are using
+a load balancer) `oc get service`
 . Any authentication details required to connect to your Postgres instance(s)
-for Quay and Clair.
-. Any authentication details required to connect to your data persistance
+for Quay and Clair pods.
+. Any authentication details required to connect to your data persistence
 provider such as AWS S3.
-. Backup your Quay's configuration secret which contains the `config.yaml`
+. Back up your {productname} configuration secret, which contains the `config.yaml`
 along with any certificates needed. This can be accomplished by using the
 following command:
 +
@@ -1381,16 +1385,16 @@
 changes.

 If your deployment does not specify any specific network-related
 configuration values, this step may not be necessary. Please refer to the
 documentation to
-ensure that your the configuration options in your current CR are still
-accurate for the Quay Operator v3.3.0.
+ensure that the configuration options in your current CR are still
+accurate for the Quay Operator {productminv}.

 In the case that you have specified options related to the management of
 networking, such as using a LoadBalancer or specifying a custom hostname,
 please reference the latest documentation to update them with the schema
-changes included in Quay Operator v3.3.0.
+changes included in Quay Operator {productminv}.
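+
+For example, one way to review what needs to change is to export your existing
+CR and compare it against the file you intend to apply with the new Operator
+(the CR name and file names below are only examples):
+
+```
+$ oc get quayecosystem demo-quayecosystem -o yaml > old_cr.yaml
+$ diff old_cr.yaml new_deployment.yaml
+```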

 If you have overridden the image used for Quay or Clair, please keep in mind
-that Quay Operator v3.3.0 specifically supports Quay v3.3.0 and Clair v3.3.0.
+that Quay Operator {productminv} specifically supports Quay {productminv} and Clair {productminv}.
 It is advisable to remove those image overrides to use the latest, supported
 releases of Quay and Clair in your deployment. Any other images may not be
 supported.
@@ -1404,14 +1408,15 @@
 ensure you understand all steps required to recreate your cluster before
 removing your existing deployment.
 ====

-The Quay Operator v3.3.0 is now distributed using the official Red Hat channels.
+The Quay Operator {productminv} is now distributed using the official Red Hat channels.
 Previously, Quay Operator v1.0.2 (and below) were provided using "Community"
-channels. Additionally, 3.3.0 offers no automatic upgrade path which requires
-your Quay deployment and the Quay Operator to be completely removed and
+channels. Additionally, {productname} {productminv} offers no automatic upgrade path which requires
+your {productname} deployment and the Quay Operator to be completely removed and
 replaced.

-Fortunately, the important data is stored in your Postgres datbase and your
-storage backend so it is advisable to ensure you have proper backups for both.
+Fortunately, the important data is stored in your Postgres database and your
+storage backend, so it is advisable to ensure you have proper backups for both.
+
 Once you are ready, remove your existing deployment by issuing the following
 command:
@@ -1421,29 +1426,31 @@
 $ oc delete -f deployment.yaml
 ```

 All Quay and Clair pods will be removed as well as the Redis pod. At this
-point, your Quay cluster will be completely down and inaccesible. It is
+point, your {productname} cluster will be completely down and inaccessible. It is
 suggested to inform your users of a maintenance window as they will not be
 able to access their images during this time.

 === Ensure only the quay pod is started
-When Quay pods start, they will look at the database to determine whether all
+When {productname} pods start, they will look at the database to determine whether all
 required database schema changes are applied. If the schema changes are not
 applied, which is more than likely going to be the case when upgrading from
-Quay v3.2 to v3.3.0, then the Quay pod will automatically begin running all
-migrations. If multiple Quay instances are running simultaenously, they may
+{productname} v3.2 to {productminv}, then the Quay pod will automatically begin running all
+migrations. If multiple {productname} instances are running simultaneously, they may
 all attempt to update or modify the database at the same time which may result
 in unexpected issues.
+
 To ensure that the migrations are run correctly, do not specify more than a
 single Quay replica to be started. Note that the default quantity of Quay pod
 replicas is 1, so unless you changed it, there is no work to be done here.

 === Uninstall the Quay Operator
-Verify that all Quay-related deployments and pods no longer exist within your
-namespace. Ensure that no other Quay deployments depend upon the installed
-Quay Operator v1.0.2 (or earlier).
+Verify that all {productname}-related deployments and pods no longer exist within your
+namespace. Ensure that no other {productname} deployments depend upon the installed
+Quay Operator v1.0.2 (or earlier). Type `oc get pod` and `oc get deployment` to make
+sure they are gone.

 Using OpenShift, navigate to the `Operators > Installed Operators` page. The
 UI will present you with the option to delete the operator.
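+
+If you prefer the command line, you can also confirm that the old Operator's
+Subscription and ClusterServiceVersion are no longer present (the namespace
+below is only an example):
+
+```
+$ oc get subscription,csv -n quay-enterprise
+```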
@@ -1455,9 +1462,9 @@
 Previously, the Quay Operator (v1.0.2 and prior) were provided using the
 "community" Operator Hub catalog. In the latest release, the Quay Operator is
 released through official Red Hat channels.

-In the OpenShift UI, navigate to `Operators > OperatorHub` and then simply
-search for `Quay`. Ensure you are choosing the correct Quay Operator v3.3.0
-in the even that you encounter multiple, similar results. Simply click
+In the OpenShift console, navigate to `Operators > OperatorHub` and then simply
+search for `Quay`. Ensure you are choosing the correct Quay Operator {productminv}
+in the event that you encounter multiple, similar results. Simply click
 `install` and choose the correct namespace/project to install the operator.

 === Recreate the deployment
@@ -1467,9 +1474,9 @@
 documented in this upgrade process:

 . Your CR is updated to reflect any differences in the latest operator's
 schema (CRD).
-. Quay Operator v3.3.0 is installed into your project/namespace
-. Any secrets necessary to deploy Quay exist
-. Your CR defines either 1 Quay Pod replica or does not specify any quanity
+. Quay Operator {productminv} is installed into your project/namespace
+. Any secrets necessary to deploy {productname} exist
+. Your CR defines either 1 Quay Pod replica or does not specify any quantity
 of Quay replicas which defaults to 1.

 Once you are ready, simply create your QuayEcosystem by executing the command:
@@ -1481,7 +1488,6 @@
 $ oc create -f new_deployment.yaml
 ```

 At this point, the Quay Operator will begin to deploy Redis, the Quay Config
 Application, and finally your (single) Quay Pod.
-
 === Monitor the database schema update progress
-Assuming that you are upgrading from Quay v3.2 to Quay v3.3, it will be
+Assuming that you are upgrading from {productname} v3.2 to {productname} v3.3, it will be
@@ -1491,6 +1497,15 @@
 viewed in your Quay pod's logs.

 Do not proceed with any additional steps until you are sure that the database
 migrations are complete.

 [NOTE]
 ====
 These migrations should occur early in the pod's logs so it may be easy to overlook them.
@@ -1503,7 +1518,8 @@
 deployed to your Openshift cluster, it is time to verify your configuration
 and scale as needed.

 You can compare the results of the current configuration with the previous
-configuration referencing, the documentation gathered in the first step of this
+configuration referencing the documentation gathered in the first step of this
 process. It is recommended to pay close attention to your hostname(s) and
 glance at all logs to look for any issues that may not have been obvious or
 caught by the Quay Operator.
@@ -1511,13 +1527,17 @@ caught by the Quay Operator.

 It is also recommended to perform a quick "smoke test" on your environment to
 ensure that the major functionality is working as expected. One example test
 may include performing pushes and pulls from the registry on existing, and new,
-images. Another example may be accessing Quay's UI as a registered user and
+images. Another example may be accessing the {productname} UI as a registered user and
 ensuring that the expected TLS certificate is used.
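+
+For example, a basic smoke test might log in to the registry and then pull and
+push an existing image using podman or docker (the hostname and repository
+below are placeholders for values from your own environment):
+
+```
+$ podman login quay.example.com
+$ podman pull quay.example.com/myorg/myimage:latest
+$ podman push quay.example.com/myorg/myimage:latest
+```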

 If you rely on the Quay Operator to generate a self-signed TLS certificate
 then keep in mind that a new certificate may have been created by this process.

-If multiple replicas are needed to scale your Quay registry, it is now safe
-to change the replica count to your desired quantity.
+If multiple replicas are needed to scale your {productname} registry, it is now safe
+to change the replica count to your desired quantity. For example,
+to scale out the quay pod, you might run `oc edit quayecosystem demo-quayecosystem`,
+then change `replicas: 1` to `replicas: 2`, or another desired number.

-Finally, it would be highly recommended to ensure your store your configuration
+Finally, it would be highly recommended to ensure you store your configuration
 and any relevant OpenShift secrets in a safe, preferably encrypted, backup.
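+
+For example, you might export the secrets in your project to a file and then
+store that file in a secure location, since it contains sensitive data:
+
+```
+$ oc get secrets -o yaml > secrets-backup.yaml
+```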