diff --git a/deploy_quay_on_openshift_op/master.adoc b/deploy_quay_on_openshift_op/master.adoc
index 8ac91214a..03eca06ee 100644
--- a/deploy_quay_on_openshift_op/master.adoc
+++ b/deploy_quay_on_openshift_op/master.adoc
@@ -1,15 +1,15 @@
 include::modules/attributes.adoc[]

 [id='deploy-quay-on-openshift']
-= Deploy {productname} on OpenShift with Quay Setup Operator
+= Deploy {productname} on OpenShift with Quay Operator

 {productname} is an enterprise-quality container registry. Use {productname}
 to build and store container images, then make them available to deploy
 across your enterprise.

 Red Hat currently supports two approaches to deploying {productname} on OpenShift:

-* **Deploy {productname} with the {productname} Setup Operator**:
-The {productname} Setup Operator provides a simple method to deploy and
+* **Deploy {productname} on OpenShift with Quay Operator**:
+The Quay Operator provides a simple method to deploy and
 manage a {productname} cluster. This is now the preferred procedure
 for deploying {productname} on OpenShift
 and is covered in this guide.
@@ -18,6 +18,11 @@ and is covered in this guide.
 provides a set of yaml files that you deploy individually to set up your
 {productname} cluster. This procedure is expected to eventually be deprecated.

+As of {productname} v3.3, this operator changed from the "Quay Setup Operator" to simply
+the "Quay Operator." That is because the operator can now be used for more than just
+initially deploying {productname}; it can also be used for ongoing configuration
+and maintenance of a {productname} cluster.
+
 include::modules/con_quay_intro.adoc[leveloffset=+1]
 include::modules/con_quay_openshift_prereq.adoc[leveloffset=+1]
diff --git a/modules/con_deploy_quay_start_using.adoc b/modules/con_deploy_quay_start_using.adoc
index 18e83ae91..5e6bb3a95 100644
--- a/modules/con_deploy_quay_start_using.adoc
+++ b/modules/con_deploy_quay_start_using.adoc
@@ -2,5 +2,5 @@ With {productname} now running, you can:
 * Select Tutorial from the Quay home page to try the 15-minute tutorial.
 In the tutorial, you learn to log into Quay, start a container, create images,
 push repositories, view repositories, and change repository permissions with Quay.
-* Refer to the link:https://access.redhat.com/documentation/en-us/red_hat_quay/3/html-single/use_red_hat_quay/[Use {productname}] for information on working
+* Refer to the link:https://access.redhat.com/documentation/en-us/red_hat_quay/{producty}/html-single/use_red_hat_quay/[Use {productname}] for information on working
 with {productname} repositories.
diff --git a/modules/con_quay_intro.adoc b/modules/con_quay_intro.adoc
index b286c591e..c341ad5ee 100644
--- a/modules/con_quay_intro.adoc
+++ b/modules/con_quay_intro.adoc
@@ -4,7 +4,7 @@ Features of {productname} include:
 * High availability
 * Geo-replication
-* Repository mirroring (link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] in {productname} v3.1, supported in v3.2)
+* Repository mirroring
 * Docker v2, schema 2 (multiarch) support
 * Continuous integration
 * Security scanning with Clair
@@ -35,4 +35,7 @@ For supported deployments, you need to use one of the following types of storage
 * **Private cloud storage**: In private clouds, an S3 or Swift compliant
 Object Store is needed, such as Ceph RADOS, or OpenStack Swift.

+[WARNING]
+====
+Do not use the "Locally mounted directory" storage engine for any production
+configuration. Mounted NFS volumes are not supported.
+Local storage is meant for {productname} test-only installations.
+====
diff --git a/modules/con_quay_openshift_prereq.adoc b/modules/con_quay_openshift_prereq.adoc
index 78e2525b3..d53beaae6 100644
--- a/modules/con_quay_openshift_prereq.adoc
+++ b/modules/con_quay_openshift_prereq.adoc
@@ -5,13 +5,13 @@ Here are a few things you need to know before you begin the {productname} on OpenShift deployment:
-* *OpenShift cluster*: You need a privileged account to an OpenShift 3.x or 4.x cluster on which to deploy
+* *OpenShift cluster*: You need a privileged account to an OpenShift 4.2 or later
+cluster on which to deploy
 {productname}. That account must have the ability to create namespaces at the cluster scope.
-To use {productname} builders, OpenShift 3 is required.

 * *Storage*: AWS cloud storage is used as an example in the following
 procedure. As an alternative, you can create Ceph cloud storage using steps
-from the link:https://access.redhat.com/documentation/en-us/red_hat_quay/3/html-single/deploy_red_hat_quay_-_high_availability/#set_up_ceph[Set up Ceph] section of the high availability {productname} deployment guide.
+from the link:https://access.redhat.com/documentation/en-us/red_hat_quay/{producty}/html-single/deploy_red_hat_quay_-_high_availability/#set_up_ceph[Set up Ceph] section of the high availability {productname} deployment guide.
 The following is a list of other supported cloud storage:
 ** Amazon S3 (see link:https://access.redhat.com/solutions/3680151[S3 IAM Bucket Policy] for details on configuring an S3 bucket policy for {productname})
@@ -20,7 +20,7 @@ The following is a list of other supported cloud storage:
 ** Ceph Object Gateway (RADOS)
 ** OpenStack Swift
 ** CloudFront + S3
-** NooBaa S3 Storage (See link:https://access.redhat.com/articles/4356091[Configuring Red Hat OpenShift Container Storage for Red Hat Quay], currently link:https://access.redhat.com/support/offerings/techpreview[Technology Preview])
+** OpenShift Container Storage (NooBaa S3 Storage) (see link:https://access.redhat.com/articles/4356091[Configuring Red Hat OpenShift Container Storage for Red Hat Quay])

 * *Services*: The OpenShift cluster must have enough capacity to run the
 following containerized services:
@@ -43,3 +43,5 @@ for details on support for third-party databases and other components.
 tutorial content to your {productname} configuration.
 ** *{productname}*: The quay container provides the features to manage
 the {productname} registry.
+
+** *Clair*: The clair-jwt container provides Clair scanning services for the registry.
diff --git a/modules/proc_deploy-quay-openshift-operator.adoc b/modules/proc_deploy-quay-openshift-operator.adoc
index 529a5f6fe..dafb42467 100644
--- a/modules/proc_deploy-quay-openshift-operator.adoc
+++ b/modules/proc_deploy-quay-openshift-operator.adoc
@@ -7,36 +7,53 @@ This procedure:
-* Installs the {productname} Setup Operator on OpenShift from the OperatorHub
-* Deploys a {productname} cluster on OpenShift with the Setup Operator
+* Installs the {productname} Operator on OpenShift from the OperatorHub
+* Deploys a {productname} cluster on OpenShift with the Quay Operator

 You have the option of changing dozens of settings before deploying the
-{productname} Setup Operator.
+{productname} Operator.
 The Operator automates the entire start-up process, bypassing the
 {productname} config tool. You can choose to skip the Operator's automated
 configuration and use the config tool directly.
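+
+If you do skip the automated configuration, the `skipSetup` property (covered
+later in this guide) is what turns it off. A minimal sketch:
+
+```
+apiVersion: redhatcop.redhat.io/v1alpha1
+kind: QuayEcosystem
+metadata:
+  name: example-quayecosystem
+spec:
+  quay:
+    imagePullSecretName: redhat-pull-secret
+    skipSetup: true
+```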
+As of {productname} 3.3, the Quay Setup Operator is now simply called the Quay Operator.
+Features have been added to the Operator that allow you to use it to
+maintain and update your {productname} cluster after deployment.
+Updates can be done directly through the Operator or using the {productname} Config Tool.
+
 .Prerequisites

-* An OpenShift 3.x or 4.x cluster
+* An OpenShift 4.2 (or later) cluster
 * Cluster-scope admin privilege to the OpenShift cluster

-.Procedure
-
-You have two choices for deploying the {productname} Operator:
-
-* **Advanced Setup**: Go through the `Customizing your
-{productname} cluster`
-section and change any setting you desire
-before running this procedure.
-
-* **Standard Setup**: Just step through the procedure as is to use
-all the default setting.
-
-== Install the {productname} Setup Operator
+== Production deployments
+
+For a non-production deployment, you can take the defaults and get a {productname}
+cluster up and running quickly. For a production deployment, you should go through
+all the configuration options described later in this document. Of those, however,
+you should certainly consider at least the following (a sample custom resource
+sketch follows this list):
+
+* **Superuser password**: Change the default superuser password.
+* **Config tool password**: Change the {productname} Config Tool password from the default.
+* **Quay image**: If available, replace the quay image associated with the current
+Operator with a later quay image.
+* **Replica count**: Based on your expected demand, increase the replica count to
+set how many instances of the quay container will run.
+* **Memory request**: Choose how much memory to assign to the quay container,
+based on expected demand.
+* **CPU request**: Select the amount of CPU you want assigned, based on expected demand.
+* **Quay database**: Consider using an existing PostgreSQL database that is
+outside of the OpenShift cluster and that has commercial support.
+* **Storage backend**: Choose a reliable and supported storage backend. Local storage
+and NFS storage are not supported for production deployments.
+* **Certificates**: Supply your own certificates to communicate with {productname},
+as well as to access other services, such as storage and LDAP services.
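+
+As a production-oriented starting point, here is a sketch of a QuayEcosystem
+custom resource that raises the replica count and resource requests. The
+`replicas` and `resources` field names are assumptions drawn from the
+checklist above; verify them against the CRD of your installed Operator:
+
+```
+apiVersion: redhatcop.redhat.io/v1alpha1
+kind: QuayEcosystem
+metadata:
+  name: example-quayecosystem
+spec:
+  quay:
+    imagePullSecretName: redhat-pull-secret
+    replicas: 3            # number of quay container instances
+    resources:
+      requests:
+        memory: 8Gi        # size for expected demand
+        cpu: "2"
+```
+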
+== Install the {productname} Operator

 . From the OpenShift console, select Operators -> OperatorHub, then select
-the {productname} Operator.
+the {productname} Operator. If there is more than one, be sure to use the
+Red Hat certified Operator and not the community version.

 . Select Install. The Operator Subscription page appears.
@@ -48,6 +65,8 @@ the {productname} Operator.

 * Approval Strategy: Choose to approve automatic or manual updates

+. Select Subscribe.
+
 == Deploy a {productname} ecosystem

 . See the
@@ -271,7 +290,7 @@ enables replication and high availability of images.
 ```
 apiVersion: redhatcop.redhat.io/v1alpha1
 kind: QuayEcosystem
-metadaLocalStorageta:
+metadata:
   name: example-quayecosystem
 spec:
   quay:
@@ -347,7 +366,8 @@ specify the underlying storage for the {productname} registry:

 ==== Local Storage

-The following is an example for configuring the registry to make use of `local` storage:
+The following is an example for configuring the registry to make use of `local` storage
+(note that local storage is not supported for production deployments):

 ```
 apiVersion: redhatcop.redhat.io/v1alpha1
@@ -615,6 +635,12 @@ The following is a comprehensive list of properties for the
 The following is an example for configuring the registry to make use of
 S3 storage on Amazon Web Services.

+[NOTE]
+====
+CloudFront cannot currently be configured using the CR, due to a known issue.
+You can, however, manage it through the {productname} Config Tool.
+====
+
 ```
 apiVersion: redhatcop.redhat.io/v1alpha1
 kind: QuayEcosystem
@@ -650,6 +676,29 @@ The following is a comprehensive list of properties for the `cloudfrontS3` regis
 | privateKeyFilename| CloudFront Private Key| No| Yes
 |=======

+== Repository Mirroring
+{productname} provides the capability to create container image repositories
+that exactly match the content of external registries. This functionality can
+be enabled by setting `enableRepoMirroring: true` as shown below:
+
+```
+apiVersion: redhatcop.redhat.io/v1alpha1
+kind: QuayEcosystem
+metadata:
+  name: example-quayecosystem
+spec:
+  quay:
+    imagePullSecretName: redhat-pull-secret
+    enableRepoMirroring: true
+```
+
+The following additional options are also available (a combined sketch follows
+this list):
+
+* `repoMirrorTLSVerify` - Require HTTPS and verify certificates of the Quay registry during mirroring
+* `repoMirrorServerHostname` - URL for use by the skopeo copy command
+* `repoMirrorEnvVars` - Environment variables to be applied to the repository mirror container
+* `repoMirrorResources` - Compute resources to be applied to the repository mirror container
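+
+Here is a combined sketch of those options. The placement of these properties
+directly under the `quay` field, alongside `enableRepoMirroring`, is an
+assumption, and the hostname value is illustrative:
+
+```
+apiVersion: redhatcop.redhat.io/v1alpha1
+kind: QuayEcosystem
+metadata:
+  name: example-quayecosystem
+spec:
+  quay:
+    imagePullSecretName: redhat-pull-secret
+    enableRepoMirroring: true
+    repoMirrorTLSVerify: true
+    repoMirrorServerHostname: example-quayecosystem-quay-quay-enterprise.apps.openshift.example.com
+```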

 == Injecting configuration files

 Files related to the configuration of {productname} are located in
@@ -718,7 +767,7 @@ spec:
     - secretName: quayconfigfile
       files:
       - key: myprivatekey.pem
-        filename: cloudfront.pemQuay
+        filename: cloudfront.pem
       - key: myExtraCaCert.crt
         type: extraCaCert
 ```
@@ -746,6 +795,47 @@ spec:
     skipSetup: true
 ```

+== Specifying the {productname} route
+{productname} makes use of an OpenShift route to enable ingress. The hostname
+for this route is automatically generated as per the configuration of the
+OpenShift cluster. Alternatively, the hostname for this route can be explicitly
+specified using the `hostname` property under the `externalAccess` field as shown below:
+
+```
+apiVersion: redhatcop.redhat.io/v1alpha1
+kind: QuayEcosystem
+metadata:
+  name: example-quayecosystem
+spec:
+  quay:
+    imagePullSecretName: redhat-pull-secret
+    externalAccess:
+      hostname: example-quayecosystem-quay-quay-enterprise.apps.openshift.example.com
+```
+
+== Specifying a {productname} configuration route
+
+During the development process, you may want to test the
+provisioning and setup of your {productname} server. By default,
+the operator will use the internal service to communicate with
+the configuration pod. However, when running external to the cluster,
+you will need to specify the hostname that the setup process can use.
+
+Specify the `configHostname` as shown below:
+
+```
+apiVersion: redhatcop.redhat.io/v1alpha1
+kind: QuayEcosystem
+metadata:
+  name: example-quayecosystem
+spec:
+  quay:
+    imagePullSecretName: redhat-pull-secret
+    externalAccess:
+      configHostname: example-quayecosystem-quay-config-quay-enterprise.apps.openshift.example.com
+```
+
 == Providing SSL certificates

 {productname}, as a secure registry, makes use of SSL certificates to
@@ -761,16 +851,16 @@ SSL certificates can be provided and used instead of having the operator generat

 The secret containing custom certificates must define the following keys:

-* **ssl.cert**: All of the certificates (root, intermediate, certificate) concatinated into a single file
+* **tls.cert**: All of the certificates (root, intermediate, certificate) concatenated into a single file

-* **ssl.key**: Private key as for the SSL certificate
+* **tls.key**: Private key for the SSL certificate

 Create a secret containing the certificate and private key:

 ```
 oc create secret generic custom-quay-ssl \
-  --from-file=ssl.key= \
-  --from-file=ssl.cert=
+  --from-file=tls.key= \
+  --from-file=tls.cert=
 ```

-The secret containing the certificates are referenced using the `sslCertificatesSecretName` property as shown below:
+The secret containing the certificates is referenced using the `secretName` property under the `externalAccess.tls` field as shown below:
@@ -783,16 +873,18 @@ metadata:
 spec:
   quay:
     imagePullSecretName: redhat-pull-secret
-    sslCertificatesSecretName: custom-quay-ssl
+    externalAccess:
+      tls:
+        secretName: custom-quay-ssl
 ```

-== Specifying the {productname} route
-
-{productname} makes use of an OpenShift route to enable ingress. The hostname for
-this route is automatically generated as per the configuration of the
-OpenShift cluster. Alternatively, the hostname for this route can be
-explicitly specified using the `hostname` property under the `quay` field
-as shown below:
+== TLS Termination
+
+{productname} can be configured to protect connections using SSL certificates.
+By default, SSL communication is terminated within {productname}. There are
+several different ways that SSL termination can be configured, including
+omitting the use of certificates altogether. TLS termination is determined by
+the `termination` property as shown below:

 ```
 apiVersion: redhatcop.redhat.io/v1alpha1
 kind: QuayEcosystem
@@ -801,34 +893,29 @@ metadata:
   name: example-quayecosystem
 spec:
   quay:
-    hostname: example-quayecosystem-quay-quay-enterprise.apps.openshift.example.com
     imagePullSecretName: redhat-pull-secret
+    externalAccess:
+      tls:
+        termination: passthrough
 ```

-== Specifying a {productname} configuration route
-
-During the development process, you may want to test the provisioning and setup of {productname}. By default, the Operator will use the internal service to communicate with the configuration pod. However, when running external to the cluster, you will need to specify the ingress location for which the setup process can use.
-
-Specify the `configHostname` as shown below:
+The example above is the default configuration applied to {productname}.
+Alternate options are available as described below:

-```
-apiVersion: redhatcop.redhat.io/v1alpha1
-kind: QuayEcosystem
-metadata:
-  name: example-quayecosystem
-spec:
-  quay:
-    configHostname: example-quayecosystem-quay-config-quay-enterprise.apps.openshift.example.com
-    imagePullSecretName: redhat-pull-secret
-```
+[width="75%"]
+|=======
+| TLS Termination Type |Description |Notes
+| passthrough |SSL communication is terminated at Quay |Default configuration
+| edge |SSL communication is terminated prior to reaching Quay. Traffic reaching Quay is not encrypted (HTTP) |
+| none | All communication is unencrypted |
+|=======
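+
+For example, here is a minimal sketch (reusing the `externalAccess.tls` schema
+shown above) that terminates SSL at the OpenShift router instead of within
+{productname}:
+
+```
+apiVersion: redhatcop.redhat.io/v1alpha1
+kind: QuayEcosystem
+metadata:
+  name: example-quayecosystem
+spec:
+  quay:
+    imagePullSecretName: redhat-pull-secret
+    externalAccess:
+      tls:
+        termination: edge
+```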

 = Configuration deployment after initial setup

-In order to conserve resources, the configuration deployment of
-{productname} is removed after the initial setup. In certain cases,
-there may be a need to further configure the {productname}
-environment. To specify that the configuration deployment should
-be retained, the `keepConfigDeployment` property within the `quay` object can be set as `true` as shown below:
+By default, the {productname} Config Tool pod is left running even after the
+initial setup process. To configure the Config Tool pod to be
+removed after setup, the `keepConfigDeployment` property within the
+`quay` object can be set to `false` as shown below:

 ```
 apiVersion: redhatcop.redhat.io/v1alpha1
@@ -838,7 +925,7 @@ metadata:
 spec:
   quay:
     imagePullSecretName: redhat-pull-secret
-    keepConfigDeployment: true
+    keepConfigDeployment: false
 ```

 == Setting Redis password
@@ -885,6 +972,48 @@ spec:
     imagePullSecretName: redhat-pull-secret
 ```

+The Quay Operator sets the Clair database connection string with the parameter
+`sslmode=disable` if no parameters are specified in the QuayEcosystem custom
+resource. If you have an SSL-enabled Postgres database, or want to add
+other parameters, provide `key: value` pairs as strings (for example,
+`connect_timeout: '10'`) under the `connectionParameters` object.
+
+For example:
+
+```
+apiVersion: redhatcop.redhat.io/v1alpha1
+kind: QuayEcosystem
+metadata:
+  name: example-quayecosystem
+spec:
+  quay:
+    imagePullSecretName: redhat-pull-secret
+  clair:
+    enabled: true
+    imagePullSecretName: redhat-pull-secret
+    database:
+      connectionParameters:
+        sslmode: require
+        connect_timeout: '10'
+```
+
+Supported connection string parameters:
+
+* **sslmode** - Whether or not to use SSL (the default is `disable`, which is not the default for libpq)
+* **connect_timeout** - Maximum wait for connection, in seconds. Zero or not specified means wait indefinitely.
+* **sslcert** - Cert file location. The file must contain PEM encoded data.
+* **sslkey** - Key file location. The file must contain PEM encoded data.
+* **sslrootcert** - The location of the root certificate file. The file must contain PEM encoded data.
+
+Valid values for `sslmode` are:
+
+* **disable** - No SSL
+* **require** - Always SSL (skip verification)
+* **verify-ca** - Always SSL (verify that the certificate presented by the server
+was signed by a trusted CA)
+* **verify-full** - Always SSL (verify that the certificate presented by the
+server was signed by a trusted CA and the server host name matches the one in the certificate)
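+
+For instance, here is a sketch that verifies the server certificate against a
+trusted CA. The certificate path is illustrative; mount it into the Clair pod
+in whatever way suits your deployment:
+
+```
+apiVersion: redhatcop.redhat.io/v1alpha1
+kind: QuayEcosystem
+metadata:
+  name: example-quayecosystem
+spec:
+  quay:
+    imagePullSecretName: redhat-pull-secret
+  clair:
+    enabled: true
+    imagePullSecretName: redhat-pull-secret
+    database:
+      connectionParameters:
+        sslmode: verify-ca
+        sslrootcert: /etc/clair/ca.pem   # illustrative path to a PEM CA bundle
+```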
+
 === Clair update interval

 Clair routinely queries CVE databases in order to build its own internal
@@ -1081,6 +1210,66 @@ Environment variables for the Quay configuration pod can be managed by specifyin
 User defined environment variables are given precedence over those managed by
 the operator. Undesirable results may occur if conflicting keys are used.
 ====

+= Configuring {productname} (post-deployment)
+
+After the Quay Operator deploys {productname}, by default the Config Tool
+continues to run. Going forward, you can use the Config Tool or the
+Quay Operator itself to update and maintain your {productname} deployment.
+
+== Using the Config Tool
+The {productname} Config Tool provides a web UI for enabling or
+modifying many of the settings in your {productname} cluster.
+To use the Config Tool:
+
+. Get the route to the Config Tool by typing:
++
+```
+$ oc get route
+NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
+... example-quayecosystem-quay-config.example.com ...
+```
+. Add `https://` to the HOST/PORT entry for the Config Tool and enter it into your
+web browser.
+
+. When prompted, log in using the Config Tool user name and password
+(`quayconfig` and `quay`, by default).
+
+. Select `Modify configuration for this cluster`.
+
+At this point you can change the configuration as you choose.
+When you are done, select Save Configuration Changes. Here are a few
+things you should know about using the Config Tool:
+
+* Most changes you make will be checked for accuracy. For example,
+if you change the location of a service, the Config Tool will check
+that it can reach that service before saving the configuration.
+If the connection fails, you have the chance to modify the setting
+before saving.
+
+* After checking for accuracy, you have the choice of
+continuing to edit or completing your changes.
+
+* After you make changes and they are accepted, those changes
+are deployed to all {productname} instances in the cluster.
+There is no need to stop and restart those pods manually.
+
+== Using the Quay Operator
+Updating your {productname} cluster using the Quay Operator offers
+a way to deploy changes without having to click through a web UI.
+Here are some things you should know about changing settings
+through the Operator:
+
+* The same level of error checking is not performed when you
+change settings directly through the Quay Operator. If, for example,
+you provide the wrong address to a service, the connection to that
+service will probably just fail and you will have to track down
+the problem through OpenShift.
+
+* Once you make a change, those changes will not automatically be
+applied to your {productname} instances. To have the changes take
+effect, you will have to restart the {productname} pods manually.
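+
+For example, one way to force such a restart is to delete the Quay pods and
+let their deployment re-create them with the new configuration (the pod name
+below is illustrative; list yours with `oc get pods`):
+
+```
+$ oc get pods
+$ oc delete pod example-quayecosystem-quay-5b7f9cc9-abcde
+```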
+

 = Troubleshooting

 To resolve issues running, configuring and utilizing the operator,
@@ -1120,32 +1309,215 @@ run the operator locally:
 ```
 $ operator-sdk up local --namespace=quay-enterprise
 ```

-= Upgrading {productname} and Clair
+= Upgrading {productname}
+The Quay Operator v3.3.0 has many changes from v1.0.2. The most notable
+change, which affects the upgrade process, is the backwards-incompatible
+change to the CRD. Ultimately, the CR (Custom Resource) used to deploy
+{productname} using the operator may have to be modified accordingly.
+
+== Upgrade Prerequisites
+Ensure that your deployment is using a supported persistence layer and
+database. A production {productname} deployment run by the Operator should *not* be
+relying on the Postgres instance or an OpenShift volume that has
+been created by the Operator.
+
+If you are using a Postgres instance or OpenShift volume that was created
+by the Operator, the upgrade path is not supported as the removal of the old
+Operator will cascade the deletion of your database and volume. It may be
+possible to manually migrate your data to supported storage mechanisms but
+this is not within the scope of the typical, or supported, upgrade path.
+
+Please read through the entire guide before following any steps as this upgrade
+path is potentially destructive and there is no guaranteed roll-back mechanism.
+
+== Upgrade Process Summary
+
+Here are the basic steps for upgrading the {productname} cluster
+you originally deployed from the v1.0.2 Quay Setup Operator to
+the v3.3 Quay Operator:
+
+. Document all configuration related to your current deployment.
+. Copy your CR and modify any configuration values as needed.
+. Remove your current deployment using `oc delete -f deployment.yaml`.
+. Ensure that only one quay pod will be started, as this Pod will perform any
+database migrations needed before scaling up the entire cluster.
+. Uninstall the old Quay Operator (v1.0.2 or older).
+. Install the latest Quay Operator (v3.3.0).
+. Create your CR by issuing the command `oc create -f new_deployment.yaml`.
+. Watch the logs of your Quay Pod until all migrations have finished.
+. At this point, it is safe to scale up your Quay cluster if desired.
+
+=== Document the existing {productname} deployment
+
+For the purpose of ensuring a smooth upgrade, it is important to ensure you
+have all available configuration details *before* deleting your existing
+deployment. In the case that you must work with Red Hat Support, this
+information will aid them with the details needed to bring your cluster back
+to its original state. At minimum, the following information should be
+gathered:
+
+. The Custom Resource used to create your current Quay deployment.
+. The output of running `oc get QuayEcosystem -o yaml > quayecosystem.yaml`
+in your Project or Namespace.
+. The hostnames currently used to access Quay, Clair, Quay's Config App,
+Postgres, Redis, and Clair's Postgres instance. This can be achieved by
+executing: `oc get routes -o yaml > old_routes.yaml`
+. Any authentication details required to connect to your Postgres instance(s)
+for Quay and Clair.
+. Any authentication details required to connect to your data persistence
+provider, such as AWS S3.
+. A backup of your Quay configuration secret, which contains the `config.yaml`
+along with any certificates needed. This can be accomplished by using the
+following command:
++
+```
+$ oc get secret quay-enterprise-config-secret -o yaml > config-secret.yaml
+```
+
+=== Update the CR
+
+Ensure a backup is created of your original Custom Resource (CR) before making any
+changes.
+
+If your deployment does not specify any specific network-related configuration
+values, this step may not be necessary. Please refer to the documentation to
+ensure that the configuration options in your current CR are still
+accurate for the Quay Operator v3.3.0.
+
+In the case that you have specified options related to the management of
+networking, such as using a LoadBalancer or specifying a custom hostname,
+please reference the latest documentation to update them with the schema
+changes included in Quay Operator v3.3.0.
+
+If you have overridden the image used for Quay or Clair, please keep in mind
+that Quay Operator v3.3.0 specifically supports Quay v3.3.0 and Clair v3.3.0.
+It is advisable to remove those image overrides to use the latest, supported
+releases of Quay and Clair in your deployment. Any other images may not be
+supported.
+
+=== Remove the existing deployment
+
+[WARNING]
+====
+This step will remove your entire {productname} deployment. Use caution and
+ensure you understand all steps required to recreate your cluster
+before removing your existing deployment.
+====
+
+The Quay Operator v3.3.0 is now distributed using the official Red Hat channels.
+Previously, Quay Operator v1.0.2 (and below) was provided using "Community"
+channels. Additionally, v3.3.0 offers no automatic upgrade path, which requires
+your Quay deployment and the Quay Operator to be completely removed and
+replaced.
-Before upgrading to a new version of {productname} or Clair, refer to the
-link:https://access.redhat.com/documentation/en-us/red_hat_quay/3/html-single/upgrade_red_hat_quay/index[Upgrade {productname}] guide for details.
-The instructions here tell you how to change the quay and clair-jwt containers,
-but do not provide the full upgrade instructions.
+
+Fortunately, the important data is stored in your Postgres database and your
+storage backend, so it is advisable to ensure you have proper backups for both.
-At the point in the upgrade instructions where you are ready to identify the new quay and clair-jwt containers, here is what you do:
+
+Once you are ready, remove your existing deployment by issuing the following
+command:

 ```
-$ oc edit quayecosystem/quayecosystem
+$ oc delete -f deployment.yaml
 ```

-Find and update the following entries:
+All Quay and Clair pods will be removed as well as the Redis pod. At this
+point, your Quay cluster will be completely down and inaccessible. It is
+suggested to inform your users of a maintenance window as they will not be
+able to access their images during this time.
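+
+Before moving on, you can confirm that the teardown has completed by watching
+the pods terminate (a simple sanity check, not a required step):
+
+```
+$ oc get pods -w
+```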
+
+=== Ensure only the quay pod is started
+
+When Quay pods start, they will look at the database to determine whether all
+required database schema changes are applied. If the schema changes are not
+applied, which is more than likely going to be the case when upgrading from
+Quay v3.2 to v3.3.0, then the Quay pod will automatically begin running all
+migrations. If multiple Quay instances are running simultaneously, they may
+all attempt to update or modify the database at the same time, which may
+result in unexpected issues.
+
+To ensure that the migrations are run correctly, do not specify more than a
+single Quay replica to be started. Note that the default quantity of Quay pod
+replicas is 1, so unless you changed it, there is no work to be done here.
+
+=== Uninstall the Quay Operator
+
+Verify that all Quay-related deployments and pods no longer exist within your
+namespace. Ensure that no other Quay deployments depend upon the installed
+Quay Operator v1.0.2 (or earlier).
+
+Using OpenShift, navigate to the `Operators > Installed Operators` page.
+The UI will present you with the option to delete the operator.
-```
-image: quay.io/redhat/clair-jwt:vX.X.X
-image: quay.io/redhat/quay:vX.X.X
+
+=== Install the new Quay Operator
+
+Previously, the Quay Operator (v1.0.2 and prior) was provided using the
+"community" Operator Hub catalog. In the latest release, the Quay Operator is
+released through official Red Hat channels.
+
+In the OpenShift UI, navigate to `Operators > OperatorHub` and then simply
+search for `Quay`. Ensure you are choosing the correct Quay Operator v3.3.0
+in the event that you encounter multiple, similar results. Simply click
+`Install` and choose the correct namespace/project to install the operator.
+
+=== Recreate the deployment
+
+At this point, the following assumptions are made based upon the previous steps
+documented in this upgrade process:
+
+. Your CR is updated to reflect any differences in the latest operator's
+schema (CRD).
+. Quay Operator v3.3.0 is installed into your project/namespace.
+. Any secrets necessary to deploy Quay exist.
+. Your CR defines either 1 Quay Pod replica or does not specify any quantity
+of Quay replicas, which defaults to 1.
+
+Once you are ready, simply create your QuayEcosystem by executing the command:
+
+```
+$ oc create -f new_deployment.yaml
+```
+
+At this point, the Quay Operator will begin to deploy Redis, the Quay
+Config Application, and finally your (single) Quay Pod.
-Once saved, the operator will automatically apply the upgrade.
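+
+To watch the deployment come up and the migrations run, you can tail the logs
+of that single Quay pod (the pod name below is illustrative; find yours with
+`oc get pods`):
+
+```
+$ oc get pods
+$ oc logs -f example-quayecosystem-quay-5b7f9cc9-abcde
+```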
+
+=== Monitor the database schema update progress
+
+Assuming that you are upgrading from Quay v3.2 to Quay v3.3, it will be
+necessary for Quay to perform schema updates to your database. These can be
+viewed in your Quay pod's logs.
+
+Do not proceed with any additional steps until you are sure that the database
+migrations are complete.

 [NOTE]
 ====
-If you used a different name than `QuayEcosystem` for the custom resource
-to deploy your Quay ecosystem, you will have to replace the name to fit the proper value.
+These migrations should occur early in the pod's logs, so it may be easy to overlook them.
 ====
+
+=== Finalize the {productname} cluster upgrade
+
+Now that the latest release of {productname}, and optionally Clair, have been
+deployed to your OpenShift cluster, it is time to verify your configuration and
+scale as needed.
+
+You can compare the results of the current configuration with the previous
+configuration by referencing the documentation gathered in the first step of this
+process. It is recommended to pay close attention to your hostname(s) and
+glance at all logs to look for any issues that may not have been obvious or
+caught by the Quay Operator.
+
+It is also recommended to perform a quick "smoke test" on your environment to
+ensure that the major functionality is working as expected. One example test
+may include performing pushes and pulls from the registry on existing and new
+images. Another example may be accessing Quay's UI as a registered user and
+ensuring that the expected TLS certificate is used. If you rely on the Quay
+Operator to generate a self-signed TLS certificate, then keep in mind that a new
+certificate may have been created by this process.
+
+If multiple replicas are needed to scale your Quay registry, it is now safe
+to change the replica count to your desired quantity.
+
+Finally, it is highly recommended that you store your configuration
+and any relevant OpenShift secrets in a safe, preferably encrypted, backup.
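+
+As a minimal sketch of such a backup (the file name and the use of `gpg` for
+encryption are illustrative choices, not requirements):
+
+```
+$ oc get quayecosystem,secrets -o yaml > quay-backup.yaml
+$ gpg --symmetric quay-backup.yaml    # writes quay-backup.yaml.gpg; store it safely
+```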