1 change: 1 addition & 0 deletions architecture/images
36 changes: 18 additions & 18 deletions architecture/master.adoc
Original file line number Diff line number Diff line change
@@ -7,28 +7,27 @@ include::modules/attributes.adoc[]

include::modules/arch-intro.adoc[leveloffset=+1]

include::modules/arch-core-intro.adoc[leveloffset=+1]
include::modules/core-infrastructure.adoc[leveloffset=+2]
include::modules/core-distinct-registries.adoc[leveloffset=+3]
include::modules/arch-prereqs.adoc[leveloffset=+1]
include::modules/core-prereqs-storage.adoc[leveloffset=+2]
include::modules/core-prereqs-db.adoc[leveloffset=+2]
include::modules/core-prereqs-redis.adoc[leveloffset=+2]


include::modules/core-prereqs.adoc[leveloffset=+2]
include::modules/core-prereqs-storage.adoc[leveloffset=+3]
include::modules/core-prereqs-db.adoc[leveloffset=+3]
include::modules/core-prereqs-redis.adoc[leveloffset=+3]
include::modules/core-infrastructure.adoc[leveloffset=+1]
include::modules/core-distinct-registries.adoc[leveloffset=+2]


include::modules/core-sample-quay-on-prem.adoc[leveloffset=+2]
include::modules/core-example-deployment.adoc[leveloffset=+2]
include::modules/deployment-topology.adoc[leveloffset=+2]
include::modules/deployment-topology-with-storage-proxy.adoc[leveloffset=+2]

include::modules/public-cloud-intro.adoc[leveloffset=+2]
include::modules/public-cloud-aws.adoc[leveloffset=+3]
include::modules/public-cloud-azure.adoc[leveloffset=+3]


include::modules/core-sample-quay-on-prem.adoc[leveloffset=+1]
include::modules/core-example-deployment.adoc[leveloffset=+2]
include::modules/deployment-topology.adoc[leveloffset=+2]
include::modules/deployment-topology-with-storage-proxy.adoc[leveloffset=+2]

include::modules/public-cloud-intro.adoc[leveloffset=+1]
include::modules/public-cloud-aws.adoc[leveloffset=+2]
include::modules/public-cloud-azure.adoc[leveloffset=+2]

include::modules/security-intro.adoc[leveloffset=+1]
include::modules/clair-intro.adoc[leveloffset=+2]
@@ -57,6 +56,7 @@ include::modules/georepl-mixed-storage.adoc[leveloffset=+3]
include::modules/mirroring-versus-georepl.adoc[leveloffset=+2]
include::modules/airgap-intro.adoc[leveloffset=+2]
include::modules/airgap-clair.adoc[leveloffset=+3]

//access control
include::modules/access-control-intro.adoc[leveloffset=+1]
include::modules/tenancy-model.adoc[leveloffset=+2]
@@ -70,18 +70,18 @@ include::modules/fine-grained-access-control-intro.adoc[leveloffset=+3]
include::modules/ldap-binding-groups-intro.adoc[leveloffset=+4]
include::modules/ldap-filtering-intro.adoc[leveloffset=+4]
include::modules/quay-sso-keycloak-intro.adoc[leveloffset=+4]

//sizing
include::modules/sizing-intro.adoc[leveloffset=+1]
include::modules/sizing-sample.adoc[leveloffset=+2]
include::modules/subscription-intro.adoc[leveloffset=+2]

include::modules/quay-internal-registry-intro.adoc[leveloffset=+2]



include::modules/scalability-intro.adoc[leveloffset=+1]
//include::modules/scalability-intro.adoc[leveloffset=+1]


include::modules/build-automation-intro.adoc[leveloffset=+1]
//include::modules/build-automation-intro.adoc[leveloffset=+1]

include::modules/integration-intro.adoc[leveloffset=+1]
//include::modules/integration-intro.adoc[leveloffset=+1]
4 changes: 2 additions & 2 deletions modules/access-control-intro.adoc
@@ -1,6 +1,6 @@
[[access-control-intro]]
= Access control
= Access control in {productname}

{productname} provides both Role Based Access Control (RBAC) and Fine-Grained Access Control, and has team features that allow for limited access control of repositories, organizations, and user privileges. {productname} access control features also provide support for dispersed organizations.
{productname} provides both role-based access control (RBAC) and fine-grained access control, and has team features that allow for limited access control of repositories, organizations, and user privileges. {productname} access control features also provide support for dispersed organizations.


11 changes: 0 additions & 11 deletions modules/arch-core-intro.adoc

This file was deleted.

77 changes: 66 additions & 11 deletions modules/arch-intro.adoc
@@ -1,19 +1,74 @@
[[arch-intro]]
= {productname} features


{productname} is a trusted, open source container registry platform that runs everywhere, but runs best on Red Hat OpenShift. It scales without limits, from a developer laptop to a container host or Kubernetes, and can be deployed on-premise or on public cloud. It provides global governance and security controls, with features including image vulnerability scanning, access controls, geo-replication and repository mirroring.
= {productname} overview

{productname} is a trusted, open source container registry platform that runs everywhere, but runs best on Red Hat OpenShift. It scales without limits, from a developer laptop to a container host or Kubernetes, and can be deployed on-prem or on public cloud. {productname} provides global governance and security controls, with features including image vulnerability scanning, access controls, geo-replication and repository mirroring.

image:178_Quay_architecture_0821_features.png[Quay features]

This guide provides insight into the architectural patterns to use when deploying {productname}. It contains sizing guidance and deployment prerequisites, along with best practices for ensuring high availability for your {productname} registry.


* xref:arch-core-intro[Core functionality]
* xref:security-intro[Security]
* xref:content-distrib-intro[Content distribution]
* xref:access-control-intro[Access control]
* xref:build-automation-intro[Build automation]
* xref:scalability-intro[Scalability]
* xref:integration-intro[Integration]

== Scalability and high availability (HA)

The code base for the private {productname} offering is substantially the same as that used for link:https://quay.io[quay.io], the highly available container image registry hosted by Red Hat, which provides a multi-tenant SaaS solution. As a result, you can be confident that {productname} can deliver at scale with high availability, whether you deploy on-prem or on public cloud.

== Security

{productname} is built for real enterprise use cases where content governance and security are two major focus areas. {productname} content governance and security includes built-in vulnerability scanning via Clair.

== Content distribution

Content distribution features in {productname} include:

Repository mirroring:: {productname} repository mirroring lets you mirror images from external container registries (or another local registry) into your {productname} cluster. Using repository mirroring, you can synchronize images to {productname} based on repository names and tags.

Geo-replication:: {productname} geo-replication allows multiple, geographically distributed Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed {productname} setup. Image data is asynchronously replicated in the background with transparent failover / redirect for clients.

Deployment in disconnected or air-gapped environments:: {productname} can be deployed in a disconnected environment in two ways:
+
* {productname} and Clair connected to the internet, with an air-gapped OpenShift cluster accessing the Quay registry through an explicit, white-listed hole in the firewall.
* {productname} and Clair running inside the firewall, with image and CVE data transferred to the target system using offline media. The data is exported from a separate Quay and Clair deployment that is connected to the internet.

== Access control

{productname} provides both role-based access control (RBAC) and fine-grained access control, and has team features that allow for limited access control of repositories, organizations, and user privileges. {productname} access control features also provide support for dispersed organizations.

== Build automation

{productname} supports building Dockerfiles using a set of worker nodes on OpenShift or Kubernetes. Build triggers, such as GitHub webhooks, can be configured to automatically build new versions of your repositories when new code is committed.

Prior to {productname} 3.7, Quay ran podman commands in virtual machines launched by pods. Running builds on virtual platforms requires enabling nested virtualization, which is not featured in Red Hat Enterprise Linux or OpenShift Container Platform. As a result, builds had to run on bare-metal clusters, which is an inefficient use of resources.

With {productname} 3.7, the bare-metal constraint required to run builds has been removed by adding an additional build option which does not contain the virtual machine layer. As a result, builds can be run on virtualized platforms. Backwards compatibility to run previous build configurations is also available.
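The trigger mechanism described above reduces to a simple predicate: a webhook delivery arrives and the registry decides whether it should start a build. A minimal, illustrative sketch in Python, assuming the shape of a GitHub push webhook (the `watched_branch` parameter and the decision logic are hypothetical, not {productname}'s actual trigger implementation):

```python
def should_trigger_build(event: str, payload: dict, watched_branch: str = "main") -> bool:
    """Return True when a webhook delivery should start a container build.

    Mirrors the shape of a GitHub push webhook: the event name arrives in a
    request header and the pushed ref arrives in the JSON body.
    """
    if event != "push":
        return False  # ignore issue comments, pull requests, and other event types
    return payload.get("ref") == f"refs/heads/{watched_branch}"

# A push to the watched branch starts a build; a tag push does not.
print(should_trigger_build("push", {"ref": "refs/heads/main"}))  # True
print(should_trigger_build("push", {"ref": "refs/tags/v1.0"}))   # False
```

In practice the same predicate works for GitLab or BitBucket triggers; only the payload field names differ.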

== Integration

Integration with popular source code management and versioning systems like GitHub, GitLab or BitBucket allows {productname} to continuously build and serve your containerized software.

== REST API

{productname} provides a full OAuth 2, RESTful API that:

* Is available from endpoints of each {productname} instance from the URL https://<yourquayhost>/api/v1
* Lets you connect to endpoints, via a browser, to get, delete, post, and put {productname} settings by enabling the Swagger UI
* Can be accessed by applications that make API calls and use OAuth tokens
* Sends and receives data as JSON
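The bullet points above translate directly into ordinary HTTP calls. A minimal sketch using Python's standard library that builds (but does not send) an authenticated request; the hostname and token are placeholders:

```python
from urllib.request import Request

def build_quay_request(host: str, token: str, endpoint: str, method: str = "GET") -> Request:
    """Build an authenticated request for a Quay API endpoint.

    Every endpoint lives under /api/v1, authentication uses an OAuth
    bearer token, and payloads are sent and received as JSON.
    """
    return Request(
        f"https://{host}/api/v1/{endpoint}",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method=method,
    )

# Placeholder host and token; sending the request is left to the caller.
req = build_quay_request("quay.example.com", "OAUTH_TOKEN", "repository?public=true")
```

Passing the prepared request to `urllib.request.urlopen` (or porting the same URL and headers to `curl`) performs the actual call.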

== Recently added features

Storage Quota on Organizations:: Control and contain storage growth of your container registry with reporting and enforcement.

Transparent pull-thru cache proxy (Tech preview):: Use {productname} as a transparent cache for other registries for improved performance and resiliency.

Geo-replication with the Operator:: Deploy a geographically dispersed container registry across two or more OpenShift clusters.

{productname} container builds on OpenShift:: Build your container images right inside Quay running on top of OpenShift.

== Other features

* Full standards / spec support (Docker v2-2)
* Long-term protocol support
* OCI compatibility through test suite compliance
* Enterprise grade support
* Regular updates
10 changes: 10 additions & 0 deletions modules/arch-prereqs.adoc
@@ -0,0 +1,10 @@
[[arch-prereqs]]
= {productname} prerequisites

Before deploying {productname}, you will need to provision the following:

* xref:core-prereqs-storage[Image storage]
* xref:core-prereqs-db[Database]
* xref:core-prereqs-redis[Redis]
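Each of the three prerequisites maps to a section of the {productname} `config.yaml`. A minimal sketch with placeholder hostnames and credentials (your database, Redis endpoints, and storage driver will differ):

```yaml
# Database: external PostgreSQL (placeholder credentials)
DB_URI: postgresql://quayuser:quaypass@db.example.com:5432/quay

# Redis: build logs and user events (placeholder host)
BUILDLOGS_REDIS:
  host: redis.example.com
  port: 6379
USER_EVENTS_REDIS:
  host: redis.example.com
  port: 6379

# Image storage: one named storage engine (local storage shown for brevity)
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - LocalStorage
    - storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
```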


6 changes: 3 additions & 3 deletions modules/clair-analyses.adoc
@@ -11,17 +11,17 @@ Once a `Manifest` is indexed, the `IndexReport` is persisted for later retrieval

- **Matching**: Matching is taking an `IndexReport` and correlating vulnerabilities affecting the `Manifest` the report represents.
+
Clair continuously ingests new security data and a request to the matcher will always provide users with the most to date vulnerability analysis of an `IndexReport`.
Clair continuously ingests new security data and a request to the matcher will always provide users with the most up to date vulnerability analysis of an `IndexReport`.

- **Notifications**: Clair implements a notification service. When new vulnerabilities are discovered, the notifier service will determine if these vulnerabilities affect any indexed `Manifests`. The notifier will then take action according to its configuration.

== Notifications for vulnerabilities found by Clair

{productname} 3.4 triggers different notifications for various repository events. These notifications vary based on enabled features.
Since {productname} 3.4, different notifications are triggered for various repository events. These notifications vary based on enabled features.

[NOTE]
====
This include the event type `Package Vulnerability Found`
This includes the event type `Package Vulnerability Found`
====

`Additional Filter` can be applied for `Security Level`, and there are various notification methods. Custom notification titles are also optional.
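The severity filter amounts to ordering security levels and discarding events below a threshold. A small illustrative sketch (the level names follow common Clair severities; the function is hypothetical, not {productname}'s implementation):

```python
SEVERITY_ORDER = ["Unknown", "Negligible", "Low", "Medium", "High", "Critical"]

def passes_filter(event: dict, min_severity: str = "High") -> bool:
    """Keep only 'Package Vulnerability Found' events at or above a severity."""
    if event.get("type") != "Package Vulnerability Found":
        return False
    level = event.get("severity", "Unknown")
    return SEVERITY_ORDER.index(level) >= SEVERITY_ORDER.index(min_severity)

# A Critical vulnerability passes the default filter; a Low one does not.
print(passes_filter({"type": "Package Vulnerability Found", "severity": "Critical"}))  # True
print(passes_filter({"type": "Package Vulnerability Found", "severity": "Low"}))       # False
```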
2 changes: 1 addition & 1 deletion modules/clair-intro.adoc
@@ -1,7 +1,7 @@
[[clair-intro]]
= {productname} vulnerability scanning using Clair

Clair is equipped with three types of scanners, a matcher, and an updater:
Clair is equipped with three types of scanners, as well as a matcher and an updater:

- **Distribution Scanner**: This scanner discovers `Distribution` information, which is typically the base operating system that the layer demonstrates features of.

2 changes: 1 addition & 1 deletion modules/clairv2-to-v4.adoc
@@ -1,7 +1,7 @@
[[clairv2-to-v4]]
= Migrating from Clair v2 to Clair v4

Starting with {productname} 3.4, Clair v4 is used by default. It will also be the only version of Clair continually supported, as older {productname} versions are not supported with Clair v4 in production. Users should continue using Clair v2 if using a version of {productname} earlier than 3.4.
Starting with {productname} 3.4, Clair v4 is used by default. It will also be the only version of Clair continually supported, as older versions of {productname} are not supported with Clair v4 in production. Users should continue using Clair v2 if using a version of {productname} earlier than 3.4.

Existing {productname} 3.3 deployments will be upgraded to Clair v4 when managed via the {productname} Operator. Manually upgraded {productname} deployments can install Clair v4 side-by-side, which will cause the following:

4 changes: 2 additions & 2 deletions modules/clairv4-arch.adoc
@@ -1,11 +1,11 @@
[[clairv4-arch]]
= Clair v4 architecture

Clair v4 utilizes the ClairCore library as its engine for examining contents and reporting vulnerabilities. At a high level you can consider Clair a service wrapper to the functionality provided in the ClairCore library.
Clair v4 utilizes the ClairCore library as its engine for examining contents and reporting vulnerabilities. At a high level, you can consider Clair as a service wrapper to the functionality provided in the ClairCore library.

== ClairCore

ClairCore is the engine behind Clair v4's container security solution. The ClairCore package exports our domain models, interfaces necessary to plug into our business logic, and a default set of implementations. This default set of implementations defines our support matrix.
ClairCore is the engine behind Clair v4's container security solution. The ClairCore package exports domain models, interfaces that are necessary to plug into the business logic, and a default set of implementations. This default set of implementations defines the support matrix.

ClairCore relies on Postgres for its persistence and the library will handle migrations if configured to do so.

2 changes: 1 addition & 1 deletion modules/clairv4-limitations.adoc
@@ -9,4 +9,4 @@ The following limitations are currently being addressed by the development team:

* Clair v4 does not currently support MSFT Windows images.

* Clair v4 does not currently support slim/scratch container images.
* Clair v4 does not currently support slim / scratch container images.
2 changes: 1 addition & 1 deletion modules/content-distrib-intro.adoc
@@ -1,5 +1,5 @@
[[content-distrib-intro]]
= Content distribution
= Content distribution with {productname}

Content distribution features in {productname} include:

40 changes: 26 additions & 14 deletions modules/core-distinct-registries.adoc
@@ -1,28 +1,40 @@
[[core-distinct-registries]]
= Single versus multiple registries

Many users consider running multiple, distinct registries while the preferred approach with Quay is to have a single, shared registry. The following table addresses the reasons why a user might want to run multiple registries and how these requirements are addressed in Quay:
Many users consider running multiple, distinct registries whereas the preferred approach with {productname} is to have a single, shared registry. The following table addresses the reasons why a user might want to run multiple registries and how these requirements are addressed in {productname}:

[cols="2a,2a",options="header"]
|===
| Multiple registries | Quay approach
| Clear separation between Dev and Prod | Use organizations and repositories instead + RBAC
Clear separation by content origin +
(internal/external) | Use organizations and repositories instead + RBAC
Required to test registry upgrades given the criticality of the registry for running apps |
Quay Operator automates updates, both patch releases as well as minor or major updates that require an ordered sequence of steps to complete
| Separate registry in each datacenter (DC) | Quay can serve content to multiple physically close DCs +

| Multiple registries | {productname} approach
| Clear separation between development and production | Use organizations and repositories instead + RBAC

| Clear separation by content origin +
(internal/external)
| Use organizations and repositories instead + RBAC

| Required to test registry upgrades given the criticality of the registry for running apps
| {productname} Operator automates updates, both patch releases as well as minor or major updates that require an ordered sequence of steps to complete

| Separate registry in each datacenter (DC)
| {productname} can serve content to multiple physically close DCs +
+
HA can stretch across DCs (requires load balancers) +
+
Quay Geo-Replication can stretch across physically distant DCs (requires global load balancer or DNS-based geo-aware load-balancing)
| Separate registry for each cluster | Quay can serve content to thousands of clusters
| Scalability concerns over single registry | Quay scales nearly without limits +
{productname} Geo-replication can stretch across physically distant DCs (requires global load balancer or DNS-based geo-aware load-balancing)

| Separate registry for each cluster
| {productname} can serve content to thousands of clusters

| Scalability concerns over single registry
| {productname} scales nearly without limits +
(The underlying code base is proven to work at scale at Quay.io)
| Distinct registry configurations | In this scenario it might make sense to run two distinct registries

| Distinct registry configurations
| In this scenario it might make sense to run two distinct registries

|===

**Recommendation:**

Running a shared registry helps you to save storage, infrastructure and operational costs.
A dedicated registry would be really needed in very specific circumstances.
Running a shared registry helps you to save storage, infrastructure, and operational costs, but a dedicated registry may be needed in very specific circumstances.
2 changes: 1 addition & 1 deletion modules/core-example-deployment.adoc
@@ -1,7 +1,7 @@
[[core-example-deployment]]
= {productname} example deployments

The following image shows two {productname} example deployments:
The following image shows three possible deployments for {productname}:

* Proof of concept, single node
* Highly available, multi-node in single data center