Merged
86 changes: 45 additions & 41 deletions README.md
Original file line number Diff line number Diff line change
@@ -8,20 +8,22 @@ The target operating model has two clusters:

**The current version of this application assumes the confidential containers are deployed to Azure**

On the platform a sample workload is deployed:

1. Sample hello world applications to allow users to experiment with the policies for CoCo and the KBS (trustee).
    1. This is currently working out of the box (or close to).

Future work includes:

2. Red Hat OpenShift AI, where a multi-layer perceptron for fraud prediction is deployed as a confidential workload for inference
    1. This is currently a work in progress.

3. Environments which will work successfully across multiple cloud providers


## Current constraints and assumptions
- Currently only known to work with `azure` as the provider of confidential VMs via peer-pods
- Only known to work today with everything on one cluster. The work to expand this is in flight.
- You must be able to get a Let's Encrypt certificate. This means the service credentials in OpenShift must be able to manipulate the DNS zone used by OpenShift.
- RHOAI data science cluster must be disabled until required components are deployed.
- Must be on OpenShift 4.16.14 or later.

@@ -43,12 +45,39 @@ It deploys a hello-openshift application 3 times:



## Setup instructions

### Default single cluster setup with `values-simple.yaml`

#### Configuring required secrets / parameters
The secrets here secure Trustee and the peer-pod VMs. Mostly they are for demonstration purposes.
This only has to be done once.

1. Run `sh scripts/gen-secrets.sh`

#### Install on an OCP cluster on azure using Red Hat Demo Platform

Red Hat runs a demo platform (RHDP), which gives Red Hat associates and partners easy access to ephemeral cloud resources. The pattern is known to work with this setup.
1. Get the [openshift installer](https://console.redhat.com/openshift/downloads)
    1. **NOTE: the openshift installer must be updated regularly if you want to automatically provision the latest versions of OCP**
2. Get access to an [Azure Subscription Based Blank Open Environment](https://catalog.demo.redhat.com/catalog?category=Open_Environments&search=azure&item=babylon-catalog-prod%2Fazure-gpte.open-environment-azure-subscription.prod).
3. Import the required azure environment variables (see code block):
```
export CLIENT_ID=
export PASSWORD=
export TENANT=
export SUBSCRIPTION=
export RESOURCEGROUP=
```
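Before running the wrapper script, it can be worth checking that none of these variables were left empty. A minimal sketch (the variable names come from the export block above; the helper function itself is hypothetical, not part of the pattern):

```sh
# Print any of the required Azure variables that are unset or empty;
# returns non-zero if at least one is missing. Uses bash ${!var} indirection.
check_azure_env() {
  local missing=0
  for var in CLIENT_ID PASSWORD TENANT SUBSCRIPTION RESOURCEGROUP; do
    if [ -z "${!var}" ]; then
      echo "missing: $var"
      missing=1
    fi
  done
  return "$missing"
}
```

Run `check_azure_env` before the wrapper; silence means everything is set.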
4. Run the wrapper install script
    1. `sh ./rhdp/wrapper.sh`
5. You *should* be done
    1. You *may* need to recreate the hello world peer-pods depending on timeouts.

#### Install on azure *not* using Red Hat Demo Platform
**NOTE: Don't use the default node sizes; increase the node sizes such as below**


1. Login to console.redhat.com
2. Get the openshift installer
3. Login to azure locally.
@@ -63,41 +92,16 @@
```
1. `mkdir ./ocp-install && mv openshift-install.yaml ./ocp-install`
2. `openshift-install create cluster --dir=./ocp-install`
3. Once installed:
1. Login to `oc`
2. `./pattern.sh make install`


### Multi cluster setup
TBD

#### Configuring secrets / params
1. Set up `values-secret-coco-pattern.yaml` from the template
    1. If you have not previously, run `./scripts/gen-ssh-key-azure.sh`
    2. If you have not previously, run `./scripts/gen-kbs-keys.sh`
3. Populate the azure details, combining those that must already be known (CLIENT_ID etc.) with the output of `sh ./get-azure-details.sh`, run while logged into `az`
4. Update `charts/all/sandbox/values.yaml` with the appropriate azure details
5. Recommended: disable the kata config until the system is up.

#### required `values-global.yaml` configuration

The following fields must be populated:
```yaml
global:
  azure:
    clientID: '' # Azure service principal ID
    subscriptionID: '' # Azure subscription UUID
    tenantID: '' # Tenant ID - will look like a name
    DNSResGroup: '' # Resource group where the DNS zone is hosted
    hostedZoneName: '' # Hosted zone name. Will be a DNS entry in Azure DNS you have access to. Check in the Azure portal
    clusterResGroup: '' # Resource group for the cluster
    clusterSubnet: '' # Subnet for the worker nodes
    clusterNSG: '' # Network security group for the worker nodes
    clusterRegion: '' # Named Azure region
```
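As an illustration only (every value below is a hypothetical placeholder, not a working configuration), a populated block might look like:

```yaml
global:
  azure:
    clientID: '11111111-2222-3333-4444-555555555555'
    subscriptionID: '66666666-7777-8888-9999-000000000000'
    tenantID: 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'
    DNSResGroup: 'openenv-dns-rg'
    hostedZoneName: 'mycluster.example.com'
    clusterResGroup: 'mycluster-rg'
    clusterSubnet: 'mycluster-worker-subnet'
    clusterNSG: 'mycluster-nsg'
    clusterRegion: 'eastus'
```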


#### Install the pattern
1. Run `./pattern.sh make install`. This *should* deploy all elements.
2. If it does not:
    1. It is likely that the hello-openshift deployments timed out without the VM templates.


### Multi-cluster setup with AI
TBD

## Future work
- Support spreading remote attestation and workload to separate clusters.
50 changes: 50 additions & 0 deletions ansible/configure-issuer.yaml
@@ -0,0 +1,50 @@
---
- name: Discover Azure DNS zone details on OpenShift
  become: false
  connection: local
  hosts: localhost
  gather_facts: false
  vars:
    kubeconfig: "{{ lookup('env', 'KUBECONFIG') }}"
  tasks:
    - name: Get Azure credentials
      kubernetes.core.k8s_info:
        kind: Secret
        namespace: openshift-cloud-controller-manager
        name: azure-cloud-credentials
      register: azure_credentials
      retries: 20
      delay: 5
    - name: List DNS zones
      azure.azcollection.azure_rm_dnszone_info:
        # resource_group is intentionally not passed so that all visible zones are listed
        auth_source: "auto"
        subscription_id: "{{ azure_credentials.resources[0]['data']['azure_subscription_id'] | b64decode }}"
        client_id: "{{ azure_credentials.resources[0]['data']['azure_client_id'] | b64decode }}"
        secret: "{{ azure_credentials.resources[0]['data']['azure_client_secret'] | b64decode }}"
        tenant: "{{ azure_credentials.resources[0]['data']['azure_tenant_id'] | b64decode }}"
      register: dns_zones
    # FIXME: This assumes only one DNS zone is present. We should be matching against available DNS zones.
    - name: Split the path
      ansible.builtin.set_fact:
        path_parts: "{{ dns_zones.ansible_info.azure_dnszones[0].id.split('/') }}"
    - name: Find the resource group name
      ansible.builtin.set_fact:
        resource_group: "{{ path_parts[4] }}"
    - name: Get hosted zone
      ansible.builtin.set_fact:
        hosted_zone: "{{ dns_zones.ansible_info.azure_dnszones[0].name }}"
    - name: Set k8s ConfigMap with the DNS info
      kubernetes.core.k8s:
        api_version: v1
        kind: ConfigMap
        resource_definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: dnsinfo
            namespace: imperative
          data:
            resource_group: "{{ resource_group }}"
            hosted_zone: "{{ hosted_zone }}"
        state: present
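The `path_parts[4]` lookup in the playbook above relies on the fixed layout of an Azure resource ID (`/subscriptions/<id>/resourceGroups/<name>/providers/...`). The same extraction can be sketched in shell with a hypothetical zone ID:

```sh
# An Azure DNS zone ID embeds the resource group as a fixed path segment.
zone_id="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-dns-rg/providers/Microsoft.Network/dnszones/example.com"

# cut fields are 1-indexed; field 1 is the empty string before the leading '/',
# so the resource group name is field 5 (index 4 in the Ansible split list).
resource_group="$(echo "$zone_id" | cut -d/ -f5)"
echo "$resource_group"   # prints: my-dns-rg
```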
12 changes: 12 additions & 0 deletions ansible/install-deps.yaml
@@ -0,0 +1,12 @@
- name: Install Ansible dependencies for Azure
  become: false
  connection: local
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Install required collection
      ansible.builtin.command:
        cmd: ansible-galaxy collection install azure.azcollection
    - name: Install the collection's Python requirements
      ansible.builtin.command:
        cmd: pip install --user -r ~/.ansible/collections/ansible_collections/azure/azcollection/requirements.txt
11 changes: 11 additions & 0 deletions charts/all/letsencrypt/README.md
@@ -3,6 +3,17 @@
## Forked from https://github.com/validatedpatterns/letsencrypt-chart


## Design for Azure
Cert-manager needs the azure resource group for a zone in order to manage the DNS.
Unfortunately this is a little tricky to get.

To get this running on azure, a few compromises have been made:
1. The required information (managed_zone_name and managed_zone resource group) is obtained via the ansible imperative framework.

2. The imperative framework is limited in terms of feedback / logging. Please test carefully.

3. If the credentials can see more than one managed zone there may be issues; the automation presumes exactly one.
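For reference, the end result of the imperative step is a ConfigMap written by the `configure-issuer.yaml` playbook, roughly of this shape (the data values here are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dnsinfo
  namespace: imperative
data:
  resource_group: my-dns-rg   # hypothetical value
  hosted_zone: example.com    # hypothetical value
```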




62 changes: 62 additions & 0 deletions charts/all/letsencrypt/templates/acm-secret-create.yaml
@@ -0,0 +1,62 @@
{{ if .Values.letsencrypt.enabled }}
{{ if and (eq .Values.global.clusterPlatform "Azure") .Values.letsencrypt.cloudProviderDNS }}
---
## Use ACM policies to enforce the creation of a lets-encrypt cert
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: azure-secret-policy
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: azure-client-creds
        spec:
          remediationAction: enforce
          severity: medium
          object-templates:
            - complianceType: mustonlyhave
              objectDefinition:
                apiVersion: v1
                type: Opaque
                kind: Secret
                metadata:
                  name: azuredns-config
                  namespace: cert-manager
                data:
                  client-secret: '{{ `{{ fromSecret "openshift-cloud-controller-manager" "azure-cloud-credentials" "azure_client_secret" }}` }}'
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: azure-secret-placement-binding
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
placementRef:
  name: azure-managed-clusters-placement-rule
  kind: PlacementRule
  apiGroup: apps.open-cluster-management.io
subjects:
  - name: azure-secret-policy
    kind: Policy
    apiGroup: policy.open-cluster-management.io
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: azure-managed-clusters-placement-rule
spec:
  clusterConditions:
    - status: 'True'
      type: ManagedClusterConditionAvailable
  clusterSelector:
    matchLabels:
      cloud: Azure
---
{{- end }}
{{- end }}
24 changes: 0 additions & 24 deletions charts/all/letsencrypt/templates/azure-eso.yaml

This file was deleted.

77 changes: 77 additions & 0 deletions charts/all/letsencrypt/templates/issuer-acm.yaml
@@ -0,0 +1,77 @@
{{ if .Values.letsencrypt.enabled }}
{{ if and (eq .Values.global.clusterPlatform "Azure") .Values.letsencrypt.cloudProviderDNS }}
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: azure-cluster-issuer-policy
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: azure-cluster-issuer
        spec:
          remediationAction: enforce
          severity: medium
          object-templates:
            - complianceType: mustonlyhave
              objectDefinition:
                apiVersion: cert-manager.io/v1
                kind: ClusterIssuer
                metadata:
                  name: validated-patterns-issuer
                spec:
                  acme:
                    server: {{ .Values.letsencrypt.server }}
                    email: {{ .Values.letsencrypt.email }}
                    privateKeySecretRef:
                      name: validated-patterns-issuer-account-key
                    solvers:
                      - dns01:
                          azureDNS:
                            # This info is also available in CMs; however it's easier to get from the secret
                            clientID: '{{ `{{ fromSecret "openshift-cloud-controller-manager" "azure-cloud-credentials" "azure_client_id" | base64dec }}` }}'
                            clientSecretSecretRef:
                              # The following is the secret we created in Kubernetes. The issuer will use this to present the challenge to Azure DNS.
                              name: azuredns-config
                              key: client-secret
                            subscriptionID: '{{ `{{ (fromJson (fromConfigMap "openshift-cloud-controller-manager" "cloud-conf" "cloud.conf" | toLiteral)).subscriptionId }}` }}'
                            tenantID: '{{ `{{ (fromJson (fromConfigMap "openshift-cloud-controller-manager" "cloud-conf" "cloud.conf" | toLiteral)).tenantId }}` }}'
                            resourceGroupName: '{{ `{{ fromConfigMap "imperative" "dnsinfo" "resource_group" }}` }}'
                            hostedZoneName: '{{ `{{ fromConfigMap "imperative" "dnsinfo" "hosted_zone" }}` }}'
                            # Azure Cloud Environment, defaults to AzurePublicCloud
                            environment: AzurePublicCloud
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: azure-issuer-placement-binding
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
placementRef:
  name: azure-issuer-placement-rule
  kind: PlacementRule
  apiGroup: apps.open-cluster-management.io
subjects:
  - name: azure-cluster-issuer-policy
    kind: Policy
    apiGroup: policy.open-cluster-management.io
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: azure-issuer-placement-rule
spec:
  clusterConditions:
    - status: 'True'
      type: ManagedClusterConditionAvailable
  clusterSelector:
    matchLabels:
      cloud: Azure
---
{{- end }}
{{- end }}
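With the issuer in place, a workload can then request a certificate in the usual cert-manager way; a hedged sketch (the certificate name, namespace, and DNS name are hypothetical, not part of the pattern):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: hello-cert            # hypothetical
  namespace: hello-openshift  # hypothetical
spec:
  secretName: hello-cert-tls
  dnsNames:
    - hello.apps.example.com  # hypothetical
  issuerRef:
    name: validated-patterns-issuer
    kind: ClusterIssuer
```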