Binary authorization module and example #683

Merged
merged 2 commits on Jun 17, 2022
4 changes: 3 additions & 1 deletion .gitignore
@@ -28,4 +28,6 @@ fast/stages/**/terraform-*.auto.tfvars.json
fast/stages/**/0*.auto.tfvars*
**/node_modules
fast/stages/**/globals.auto.tfvars.json
cloud_sql_proxy
cloud_sql_proxy
examples/cloud-operations/binauthz/tenant-setup.yaml
examples/cloud-operations/binauthz/app/app.yaml
127 changes: 127 additions & 0 deletions examples/cloud-operations/binauthz/README.md
@@ -0,0 +1,127 @@
# Binary Authorization

The following example shows how to create a CI and a CD pipeline in Cloud Build for the deployment of an application to a private GKE cluster with unrestricted access to a public endpoint. The example enables a Binary Authorization policy in the project so that only images that have been attested can be deployed to the cluster. The attestations are created using a cryptographic key pair that has been provisioned in KMS.
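For reference, a Binary Authorization policy that only admits attested images looks roughly like the sketch below. This is a hand-written illustration rather than the exact policy the example configures, and the attestor name is a placeholder; you can inspect the policy actually set in your project with `gcloud container binauthz policy export`.

    defaultAdmissionRule:
      evaluationMode: REQUIRE_ATTESTATION
      enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
      requireAttestationsBy:
      - projects/<PROJECT_ID>/attestors/<ATTESTOR>
    globalPolicyEvaluationMode: ENABLE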

The diagram below depicts the architecture used in the example.

![Architecture](diagram.png)

The CI and CD pipelines are implemented as Cloud Build triggers that run with a user-specified service account.

The CI pipeline does the following:

* Builds the image and pushes it to Artifact Registry.
* Creates an attestation for the image.

The CD pipeline deploys the application to the cluster.

## Running the example

Clone this repository or [open it in cloud shell](https://ssh.cloud.google.com/cloudshell/editor?cloudshell_git_repo=https%3A%2F%2Fgithub.com%2Fterraform-google-modules%2Fcloud-foundation-fabric&cloudshell_print=cloud-shell-readme.txt&cloudshell_working_dir=examples%2Fcloud-operations%2Fbinauthz), then go through the following steps to create resources:

* `terraform init`
* `terraform apply -var project_id=my-project-id`

WARNING: The example requires the Binary Authorization API to be enabled. That API does not support authentication with user credentials, so a service account has to be used to run the example, for instance via impersonation as sketched below.
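Assuming you already have a service account with the required permissions, one option is to let the Terraform Google provider impersonate it; the service account email below is a placeholder.

    export GOOGLE_IMPERSONATE_SERVICE_ACCOUNT=<SERVICE_ACCOUNT_EMAIL>
    terraform init
    terraform apply -var project_id=my-project-id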

## Testing the example

Once the resources have been created, do the following to verify that everything works as expected.

1. Fetch the cluster credentials

gcloud container clusters get-credentials cluster --zone <ZONE> --project <PROJECT_ID>

2. Apply the manifest tenant-setup.yaml available in your work directory.

kubectl apply -f tenant-setup.yaml

By applying that manifest the following is created:

* A namespace called "apis". This is the namespace where the application will be deployed.
* A Role and a RoleBinding in the previously created namespace, so that the service account configured for the CD pipeline trigger in Cloud Build can deploy the Kubernetes application to that namespace. A sketch of an equivalent manifest is shown below.
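The exact manifest is rendered by Terraform, but it is roughly equivalent to the following sketch; the Role rules, the object names and the service account email are illustrative placeholders.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: apis
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: deployer
      namespace: apis
    rules:
    - apiGroups: ["", "apps"]
      resources: ["deployments", "services", "serviceaccounts"]
      verbs: ["get", "list", "create", "update", "patch", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: deployer
      namespace: apis
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: deployer
    subjects:
    - kind: User
      name: <CD_SERVICE_ACCOUNT_EMAIL>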

3. Change to the image subdirectory in your work directory

cd <WORK_DIR>/image

4. Run the following commands:

git init
git checkout -b main
git remote add origin ssh://<USER>@source.developers.google.com:2022/p/<PROJECT_ID>/r/image
git add .
git commit -m "Initial commit"
git push -u origin main

5. In the Cloud Build > History section of the Google Cloud console you should see a job running. That job builds the image, pushes it to Artifact Registry and creates an attestation.

Once the job finishes, copy the digest of the image that is displayed in the Cloud Build job output.
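If you miss it in the logs, you can also fetch the digest with the same command the CI pipeline uses; `<IMAGE>` stands for the full Artifact Registry path of the image that was just pushed.

    gcloud container images describe <IMAGE>:<COMMIT_SHA> \
      --format 'value(image_summary.digest)' \
      --project <PROJECT_ID>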

6. Change to the app subdirectory in your working directory.

cd <WORK_DIR>/app

7. Edit the app.yaml file and replace the string DIGEST with the value you copied before.
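For example, assuming GNU sed (as in Cloud Shell) and the digest stored in the `DIGEST` environment variable:

    sed -i "s|DIGEST|${DIGEST}|" app.yaml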

8. Run the following commands:

git init
git checkout -b main
git remote add origin ssh://<USER>@source.developers.google.com:2022/p/<PROJECT_ID>/r/app
git add .
git commit -m "Initial commit"
git push -u origin main

9. In the Cloud Build > History section of the Google Cloud console you should see a job running. The job deploys the application to the cluster.

10. Go to the Kubernetes Engine > Workloads section to check that the deployment was successful and that the Binary Authorization admission controller webhook did not block it.
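The same check can be done from the command line; the application is deployed to the "apis" namespace created earlier.

    kubectl get deployments -n apis
    kubectl get pods -n apis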

11. Change to the working directory and try to deploy an image that has not been attested:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google-containers/nginx:latest
        ports:
        - containerPort: 80
EOF


12. Go to the Kubernetes Engine > Workloads section to check that the Binary Authorization admission controller webhook blocked the deployment.
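From the command line the denial shows up as failed pod creation on the deployment's ReplicaSet; a rough way to spot it (exact event messages vary):

    kubectl get pods -l app=nginx
    kubectl get events --sort-by=.lastTimestamp | grep -i denied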

The application deployed to the cluster is a RESTful API that enables managing Google Cloud Storage buckets in the project. Workload Identity is used so the app can interact with the Google Cloud Storage API.
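You can optionally exercise the API by port-forwarding to it; the deployment name below is a placeholder for the one defined in app.yaml.

    kubectl -n apis port-forward deployment/<APP_DEPLOYMENT> 3000:3000 &
    curl http://localhost:3000/buckets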

Once done testing, you can clean up resources by running `terraform destroy`.
<!-- BEGIN TFDOC -->

## Variables

| name | description | type | required | default |
|---|---|:---:|:---:|:---:|
| [project_id](variables.tf#L26) | Project ID. | <code>string</code> | ✓ | |
| [master_cidr_block](variables.tf#L49) | Master CIDR block. | <code>string</code> | | <code>&#34;10.0.0.0&#47;28&#34;</code> |
| [pods_cidr_block](variables.tf#L37) | Pods CIDR block. | <code>string</code> | | <code>&#34;172.16.0.0&#47;20&#34;</code> |
| [prefix](variables.tf#L31) | Prefix for resources created. | <code>string</code> | | <code>null</code> |
| [project_create](variables.tf#L17) | Parameters for the creation of the new project. | <code title="object&#40;&#123;&#10; billing_account_id &#61; string&#10; parent &#61; string&#10;&#125;&#41;">object&#40;&#123;&#8230;&#125;&#41;</code> | | <code>null</code> |
| [region](variables.tf#L61) | Region. | <code>string</code> | | <code>&#34;europe-west1&#34;</code> |
| [services_cidr_block](variables.tf#L43) | Services CIDR block. | <code>string</code> | | <code>&#34;192.168.0.0&#47;24&#34;</code> |
| [subnet_cidr_block](variables.tf#L55) | Subnet CIDR block. | <code>string</code> | | <code>&#34;10.0.1.0&#47;24&#34;</code> |
| [zone](variables.tf#L67) | Zone. | <code>string</code> | | <code>&#34;europe-west1-c&#34;</code> |

## Outputs

| name | description | sensitive |
|---|---|:---:|
| [app_repo_url](outputs.tf#L22) | App source repository url. | |
| [image_repo_url](outputs.tf#L17) | Image source repository url. | |

<!-- END TFDOC -->
1 change: 1 addition & 0 deletions examples/cloud-operations/binauthz/app
Submodule app added at 0f46c6
Binary file added examples/cloud-operations/binauthz/diagram.png
1 change: 1 addition & 0 deletions examples/cloud-operations/binauthz/image/.dockerignore
@@ -0,0 +1 @@
node_modules
1 change: 1 addition & 0 deletions examples/cloud-operations/binauthz/image/.gitignore
@@ -0,0 +1 @@
node_modules/**
25 changes: 25 additions & 0 deletions examples/cloud-operations/binauthz/image/Dockerfile
@@ -0,0 +1,25 @@
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

FROM node:18-alpine

WORKDIR /app

COPY ["package.json", "package-lock.json*", "./"]

RUN npm install

COPY . .

CMD [ "node", "index.js" ]
27 changes: 27 additions & 0 deletions examples/cloud-operations/binauthz/image/README.md
@@ -0,0 +1,27 @@
# Storage API

This application is a RESTful API that lets you manage the Google Cloud Storage buckets available in a project. To do so, the application needs to authenticate with a service account that has been granted the Storage Admin (`roles/storage.admin`) role.

The following operations can be performed with it:

* Get buckets in project

curl -v http://localhost:3000/buckets

* Get files in bucket

curl -v http://localhost:3000/buckets/BUCKET_NAME

* Create a bucket

curl -v http://localhost:3000/buckets \
-H'Content-Type: application/json' \
-d @- <<EOF
{
"name": "BUCKET_NAME"
}
EOF

* Delete a bucket

curl -v -X DELETE http://localhost:3000/buckets/BUCKET_NAME
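To run the API locally for a quick test, a minimal sketch (it assumes Application Default Credentials for an identity with the role above are available in your environment):

    npm install
    node index.js &
    curl -v http://localhost:3000/buckets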
40 changes: 40 additions & 0 deletions examples/cloud-operations/binauthz/image/cloudbuild.yaml
@@ -0,0 +1,40 @@
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

steps:
- id: 'Build image and push it to artifact registry'
  name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - '--destination=${_IMAGE}:${COMMIT_SHA}'
  - '--cache=true'
  - '--cache-ttl=6h'
- id: 'Create and sign attestation'
  name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:latest'
  entrypoint: 'bash'
  args:
  - '-eEuo'
  - 'pipefail'
  - '-c'
  - |
    set -x
    DIGEST=$(gcloud container images describe ${_IMAGE}:${COMMIT_SHA} \
      --format 'value(image_summary.digest)' \
      --project ${PROJECT_ID})
    gcloud beta container binauthz attestations sign-and-create \
      --project="${PROJECT_ID}" \
      --artifact-url="${_IMAGE}@$${DIGEST}" \
      --attestor="${_ATTESTOR}" \
      --keyversion="${_KEY_VERSION}"
options:
  logging: CLOUD_LOGGING_ONLY
81 changes: 81 additions & 0 deletions examples/cloud-operations/binauthz/image/index.js
@@ -0,0 +1,81 @@
/**
* Copyright 2022 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

const express = require('express');
const app = express();
const { Storage } = require('@google-cloud/storage');

const PORT = 3000;
const PROJECT_ID = process.env.PROJECT_ID;

const storage = new Storage();


app.use(express.json());
app.set('json spaces', 2)

app.get('/buckets', async (req, res) => {
try {
const [buckets] = await storage.getBuckets();
res.json(buckets.map(bucket => bucket.name));
} catch (error) {
res.status(500).json({
message: `An error occurred trying to fetch the buckets in project: ${error}`
});
}
});

app.get('/buckets/:name', async (req, res) => {
const name = req.params.name;
try {
const [files] = await storage.bucket(name).getFiles();
res.json(files.map(file => file.name));
} catch (error) {
res.status(500).json({
      message: `An error occurred trying to fetch the files in the ${name} bucket: ${error}`
});
}
});

app.post('/buckets', async (req, res) => {
const name = req.body.name;
try {
const [bucket] = await storage.createBucket(name);
res.status(201).json({
"name": bucket.name
});
} catch (error) {
res.status(500).json({
message: `An error occurred trying to create ${name} bucket: ${error}`
});
}
});

app.delete('/buckets/:name', async (req, res) => {
const name = req.params.name;
try {
await storage.bucket(name).delete();
res.send()
} catch (error) {
res.status(500).json({
message: `An error occurred trying to delete ${name} bucket: ${error}`
});
}
});

app.listen(PORT, () => {
console.log(`App listening on port ${PORT}`)
})