Merge pull request #4 from tequilarista/main
adding CI, CD, config mgmt. testing content
tdcox committed Apr 26, 2022
2 parents 460cc56 + 3542e9e commit fe91277
Showing 6 changed files with 236 additions and 83 deletions.
94 changes: 27 additions & 67 deletions content/en/learn/_index.md
@@ -1,76 +1,36 @@

---
-title: "Continuous integration"
-linkTitle: "Continuous integration"
-weight: 4
-description: >
-  Best practices for continuous integration
+title: "Continuous delivery best practices"
+linkTitle: "Learn"
+weight: 20
+menu:
+  main:
+    weight: 20
+layout: docs
---

- Definition
- Why it matters
- Associated DORA capabilities
- Key stakeholders
- Best practices
- Relationship to other practices


# Why Continuous Integration

Continuous Integration ensures that coding changes are merged, compiled/linked, packaged and registered frequently, avoiding broken builds and ensuring a clean release candidate is available continuously.


# Definition
[Continuous Integration](https://github.com/cdfoundation/glossary/blob/main/definitions.md#continuous-integration), the CI in CI/CD, is the practice of combining code changes frequently, where each change is verified on check-in.

- Examples of verifications:
- Code scanning
- Testing
- Building and packaging


# Description and Scope
Minimizing broken builds due to incompatible coding changes is the purpose of the continuous integration process. Many of us can remember the days when project teams had a ‘sync-up’ process, which generally meant checking in all of your coding updates and praying the build would run. An unsung hero called the Build Manager created the build script, which merged and pulled all of the coding updates based on ‘tags’ and then got the build to run.

This ‘sync-up’ step was performed once a week, every two weeks or monthly. It was not unusual for the build manager to put in a solid 8-12 hours to get the build working. The resulting build was often referred to as a ‘clean’ build which created an available release candidate. This process meant you would only have a release candidate to pass to testing on a very low frequency basis, which in turn slowed down the entire application lifecycle process. Builds were always the bottleneck.

Continuous integration changed the way the build (merge, compile/link, package and register) step was implemented. By triggering the process on a check-in of code to the versioning system, continuous integration quickly identified if a developer broke the build when they introduced new code. In essence, the process of merging, compiling and linking code on a high frequency basis allows for the continuous integration of coding changes as soon as possible. If the build breaks due to a coding update, the build is easier to fix with smaller incremental changes, versus the ‘sync-up’ method where dozens of possible coding changes impacted the build leaving it to the build manager to sort out all of the problems - a tedious and onerous process. And more frequent builds meant that testing got more frequent updates, truly the beginning of ‘agile’ or incremental software updates.

The process of triggering the ‘build’ is sometimes referred to as the continuous build step of the CI process. This step is executed by calling a static build script created and maintained by the build manager. It is in this step that all of the software configuration management is performed which includes determining what code to pull, what libraries to use and what parameters must be passed to the compilers and linkers. The ‘build’ step of CI is triggered by a source code ‘check-in’ event. It then executes a workflow process to run the build script and create, package and register the new binaries and thereby create a new release candidate.
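The trigger-on-check-in flow described above can be sketched minimally as follows. The script name, flags, and event fields are illustrative assumptions, not any real CI system's API:

```python
import subprocess
import sys
from dataclasses import dataclass

@dataclass
class CheckinEvent:
    branch: str
    commit: str

def on_checkin(event: CheckinEvent, build_cmd=("./build.sh",)) -> str:
    """Run the static build script for a check-in; register a release
    candidate on success, and fail loudly on a broken build."""
    # The build script (maintained by the build manager) does the actual
    # merge, compile/link, package and register work; the command and
    # flags here are hypothetical.
    result = subprocess.run(
        [*build_cmd, "--branch", event.branch, "--commit", event.commit],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # The break is caught immediately, while the change set is small.
        raise RuntimeError(f"build broken by commit {event.commit}")
    return f"release-candidate-{event.commit}"
```

The point of the sketch is the trigger: the versioning system fires the event, and every check-in yields either a registered release candidate or an immediate, small, easy-to-fix failure.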

As CI matured, so did the process around the central theme. CI workflows were created by developers not only to run the build script, but also to deploy the new binaries to a development endpoint and then run automated testing. In addition, code and library management steps were added to the build, improving the quality and security of the code and ensuring that the correct transitive dependencies were used, open source licenses were verified, and security scans were done during the creation of the release candidate.


# Best Practices
## Merging
Defining a merge strategy is best discussed in the version control section of this document. However, as merging relates to the CI build process, there are some basic best practices to consider. Keep in mind, merging is triggered after a pull request has been approved.

### Know your Merging Strategy
In your build step, it is important to understand what code is being pulled from version control to be compiled. For this reason, there should be clear best practices defined for managing branches.

Compile by Branch - for every build, a branch is referenced. The branch name is passed to the build step to determine what to pull for the compile/link.
+This section provides details about continuous delivery best practices.

Compile by Tag - a Tag has been applied to all objects in the repository and the build pulls the code based on the Tag for the compile step. The Tag is a collection of objects that relate together.
+The practices in this section are vendor-neutral. To read case studies or
+opinionated implementations with specific tools, take a look at the
+[Community](/community) section. You can also find additional resources in the
+[Resources](/resources) section.

## Compile/Link Best Practices
The creation of build scripts and how they are managed can often be controversial. Writing a build script is not easy in large monolithic practices. Whether it be a build script for Java or more complex C++ code, it is a tedious and time consuming process. There are some basic guidelines that should be followed for creating the build scripts that the CI process will run.
+## How to use this guide

### Build Work Products
Regardless of what type of build is executing, it should produce three basic outputs.
1. A build should create not only the binaries, but also the application package (MSI, zip, rpm, or container image), named according to a version numbering schema that relates back to the versioning Tag.
2. A full Bill of Materials (BOM) report should be required, at minimum, for all production releases. BOM reports are often undervalued, but they are key to debugging issues when needed. A BOM report should show:
- All source code included in the build.
- All libraries or packages, internal and external, used in the link.
- All compile/link parameters used to define the binaries.
- Licensing of external components and transitive dependencies.
3. Every build should include a Difference report, showing what changed between any two builds. This should be used for approving updates before a release to testing or production environments. A Difference report should be generated by comparing two BOM reports. Difference reports can be pulled from the version repository, but may be incomplete, as objects such as third-party libraries are not stored in a version repository.
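Generating a Difference report by comparing two BOM reports, as described above, can be sketched like this. The BOM structure (component name mapped to version) is a simplifying assumption:

```python
def bom_diff(old_bom: dict, new_bom: dict) -> dict:
    """Compare two Bill of Materials reports, keyed by component name
    with version values, and report what changed between the builds."""
    added = {k: new_bom[k] for k in new_bom.keys() - old_bom.keys()}
    removed = {k: old_bom[k] for k in old_bom.keys() - new_bom.keys()}
    changed = {k: (old_bom[k], new_bom[k])
               for k in old_bom.keys() & new_bom.keys()
               if old_bom[k] != new_bom[k]}
    return {"added": added, "removed": removed, "changed": changed}

# Hypothetical BOMs for two builds: source revisions, libraries, parameters.
build_41 = {"app.c": "rev-100", "libssl": "1.1.1", "opt-level": "-O2"}
build_42 = {"app.c": "rev-103", "libssl": "3.0.2", "zlib": "1.2.13",
            "opt-level": "-O2"}
report = bom_diff(build_41, build_42)
```

Because the diff is computed from the BOMs rather than the version repository, it also catches changes to third-party libraries that never pass through version control.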
## Do not use wildcard includes (/*.*)
When defining where code, packages and libraries are to be found in the build process, do not use wildcard includes. Instead, list by name every file that needs to be compiled or linked into the binary objects. While this may seem like a lot of extra work, it is essential to ensuring that only approved objects end up in the resulting binary. If you use wildcard includes, non-approved objects will no doubt be delivered with your binary, which can be a hidden security risk. It also means your binary includes unnecessary objects, which can substantially increase its size.
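The explicit-file-list rule can be enforced with a small guard in the build tooling, sketched here with hypothetical file names:

```python
def validate_sources(file_list: list[str]) -> list[str]:
    """Reject glob patterns so only explicitly approved files are built."""
    wildcards = [f for f in file_list if "*" in f or "?" in f]
    if wildcards:
        raise ValueError(f"wildcard includes are not allowed: {wildcards}")
    return file_list

# An explicit list of approved sources passes validation.
validate_sources(["src/main.c", "src/util.c"])
# validate_sources(["src/*.c"]) would raise ValueError.
```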
+If you are new to continuous delivery practices and want to understand their
+benefits and prerequisites to starting your journey, read the
+[Overview](overview).

## Know Your Build Parameters
Build parameters determine how the resulting binaries are created. For example, the use of debug flags allows the binaries to include debug symbols that can be exploited and should not be deployed to production environments.
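One way to make build parameters explicit is a per-stage flag profile with a guard against debug symbols reaching production. The stage-to-flag mapping is an illustrative assumption; `-g` and `-s` are ordinary GCC-style flags:

```python
# Hypothetical mapping of pipeline stage to compile flags; "-g" emits
# debug symbols and is deliberately absent from the production profile.
BUILD_FLAGS = {
    "development": ["-O0", "-g"],
    "testing":     ["-O2", "-g"],
    "production":  ["-O2", "-s"],   # "-s" strips symbol information
}

def flags_for(stage: str) -> list[str]:
    """Return the compile flags for a stage, refusing debug-enabled
    binaries for production."""
    flags = BUILD_FLAGS[stage]
    if stage == "production" and "-g" in flags:
        raise ValueError("debug symbols must not reach production")
    return flags
```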
+If you have some familiarity with continuous delivery, but need help figuring
+out what to prioritize, read about [assessment](assess) tools to help you identify
+areas to focus on.

## Binary Repository
The CI build should include uploading the new binaries to an appropriate binary repository. The binary repository should be used to persist and share the binaries with the continuous deployment step of the Continuous Delivery pipeline. These binaries should be tagged with the corresponding version control ‘tag.’
+The rest of the subsections in this guide provide information about key areas of
+continuous delivery.

# Docker Specific Best Practices
A multi-stage Docker build moves your CI steps into a container build. It is called ‘multi-stage’ because it aggregates all of the needed steps into a single Docker build of the image. For example, stage one runs the compile/link steps to create the artifacts; stage two copies the artifacts into a runtime image, and stage one is then discarded. The benefit of a multi-stage Docker build is an airtight environment in which the process is executed: the container holds all objects without the possibility of an external change. Notably, the multi-stage process lets you move your entire CI build process into a single container build. Best practices related to each of those steps should still be followed. Because this option minimizes external updates to the build step, it is a best practice candidate.
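A minimal sketch of the two stages described above, assuming a simple C program; the base images and file names are illustrative:

```dockerfile
# Stage 1: build environment with compilers; discarded after the build.
FROM gcc:12 AS build
WORKDIR /src
COPY . .
RUN gcc -O2 -o app main.c

# Stage 2: minimal runtime image; only the built artifact is copied in,
# so no build tools or intermediate objects ship with the release.
FROM debian:bookworm-slim
COPY --from=build /src/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

Only the final stage becomes the shipped image, which keeps the release candidate small and free of compiler toolchains.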
+Some practices depend on others, while others span the entire software
+lifecycle. For example, best practices for continuous integration depend on
+version control. Security best practices are most effective when applied across
+the entire software supply chain. Best practices also involve collaboration
+across functional teams.
70 changes: 61 additions & 9 deletions content/en/learn/cd/_index.md
@@ -1,16 +1,68 @@

---
-title: "Continuous delivery"
-linkTitle: "Continuous delivery"
+title: "Continuous deployment"
+linkTitle: "Continuous deployment"
weight: 6
description: >
-  Best practices for continuous delivery
+  Best practices for continuous deployment
---


- [Why Continuous Deployment](#heading-wcd)
- [Definition](#heading-def)
- [Description and Scope](#heading-das)
- [Best practices](#heading-bp)


# Why Continuous Deployment {#heading-wcd}
Continuous Integration drove the need for continuous deployment. Continuous deployment is the process of updating an endpoint with a new release candidate pushed by the Continuous Delivery orchestration engine. Once developers had automated the creation of binaries, triggered by a coding update, they also wanted to deploy the update and run automated testing. Automating the deployment of objects became the natural next step after automating the builds.

As developers and testing teams became more efficient at producing release candidates, production teams were asked to move the new updates forward to the end users. However, the production teams had different deployment requirements and often used ‘operations’ tooling to perform releases. The scripts that drove deployments for development and testing were not accepted by the teams managing the production environments. This began a culture shift. We saw the rise of “Site Reliability” engineers, individuals who work at an operational level but are assigned to development teams. This started a conversation about automating the continuous deployment step of the DevOps pipeline, shifting the focus from continuous integration to a repeatable continuous deployment step integrated into the continuous delivery orchestration. To support what the operational side of the house needed, it became apparent that automated tooling specific to deployments was required. In particular, solutions serving the auditability and change management of production endpoints were required to build a DevOps pipeline that truly served both sides of the equation. The deployment automation category was born.


# Definition {#heading-def}
Continuous deployment is an approach where working software is released to users automatically on every commit. The process is repeatable and auditable.

# Description and Scope {#heading-das}
The need to automate deployments grew out of the continuous integration movement. Developers automated deployments from their CI workflows using a simple deployment script to update their development environments for unit testing. Initially the scripts were just a copy command. As the industry evolved, steps to recycle web servers and tweak environment configurations were added to the scripts. The deployment step became more and more complicated and critical, and testing teams became more dependent on developers to perform testing releases. In many ways, this need evolved a simple CI workflow into a Continuous Delivery workflow, automating the update to testing upon a successful unit test in development. Now one workflow called another, and we began the journey into continuous delivery.

Once unit testing was complete, the need to push updates to testing and production drove the evolution of deployment automation to include broader management of the deployment process, with the goal of repeatability across all stages. While continuous deployment had been embraced by developers and testers, production teams were not willing to accept updates on a high-frequency basis. Operations teams, with the goal of maintaining a stable production environment, have a culture of being risk-averse. In addition, the deployment needs of production consistently differ from the needs of development and testing. Creating a single platform for managing deployments across the lifecycle pipeline became the goal of the continuous deployment movement.

Continuous deployments can be viewed in two ways: as a push process or a pull process. A push solution updates environments upon a call from the continuous delivery orchestration engine. A pull solution, such as GitOps, manages deployments based on a ‘state’ defined by configuration data in a deployment file stored in an ‘environment’ repository. An operator running in the environment monitors the state by referencing the deployment file and coordinates the updates. In either case, a new update ‘event’ triggers the Continuous Delivery process to perform an action. That action can push a deployment, or create a pull request to update a deployment file in a repository. The outcome is the same: a consistent, repeatable deployment process.
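The pull model's reconciliation step can be sketched as a single pass of a hypothetical operator, with desired state (parsed from the deployment file) and actual state represented as simple name-to-version maps:

```python
def reconcile(desired: dict, actual: dict) -> list[str]:
    """One pass of a pull-model operator: compare the desired state from
    the environment repository's deployment file with the actual state of
    the environment, and return the convergence actions."""
    actions = []
    for name, version in desired.items():
        if actual.get(name) != version:
            actions.append(f"deploy {name}:{version}")
    for name in actual.keys() - desired.keys():
        actions.append(f"remove {name}")
    return actions

# Hypothetical desired state vs. what is currently running.
desired = {"cart": "1.4.0", "catalog": "2.1.3"}
actual = {"cart": "1.3.9", "catalog": "2.1.3", "legacy-api": "0.9.1"}
actions = reconcile(desired, actual)
```

A push solution computes the same kind of plan, except the orchestration engine, not an in-environment operator, initiates it.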



# Best Practices {#heading-bp}

## Repeatability {#heading-repeat}
The deployment process must be repeatable across all stages of the pipeline. To achieve repeatability, values that are specific to an environment should be separated from the deployment tasks. This allows the logic of the deployment to remain consistent, while the values change according to the endpoint.
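The separation of deployment logic from environment-specific values can be sketched as follows; the environment names, hosts, and steps are illustrative assumptions:

```python
# Environment-specific values live apart from the deployment logic.
ENVIRONMENTS = {
    "dev":  {"host": "dev.example.internal",  "replicas": 1},
    "qa":   {"host": "qa.example.internal",   "replicas": 2},
    "prod": {"host": "prod.example.internal", "replicas": 6},
}

def deployment_plan(artifact: str, env: str) -> dict:
    """The same deployment logic runs for every stage of the pipeline;
    only the injected values change per endpoint."""
    values = ENVIRONMENTS[env]
    return {"artifact": artifact, "target": values["host"],
            "replicas": values["replicas"],
            "steps": ["stop", "copy", "start"]}
```

Because the steps are identical in every environment, a deployment that succeeded in qa exercises the same logic that will run in prod.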

## Automation to Reduce One-Off Scripting {#heading-auto}
Continuous deployment requires the ability to scale quickly. This means that the reliance on deployment scripts can impede scaling of your release process. To avoid the reliance on scripts, the process should include a set of reusable tasks, components and functions that can define a templated approach to deployments.

## Environment Modeling {#heading-env}
A logical view of your endpoints (their use, ownership and capabilities) is essential for defining your release landscape and creating a reference for automated deployments. Reporting on environment configurations is required for surfacing the differences between any two environments, a process needed for debugging when a deployment does not perform as expected based on metrics defined in a previous environment.

## Approval and Approval Gates {#heading-approval}
Depending on your industry vertical, approvals of releases to testing and production environments may be required. Highly regulated markets require a separation of duties, which translates directly into restricting access to certain stages of the application lifecycle, such as testing and production. If you operate in a highly regulated market, your release strategy should include a method of notification and approval for moving a new release to a particular location.

## Release Coordination and Auditing {#heading-release}
Tracking and coordinating activities across both automated and manual steps in the deployment process is needed for a clear understanding of what occurred. In addition, all activities manual or automated, should include an audit log showing who, when and where an update occurred. This level of information can be used for resolving an incident, and serves the purpose of Audit teams in certain highly regulated industry segments.

## Inventory Tracking {#heading-inv}
The location of any artifact deployed to any location in an environment should be recorded. Understanding what is running in any environment is essential for maintaining a high level of service and quality. The inventory tracking should allow for viewing and comparing from the point of view of an artifact to all locations where the artifact is installed. From the environment view, the tracking should show all artifacts across all applications that are deployed to the environment.
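The two views described above (artifact-centric and environment-centric) can be sketched with a minimal in-memory structure; a real inventory would persist this data, and the names used are hypothetical:

```python
from collections import defaultdict

class Inventory:
    """Record where each artifact is deployed; support both the artifact
    view (all locations of one artifact) and the environment view (all
    artifacts deployed to one environment)."""
    def __init__(self):
        self._by_artifact = defaultdict(set)
        self._by_env = defaultdict(set)

    def record(self, artifact: str, environment: str) -> None:
        self._by_artifact[artifact].add(environment)
        self._by_env[environment].add(artifact)

    def locations_of(self, artifact: str) -> set:
        return self._by_artifact[artifact]

    def contents_of(self, environment: str) -> set:
        return self._by_env[environment]
```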

## Calendar and Scheduling {#heading-cal}
For larger enterprises where approvals and policies are needed for a release to occur, a calendar and scheduling process should be used. The calendar should be defined based on the environment and allow for collaboration between development teams, testing teams and production teams showing when a release is scheduled or requested.

## Immutable Deployments {#heading-imm}
The continuous deployment process should be free from manual changes. This requires all release metadata and logic to be maintained in an immutable state, so that the deployment can be re-executed with the assurance that no manual touches occurred.

## Deployment Models {#heading-deploy}
Canary deployments, blue/green deployments and rolling blue/green deployments are common methods of ‘testing’ a release, mainly in production environments. The continuous deployment process should support the various deployment models required by production teams.
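The core of a canary deployment, routing a small share of traffic to the new release, can be sketched with a simple weighted router; the percentage and release names are illustrative:

```python
import random

def route(canary_weight: float) -> str:
    """Send a fraction of requests to the canary release; the rest go to
    the stable release. canary_weight is the canary share, e.g. 0.10."""
    return "canary" if random.random() < canary_weight else "stable"

# Simulate a 10% canary split over many requests; in practice a load
# balancer or service mesh performs this split, and the canary's error
# and latency metrics decide whether to promote or roll back.
counts = {"canary": 0, "stable": 0}
for _ in range(10_000):
    counts[route(0.10)] += 1
```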

## Push Vs. Pull {#heading-pvp}
In reality, all deployments are ‘push’ deployments. Even in a GitOps methodology, a push drives a pull request: a deployment is initiated by committing a deployment definition (a .yaml file) to an environment repository. All other best practices should still be applied to a pull-based GitOps process.

## Policies {#heading-pol}
Automation of the deployment may require specific guardrails depending on the environment. Policies should be defined to allow the automation process to incorporate standard ‘rules’ around any specific deployment that align with the organizational culture.