profread, update preface
TomFern committed Aug 12, 2021
1 parent 9e791ec commit ae881e2
Showing 8 changed files with 50 additions and 37 deletions.
17 changes: 15 additions & 2 deletions chapters/01-introduction-ebook.md
@@ -1,11 +1,11 @@
\newpage

© 2020 Rendered Text. All rights reserved.
© 2021 Rendered Text. All rights reserved.

This book is open source:
<https://github.com/semaphoreci/book-cicd-docker-kubernetes>

$MONTHYEAR: First edition v1.1 (revision $REVISION)
$MONTHYEAR: Second edition v2.0 (revision $REVISION)

\newpage

@@ -53,6 +53,19 @@ Chapter 3, "Best Practices for Cloud Native Applications", describes how both ou

Chapter 4, "A Complete CI/CD Pipeline", is a step-by-step guide to implementing a CI/CD pipeline with Semaphore that builds, tests, and deploys a Dockerized microservice to Kubernetes.

## Changes in the Second Edition

A few changes were introduced in this second edition:

- Moved to Kubernetes version v1.20. All commands and actions were tested with this version.
- Added comments about accessing services in local development Kubernetes clusters.
- Added mention of new CI/CD features in Semaphore: parameterized pipelines, test results, code change detection.
- DigitalOcean deployment now uses their Private Container Registry service instead of Docker Hub.
- Updated setup steps for DigitalOcean, Google Cloud, and AWS.
- Updated UI screenshots using higher resolution.
- Modified the deployment tutorial to use parameterized promotions.
- Other minor fixes.

## How to Contact Us

We would very much love to hear your feedback after reading this book. What did you like and learn? What could be improved? Is there something we could explain further?
6 changes: 3 additions & 3 deletions chapters/01-introduction.md
@@ -2,7 +2,7 @@

© 2021 Rendered Text. All rights reserved.

This work is licensed under Creative Commmons
This work is licensed under Creative Commons
Attribution-NonCommercial-NoDerivatives 4.0 International.
To view a copy of this license, visit
<https://creativecommons.org/licenses/by-nc-nd/4.0>
@@ -33,7 +33,7 @@ Today there's a massive change going on in the way we're using the cloud. To bor

Doing so successfully, however, requires our applications to adapt. They need to be disposable and horizontally scalable. They should have a minimal divergence between development and production so that we can continuously deploy them multiple times per day.

A new generation of tools has democratized the way of building such *cloud native* software. Docker container is now the standard way of packaging software in a way that can be deployed, scaled, and dynamically distributed on any cloud. And Kubernetes is the leading platform to run containers in production. Over time new platforms with higher-order interfaces will emerge, but it's almost certain that they will be based on Kubernetes.
A new generation of tools has democratized the way of building such *cloud native* software. Docker containers are now the standard way of packaging software in a way that can be deployed, scaled, and dynamically distributed on any cloud. And Kubernetes is the leading platform to run containers in production. Over time new platforms with higher-order interfaces will emerge, but it's almost certain that they will be based on Kubernetes.

The great opportunity comes potentially at a high cost. Countless organizations have spent many engineering months learning how to deliver their apps with this new stack, making sense of disparate information from the web. Delaying new features by months is not exactly the outcome any business wants when engineers announce that they're moving to new tools that are supposed to make them more productive.

@@ -76,7 +76,7 @@ A few changes were introduced in this second edition:

- Moved to Kubernetes version v1.20. All commands and actions were tested with this version.
- Added comments about accessing services in local development Kubernetes clusters.
- Added mention of new CI/CD features in Semaphore: parametrized pipelines, test results, code change detection.
- Added mention of new CI/CD features in Semaphore: parameterized pipelines, test results, code change detection.
- DigitalOcean deployment now uses their Private Container Registry service instead of Docker Hub.
- Updated setup steps for DigitalOcean, Google Cloud, and AWS.
- Updated UI screenshots using higher resolution.
10 changes: 5 additions & 5 deletions chapters/02-using-docker.md
@@ -43,11 +43,11 @@ We will see how to get there.

After we build container images, we can run them consistently on any server environment. Automating server installation would usually require steps (and domain knowledge) specific to our infrastructure. For instance, if we are using AWS EC2, we may use AMI (Amazon Machine Images), but these images are different (and built differently) from the ones used on Azure, Google Cloud, or a private OpenStack cluster.

Configuration management systems (like Ansible, Chef, Puppet, or Salt) help us by describing our servers and their configuration as manifests that live in version-controlled source repositories. This helps, but writing these manifests is no easy task, and they don’t guarantee reproducible execution. These manifests have to be adapted when switching distributions, distribution versions, and sometimes even from a cloud provider to another, because they would use different network interface or disk naming, for instance.
Configuration management systems (like Ansible, Chef, Puppet, or Salt) help us by describing our servers and their configuration as manifests that live in version-controlled source repositories. This helps, but writing these manifests is no easy task, and they don’t guarantee reproducible execution. These manifests have to be adapted when switching distributions, distribution versions, and sometimes even from a cloud provider to another, because they would use different network interfaces or disk naming, for instance.

Once we have installed the Docker Engine (the most popular option), it can run any container image and effectively abstract these environment discrepancies.

The ability to stage up new environments easily and reliably gives us exactly what we need to set up CI/CD (continuous integration and continuous delivery). We will see how to get there. Ultimately, it means that advanced techniques, such as blue/green deployments, or immutable infrastructure, become accessible to us, instead of being a privilege of larger organizations able to spend a lot of time to build their perfect custom tooling.
The ability to stage new environments easily and reliably gives us exactly what we need to set up CI/CD (continuous integration and continuous delivery). We will see how to get there. Ultimately, it means that advanced techniques, such as blue/green deployments, or immutable infrastructure, become accessible to us, instead of being a privilege of larger organizations able to spend a lot of time to build their perfect custom tooling.

### 1.1.3 Less Risky Releases

@@ -63,7 +63,7 @@ As a result, we can deploy with more confidence, because we know that if somethi

## 1.2 A Roadmap to Adopting Docker

The following roadmap works for organizations and teams of all size, regardless of their existing knowledge of containers. Even better, this roadmap will give you tangible benefits at each step, so that the gains realized give you more confidence into the whole process.
The following roadmap works for organizations and teams of all sizes, regardless of their existing knowledge of containers. Even better, this roadmap will give you tangible benefits at each step, so that the gains realized give you more confidence in the whole process.

Sounds too good to be true?

@@ -89,7 +89,7 @@ If we have a component that is tricky enough to require a tool like Vagrant to r

### 1.2.2 Writing the First Dockerfile

There are various ways to write your first Dockerfile, and none of them is inherently right or wrong. Some people prefer to follow the existing environment as close as possible. For example, if you're currently using PHP 7.2 with Apache 2.4, and have some very specific Apache configuration and `.htaccess` files? Sure, makes sense to put that in containers. But if you prefer to start anew from your `.php` files, serve them with PHP FPM, and host the static assets from a separate NGINX container, that’s fine too. Either way, the [official PHP images](https://hub.docker.com/r/_/php/) got us covered.
There are various ways to write your first Dockerfile, and none of them is inherently right or wrong. Some people prefer to follow the existing environment as closely as possible. Are you currently using PHP 7.2 with Apache 2.4, with some very specific Apache configuration and `.htaccess` files? Sure, it makes sense to put that in containers. But if you prefer to start anew from your `.php` files, serve them with PHP FPM, and host the static assets from a separate NGINX container, that’s fine too. Either way, the [official PHP images](https://hub.docker.com/r/_/php/) have us covered.
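
As a sketch of the first approach, a minimal Dockerfile for that Apache setup could look like the following, built on the official PHP images (the config file name and paths are assumptions for illustration):

```dockerfile
# Hypothetical sketch: containerizing an existing PHP 7.2 + Apache 2.4 app.
FROM php:7.2-apache

# Enable mod_rewrite so existing .htaccess rules keep working
RUN a2enmod rewrite

# Copy the application code and any custom Apache configuration
# (file names here are placeholders)
COPY . /var/www/html/
COPY apache-custom.conf /etc/apache2/conf-enabled/

EXPOSE 80
```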

During this phase, we’ll want to make sure that the team working on that service has Docker installed on their machine, but only a few people will have to meddle with Docker at this point. They will be leveling the field for everyone else.

Expand All @@ -104,7 +104,7 @@ CMD ["ruby", "hasher.rb"]
EXPOSE 80
```

Once we have a working Dockerfile for an app, we can start using this container image as the official development environment for this specific service or component. If we picked a fast-moving one, we will see the benefits very quickly, since Docker makes library and other dependency upgrades completely seamless. Rebuilding the entire environment with a different language version now becomes effortless. And if we realize after a difficult upgrade that the new version doesn’t work as well, rolling back is just as easy and instantaneous, because Docker keeps a cache of previous image builds around.
Once we have a working Dockerfile for an app, we can start using this container image as the official development environment for this specific service or component. If we pick a fast-moving one, we will see the benefits very quickly, since Docker makes library and other dependency upgrades completely seamless. Rebuilding the entire environment with a different language version now becomes effortless. And if we realize after a difficult upgrade that the new version doesn’t work as well, rolling back is just as easy and instantaneous, because Docker keeps a cache of previous image builds around.

### 1.2.3 Writing More Dockerfiles

16 changes: 8 additions & 8 deletions chapters/03-kubernetes-deployment.md
@@ -123,7 +123,7 @@ deployment?

## 2.2 Declarative vs Imperative Systems

Kubernetes is a **declarative system** (which is the opposite of an imperative systems).
Kubernetes is a **declarative system** (which is the opposite of an imperative system).
This means that you can't give it orders.
You can't say, "Run this container." All you can do is describe
what you want to have and wait for Kubernetes to take action to reconcile
@@ -354,7 +354,7 @@ to versions 1, 2, and 3 of the application) accordingly.
## 2.7 MaxSurge and MaxUnavailable

Kubernetes doesn't exactly update deployments one pod at a time.
Earlier, you learned that that deployments had "a few extra parameters": these
Earlier, you learned that deployments had "a few extra parameters": these
parameters include `MaxSurge` and `MaxUnavailable`, and they
indicate the pace at which the update should proceed.

@@ -390,7 +390,7 @@ The default values for both parameters are 25%,
meaning that when updating a deployment of size 100, 25 new pods
are immediately created, while 25 old pods are shutdown. Each time
a new pod comes up and is marked ready, another old pod can
be shutdown. Each time an old pod has completed its shutdown
be shut down. Each time an old pod has completed its shutdown
and its resources have been freed, another new pod can be created.
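
In manifest form, this pacing lives in the `strategy` section of the deployment spec. A minimal sketch, with placeholder names and image, showing the two parameters at their defaults:

```yaml
# Sketch of a rolling update strategy; name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 100
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # up to 25 extra pods may exist during the update
      maxUnavailable: 25%  # up to 25 pods may be unavailable at once
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21
```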

## 2.8 Quick Demo
@@ -468,7 +468,7 @@ $ kubectl expose deployment web --port=80

The service will have its own internal IP address
(denoted by the name `ClusterIP`) and an optional external IP,
and connections to these IP address on port 80 will be load-balanced
and connections to these IP addresses on port 80 will be load-balanced
across all the pods of this deployment.

In fact, these connections will be load-balanced across all the pods
@@ -483,9 +483,9 @@ will receive connections automatically.
This means that during a rollout, the deployment doesn't reconfigure
or inform the load balancer that pods are started and stopped.
It happens automatically through the selector of the service
associated to the load balancer.
associated with the load balancer.

If you're wondering how probes and healthchecks play into this,
If you're wondering how probes and health checks play into this,
a pod is added as a valid endpoint for a service only if all its
containers pass their readiness check. In other words, a pod starts
receiving traffic only once it's actually ready for it.
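
For reference, such a readiness check is declared per container in the pod template. A minimal sketch, assuming an HTTP app that answers on port 80 (the image, path, and timings are illustrative):

```yaml
# Sketch: a pod becomes a service endpoint only after this probe passes.
containers:
- name: web
  image: nginx:1.21        # placeholder image
  ports:
  - containerPort: 80
  readinessProbe:
    httpGet:
      path: /              # assumed health endpoint
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
```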
@@ -538,9 +538,9 @@ send traffic anywhere:
$ kubectl create service clusterip web --tcp=80
```

**Note**: when running a local development Kubernetes cluster, such as MiniKube[^minikube] or the one bundled with Docker Desktop, you'll wish to change the previous command to: `kubectl create service nodeport web --tcp=80`. The NodePort type of service is easier to access locally as the service ports are forwared to `localhost` automatically. To see this port mapping run `kubectl get services`.
**Note**: when running a local development Kubernetes cluster, such as Minikube[^minikube] or the one bundled with Docker Desktop, you'll want to change the previous command to: `kubectl create service nodeport web --tcp=80`. The NodePort type of service is easier to access locally, as the service ports are forwarded to `localhost` automatically. To see this port mapping, run `kubectl get services`.

Now, you can update the selector of service `web` by
Now, you can update the selector of the service `web` by
running `kubectl edit service web`. This will retrieve the
definition of service `web` from the Kubernetes API, and open
it in a text editor. Look for the section that says:
