diff --git a/chapters/01-introduction-ebook.md b/chapters/01-introduction-ebook.md index 655633e..d47b93d 100644 --- a/chapters/01-introduction-ebook.md +++ b/chapters/01-introduction-ebook.md @@ -1,11 +1,11 @@ \newpage -© 2020 Rendered Text. All rights reserved. +© 2021 Rendered Text. All rights reserved. This book is open source: -$MONTHYEAR: First edition v1.1 (revision $REVISION) +$MONTHYEAR: Second edition v2.0 (revision $REVISION) \newpage @@ -53,6 +53,19 @@ Chapter 3, "Best Practices for Cloud Native Applications", describes how both ou Chapter 4, "A Complete CI/CD Pipeline", is a step-by-step guide to implementing a CI/CD pipeline with Semaphore that builds, tests, and deploys a Dockerized microservice to Kubernetes. +## Changes in the Second Edition + +A few changes were introduced in this second edition: + +- Moved to Kubernetes version v1.20. All commands and actions were tested with this version. +- Added comments about accessing services in local development Kubernetes clusters. +- Added mention of new CI/CD features in Semaphore: parameterized pipelines, test results, code change detection. +- DigitalOcean deployment now uses their Private Container Registry service instead of Docker Hub. +- Updated setup steps for DigitalOcean, Google Cloud, and AWS. +- Updated UI screenshots using higher resolution. +- Modified deployment tutorial to use parametrized promotions. +- Other minor fixes. + ## How to Contact Us We would very much love to hear your feedback after reading this book. What did you like and learn? What could be improved? Is there something we could explain further? diff --git a/chapters/01-introduction.md b/chapters/01-introduction.md index b1b515a..73eb33e 100644 --- a/chapters/01-introduction.md +++ b/chapters/01-introduction.md @@ -2,7 +2,7 @@ © 2021 Rendered Text. All rights reserved. -This work is licensed under Creative Commmons +This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International. To view a copy of this license, visit @@ -33,7 +33,7 @@ Today there's a massive change going on in the way we're using the cloud. To bor Doing so successfully, however, requires our applications to adapt. They need to be disposable and horizontally scalable. They should have a minimal divergence between development and production so that we can continuously deploy them multiple times per day. -A new generation of tools has democratized the way of building such *cloud native* software. Docker container is now the standard way of packaging software in a way that can be deployed, scaled, and dynamically distributed on any cloud. And Kubernetes is the leading platform to run containers in production. Over time new platforms with higher-order interfaces will emerge, but it's almost certain that they will be based on Kubernetes. +A new generation of tools has democratized the way of building such *cloud native* software. Docker containers are now the standard way of packaging software in a way that can be deployed, scaled, and dynamically distributed on any cloud. And Kubernetes is the leading platform to run containers in production. Over time new platforms with higher-order interfaces will emerge, but it's almost certain that they will be based on Kubernetes. The great opportunity comes potentially at a high cost. Countless organizations have spent many engineering months learning how to deliver their apps with this new stack, making sense of disparate information from the web. 
Delaying new features by months is not exactly the outcome any business wants when engineers announce that they're moving to new tools that are supposed to make them more productive. @@ -76,7 +76,7 @@ A few changes were introduced in this second edition: - Moved to Kubernetes version v1.20. All commands and actions were tested with this version. - Added comments about accessing services in local development Kubernetes clusters. -- Added mention of new CI/CD features in Semaphore: parametrized pipelines, test results, code change detection. +- Added mention of new CI/CD features in Semaphore: parameterized pipelines, test results, code change detection. - DigitalOcean deployment now uses their Private Container Registry service instead of Docker Hub. - Updated setup steps for DigitalOcean, Google Cloud, and AWS. - Updated UI screenshots using higher resolution. diff --git a/chapters/02-using-docker.md b/chapters/02-using-docker.md index 3756ffe..3e2f8ea 100644 --- a/chapters/02-using-docker.md +++ b/chapters/02-using-docker.md @@ -43,11 +43,11 @@ We will see how to get there. After we build container images, we can run them consistently on any server environment. Automating server installation would usually require steps (and domain knowledge) specific to our infrastructure. For instance, if we are using AWS EC2, we may use AMI (Amazon Machine Images), but these images are different (and built differently) from the ones used on Azure, Google Cloud, or a private OpenStack cluster. -Configuration management systems (like Ansible, Chef, Puppet, or Salt) help us by describing our servers and their configuration as manifests that live in version-controlled source repositories. This helps, but writing these manifests is no easy task, and they don’t guarantee reproducible execution. These manifests have to be adapted when switching distributions, distribution versions, and sometimes even from a cloud provider to another, because they would use different network interface or disk naming, for instance. +Configuration management systems (like Ansible, Chef, Puppet, or Salt) help us by describing our servers and their configuration as manifests that live in version-controlled source repositories. This helps, but writing these manifests is no easy task, and they don’t guarantee reproducible execution. These manifests have to be adapted when switching distributions, distribution versions, and sometimes even from a cloud provider to another, because they would use different network interfaces or disk naming, for instance. Once we have installed the Docker Engine (the most popular option), it can run any container image and effectively abstract these environment discrepancies. -The ability to stage up new environments easily and reliably gives us exactly what we need to set up CI/CD (continuous integration and continuous delivery). We will see how to get there. Ultimately, it means that advanced techniques, such as blue/green deployments, or immutable infrastructure, become accessible to us, instead of being a privilege of larger organizations able to spend a lot of time to build their perfect custom tooling. +The ability to stage new environments easily and reliably gives us exactly what we need to set up CI/CD (continuous integration and continuous delivery). We will see how to get there. 
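For instance, standing up a disposable copy of a service for a quick check can be a one-liner. This is only a sketch; the image name, port, and endpoint here are hypothetical:

```
$ docker run --rm -d -p 8080:80 --name hasher-test example/hasher:1.0.2
$ curl http://localhost:8080/    # poke at the throwaway copy however we like
$ docker stop hasher-test        # --rm cleans up: the environment disappears with the container
```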
Ultimately, it means that advanced techniques, such as blue/green deployments, or immutable infrastructure, become accessible to us, instead of being a privilege of larger organizations able to spend a lot of time to build their perfect custom tooling. ### 1.1.3 Less Risky Releases @@ -63,7 +63,7 @@ As a result, we can deploy with more confidence, because we know that if somethi ## 1.2 A Roadmap to Adopting Docker -The following roadmap works for organizations and teams of all size, regardless of their existing knowledge of containers. Even better, this roadmap will give you tangible benefits at each step, so that the gains realized give you more confidence into the whole process. +The following roadmap works for organizations and teams of all sizes, regardless of their existing knowledge of containers. Even better, this roadmap will give you tangible benefits at each step, so that the gains realized give you more confidence in the whole process. Sounds too good to be true? @@ -89,7 +89,7 @@ If we have a component that is tricky enough to require a tool like Vagrant to r ### 1.2.2 Writing the First Dockerfile -There are various ways to write your first Dockerfile, and none of them is inherently right or wrong. Some people prefer to follow the existing environment as close as possible. For example, if you're currently using PHP 7.2 with Apache 2.4, and have some very specific Apache configuration and `.htaccess` files? Sure, makes sense to put that in containers. But if you prefer to start anew from your `.php` files, serve them with PHP FPM, and host the static assets from a separate NGINX container, that’s fine too. Either way, the [official PHP images](https://hub.docker.com/r/_/php/) got us covered. +There are various ways to write your first Dockerfile, and none of them is inherently right or wrong. Some people prefer to follow the existing environment as closely as possible. For example, if you're currently using PHP 7.2 with Apache 2.4, and have some very specific Apache configuration and `.htaccess` files? Sure, it makes sense to put that in containers. But if you prefer to start anew from your `.php` files, serve them with PHP FPM, and host the static assets from a separate NGINX container, that’s fine too. Either way, the [official PHP images](https://hub.docker.com/r/_/php/) got us covered. During this phase, we’ll want to make sure that the team working on that service has Docker installed on their machine, but only a few people will have to meddle with Docker at this point. They will be leveling the field for everyone else. @@ -104,7 +104,7 @@ CMD ["ruby", "hasher.rb"] EXPOSE 80 ``` -Once we have a working Dockerfile for an app, we can start using this container image as the official development environment for this specific service or component. If we picked a fast-moving one, we will see the benefits very quickly, since Docker makes library and other dependency upgrades completely seamless. Rebuilding the entire environment with a different language version now becomes effortless. And if we realize after a difficult upgrade that the new version doesn’t work as well, rolling back is just as easy and instantaneous, because Docker keeps a cache of previous image builds around. +Once we have a working Dockerfile for an app, we can start using this container image as the official development environment for this specific service or component. 
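In day-to-day terms, that can be as simple as the following sketch (the image tag is an assumption, not something defined by the sample project):

```
$ docker build -t hasher:dev .             # rebuild the environment whenever the Dockerfile changes
$ docker run --rm -p 8080:80 hasher:dev    # every developer runs the exact same environment
```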
If we pick a fast-moving one, we will see the benefits very quickly, since Docker makes library and other dependency upgrades completely seamless. Rebuilding the entire environment with a different language version now becomes effortless. And if we realize after a difficult upgrade that the new version doesn’t work as well, rolling back is just as easy and instantaneous, because Docker keeps a cache of previous image builds around. ### 1.2.3 Writing More Dockerfiles diff --git a/chapters/03-kubernetes-deployment.md b/chapters/03-kubernetes-deployment.md index 5b16be6..ff970a3 100644 --- a/chapters/03-kubernetes-deployment.md +++ b/chapters/03-kubernetes-deployment.md @@ -123,7 +123,7 @@ deployment? ## 2.2 Declarative vs Imperative Systems -Kubernetes is a **declarative system** (which is the opposite of an imperative systems). +Kubernetes is a **declarative system** (which is the opposite of an imperative system). This means that you can't give it orders. You can't say, "Run this container." All you can do is describe what you want to have and wait for Kubernetes to take action to reconcile @@ -354,7 +354,7 @@ to versions 1, 2, and 3 of the application) accordingly. ## 2.7 MaxSurge and MaxUnavailable Kubernetes doesn't exactly update deployments one pod at a time. -Earlier, you learned that that deployments had "a few extra parameters": these +Earlier, you learned that deployments had "a few extra parameters": these parameters include `MaxSurge` and `MaxUnavailable`, and they indicate the pace at which the update should proceed. @@ -390,7 +390,7 @@ The default values for both parameters are 25%, meaning that when updating a deployment of size 100, 25 new pods are immediately created, while 25 old pods are shutdown. Each time a new pod comes up and is marked ready, another old pod can -be shutdown. Each time an old pod has completed its shutdown +be shutdown. Each time an old pod has completed its shut down and its resources have been freed, another new pod can be created. ## 2.8 Quick Demo @@ -468,7 +468,7 @@ $ kubectl expose deployment web --port=80 The service will have its own internal IP address (denoted by the name `ClusterIP`) and an optional external IP, -and connections to these IP address on port 80 will be load-balanced +and connections to these IP addresses on port 80 will be load-balanced across all the pods of this deployment. In fact, these connections will be load-balanced across all the pods @@ -483,9 +483,9 @@ will receive connections automatically. This means that during a rollout, the deployment doesn't reconfigure or inform the load balancer that pods are started and stopped. It happens automatically through the selector of the service -associated to the load balancer. +associated with the load balancer. -If you're wondering how probes and healthchecks play into this, +If you're wondering how probes and health checks play into this, a pod is added as a valid endpoint for a service only if all its containers pass their readiness check. In other words, a pod starts receiving traffic only once it's actually ready for it. @@ -538,9 +538,9 @@ send traffic anywhere: $ kubectl create service clusterip web --tcp=80 ``` -**Note**: when running a local development Kubernetes cluster, such as MiniKube[^minikube] or the one bundled with Docker Desktop, you'll wish to change the previous command to: `kubectl create service nodeport web --tcp=80`. The NodePort type of service is easier to access locally as the service ports are forwared to `localhost` automatically. 
To see this port mapping run `kubectl get services`. +**Note**: when running a local development Kubernetes cluster, such as MiniKube[^minikube] or the one bundled with Docker Desktop, you'll wish to change the previous command to: `kubectl create service nodeport web --tcp=80`. The NodePort type of service is easier to access locally as the service ports are forwarded to `localhost` automatically. To see this port mapping run `kubectl get services`. -Now, you can update the selector of service `web` by +Now, you can update the selector of the service `web` by running `kubectl edit service web`. This will retrieve the definition of service `web` from the Kubernetes API, and open it in a text editor. Look for the section that says: diff --git a/chapters/04-cicd-best-practices.md b/chapters/04-cicd-best-practices.md index 96005b7..092b9bd 100644 --- a/chapters/04-cicd-best-practices.md +++ b/chapters/04-cicd-best-practices.md @@ -28,13 +28,13 @@ For this to happen, the CI/CD tool of choice should fit into the existing develo A reliable pipeline always produces the same output for a given input. And with consistent runtime. Intermittent failures cause intense frustration among developers. -Engineers like to do things independently, and they often opt to maintain their CI/CD system. But operating CI/CD that provides on-demand, clean, stable, and fast resources is a complicated job. What seems to work well for one project or a few developers usually breaks down later. The team and the number of projects grow the technology stack changes. Then someone from management realizes that by delegating that task, the team could spend more time on the actual product. At that point, if not earlier, the engineering team moves from a self-hosted to a cloud-based CI/CD solution. +Engineers like to do things independently, and they often opt to maintain their CI/CD system. But operating CI/CD that provides on-demand, clean, stable, and fast resources is a complicated job. What seems to work well for one project or a few developers usually breaks down later. The team and the number of projects grow as the technology stack changes. Then someone from management realizes that by delegating that task, the team could spend more time on the actual product. At that point, if not earlier, the engineering team moves from a self-hosted to a cloud-based CI/CD solution. ### 3.1.3 Completeness Any increase in automation is a positive change. However, a CI/CD pipeline needs to run and visualize everything that happens to a code change — from the moment it enters the repository until it runs in production. This requires the CI/CD tool to be able to model both simple and, when needed, complex workflows. That way, manual errors are all but impossible. -For example, it’s not uncommon to have the pipeline run only the build and test steps. Deployment remains a manual operation, often performed by a single person. This is a relic of the past when CI tools unable to model delivery workflows. +For example, it’s not uncommon to have the pipeline run only the build and test steps. Deployment remains a manual operation, often performed by a single person. This is a relic of the past when CI tools were unable to model delivery workflows. Today a service like Semaphore provides features like: @@ -101,7 +101,7 @@ There are cases when complete automation is not possible. You may have customers But if these conditions do not apply and you still think that your pipeline can’t be fully automated — you’re almost certainly wrong. 
-Take a good look at your end-to-end process and uncover where you’re doing things manually out of habit. Make a plan to make any changes that may be needed, are automate it. +Take a good look at your end-to-end process and uncover where you’re doing things manually out of habit. Make a plan to make any changes that may be needed, and automate it. ## 3.3 Continuous Integration Best Practices @@ -133,7 +133,7 @@ If a CI build takes a long time, we approach our work defensively. We tend to ke With a slow build, every “git push” leads to a huge distraction. We either wait or look for something else to do to avoid being completely idle. And if we context-switch to something else, we know that we’ll need to switch back again when the build is finished. The catch is that every task switch in programming is hard, and it sucks up our energy. -The point of continuous in continuous integration is speed. Speed drives high productivity: we want feedback as soon as possible. Fast feedback loops keep us in a state of flow, which is the source of our happiness at work. +The point of *continuous* in continuous integration is speed. Speed drives high productivity: we want feedback as soon as possible. Fast feedback loops keep us in a state of flow, which is the source of our happiness at work. So, it’s helpful to establish criteria for how fast should a CI process be: @@ -154,7 +154,7 @@ With that last question, only a few hands remain. Those are the people who pass There are a couple of tactics which you can employ to reduce CI build time: - **Caching**: Project dependencies should be independently reused across builds. When building Docker containers, use the layer caching feature to reuse known layers from the registry. -- **Built-in Docker registry**: A container-native CI solution should include a built-in registry. This saves a lot of money comparing to using the registry provided by your cloud provider. It also speeds up CI, often by several minutes. +- **Built-in Docker registry**: A container-native CI solution should include a built-in registry. This saves a lot of money compared to using the registry provided by your cloud provider. It also speeds up CI, often by several minutes. - **Test parallelization**: A large test suite is the most common reason why CI is slow. The solution is to distribute tests across as many parallel jobs as needed. - **Change detection**: Large test suites can be dramatically sped up by only testing code that has changed since the last commit. @@ -197,7 +197,7 @@ According to this strategy, a test suite has: - The most unit tests. - Somewhat less service-level tests, which include calls to the database and any other core external resource. -- Few user interface, or end-to-end tests. These serve to verify the behavior of the system as a whole, usually from the user's perspective. +- Few user interfaces, or end-to-end tests. These serve to verify the behavior of the system as a whole, usually from the user's perspective. If a team follows this strategy, a failing unit test is a signal of a fundamental problem. The remaining high-level and long-running tests are irrelevant until we resolve the problem. @@ -271,7 +271,7 @@ It’s crucial to maintain the discipline of having every single change go throu It can be tempting to break this rule in cases of seemingly exceptional circumstances and revert to manual procedures that circumvent the pipeline. 
On the contrary, the times of crisis are exactly when the pipeline delivers value by making sure that the system doesn’t degrade even further. When timing is critical, the pipeline should roll back to the previous release. -Once it happens that the configuration and history of the CI/CD pipeline diverge from what teams do in reality, it’s difficult to re-establish the automation and the culture of quality. For this reason, it’s important to invest time in making the pipeline fast so that no one feels encouraged to skip it. +Once it happens that the configuration and history of the CI/CD pipeline diverge from what teams do in reality, it’s difficult to re-establish automation and the culture of quality. For this reason, it’s important to invest time in making the pipeline fast so that no one feels encouraged to skip it. ### 3.4.2 Developers Can Deploy to Production-Like Staging Environments at a Push of a Button @@ -293,7 +293,7 @@ Today containers guarantee that your code always runs in the same environment. Y Other environments are still not the same as production, since reproducing the same infrastructure and load is expensive. However, the differences are manageable, and we get to avoid most of the errors that would have occurred with non-identical environments. -Chapter 1 includes a roadmap for adopting Docker for this purpose. Chapter 2 described some of the advanced deployment strategies that you can use with Kubernetes. Strategies like blue-green and canary deployment reduce the risk of bad deploys. Now that we know what a proper CI/CD pipeline should look like, it’s time to start implementing it. +Chapter 1 includes a roadmap for adopting Docker for this purpose. Chapter 2 described some of the advanced deployment strategies that you can use with Kubernetes. Strategies like blue-green and canary deployment reduce the risk of bad deployments. Now that we know what a proper CI/CD pipeline should look like, it’s time to start implementing it. [^jez]: What is Proper Continuous Integration, Semaphore [https://semaphoreci.com/blog/2017/03/02/what-is-proper-continuous-integration.html](https://semaphoreci.com/blog/2017/03/02/what-is-proper-continuous-integration.html?utm_source=ebook&utm_medium=pdf&utm_campaign=cicd-docker-kubernetes-semaphore) diff --git a/chapters/05-tutorial-intro.md b/chapters/05-tutorial-intro.md index d90051f..8c3882e 100644 --- a/chapters/05-tutorial-intro.md +++ b/chapters/05-tutorial-intro.md @@ -274,7 +274,7 @@ The CI pipeline performs the following steps: - **Test**: Start the container and run tests inside. -- **Docker push**: If all test pass, push the accepted image to the production registry. +- **Docker push**: If all tests pass, push the accepted image to the production registry. In this process, we'll use Semaphore’s built-in Docker registry. This is faster and cheaper than using a registry from a cloud vendor to work with containers in the CI/CD context. @@ -282,7 +282,7 @@ In this process, we'll use Semaphore’s built-in Docker registry. This is faste In chapter 2, we learned about canaries and rolling deployments. In chapter 3, we have talked about Continuous Delivery and Continuous Deployment. Our CI/CD workflow combines these two practices. -A canary deployment is a limited release of a new version. We’ll call it _canary release_, and the previous version still used by most users is the _stable release_. +A canary deployment is a limited release of a new version. 
We’ll call it: the _canary release_, and the previous version still used by most users is the _stable release_. We can do a canary deployment by connecting the canary pods to the same load balancer as the rest of the pods. As a result, a set fraction of user traffic goes to the canary. For example, if we have nine stable pods and one canary pod, 10% of the users would get the canary release. @@ -292,7 +292,7 @@ The canary release performs the following steps: - **Copy** the image from the Semaphore registry to the production registry. - **Canary deploy** a canary pod. -- **Test** the canary pod to ensure it’s working by running automate functional tests. We may optionally also perform manual QA. +- **Test** the canary pod to ensure it’s working by running automated functional tests. We may optionally also perform manual QA. - **Stable release**: if test passes, update the rest of the pods. Let’s take a closer look at how the stable release works. diff --git a/chapters/06-tutorial-semaphore.md b/chapters/06-tutorial-semaphore.md index 6edac84..64e3553 100644 --- a/chapters/06-tutorial-semaphore.md +++ b/chapters/06-tutorial-semaphore.md @@ -45,7 +45,7 @@ On Semaphore, click on *New Project* at the top of the screen. Then, click on *C In the search field, start typing `semaphore-demo-cicd-kubernetes` and choose that repository. -Semaphore will quickly initialize the project. Behind the scenes, it will set up everything that's needed to know about every Git push automatically pull the latest code — without you configuring anything. +Semaphore will quickly initialize the project. Behind the scenes, it will set up everything that's needed to know about every Git push automatically pulling the latest code — without you configuring anything. The next screen lets you invite collaborators to your project. Semaphore mirrors access permissions of GitHub, so if you add some people to the GitHub repository later, you can "sync" them inside project settings on Semaphore. @@ -93,7 +93,7 @@ Jobs inherit their configuration from their parent block. All the jobs in a bloc Blocks run sequentially. Once all the jobs in the block are complete, the next one starts. -### 4.4.5 The Continous Integration Pipeline +### 4.4.5 The Continuous Integration Pipeline We talked about the benefits of CI/CD in chapter 3. In the previous section, we created our very first pipeline. In this section, we’ll extend it with tests and a place to store the images. @@ -143,7 +143,7 @@ The discerning reader will note that we introduced special environment variables ![Build block](./figures/05-sem-build-block-2.png){ width=95% } -Now that we have a Docker image that we can test let’s add a second block. Click on the *+Add Block* dotted box. +Now that we have a Docker image that we can test, let’s add a second block. Click on the *+Add Block* dotted box. The Test block will have jobs: diff --git a/chapters/08-tutorial-deployment.md b/chapters/08-tutorial-deployment.md index b5f63f8..5f22ffe 100644 --- a/chapters/08-tutorial-deployment.md +++ b/chapters/08-tutorial-deployment.md @@ -4,7 +4,7 @@ Now that we have our cloud services, we’re ready to prepare the canary deploym Our project on GitHub includes three ready-to-use reference pipelines for deployment. They should work out-of-the-box in combination with the secrets as described earlier. For further details, check the `.semaphore` folder in the project. -In this section, you'll learn how to create deployment pipelines on Semaphore from scratch. 
We'll use DigitalOcean and Docker Hub registry as an example, but the process is essentially the same for other clouds. +In this section, you'll learn how to create deployment pipelines on Semaphore from scratch. We'll use DigitalOcean as an example, but the process is essentially the same for other clouds. ### 4.7.1 Creating a Promotion and Deployment Pipeline @@ -71,7 +71,7 @@ Open the *Environment Variables* section: - Create a variable called `CLUSTER_NAME` with the DigitalOcean cluster name (`semaphore-demo-cicd-kubernetes`) - Create a variable called `REGISTRY_NAME` with the name of the DigitalOcean container registry name. -To connect with the DigitalOcean cluster, we can use preinstalled the official `doctl` tool. +To connect with the DigitalOcean cluster, we can use the preinstalled official `doctl` tool. Add the following commands to the *job*: @@ -186,7 +186,7 @@ addressbook-canary 1/1 1 1 8m40s ### 4.8.3 Releasing the Stable -In tandem with the canary deployment, we should have a dashboard to monitor errors, user reports, and performance metrics to compare against the baseline. After some pre-determined amount of time, we would reach a go vs. no-go decision. Is the canary version is good enough to be promoted to stable? If so, the deployment continues. If not, after collecting the necessary error reports and stack traces, we roll back and regroup. +In tandem with the canary deployment, we should have a dashboard to monitor errors, user reports, and performance metrics to compare against the baseline. After some predetermined amount of time, we would reach a go vs. no-go decision. Is the canary version good enough to be promoted to stable? If so, the deployment continues. If not, after collecting the necessary error reports and stack traces, we roll back and regroup. Let’s say we decide to go ahead. So go on and hit the *Promote* button. You can tweak the number of final pods to deploy. The stable pipeline should be done in a few seconds. @@ -311,7 +311,7 @@ Run the workflow once more and make a canary release, but this time try rolling ![Rollback Pipeline](./figures/05-sem-rollback-canary.png){ width=95% } -And we’re back to normal, phew\! Now its time to check the job logs to see what went wrong and fix it before merging to master again. +And we’re back to normal, phew\! Now it’s time to check the job logs to see what went wrong and fix it before merging to master again. **But what if we discover a problem after we deploy a stable release?** Let’s imagine that a defect sneaked its way into production. It can happen, maybe there was some subtle bug that no one found hours or days in. Or perhaps some error was not picked up by the functional test. Is it too late? Can we go back to the previous version? @@ -373,7 +373,7 @@ To access a pod network from your machine, forward a port with `port-forward`, f These are some common error messages that you might run into: - - Manifest is invalid: it usually means that the manifest YAML syntax is incorrect. Use `kubectl --dry-run` or `--validate` options verify the manifest. + - Manifest is invalid: it usually means that the manifest YAML syntax is incorrect. Use `kubectl --dry-run` or `--validate` options to verify the manifest. - `ImagePullBackOff` or `ErrImagePull`: the requested image is invalid or was not found. Check that the image is in the registry and that the reference in the manifest is correct. - `CrashLoopBackOff`: the application is crashing, and the pod is shutting down. Check the logs for application errors. 
- Pod never leaves `Pending` status: the pod cannot be scheduled or fully created; a common cause is that one of the Kubernetes secrets it references is missing.
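When any of these come up, a few generic kubectl commands usually point to the root cause. This is a sketch; substitute the actual pod name:

```
$ kubectl get pods                      # overall status and restart counts
$ kubectl describe pod <pod-name>       # the Events section shows scheduling, image pull, and mount errors
$ kubectl logs <pod-name> --previous    # logs from the last crashed container, useful for CrashLoopBackOff
```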