diff --git a/linkerd.io/README.md b/linkerd.io/README.md index aa485f37de..870086fa8d 100644 --- a/linkerd.io/README.md +++ b/linkerd.io/README.md @@ -153,8 +153,8 @@ list page. If a cover image is not present, or you would like to use a different image than the cover image, you can name it `feature` and place it in the blog post folder. -**Note:** When a blog post is featured on the blog listing it will be -cropped into a 4x1 ratio. +**Note:** When a blog post is featured on the blog listing it will be cropped +into a 4x1 ratio. #### Thumbnail images @@ -289,31 +289,29 @@ bash {class=disable-copy} ### Creating a new major version To create a new major version for the Linkerd docs, follow the steps below. As -an example we suppose the latest major is `2.18` and we'd like to create docs -for the upcoming `2.19` version, that will appear at `https://linkerd.io/2.19`. +an example we suppose the latest major is `2.19` and we'd like to create docs +for the upcoming `2.20` version. - Clone the `https://github.com/linkerd/website` repo -- Create a new branch `yourusername/2.19` +- Create a new branch `yourusername/2.20` - Update the latest version in `linkerd.io/config/_default/params.yaml`: - `latestMajorVersion: "2.19"` -- Update the `docs` menu in `linkerd.io/config/_default/menu.yaml` to include a - menu item for `2.19`. + `latestMajorVersion: "2.20"` +- Update the `docs` menu in `linkerd.io/config/_default/menu.yaml` changing the + 2.19 `pageRef` to `/2.19/`. Then include a new menu item for `2.20` with + `pageRef` set to `/docs/`. - Make sure all the links in the edge version (`2-edge`) are relative and don't have the version hard-coded. E.g. `(/../cli/install/#)` instead of `(/2-edge/reference/cli/install/#)`. -- Add a row to the Supported Kubernetes Versions table for `2.19` in +- Add a row to the Supported Kubernetes Versions table for `2.20` in `linkerd.io/content/2-edge/reference/k8s-versions.md`. 
-- Create an entire new directory, copying the edge docs: - `cp -r linkerd.io/content/2-edge linkerd.io/content/2.19`. Any upcoming doc - changes pertaining to `2.19` should be pushed against that new directory and +- Add a row to the Gateway API compatibility table for `2.20` in + `linkerd.io/content/2-edge/features/gateway-api.md`. +- Rename the `docs` directory to `2.19`. +- Create a new directory, copying the edge docs: + `cp -r linkerd.io/content/2-edge linkerd.io/content/docs`. Any upcoming doc + changes pertaining to `2.20` should be pushed against that new directory and the `2-edge` directory. -- Generate the CLI docs with `linkerd doc > linkerd.io/data/cli/2-19.yaml`. Just +- Generate the CLI docs with `linkerd doc > linkerd.io/data/cli/2-20.yaml`. Just to make sure the edge data is up to date, copy the contents from this newly generated file to `linkerd.io/data/cli/2-edge.yaml`. -- Push, and hold the merge till after `2.19` is out. -- After merging, update the Cloudflare redirection rule so `/2` points to - `/2.19`: - - Click on the `linkerd.io` site - - Click on the `Rules`section - - Update the rule `https://linkerd.io/2/*` so that it points to - `https://linkerd.io/2.19/$1` +- Push, and hold the merge till after `2.20` is out.
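The rename-then-copy sequence in the steps above is easy to get backwards. A minimal shell sketch of the rotation, exercised against a throwaway temp directory rather than a real checkout (the `_index.md` contents are placeholders, not real docs):

```shell
# Simulate the website repo's content tree in a temp dir.
root=$(mktemp -d)
mkdir -p "$root/content/2-edge" "$root/content/docs"
echo "edge docs" > "$root/content/2-edge/_index.md"
echo "2.19 docs" > "$root/content/docs/_index.md"

# 1. Freeze the current latest docs: `docs` becomes `2.19`.
mv "$root/content/docs" "$root/content/2.19"

# 2. Copy the edge docs to seed the new latest (2.20) tree at `docs`.
cp -r "$root/content/2-edge" "$root/content/docs"

# The frozen tree keeps the 2.19 content; the new tree starts from edge.
cat "$root/content/2.19/_index.md"
cat "$root/content/docs/_index.md"
```

The order matters: copying first would land the edge tree *inside* the still-existing `docs` directory instead of replacing it.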
diff --git a/linkerd.io/assets/alias.html b/linkerd.io/assets/alias.html new file mode 100644 index 0000000000..01b7094336 --- /dev/null +++ b/linkerd.io/assets/alias.html @@ -0,0 +1,13 @@ + + + + {{ .Permalink }} + + + + + + + diff --git a/linkerd.io/config/_default/menu.yaml b/linkerd.io/config/_default/menu.yaml index 746cd24a69..d553ef0e51 100644 --- a/linkerd.io/config/_default/menu.yaml +++ b/linkerd.io/config/_default/menu.yaml @@ -40,7 +40,7 @@ docs: pageRef: /2-edge/ weight: 99 - name: Linkerd 2.19 - pageRef: /2.19/ + pageRef: /docs/ weight: 19 - name: Linkerd 2.18 pageRef: /2.18/ @@ -108,7 +108,6 @@ community: image: logos/forum.png follow: - - name: Linkedin url: https://www.linkedin.com/company/linkerd/ weight: 1 diff --git a/linkerd.io/content/2-edge/_index.md b/linkerd.io/content/2-edge/_index.md index 6c28dc82e3..76d71fa364 100644 --- a/linkerd.io/content/2-edge/_index.md +++ b/linkerd.io/content/2-edge/_index.md @@ -7,5 +7,5 @@ cascade: type: _default layout: redirect params: - redirect: ./overview + redirect: ./getting-started --- diff --git a/linkerd.io/content/2-edge/checks/index.md b/linkerd.io/content/2-edge/checks/index.md index fc6ac87de5..5f2ae8ee80 100644 --- a/linkerd.io/content/2-edge/checks/index.md +++ b/linkerd.io/content/2-edge/checks/index.md @@ -6,5 +6,5 @@ type: _default layout: redirect params: unlisted: true - redirect: /2/tasks/troubleshooting/ + redirect: /docs/tasks/troubleshooting/ --- diff --git a/linkerd.io/content/2-edge/features/dashboard.md b/linkerd.io/content/2-edge/features/dashboard.md index d66a157726..c69f941be4 100644 --- a/linkerd.io/content/2-edge/features/dashboard.md +++ b/linkerd.io/content/2-edge/features/dashboard.md @@ -49,7 +49,7 @@ health of specific service routes. One way to pull it up is by running `linkerd viz dashboard` from the command line. 
-![Top Line Metrics](/docs/images/architecture/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/architecture/stat.png 'Top Line Metrics') ## Grafana @@ -96,13 +96,13 @@ linkerd -n emojivoto viz tap deploy/web All of this functionality is also available in the dashboard, if you would like to use your browser instead: -![Top Line Metrics](/docs/images/getting-started/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/getting-started/stat.png 'Top Line Metrics') -![Deployment Detail](/docs/images/getting-started/inbound-outbound.png "Deployment Detail") +![Deployment Detail](/images/docs/getting-started/inbound-outbound.png 'Deployment Detail') -![Top](/docs/images/getting-started/top.png "Top") +![Top](/images/docs/getting-started/top.png 'Top') -![Tap](/docs/images/getting-started/tap.png "Tap") +![Tap](/images/docs/getting-started/tap.png 'Tap') ## Further reading diff --git a/linkerd.io/content/2-edge/features/distributed-tracing.md b/linkerd.io/content/2-edge/features/distributed-tracing.md index 2d111828f9..0a8833fe64 100644 --- a/linkerd.io/content/2-edge/features/distributed-tracing.md +++ b/linkerd.io/content/2-edge/features/distributed-tracing.md @@ -26,13 +26,13 @@ For example, Linkerd can display a live topology of all incoming and outgoing dependencies for a service, without requiring distributed tracing or any other such application modification: -![The Linkerd dashboard showing an automatically generated topology graph](/docs/images/books/webapp-detail.png "The Linkerd dashboard showing an automatically generated topology graph") +![The Linkerd dashboard showing an automatically generated topology graph](/images/docs/books/webapp-detail.png 'The Linkerd dashboard showing an automatically generated topology graph') Likewise, Linkerd can provide golden metrics per service and per _route_, again without requiring distributed tracing or any other such application modification: -![Linkerd dashboard showing an automatically generated
route metrics](/docs/images/books/webapp-routes.png "Linkerd dashboard showing an automatically generated route metrics") +![Linkerd dashboard showing an automatically generated route metrics](/images/docs/books/webapp-routes.png 'Linkerd dashboard showing an automatically generated route metrics') ## Using distributed tracing diff --git a/linkerd.io/content/2-edge/features/multicluster.md b/linkerd.io/content/2-edge/features/multicluster.md index 79a1fe8da6..9c1371847a 100644 --- a/linkerd.io/content/2-edge/features/multicluster.md +++ b/linkerd.io/content/2-edge/features/multicluster.md @@ -43,7 +43,7 @@ the _Foo_ service as if it were on the local cluster. Linkerd supports three basic forms of multi-cluster communication: hierarchical, flat, and federated. -![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png) +![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png) ### Hierarchical networks diff --git a/linkerd.io/content/2-edge/getting-started/_index.md b/linkerd.io/content/2-edge/getting-started/_index.md index ef3519b59c..5e3af75210 100644 --- a/linkerd.io/content/2-edge/getting-started/_index.md +++ b/linkerd.io/content/2-edge/getting-started/_index.md @@ -255,7 +255,7 @@ linkerd viz dashboard & You should see a screen like this: -![The Linkerd dashboard in action](/docs/images/getting-started/viz-empty-dashboard.png "The Linkerd dashboard in action") +![The Linkerd dashboard in action](/images/docs/getting-started/viz-empty-dashboard.png 'The Linkerd dashboard in action') Click around, explore, and have fun! 
For extra credit, see if you can find the live metrics for each Emojivoto component, and determine which one has a partial diff --git a/linkerd.io/content/2-edge/reference/architecture.md b/linkerd.io/content/2-edge/reference/architecture.md index f3a4bcdfca..8dc18491ef 100644 --- a/linkerd.io/content/2-edge/reference/architecture.md +++ b/linkerd.io/content/2-edge/reference/architecture.md @@ -16,7 +16,7 @@ with the control plane for configuration. Linkerd also provides a **CLI** that can be used to interact with the control and data planes. -![Linkerd's architecture](/docs/images/architecture/control-plane.png "Linkerd's architecture") +![Linkerd's architecture](/images/docs/architecture/control-plane.png "Linkerd's architecture") ## CLI diff --git a/linkerd.io/content/2-edge/reference/iptables.md b/linkerd.io/content/2-edge/reference/iptables.md index 67a7ea89de..a91a6d0d4d 100644 --- a/linkerd.io/content/2-edge/reference/iptables.md +++ b/linkerd.io/content/2-edge/reference/iptables.md @@ -31,7 +31,7 @@ The redirect chain will be configured with two more rules: Based on these two rules, there are two possible paths that an inbound packet can take, both of which are outlined below. -![Inbound iptables chain traversal](/docs/images/iptables/iptables-fig2-1.png "Inbound iptables chain traversal") +![Inbound iptables chain traversal](/images/docs/iptables/iptables-fig2-1.png 'Inbound iptables chain traversal') The packet will arrive on the `PREROUTING` chain and will be immediately routed to the redirect chain. If its destination port matches any of the inbound ports @@ -79,7 +79,7 @@ configured: been produced by the service, so it should be forwarded to its destination by the proxy. 
-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-2.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-2.png 'Outbound iptables chain traversal') A packet produced by the service will first hit the `OUTPUT` chain; from here, it will be sent to our own output chain for processing. The first rule it @@ -113,7 +113,7 @@ in the pod. This scenario would typically apply when: - The destination is a port bound on localhost (regardless of which container it belongs to). -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-3.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-3.png 'Outbound iptables chain traversal') When the application targets itself through its pod's IP (or loopback address), the packets will traverse the two output chains. The first rule will be skipped, @@ -138,7 +138,7 @@ inbound side to account for outbound packets that are sent locally. is not guaranteed that the destination will be local. The packet follows an unusual path, as depicted in the diagram below. -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-4.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-4.png 'Outbound iptables chain traversal') When the packet first traverses the output chains, it will follow the same path an outbound packet would normally take. In such a scenario, the packet's diff --git a/linkerd.io/content/2-edge/reference/multicluster.md b/linkerd.io/content/2-edge/reference/multicluster.md index 013191edba..9c2a032437 100644 --- a/linkerd.io/content/2-edge/reference/multicluster.md +++ b/linkerd.io/content/2-edge/reference/multicluster.md @@ -18,7 +18,7 @@ modes: hierarchical (using a gateway), flat (without a gateway), and federated. These modes can be mixed and matched. 
-![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png) +![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png) Hierarchical mode places a bare minimum of requirements on the underlying network, as it only requires that the gateway IP be reachable. However, flat diff --git a/linkerd.io/content/2-edge/tasks/books.md b/linkerd.io/content/2-edge/tasks/books.md index b881bcbeaa..6f9460c53f 100644 --- a/linkerd.io/content/2-edge/tasks/books.md +++ b/linkerd.io/content/2-edge/tasks/books.md @@ -21,7 +21,7 @@ the other services. There are three services: For demo purposes, the app comes with a simple traffic generator. The overall topology looks like this: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') ## Prerequisites @@ -71,7 +71,7 @@ connection" messages for the rest of the exercise.) Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') Unfortunately, there is an error in the app: if you click _Add Book_, it will fail 50% of the time. This is a classic case of non-obvious, intermittent @@ -80,7 +80,7 @@ debug. Kubernetes itself cannot detect or surface this error. From Kubernetes's perspective, it looks like everything's fine, but you know the application is returning errors. 
-![Failure](/docs/images/books/failure.png "Failure") +![Failure](/images/docs/books/failure.png 'Failure') ## Add Linkerd to the service diff --git a/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md b/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md index 2c495e1f20..67f4070de6 100644 --- a/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md @@ -64,7 +64,7 @@ $ kubectl -n booksapp port-forward svc/webapp 7000 & Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') ## Creating a Server resource @@ -330,7 +330,7 @@ web UI, we may notice that something is amiss. Attempting to delete an author results in a "not found" error in the web UI: -![Not found](/docs/images/books/delete-404.png "Not found") +![Not found](/images/docs/books/delete-404.png 'Not found') and similarly, adding a new author takes us to an error page. @@ -375,7 +375,7 @@ EOF What happens if we try to delete an author _now_? We still see a failure, but a different one: -![Internal server error](/docs/images/books/delete-503.png "Internal server error") +![Internal server error](/images/docs/books/delete-503.png 'Internal server error') This is because we have created a _route_ matching `DELETE`, `PUT`, and `POST` requests, but we haven't _authorized_ requests to that route. 
Running the @@ -432,11 +432,11 @@ in this case, we only authenticate the `webapp` deployment's `ServiceAccount` Now, if we attempt to delete an author in the frontend once again, we can: -![Author deleted](/docs/images/books/delete-ok.png "Author deleted") +![Author deleted](/images/docs/books/delete-ok.png 'Author deleted') Similarly, we can now create a new author successfully, as well: -![Author created](/docs/images/books/create-ok.png "Author created") +![Author created](/images/docs/books/create-ok.png 'Author created') Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: diff --git a/linkerd.io/content/2-edge/tasks/debugging-your-service.md b/linkerd.io/content/2-edge/tasks/debugging-your-service.md index fa22bef005..d1009cfe76 100644 --- a/linkerd.io/content/2-edge/tasks/debugging-your-service.md +++ b/linkerd.io/content/2-edge/tasks/debugging-your-service.md @@ -17,12 +17,12 @@ command), you should see all the resources in the `emojivoto` namespace, including the deployments. Each deployment running Linkerd shows success rate, requests per second and latency percentiles. -![Top Level Metrics](/docs/images/debugging/stat.png "Top Level Metrics") +![Top Level Metrics](/images/docs/debugging/stat.png 'Top Level Metrics') That's pretty neat, but the first thing you might notice is that the success rate is well below 100%! Click on `web` and let's dig in. -![Deployment Detail](/docs/images/debugging/octopus.png "Deployment Detail") +![Deployment Detail](/images/docs/debugging/octopus.png 'Deployment Detail') You should now be looking at the Deployment page for the web deployment. The first thing you'll see here is that the web deployment is taking traffic from @@ -38,7 +38,7 @@ returning. Let's scroll a little further down the page, we'll see a live list of all traffic that is incoming to _and_ outgoing from `web`. 
This is interesting: -![Top](/docs/images/debugging/web-top.png "Top") +![Top](/images/docs/debugging/web-top.png 'Top') There are two calls that are not at 100%: the first is vote-bot's call to the `/api/vote` endpoint. The second is the `VoteDoughnut` call from the web deployment to the voting deployment. @@ -54,7 +54,7 @@ the requests are failing with a is a common error response as you can see from [the code][code]. Linkerd is aware of gRPC's response classification without any other configuration! -![Tap](/docs/images/debugging/web-tap.png "Tap") +![Tap](/images/docs/debugging/web-tap.png 'Tap') At this point, we have everything required to get the endpoint fixed and restore the overall health of our applications. diff --git a/linkerd.io/content/2-edge/tasks/distributed-tracing.md b/linkerd.io/content/2-edge/tasks/distributed-tracing.md index c95fcd0198..9e3b0ef67f 100644 --- a/linkerd.io/content/2-edge/tasks/distributed-tracing.md +++ b/linkerd.io/content/2-edge/tasks/distributed-tracing.md @@ -23,7 +23,7 @@ To use distributed tracing, you'll need to: In the case of emojivoto, once all these steps are complete there will be a topology that looks like this: -![Topology](/docs/images/tracing/tracing-topology.svg "Topology") +![Topology](/images/docs/tracing/tracing-topology.svg 'Topology') {{< warning >}} @@ -180,20 +180,22 @@ kubectl port-forward -n jaeger-system svc/jaeger-query 16686 ``` + Then, open http://127.0.0.1:16686 in your browser. + -![Jaeger](/docs/images/tracing/jaeger-empty.png "Jaeger") +![Jaeger](/images/docs/tracing/jaeger-empty.png 'Jaeger') You can search for any service in the dropdown and click Find Traces. `vote-bot` is a great way to get started. -![Search](/docs/images/tracing/jaeger-search.png "Search") +![Search](/images/docs/tracing/jaeger-search.png 'Search') Clicking on a specific trace will provide all the details; you'll be able to see the spans for every proxy!
-![Search](/docs/images/tracing/example-trace.png "Search") +![Search](/images/docs/tracing/example-trace.png 'Search') Note the large number of `linkerd-proxy` spans in the output. Internally, the proxy has a server and client side. When a request goes through the proxy, it is diff --git a/linkerd.io/content/2-edge/tasks/fault-injection.md b/linkerd.io/content/2-edge/tasks/fault-injection.md index 1108152181..5172a374a0 100644 --- a/linkerd.io/content/2-edge/tasks/fault-injection.md +++ b/linkerd.io/content/2-edge/tasks/fault-injection.md @@ -12,7 +12,7 @@ return whatever responses you want - 500s, timeouts or even crazy payloads. The [books demo](books/) is a great way to show off this behavior. The overall topology looks like: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') In this guide, you will split some of the requests from `webapp` to `books`. Most requests will end up at the correct `books` destination, however some of diff --git a/linkerd.io/content/2-edge/tasks/flagger.md b/linkerd.io/content/2-edge/tasks/flagger.md index 6f6ab6fcc2..356ac3b38e 100644 --- a/linkerd.io/content/2-edge/tasks/flagger.md +++ b/linkerd.io/content/2-edge/tasks/flagger.md @@ -69,7 +69,7 @@ orchestrates it. A load generator simply makes it easier to execute the rollout as there needs to be some kind of active traffic to complete the operation. 
Together, these components have a topology that looks like: -![Topology](/docs/images/canary/simple-topology.svg "Topology") +![Topology](/images/docs/canary/simple-topology.svg 'Topology') To add these components to your cluster and include them in the Linkerd [data plane](../reference/architecture/#data-plane), run: @@ -213,7 +213,7 @@ podinfo-primary ClusterIP 10.7.249.63 9898/TCP 23m At this point, the topology looks a little like: -![Initialized](/docs/images/canary/initialized.svg "Initialized") +![Initialized](/images/docs/canary/initialized.svg 'Initialized') {{< note >}} @@ -259,7 +259,7 @@ kubectl -n test get ev --watch While an update is occurring, the resources and traffic will look like this at a high level: -![Ongoing](/docs/images/canary/ongoing.svg "Ongoing") +![Ongoing](/images/docs/canary/ongoing.svg 'Ongoing') After the update is complete, this picture will go back to looking just like the figure from the previous section. diff --git a/linkerd.io/content/2-edge/tasks/gitops.md b/linkerd.io/content/2-edge/tasks/gitops.md index 18e90ce73c..51328396a3 100644 --- a/linkerd.io/content/2-edge/tasks/gitops.md +++ b/linkerd.io/content/2-edge/tasks/gitops.md @@ -22,7 +22,7 @@ the [auto proxy injection](../features/proxy-injection/) feature into your workflow. Finally, this guide concludes with steps to upgrade Linkerd to a newer version following a GitOps workflow. -![Linkerd GitOps workflow](/docs/images/gitops/architecture.png 'Linkerd GitOps workflow') +![Linkerd GitOps workflow](/images/docs/gitops/architecture.png 'Linkerd GitOps workflow') The software and tools used in this guide are selected for demonstration purposes only.
Feel free to choose others that are most suited for your @@ -184,7 +184,7 @@ argocd proj get demo On the dashboard: -![New project in Argo CD dashboard](/docs/images/gitops/dashboard-project.png 'New project in Argo CD dashboard') +![New project in Argo CD dashboard](/images/docs/gitops/dashboard-project.png 'New project in Argo CD dashboard') ### Deploy the applications @@ -215,7 +215,7 @@ Sync the `main` application: argocd app sync main ``` -![Synchronize the main application](/docs/images/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') +![Synchronize the main application](/images/docs/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') Notice that only the `main` application is synchronized. @@ -237,7 +237,7 @@ for deploy in "cert-manager" "cert-manager-cainjector" "cert-manager-webhook"; \ done ``` -![Synchronize the cert-manager application](/docs/images/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') +![Synchronize the cert-manager application](/images/docs/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') ### Deploy Sealed Secrets @@ -253,7 +253,7 @@ Confirm that sealed-secrets is running: kubectl -n kube-system rollout status deploy/sealed-secrets ``` -![Synchronize the sealed-secrets application](/docs/images/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') +![Synchronize the sealed-secrets application](/images/docs/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') ### Create mTLS trust anchor @@ -348,7 +348,7 @@ Git server earlier. 
{{< /note >}} -![Synchronize the linkerd-bootstrap application](/docs/images/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') +![Synchronize the linkerd-bootstrap application](/images/docs/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') SealedSecrets should have created a secret containing the decrypted trust anchor. Retrieve the decrypted trust anchor from the secret: @@ -380,7 +380,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Empty default trust anchor](/docs/images/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') +![Empty default trust anchor](/images/docs/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') We will override this parameter in the `linkerd` application with the value of `${trust_anchor}`. @@ -395,7 +395,7 @@ Ensure that the multi-line string is indented correctly. 
E.g., source: chart: linkerd-control-plane repoURL: https://helm.linkerd.io/edge targetRevision: {{% chart-version %}} helm: parameters: - name: identityTrustAnchorsPEM @@ -442,7 +442,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' -![Override mTLS trust anchor](/docs/images/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') +![Override mTLS trust anchor](/images/docs/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') Synchronize the `linkerd-crds` and `linkerd-control-plane` applications: @@ -457,7 +457,7 @@ Check that Linkerd is ready: linkerd check -![Synchronize Linkerd](/docs/images/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') +![Synchronize Linkerd](/images/docs/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') ### Test with emojivoto @@ -475,7 +475,7 @@ for deploy in "emoji" "vote-bot" "voting" "web" ; \ done -![Synchronize emojivoto](/docs/images/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') +![Synchronize emojivoto](/images/docs/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') ### Upgrade Linkerd diff --git a/linkerd.io/content/2-edge/tasks/multicluster.md b/linkerd.io/content/2-edge/tasks/multicluster.md index 3a80b3f3ed..e0c8c81185 100644 --- a/linkerd.io/content/2-edge/tasks/multicluster.md +++ b/linkerd.io/content/2-edge/tasks/multicluster.md @@ -13,6 +13,7 @@ between services that live on different clusters. At a high level, you will: + 1. [Install Linkerd and Linkerd Viz](#install-linkerd-and-linkerd-viz) on two clusters with a shared trust anchor. 1. [Prepare](#preparing-your-cluster) the clusters. @@ -21,7 +22,7 @@ At a high level, you will: 1. [Export](#exporting-the-services) the demo services, to control visibility. 1. [Verify](#security) the security of your clusters. 1.
[Split traffic](#traffic-splitting) from pods on the source cluster (`west`) - to the target cluster (`east`) +to the target cluster (`east`) ## Prerequisites @@ -52,7 +53,7 @@ At a high level, you will: ## Install Linkerd and Linkerd Viz -![install](/docs/images/multicluster/install.svg "Two Clusters") +![install](/images/docs/multicluster/install.svg 'Two Clusters') Linkerd requires a shared [trust anchor](generate-certificates/#trust-anchor-certificate) to exist between @@ -138,7 +139,7 @@ done ## Preparing your cluster -![preparation](/docs/images/multicluster/prep-overview.svg "Preparation") +![preparation](/images/docs/multicluster/prep-overview.svg 'Preparation') In order to route traffic between clusters, Linkerd leverages Kubernetes services so that your application code does not need to change and there is @@ -161,7 +162,7 @@ for ctx in west east; do done ``` -![install](/docs/images/multicluster/components.svg "Components") +![install](/images/docs/multicluster/components.svg 'Components') Installed into the `linkerd-multicluster` namespace, the gateway is a simple [pause container](https://github.com/linkerd/linkerd2/blob/main/multicluster/charts/linkerd-multicluster/templates/gateway.yaml#L3) @@ -200,7 +201,7 @@ mirroring services. We'll want to link the clusters together now! ## Linking the clusters -![link-clusters](/docs/images/multicluster/link-flow.svg "Link") +![link-clusters](/images/docs/multicluster/link-flow.svg 'Link') For `west` to mirror services from `east`, the `west` cluster needs to have credentials so that it can watch for services in `east` to be exported. You'd @@ -275,7 +276,7 @@ use the `--api-server-address` flag for `link`. ## Installing the test services -![test-services](/docs/images/multicluster/example-topology.svg "Topology") +![test-services](/images/docs/multicluster/example-topology.svg 'Topology') It is time to test this all out! The first step is to add some services that we can mirror. 
To add these to both clusters, you can run: @@ -303,7 +304,7 @@ To see what it looks like from the `west` cluster right now, you can run: kubectl --context=west -n test port-forward svc/frontend 8080 -![west-podinfo](/docs/images/multicluster/west-podinfo.gif "West Podinfo") +![west-podinfo](/images/docs/multicluster/west-podinfo.gif 'West Podinfo') With the podinfo landing page available at [http://localhost:8080](http://localhost:8080), you can see how it looks in the @@ -394,7 +395,7 @@ the [grafana install instructions](grafana/) first to have a working grafana provisioned with Linkerd dashboards). You can get to it by running `linkerd --context=west viz dashboard` and going to -![grafana-dashboard](/docs/images/multicluster/grafana-dashboard.png "Grafana") +![grafana-dashboard](/images/docs/multicluster/grafana-dashboard.png 'Grafana') ## Security @@ -433,7 +434,7 @@ kubectl --context=west -n test run -it --rm --image=alpine:3 test -- \ ## Traffic Splitting -![with-split](/docs/images/multicluster/with-split.svg "Traffic Split") +![with-split](/images/docs/multicluster/with-split.svg 'Traffic Split') It is pretty useful to have services automatically show up in clusters and be able to explicitly address them; however, that only covers one use case for @@ -477,7 +478,7 @@ both clusters. Alternatively, for the command line approach, `curl localhost:8080` will give you a message that greets from `west` and `east`. -![podinfo-split](/docs/images/multicluster/split-podinfo.gif "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/split-podinfo.gif 'Cross Cluster Podinfo') You can also watch what's happening with metrics. To see the source side of things (`west`), you can run: @@ -498,7 +499,7 @@ linkerd --context=east -n test viz stat \ There's even a dashboard! Run `linkerd viz dashboard` and send your browser to [localhost:50750](http://localhost:50750/namespaces/test/trafficsplits/podinfo).
-![podinfo-split](/docs/images/multicluster/ts-dashboard.png "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/ts-dashboard.png 'Cross Cluster Podinfo') ## Cleanup diff --git a/linkerd.io/content/2.10/_index.md b/linkerd.io/content/2.10/_index.md index 6c28dc82e3..11f8979d57 100644 --- a/linkerd.io/content/2.10/_index.md +++ b/linkerd.io/content/2.10/_index.md @@ -2,10 +2,12 @@ title: Docs cascade: type: docs + params: + noIndex: true # Redirect type: _default layout: redirect params: - redirect: ./overview + redirect: ./getting-started --- diff --git a/linkerd.io/content/2.10/checks/index.md b/linkerd.io/content/2.10/checks/index.md index fc6ac87de5..5f2ae8ee80 100644 --- a/linkerd.io/content/2.10/checks/index.md +++ b/linkerd.io/content/2.10/checks/index.md @@ -6,5 +6,5 @@ type: _default layout: redirect params: unlisted: true - redirect: /2/tasks/troubleshooting/ + redirect: /docs/tasks/troubleshooting/ --- diff --git a/linkerd.io/content/2.10/features/dashboard.md b/linkerd.io/content/2.10/features/dashboard.md index 94ac5572d9..7c7e541180 100644 --- a/linkerd.io/content/2.10/features/dashboard.md +++ b/linkerd.io/content/2.10/features/dashboard.md @@ -50,7 +50,7 @@ rate, requests/second and latency), visualize service dependencies and understand the health of specific service routes. One way to pull it up is by running `linkerd viz dashboard` from the command line. -![Top Line Metrics](/docs/images/architecture/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/architecture/stat.png 'Top Line Metrics') ## Grafana @@ -60,13 +60,13 @@ down into the details, even for pods. 
The dashboards that are provided out of the box include: -![Top Line Metrics](/docs/images/screenshots/grafana-top.png "Top Line Metrics") +![Top Line Metrics](/images/docs/screenshots/grafana-top.png 'Top Line Metrics') -![Deployment Detail](/docs/images/screenshots/grafana-deployment.png "Deployment Detail") +![Deployment Detail](/images/docs/screenshots/grafana-deployment.png 'Deployment Detail') -![Pod Detail](/docs/images/screenshots/grafana-pod.png "Pod Detail") +![Pod Detail](/images/docs/screenshots/grafana-pod.png 'Pod Detail') -![Linkerd Health](/docs/images/screenshots/grafana-health.png "Linkerd Health") +![Linkerd Health](/images/docs/screenshots/grafana-health.png 'Linkerd Health') linkerd -n emojivoto check --proxy @@ -107,10 +107,10 @@ linkerd -n emojivoto viz tap deploy/web All of this functionality is also available in the dashboard, if you would like to use your browser instead: -![Top Line Metrics](/docs/images/getting-started/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/getting-started/stat.png 'Top Line Metrics') -![Deployment Detail](/docs/images/getting-started/inbound-outbound.png "Deployment Detail") +![Deployment Detail](/images/docs/getting-started/inbound-outbound.png 'Deployment Detail') -![Top](/docs/images/getting-started/top.png "Top") +![Top](/images/docs/getting-started/top.png 'Top') -![Tap](/docs/images/getting-started/tap.png "Tap") +![Tap](/images/docs/getting-started/tap.png 'Tap') diff --git a/linkerd.io/content/2.10/features/distributed-tracing.md b/linkerd.io/content/2.10/features/distributed-tracing.md index 2d111828f9..0a8833fe64 100644 --- a/linkerd.io/content/2.10/features/distributed-tracing.md +++ b/linkerd.io/content/2.10/features/distributed-tracing.md @@ -26,13 +26,13 @@ For example, Linkerd can display a live topology of all incoming and outgoing dependencies for a service, without requiring distributed tracing or any other such application modification: -![The Linkerd dashboard showing an 
automatically generated topology graph](/docs/images/books/webapp-detail.png "The Linkerd dashboard showing an automatically generated topology graph") +![The Linkerd dashboard showing an automatically generated topology graph](/images/docs/books/webapp-detail.png 'The Linkerd dashboard showing an automatically generated topology graph') Likewise, Linkerd can provide golden metrics per service and per _route_, again without requiring distributed tracing or any other such application modification: -![Linkerd dashboard showing an automatically generated route metrics](/docs/images/books/webapp-routes.png "Linkerd dashboard showing an automatically generated route metrics") +![Linkerd dashboard showing an automatically generated route metrics](/images/docs/books/webapp-routes.png 'Linkerd dashboard showing an automatically generated route metrics') ## Using distributed tracing diff --git a/linkerd.io/content/2.10/features/multicluster.md b/linkerd.io/content/2.10/features/multicluster.md index c3214e47cc..0ecb3eea5d 100644 --- a/linkerd.io/content/2.10/features/multicluster.md +++ b/linkerd.io/content/2.10/features/multicluster.md @@ -34,7 +34,7 @@ full observability, security and routing features of Linkerd apply uniformly to both in-cluster and cluster-calls, and the application does not need to distinguish between those situations. -![Overview](/docs/images/multicluster/feature-overview.svg "Overview") +![Overview](/images/docs/multicluster/feature-overview.svg 'Overview') Linkerd's multi-cluster functionality is implemented by two components: a _service mirror_ and a _gateway_. 
The _service mirror_ component watches a diff --git a/linkerd.io/content/2.10/getting-started/_index.md b/linkerd.io/content/2.10/getting-started/_index.md index 7470777cb3..2fc34c4440 100644 --- a/linkerd.io/content/2.10/getting-started/_index.md +++ b/linkerd.io/content/2.10/getting-started/_index.md @@ -169,7 +169,7 @@ linkerd viz dashboard & You should see a screen like this: -![The Linkerd dashboard in action](/docs/images/getting-started/viz-empty-dashboard.png "The Linkerd dashboard in action") +![The Linkerd dashboard in action](/images/docs/getting-started/viz-empty-dashboard.png 'The Linkerd dashboard in action') If you installed the buoyant-cloud extension, run: @@ -179,7 +179,7 @@ linkerd buoyant dashboard & You should see a screen lke this: -![Buoyant Coud in action](/docs/images/getting-started/bcloud-empty-dashboard.png "Buoyant Coud in action") +![Buoyant Cloud in action](/images/docs/getting-started/bcloud-empty-dashboard.png 'Buoyant Cloud in action') Click around, explore, and have fun! One thing you'll see is that, even if you don't have any applications running on this cluster, you still have traffic! diff --git a/linkerd.io/content/2.10/reference/architecture.md b/linkerd.io/content/2.10/reference/architecture.md index b5e7683056..83b3d79908 100644 --- a/linkerd.io/content/2.10/reference/architecture.md +++ b/linkerd.io/content/2.10/reference/architecture.md @@ -16,7 +16,7 @@ service. Because they're transparent, these proxies act as highly instrumented out-of-process network stacks, sending telemetry to, and receiving control signals from, the control plane.
-![Linkerd's architecture](/docs/images/architecture/control-plane.png "Linkerd's architecture") +![Linkerd's architecture](/images/docs/architecture/control-plane.png "Linkerd's architecture") ## CLI diff --git a/linkerd.io/content/2.10/tasks/books.md b/linkerd.io/content/2.10/tasks/books.md index 3ff8c5fe97..737c0c621b 100644 --- a/linkerd.io/content/2.10/tasks/books.md +++ b/linkerd.io/content/2.10/tasks/books.md @@ -21,7 +21,7 @@ the other services. There are three services: For demo purposes, the app comes with a simple traffic generator. The overall topology looks like this: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') ## Prerequisites @@ -69,7 +69,7 @@ kubectl -n booksapp port-forward svc/webapp 7000 & Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') Unfortunately, there is an error in the app: if you click _Add Book_, it will fail 50% of the time. This is a classic case of non-obvious, intermittent @@ -78,7 +78,7 @@ debug. Kubernetes itself cannot detect or surface this error. From Kubernetes's perspective, it looks like everything's fine, but you know the application is returning errors. -![Failure](/docs/images/books/failure.png "Failure") +![Failure](/images/docs/books/failure.png 'Failure') ## Add Linkerd to the service @@ -109,7 +109,7 @@ out the Linkerd dashboard, run: linkerd viz dashboard & ``` -![Dashboard](/docs/images/books/dashboard.png "Dashboard") +![Dashboard](/images/docs/books/dashboard.png 'Dashboard') Select `booksapp` from the namespace dropdown and click on the [Deployments](http://localhost:50750/namespaces/booksapp/deployments) workload. @@ -127,7 +127,7 @@ has two outgoing dependencies: `authors` and `book`. 
One is the service for pulling in author information and the other is the service for pulling in book information. -![Detail](/docs/images/books/webapp-detail.png "Detail") +![Detail](/images/docs/books/webapp-detail.png 'Detail') A failure in a dependent service may be exactly what’s causing the errors that `webapp` is returning (and the errors you as a user can see when you click). We @@ -135,7 +135,7 @@ can see that the `books` service is also failing. Let’s scroll a little furthe down the page, we’ll see a live list of all traffic endpoints that `webapp` is receiving. This is interesting: -![Top](/docs/images/books/top.png "Top") +![Top](/images/docs/books/top.png 'Top') Aha! We can see that inbound traffic coming from the `webapp` service going to the `books` service is failing a significant percentage of the time. That could @@ -143,7 +143,7 @@ explain why `webapp` was throwing intermittent failures. Let’s click on the ta (🔬) icon and then on the Start button to look at the actual request and response stream. -![Tap](/docs/images/books/tap.png "Tap") +![Tap](/images/docs/books/tap.png 'Tap') Indeed, many of these requests are returning 500’s. diff --git a/linkerd.io/content/2.10/tasks/canary-release.md b/linkerd.io/content/2.10/tasks/canary-release.md index 6eda80fc7d..91ceed6956 100644 --- a/linkerd.io/content/2.10/tasks/canary-release.md +++ b/linkerd.io/content/2.10/tasks/canary-release.md @@ -67,7 +67,7 @@ orchestrates it. A load generator simply makes it easier to execute the rollout as there needs to be some kind of active traffic to complete the operation. 
Together, these components have a topology that looks like: -![Topology](/docs/images/canary/simple-topology.svg "Topology") +![Topology](/images/docs/canary/simple-topology.svg 'Topology') To add these components to your cluster and include them in the Linkerd [data plane](../reference/architecture/#data-plane), run: @@ -170,7 +170,7 @@ podinfo-primary ClusterIP 10.7.249.63 9898/TCP 23m At this point, the topology looks a little like: -![Initialized](/docs/images/canary/initialized.svg "Initialized") +![Initialized](/images/docs/canary/initialized.svg 'Initialized') {{< note >}} @@ -216,7 +216,7 @@ kubectl -n test get ev --watch While an update is occurring, the resources and traffic will look like this at a high level: -![Ongoing](/docs/images/canary/ongoing.svg "Ongoing") +![Ongoing](/images/docs/canary/ongoing.svg 'Ongoing') After the update is complete, this picture will go back to looking just like the figure from the previous section. @@ -265,7 +265,7 @@ For something a little more visual, you can use the dashboard. Start it by running `linkerd viz dashboard` and then look at the detail page for the [podinfo traffic split](http://localhost:50750/namespaces/test/trafficsplits/podinfo). -![Dashboard](/docs/images/canary/traffic-split.png "Dashboard") +![Dashboard](/images/docs/canary/traffic-split.png 'Dashboard') ### Browser diff --git a/linkerd.io/content/2.10/tasks/debugging-your-service.md b/linkerd.io/content/2.10/tasks/debugging-your-service.md index fa22bef005..d1009cfe76 100644 --- a/linkerd.io/content/2.10/tasks/debugging-your-service.md +++ b/linkerd.io/content/2.10/tasks/debugging-your-service.md @@ -17,12 +17,12 @@ command), you should see all the resources in the `emojivoto` namespace, including the deployments. Each deployment running Linkerd shows success rate, requests per second and latency percentiles. 
-![Top Level Metrics](/docs/images/debugging/stat.png "Top Level Metrics") +![Top Level Metrics](/images/docs/debugging/stat.png 'Top Level Metrics') That's pretty neat, but the first thing you might notice is that the success rate is well below 100%! Click on `web` and let's dig in. -![Deployment Detail](/docs/images/debugging/octopus.png "Deployment Detail") +![Deployment Detail](/images/docs/debugging/octopus.png 'Deployment Detail') You should now be looking at the Deployment page for the web deployment. The first thing you'll see here is that the web deployment is taking traffic from @@ -38,7 +38,7 @@ returning. Let's scroll a little further down the page, we'll see a live list of all traffic that is incoming to _and_ outgoing from `web`. This is interesting: -![Top](/docs/images/debugging/web-top.png "Top") +![Top](/images/docs/debugging/web-top.png 'Top') There are two calls that are not at 100%: the first is vote-bot's call to the `/api/vote` endpoint. The second is the `VoteDoughnut` call from the web @@ -54,7 +54,7 @@ the requests are failing with a is a common error response as you can see from [the code][code]. Linkerd is aware of gRPC's response classification without any other configuration! -![Tap](/docs/images/debugging/web-tap.png "Tap") +![Tap](/images/docs/debugging/web-tap.png 'Tap') At this point, we have everything required to get the endpoint fixed and restore the overall health of our applications. 
diff --git a/linkerd.io/content/2.10/tasks/distributed-tracing.md b/linkerd.io/content/2.10/tasks/distributed-tracing.md index 5dfc0f77c9..c97758710d 100644 --- a/linkerd.io/content/2.10/tasks/distributed-tracing.md +++ b/linkerd.io/content/2.10/tasks/distributed-tracing.md @@ -21,7 +21,7 @@ To use distributed tracing, you'll need to: In the case of emojivoto, once all these steps are complete there will be a topology that looks like: -![Topology](/docs/images/tracing/tracing-topology.svg "Topology") +![Topology](/images/docs/tracing/tracing-topology.svg 'Topology') ## Prerequisites @@ -101,17 +101,17 @@ up in Jaeger. To get to the UI, run: linkerd jaeger dashboard ``` -![Jaeger](/docs/images/tracing/jaeger-empty.png "Jaeger") +![Jaeger](/images/docs/tracing/jaeger-empty.png 'Jaeger') You can search for any service in the dropdown and click Find Traces. `vote-bot` is a great way to get started. -![Search](/docs/images/tracing/jaeger-search.png "Search") +![Search](/images/docs/tracing/jaeger-search.png 'Search') Clicking on a specific trace will provide all the details, you'll be able to see the spans for every proxy! -![Search](/docs/images/tracing/example-trace.png "Search") +![Search](/images/docs/tracing/example-trace.png 'Search') There sure are a lot of `linkerd-proxy` spans in that output. Internally, the proxy has a server and client side. When a request goes through the proxy, it is @@ -127,7 +127,7 @@ meta-data as trace attributes, users can directly jump into related resources traces directly from the linkerd-web dashboard by clicking the Jaeger icon in the Metrics Table, as shown below: -![Linkerd-Jaeger](/docs/images/tracing/linkerd-jaeger-ui.png "Linkerd-Jaeger") +![Linkerd-Jaeger](/images/docs/tracing/linkerd-jaeger-ui.png 'Linkerd-Jaeger') To obtain that functionality you need to install (or upgrade) the Linkerd-Viz extension specifying the service exposing the Jaeger UI. 
By default, this would @@ -205,7 +205,7 @@ If using helm to install ingress-nginx, you can configure tracing by using: ```yaml controller: config: - enable-opentracing: "true" + enable-opentracing: 'true' zipkin-collector-host: collector.linkerd-jaeger ``` diff --git a/linkerd.io/content/2.10/tasks/fault-injection.md b/linkerd.io/content/2.10/tasks/fault-injection.md index 4311125652..0213435caa 100644 --- a/linkerd.io/content/2.10/tasks/fault-injection.md +++ b/linkerd.io/content/2.10/tasks/fault-injection.md @@ -14,7 +14,7 @@ or even crazy payloads. The [books demo](books/) is a great way to show off this behavior. The overall topology looks like: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') In this guide, you will split some of the requests from `webapp` to `books`. Most requests will end up at the correct `books` destination, however some of diff --git a/linkerd.io/content/2.10/tasks/gitops.md b/linkerd.io/content/2.10/tasks/gitops.md index 44884e090d..83341ac61a 100644 --- a/linkerd.io/content/2.10/tasks/gitops.md +++ b/linkerd.io/content/2.10/tasks/gitops.md @@ -22,7 +22,7 @@ the [auto proxy injection](../features/proxy-injection/) feature into your workflow. Finally, this guide conclude with steps to upgrade Linkerd to a newer version following a GitOps workflow. -![Linkerd GitOps workflow](/docs/images/gitops/architecture.png 'Linkerd GitOps workflow') +![Linkerd GitOps workflow](/images/docs/gitops/architecture.png 'Linkerd GitOps workflow') The software and tools used in this guide are selected for demonstration purposes only. 
Feel free to choose others that are most suited for your @@ -179,7 +179,7 @@ argocd proj get demo On the dashboard: -![New project in Argo CD dashboard](/docs/images/gitops/dashboard-project.png 'New project in Argo CD dashboard') +![New project in Argo CD dashboard](/images/docs/gitops/dashboard-project.png 'New project in Argo CD dashboard') ### Deploy the applications @@ -210,7 +210,7 @@ Sync the `main` application: argocd app sync main ``` -![Synchronize the main application](/docs/images/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') +![Synchronize the main application](/images/docs/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') Notice that only the `main` application is synchronized. @@ -240,7 +240,7 @@ for deploy in "cert-manager" "cert-manager-cainjector" "cert-manager-webhook"; \ done ``` -![Synchronize the cert-manager application](/docs/images/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') +![Synchronize the cert-manager application](/images/docs/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') ### Deploy Sealed Secrets @@ -256,7 +256,7 @@ Confirm that sealed-secrets is running: kubectl -n kube-system rollout status deploy/sealed-secrets ``` -![Synchronize the sealed-secrets application](/docs/images/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') +![Synchronize the sealed-secrets application](/images/docs/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') ### Create mTLS trust anchor @@ -346,7 +346,7 @@ Git server earlier. 
{{< /note >}} -![Synchronize the linkerd-bootstrap application](/docs/images/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') +![Synchronize the linkerd-bootstrap application](/images/docs/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') SealedSecrets should have created a secret containing the decrypted trust anchor. Retrieve the decrypted trust anchor from the secret: @@ -378,7 +378,7 @@ argocd app get linkerd -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Empty default trust anchor](/docs/images/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') +![Empty default trust anchor](/images/docs/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') We will override this parameter in the `linkerd` application with the value of `${trust_anchor}`. @@ -440,7 +440,7 @@ argocd app get linkerd -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Override mTLS trust anchor](/docs/images/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') +![Override mTLS trust anchor](/images/docs/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') Synchronize the `linkerd` application: @@ -454,7 +454,7 @@ Check that Linkerd is ready: linkerd check ``` -![Synchronize Linkerd](/docs/images/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') +![Synchronize Linkerd](/images/docs/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') ### Test with emojivoto @@ -472,7 +472,7 @@ for deploy in "emoji" "vote-bot" "voting" "web" ; \ done ``` -![Synchronize emojivoto](/docs/images/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') +![Synchronize emojivoto](/images/docs/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') ### Upgrade Linkerd to 2.10.1 diff --git 
a/linkerd.io/content/2.10/tasks/multicluster.md b/linkerd.io/content/2.10/tasks/multicluster.md index 3e1beeda77..6819372156 100644 --- a/linkerd.io/content/2.10/tasks/multicluster.md +++ b/linkerd.io/content/2.10/tasks/multicluster.md @@ -13,6 +13,7 @@ between services that live on different clusters. At a high level, you will: + 1. [Install Linkerd](#install-linkerd) on two clusters with a shared trust anchor. 1. [Prepare](#preparing-your-cluster) the clusters. @@ -21,7 +22,7 @@ At a high level, you will: 1. [Export](#exporting-the-services) the demo services, to control visibility. 1. [Verify](#security) the security of your clusters. 1. [Split traffic](#traffic-splitting) from pods on the source cluster (`west`) - to the target cluster (`east`) +to the target cluster (`east`) ## Prerequisites @@ -55,7 +56,7 @@ At a high level, you will: ## Install Linkerd -![install](/docs/images/multicluster/install.svg "Two Clusters") +![install](/images/docs/multicluster/install.svg 'Two Clusters') Linkerd requires a shared [trust anchor](generate-certificates/#trust-anchor-certificate) to exist between @@ -134,7 +135,7 @@ linkerd viz install \ ## Preparing your cluster -![preparation](/docs/images/multicluster/prep-overview.svg "Preparation") +![preparation](/images/docs/multicluster/prep-overview.svg 'Preparation') In order to route traffic between clusters, Linkerd leverages Kubernetes services so that your application code does not need to change and there is @@ -157,7 +158,7 @@ for ctx in west east; do done ``` -![install](/docs/images/multicluster/components.svg "Components") +![install](/images/docs/multicluster/components.svg 'Components') Installed into the `linkerd-multicluster` namespace, the gateway is a simple [pause container](https://github.com/linkerd/linkerd2/blob/main/multicluster/charts/linkerd-multicluster/templates/gateway.yaml#L3) @@ -198,7 +199,7 @@ mirroring services. We'll want to link the clusters together now! 
## Linking the clusters -![link-clusters](/docs/images/multicluster/link-flow.svg "Link") +![link-clusters](/images/docs/multicluster/link-flow.svg 'Link') For `west` to mirror services from `east`, the `west` cluster needs to have credentials so that it can watch for services in `east` to be exported. You'd @@ -256,7 +257,7 @@ use the `--api-server-address` flag for `link`. ## Installing the test services -![test-services](/docs/images/multicluster/example-topology.svg "Topology") +![test-services](/images/docs/multicluster/example-topology.svg 'Topology') It is time to test this all out! The first step is to add some services that we can mirror. To add these to both clusters, you can run: @@ -283,7 +284,7 @@ To see what it looks like from the `west` cluster right now, you can run: kubectl --context=west -n test port-forward svc/frontend 8080 ``` -![west-podinfo](/docs/images/multicluster/west-podinfo.gif "West Podinfo") +![west-podinfo](/images/docs/multicluster/west-podinfo.gif 'West Podinfo') With the podinfo landing page available at [http://localhost:8080](http://localhost:8080), you can see how it looks in the @@ -375,7 +376,7 @@ We also provide a grafana dashboard to get a feel for what's going on here. 
You can get to it by running `linkerd --context=west viz dashboard` and going to [http://localhost:50750/grafana/](http://localhost:50750/grafana/d/linkerd-multicluster/linkerd-multicluster?orgId=1&refresh=1m) -![grafana-dashboard](/docs/images/multicluster/grafana-dashboard.png "Grafana") +![grafana-dashboard](/images/docs/multicluster/grafana-dashboard.png 'Grafana') ## Security @@ -414,7 +415,7 @@ kubectl --context=west -n test run -it --rm --image=alpine:3 test -- \ ## Traffic Splitting -![with-split](/docs/images/multicluster/with-split.svg "Traffic Split") +![with-split](/images/docs/multicluster/with-split.svg 'Traffic Split') It is pretty useful to have services automatically show up in clusters and be able to explicitly address them, however that only covers one use case for @@ -458,7 +459,7 @@ both clusters.Alternatively, for the command line approach, `curl localhost:8080` will give you a message that greets from `west` and `east`. -![podinfo-split](/docs/images/multicluster/split-podinfo.gif "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/split-podinfo.gif 'Cross Cluster Podinfo') You can also watch what's happening with metrics. To see the source side of things (`west`), you can run: @@ -479,7 +480,7 @@ linkerd --context=east -n test viz stat \ There's even a dashboard! Run `linkerd viz dashboard` and send your browser to [localhost:50750](http://localhost:50750/namespaces/test/trafficsplits/podinfo). 
-![podinfo-split](/docs/images/multicluster/ts-dashboard.png "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/ts-dashboard.png 'Cross Cluster Podinfo') ## Cleanup diff --git a/linkerd.io/content/2.10/tasks/using-a-private-docker-repository.md b/linkerd.io/content/2.10/tasks/using-a-private-docker-repository.md index fea82817c8..d0b7c5f960 100644 --- a/linkerd.io/content/2.10/tasks/using-a-private-docker-repository.md +++ b/linkerd.io/content/2.10/tasks/using-a-private-docker-repository.md @@ -32,7 +32,7 @@ image: prom/prometheus:v2.11.1 ``` All of the Linkerd images are publicly available in the -[Linkerd Google Container Repository](https://console.cloud.google.com/gcr/docs/images/linkerd-io/GLOBAL/) +[Linkerd Google Container Repository](https://console.cloud.google.com/gcr/images/linkerd-io/GLOBAL/) Stable images are named using the convention `stable-` and the edge images use the convention `edge-..`. diff --git a/linkerd.io/content/2.11/_index.md b/linkerd.io/content/2.11/_index.md index 6c28dc82e3..11f8979d57 100644 --- a/linkerd.io/content/2.11/_index.md +++ b/linkerd.io/content/2.11/_index.md @@ -2,10 +2,12 @@ title: Docs cascade: type: docs + params: + noIndex: true # Redirect type: _default layout: redirect params: - redirect: ./overview + redirect: ./getting-started --- diff --git a/linkerd.io/content/2.11/checks/index.md b/linkerd.io/content/2.11/checks/index.md index fc6ac87de5..5f2ae8ee80 100644 --- a/linkerd.io/content/2.11/checks/index.md +++ b/linkerd.io/content/2.11/checks/index.md @@ -6,5 +6,5 @@ type: _default layout: redirect params: unlisted: true - redirect: /2/tasks/troubleshooting/ + redirect: /docs/tasks/troubleshooting/ --- diff --git a/linkerd.io/content/2.11/features/dashboard.md b/linkerd.io/content/2.11/features/dashboard.md index 94ac5572d9..7c7e541180 100644 --- a/linkerd.io/content/2.11/features/dashboard.md +++ b/linkerd.io/content/2.11/features/dashboard.md @@ -50,7 +50,7 @@ rate, requests/second and
latency), visualize service dependencies and understand the health of specific service routes. One way to pull it up is by running `linkerd viz dashboard` from the command line. -![Top Line Metrics](/docs/images/architecture/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/architecture/stat.png 'Top Line Metrics') ## Grafana @@ -60,13 +60,13 @@ down into the details, even for pods. The dashboards that are provided out of the box include: -![Top Line Metrics](/docs/images/screenshots/grafana-top.png "Top Line Metrics") +![Top Line Metrics](/images/docs/screenshots/grafana-top.png 'Top Line Metrics') -![Deployment Detail](/docs/images/screenshots/grafana-deployment.png "Deployment Detail") +![Deployment Detail](/images/docs/screenshots/grafana-deployment.png 'Deployment Detail') -![Pod Detail](/docs/images/screenshots/grafana-pod.png "Pod Detail") +![Pod Detail](/images/docs/screenshots/grafana-pod.png 'Pod Detail') -![Linkerd Health](/docs/images/screenshots/grafana-health.png "Linkerd Health") +![Linkerd Health](/images/docs/screenshots/grafana-health.png 'Linkerd Health') linkerd -n emojivoto check --proxy @@ -107,10 +107,10 @@ linkerd -n emojivoto viz tap deploy/web All of this functionality is also available in the dashboard, if you would like to use your browser instead: -![Top Line Metrics](/docs/images/getting-started/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/getting-started/stat.png 'Top Line Metrics') -![Deployment Detail](/docs/images/getting-started/inbound-outbound.png "Deployment Detail") +![Deployment Detail](/images/docs/getting-started/inbound-outbound.png 'Deployment Detail') -![Top](/docs/images/getting-started/top.png "Top") +![Top](/images/docs/getting-started/top.png 'Top') -![Tap](/docs/images/getting-started/tap.png "Tap") +![Tap](/images/docs/getting-started/tap.png 'Tap') diff --git a/linkerd.io/content/2.11/features/distributed-tracing.md b/linkerd.io/content/2.11/features/distributed-tracing.md index 
2d111828f9..0a8833fe64 100644 --- a/linkerd.io/content/2.11/features/distributed-tracing.md +++ b/linkerd.io/content/2.11/features/distributed-tracing.md @@ -26,13 +26,13 @@ For example, Linkerd can display a live topology of all incoming and outgoing dependencies for a service, without requiring distributed tracing or any other such application modification: -![The Linkerd dashboard showing an automatically generated topology graph](/docs/images/books/webapp-detail.png "The Linkerd dashboard showing an automatically generated topology graph") +![The Linkerd dashboard showing an automatically generated topology graph](/images/docs/books/webapp-detail.png 'The Linkerd dashboard showing an automatically generated topology graph') Likewise, Linkerd can provide golden metrics per service and per _route_, again without requiring distributed tracing or any other such application modification: -![Linkerd dashboard showing an automatically generated route metrics](/docs/images/books/webapp-routes.png "Linkerd dashboard showing an automatically generated route metrics") +![Linkerd dashboard showing an automatically generated route metrics](/images/docs/books/webapp-routes.png 'Linkerd dashboard showing an automatically generated route metrics') ## Using distributed tracing diff --git a/linkerd.io/content/2.11/features/multicluster.md b/linkerd.io/content/2.11/features/multicluster.md index 3db632cb92..cdb1df8337 100644 --- a/linkerd.io/content/2.11/features/multicluster.md +++ b/linkerd.io/content/2.11/features/multicluster.md @@ -34,7 +34,7 @@ full observability, security and routing features of Linkerd apply uniformly to both in-cluster and cluster-calls, and the application does not need to distinguish between those situations. 
-![Overview](/docs/images/multicluster/feature-overview.svg "Overview") +![Overview](/images/docs/multicluster/feature-overview.svg 'Overview') Linkerd's multi-cluster functionality is implemented by two components: a _service mirror_ and a _gateway_. The _service mirror_ component watches a diff --git a/linkerd.io/content/2.11/features/protocol-detection.md b/linkerd.io/content/2.11/features/protocol-detection.md index 14b1212286..9545e9c35d 100644 --- a/linkerd.io/content/2.11/features/protocol-detection.md +++ b/linkerd.io/content/2.11/features/protocol-detection.md @@ -75,7 +75,7 @@ configuration. If you are using one of those protocols, follow this decision tree to determine which configuration you need to apply. -![Decision tree](/docs/images/protocol-detection-decision-tree.png) +![Decision tree](/images/docs/protocol-detection-decision-tree.png) ## Marking ports as opaque diff --git a/linkerd.io/content/2.11/features/server-policy.md b/linkerd.io/content/2.11/features/server-policy.md index e2f7df0d4f..165626df54 100644 --- a/linkerd.io/content/2.11/features/server-policy.md +++ b/linkerd.io/content/2.11/features/server-policy.md @@ -104,4 +104,4 @@ existing TCP connections. See [emojivoto-policy.yml](https://github.com/linkerd/website/blob/main/run.linkerd.io/public/emojivoto-policy.yml) for an example set of policy definitions for the -[Emojivoto sample application](/2/getting-started/). +[Emojivoto sample application](/docs/getting-started/). 
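The protocol-detection page patched above ends at its "Marking ports as opaque" section. A minimal sketch of that configuration (the workload, image, and port here are hypothetical examples, not from this patch):

```yaml
# Hypothetical pod template fragment: tell the Linkerd proxy to skip
# protocol detection on port 4222 and treat it as opaque TCP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nats
spec:
  selector:
    matchLabels:
      app: nats
  template:
    metadata:
      labels:
        app: nats
      annotations:
        config.linkerd.io/opaque-ports: "4222"
    spec:
      containers:
        - name: nats
          image: nats:2
          ports:
            - containerPort: 4222
```

The annotation goes on the pod template (not the Deployment metadata) so the injected proxy picks it up.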
diff --git a/linkerd.io/content/2.11/getting-started/_index.md b/linkerd.io/content/2.11/getting-started/_index.md index 0a84a2ad27..875235ee32 100644 --- a/linkerd.io/content/2.11/getting-started/_index.md +++ b/linkerd.io/content/2.11/getting-started/_index.md @@ -237,7 +237,7 @@ linkerd viz dashboard & You should see a screen like this: -![The Linkerd dashboard in action](/docs/images/getting-started/viz-empty-dashboard.png "The Linkerd dashboard in action") +![The Linkerd dashboard in action](/images/docs/getting-started/viz-empty-dashboard.png 'The Linkerd dashboard in action') Click around, explore, and have fun! For extra credit, see if you can find the live metrics for each Emojivoto component, and determine which one has a partial diff --git a/linkerd.io/content/2.11/reference/architecture.md b/linkerd.io/content/2.11/reference/architecture.md index e3980b6a74..962a17284a 100644 --- a/linkerd.io/content/2.11/reference/architecture.md +++ b/linkerd.io/content/2.11/reference/architecture.md @@ -16,7 +16,7 @@ with the control plane for configuration. Linkerd also provides a **CLI** that can be used to interact with the control and data planes. -![Linkerd's architecture](/docs/images/architecture/control-plane.png "Linkerd's architecture") +![Linkerd's architecture](/images/docs/architecture/control-plane.png "Linkerd's architecture") ## CLI diff --git a/linkerd.io/content/2.11/reference/iptables.md b/linkerd.io/content/2.11/reference/iptables.md index 67a7ea89de..a91a6d0d4d 100644 --- a/linkerd.io/content/2.11/reference/iptables.md +++ b/linkerd.io/content/2.11/reference/iptables.md @@ -31,7 +31,7 @@ The redirect chain will be configured with two more rules: Based on these two rules, there are two possible paths that an inbound packet can take, both of which are outlined below. 
-![Inbound iptables chain traversal](/docs/images/iptables/iptables-fig2-1.png "Inbound iptables chain traversal")
+![Inbound iptables chain traversal](/images/docs/iptables/iptables-fig2-1.png 'Inbound iptables chain traversal')

 The packet will arrive on the `PREROUTING` chain and will be immediately routed
 to the redirect chain. If its destination port matches any of the inbound ports
@@ -79,7 +79,7 @@ configured:
   been produced by the service, so it should be forwarded to its destination by
   the proxy.

-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-2.png "Outbound iptables chain traversal")
+![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-2.png 'Outbound iptables chain traversal')

 A packet produced by the service will first hit the `OUTPUT` chain; from here,
 it will be sent to our own output chain for processing. The first rule it
@@ -113,7 +113,7 @@ in the pod. This scenario would typically apply when:

 - The destination is a port bound on localhost (regardless of which container
   it belongs to).

-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-3.png "Outbound iptables chain traversal")
+![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-3.png 'Outbound iptables chain traversal')

 When the application targets itself through its pod's IP (or loopback address),
 the packets will traverse the two output chains. The first rule will be skipped,
@@ -138,7 +138,7 @@ inbound side to account for outbound packets that are sent locally.
 is not guaranteed that the destination will be local. The packet follows an
 unusual path, as depicted in the diagram below.
-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-4.png "Outbound iptables chain traversal")
+![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-4.png 'Outbound iptables chain traversal')

 When the packet first traverses the output chains, it will follow the same path
 an outbound packet would normally take. In such a scenario, the packet's
diff --git a/linkerd.io/content/2.11/tasks/books.md b/linkerd.io/content/2.11/tasks/books.md
index 3ff8c5fe97..737c0c621b 100644
--- a/linkerd.io/content/2.11/tasks/books.md
+++ b/linkerd.io/content/2.11/tasks/books.md
@@ -21,7 +21,7 @@ the other services. There are three services:

 For demo purposes, the app comes with a simple traffic generator. The overall
 topology looks like this:

-![Topology](/docs/images/books/topology.png "Topology")
+![Topology](/images/docs/books/topology.png 'Topology')

 ## Prerequisites

@@ -69,7 +69,7 @@ kubectl -n booksapp port-forward svc/webapp 7000 &

 Open [http://localhost:7000/](http://localhost:7000/) in your browser to see
 the frontend.

-![Frontend](/docs/images/books/frontend.png "Frontend")
+![Frontend](/images/docs/books/frontend.png 'Frontend')

 Unfortunately, there is an error in the app: if you click _Add Book_, it will
 fail 50% of the time. This is a classic case of non-obvious, intermittent
@@ -78,7 +78,7 @@ debug.
 Kubernetes itself cannot detect or surface this error. From Kubernetes's
 perspective, it looks like everything's fine, but you know the application is
 returning errors.
-![Failure](/docs/images/books/failure.png "Failure")
+![Failure](/images/docs/books/failure.png 'Failure')

 ## Add Linkerd to the service

@@ -109,7 +109,7 @@ out the Linkerd dashboard, run:
 linkerd viz dashboard &
 ```

-![Dashboard](/docs/images/books/dashboard.png "Dashboard")
+![Dashboard](/images/docs/books/dashboard.png 'Dashboard')

 Select `booksapp` from the namespace dropdown and click on the
 [Deployments](http://localhost:50750/namespaces/booksapp/deployments) workload.
@@ -127,7 +127,7 @@ has two outgoing dependencies: `authors` and `book`. One is the service for
 pulling in author information and the other is the service for pulling in book
 information.

-![Detail](/docs/images/books/webapp-detail.png "Detail")
+![Detail](/images/docs/books/webapp-detail.png 'Detail')

 A failure in a dependent service may be exactly what’s causing the errors that
 `webapp` is returning (and the errors you as a user can see when you click). We
@@ -135,7 +135,7 @@ can see that the `books` service is also failing. Let’s scroll a little furthe
 down the page, we’ll see a live list of all traffic endpoints that `webapp` is
 receiving. This is interesting:

-![Top](/docs/images/books/top.png "Top")
+![Top](/images/docs/books/top.png 'Top')

 Aha! We can see that inbound traffic coming from the `webapp` service going to
 the `books` service is failing a significant percentage of the time. That could
@@ -143,7 +143,7 @@ explain why `webapp` was throwing intermittent failures. Let’s click on the ta
 (🔬) icon and then on the Start button to look at the actual request and
 response stream.

-![Tap](/docs/images/books/tap.png "Tap")
+![Tap](/images/docs/books/tap.png 'Tap')

 Indeed, many of these requests are returning 500’s.
diff --git a/linkerd.io/content/2.11/tasks/canary-release.md b/linkerd.io/content/2.11/tasks/canary-release.md
index 4de68dbdc6..6a8905d8e2 100644
--- a/linkerd.io/content/2.11/tasks/canary-release.md
+++ b/linkerd.io/content/2.11/tasks/canary-release.md
@@ -68,7 +68,7 @@ orchestrates it. A load generator simply makes it easier to execute the rollout
 as there needs to be some kind of active traffic to complete the operation.
 Together, these components have a topology that looks like:

-![Topology](/docs/images/canary/simple-topology.svg "Topology")
+![Topology](/images/docs/canary/simple-topology.svg 'Topology')

 To add these components to your cluster and include them in the Linkerd
 [data plane](../reference/architecture/#data-plane), run:
@@ -171,7 +171,7 @@ podinfo-primary ClusterIP 10.7.249.63 9898/TCP 23m

 At this point, the topology looks a little like:

-![Initialized](/docs/images/canary/initialized.svg "Initialized")
+![Initialized](/images/docs/canary/initialized.svg 'Initialized')

 {{< note >}}

@@ -217,7 +217,7 @@ kubectl -n test get ev --watch

 While an update is occurring, the resources and traffic will look like this at
 a high level:

-![Ongoing](/docs/images/canary/ongoing.svg "Ongoing")
+![Ongoing](/images/docs/canary/ongoing.svg 'Ongoing')

 After the update is complete, this picture will go back to looking just like
 the figure from the previous section.
@@ -266,7 +266,7 @@ For something a little more visual, you can use the dashboard. Start it by
 running `linkerd viz dashboard` and then look at the detail page for the
 [podinfo traffic split](http://localhost:50750/namespaces/test/trafficsplits/podinfo).
-![Dashboard](/docs/images/canary/traffic-split.png "Dashboard")
+![Dashboard](/images/docs/canary/traffic-split.png 'Dashboard')

 ### Browser

diff --git a/linkerd.io/content/2.11/tasks/debugging-your-service.md b/linkerd.io/content/2.11/tasks/debugging-your-service.md
index fa22bef005..d1009cfe76 100644
--- a/linkerd.io/content/2.11/tasks/debugging-your-service.md
+++ b/linkerd.io/content/2.11/tasks/debugging-your-service.md
@@ -17,12 +17,12 @@ command), you should see all the resources in the `emojivoto` namespace,
 including the deployments. Each deployment running Linkerd shows success rate,
 requests per second and latency percentiles.

-![Top Level Metrics](/docs/images/debugging/stat.png "Top Level Metrics")
+![Top Level Metrics](/images/docs/debugging/stat.png 'Top Level Metrics')

 That's pretty neat, but the first thing you might notice is that the success
 rate is well below 100%! Click on `web` and let's dig in.

-![Deployment Detail](/docs/images/debugging/octopus.png "Deployment Detail")
+![Deployment Detail](/images/docs/debugging/octopus.png 'Deployment Detail')

 You should now be looking at the Deployment page for the web deployment. The
 first thing you'll see here is that the web deployment is taking traffic from
@@ -38,7 +38,7 @@ returning. Let's scroll a little further down the page, we'll see a live
 list of all traffic that is incoming to _and_ outgoing from `web`. This is
 interesting:

-![Top](/docs/images/debugging/web-top.png "Top")
+![Top](/images/docs/debugging/web-top.png 'Top')

 There are two calls that are not at 100%: the first is vote-bot's call to the
 `/api/vote` endpoint. The second is the `VoteDoughnut` call from the web
@@ -54,7 +54,7 @@ the requests are failing with a
 is a common error response as you can see from
 [the code][code]. Linkerd is aware of gRPC's response classification without
 any other configuration!
-![Tap](/docs/images/debugging/web-tap.png "Tap")
+![Tap](/images/docs/debugging/web-tap.png 'Tap')

 At this point, we have everything required to get the endpoint fixed and
 restore the overall health of our applications.
diff --git a/linkerd.io/content/2.11/tasks/distributed-tracing.md b/linkerd.io/content/2.11/tasks/distributed-tracing.md
index 378f632d40..c8ada84c3e 100644
--- a/linkerd.io/content/2.11/tasks/distributed-tracing.md
+++ b/linkerd.io/content/2.11/tasks/distributed-tracing.md
@@ -21,7 +21,7 @@ To use distributed tracing, you'll need to:

 In the case of emojivoto, once all these steps are complete there will be a
 topology that looks like:

-![Topology](/docs/images/tracing/tracing-topology.svg "Topology")
+![Topology](/images/docs/tracing/tracing-topology.svg 'Topology')

 ## Prerequisites

@@ -101,17 +101,17 @@ up in Jaeger. To get to the UI, run:
 linkerd jaeger dashboard
 ```

-![Jaeger](/docs/images/tracing/jaeger-empty.png "Jaeger")
+![Jaeger](/images/docs/tracing/jaeger-empty.png 'Jaeger')

 You can search for any service in the dropdown and click Find Traces. `vote-bot`
 is a great way to get started.

-![Search](/docs/images/tracing/jaeger-search.png "Search")
+![Search](/images/docs/tracing/jaeger-search.png 'Search')

 Clicking on a specific trace will provide all the details, you'll be able to see
 the spans for every proxy!

-![Search](/docs/images/tracing/example-trace.png "Search")
+![Search](/images/docs/tracing/example-trace.png 'Search')

 There sure are a lot of `linkerd-proxy` spans in that output. Internally, the
 proxy has a server and client side. When a request goes through the proxy, it is
@@ -127,7 +127,7 @@ meta-data as trace attributes, users can directly jump into related resources
 traces directly from the linkerd-web dashboard by clicking the Jaeger icon in
 the Metrics Table, as shown below:

-![Linkerd-Jaeger](/docs/images/tracing/linkerd-jaeger-ui.png "Linkerd-Jaeger")
+![Linkerd-Jaeger](/images/docs/tracing/linkerd-jaeger-ui.png 'Linkerd-Jaeger')

 To obtain that functionality you need to install (or upgrade) the Linkerd-Viz
 extension specifying the service exposing the Jaeger UI. By default, this would
@@ -251,7 +251,7 @@ If using helm to install ingress-nginx, you can configure tracing by using:

 ```yaml
 controller:
   config:
-    enable-opentracing: "true"
+    enable-opentracing: 'true'
     zipkin-collector-host: collector.linkerd-jaeger
 ```

diff --git a/linkerd.io/content/2.11/tasks/fault-injection.md b/linkerd.io/content/2.11/tasks/fault-injection.md
index 4311125652..0213435caa 100644
--- a/linkerd.io/content/2.11/tasks/fault-injection.md
+++ b/linkerd.io/content/2.11/tasks/fault-injection.md
@@ -14,7 +14,7 @@ or even crazy payloads.
 The [books demo](books/) is a great way to show off this behavior. The overall
 topology looks like:

-![Topology](/docs/images/books/topology.png "Topology")
+![Topology](/images/docs/books/topology.png 'Topology')

 In this guide, you will split some of the requests from `webapp` to `books`.
 Most requests will end up at the correct `books` destination, however some of
diff --git a/linkerd.io/content/2.11/tasks/gitops.md b/linkerd.io/content/2.11/tasks/gitops.md
index 5b934e0af6..22d655999a 100644
--- a/linkerd.io/content/2.11/tasks/gitops.md
+++ b/linkerd.io/content/2.11/tasks/gitops.md
@@ -22,7 +22,7 @@ the [auto proxy injection](../features/proxy-injection/) feature into your
 workflow. Finally, this guide conclude with steps to upgrade Linkerd to a newer
 version following a GitOps workflow.
-![Linkerd GitOps workflow](/docs/images/gitops/architecture.png 'Linkerd GitOps workflow')
+![Linkerd GitOps workflow](/images/docs/gitops/architecture.png 'Linkerd GitOps workflow')

 The software and tools used in this guide are selected for demonstration
 purposes only. Feel free to choose others that are most suited for your
@@ -184,7 +184,7 @@ argocd proj get demo

 On the dashboard:

-![New project in Argo CD dashboard](/docs/images/gitops/dashboard-project.png 'New project in Argo CD dashboard')
+![New project in Argo CD dashboard](/images/docs/gitops/dashboard-project.png 'New project in Argo CD dashboard')

 ### Deploy the applications

@@ -215,7 +215,7 @@ Sync the `main` application:
 argocd app sync main
 ```

-![Synchronize the main application](/docs/images/gitops/dashboard-applications-main-sync.png 'Synchronize the main application')
+![Synchronize the main application](/images/docs/gitops/dashboard-applications-main-sync.png 'Synchronize the main application')

 Notice that only the `main` application is synchronized.

@@ -245,7 +245,7 @@ for deploy in "cert-manager" "cert-manager-cainjector" "cert-manager-webhook"; \
 done
 ```

-![Synchronize the cert-manager application](/docs/images/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application')
+![Synchronize the cert-manager application](/images/docs/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application')

 ### Deploy Sealed Secrets

@@ -261,7 +261,7 @@ Confirm that sealed-secrets is running:
 kubectl -n kube-system rollout status deploy/sealed-secrets
 ```

-![Synchronize the sealed-secrets application](/docs/images/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application')
+![Synchronize the sealed-secrets application](/images/docs/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application')

 ### Create mTLS trust anchor

@@ -356,7 +356,7 @@ Git server earlier.
 {{< /note >}}

-![Synchronize the linkerd-bootstrap application](/docs/images/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application')
+![Synchronize the linkerd-bootstrap application](/images/docs/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application')

 SealedSecrets should have created a secret containing the decrypted trust
 anchor. Retrieve the decrypted trust anchor from the secret:

@@ -388,7 +388,7 @@ argocd app get linkerd -ojson | \
   jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value'
 ```

-![Empty default trust anchor](/docs/images/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor')
+![Empty default trust anchor](/images/docs/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor')

 We will override this parameter in the `linkerd` application with the value of
 `${trust_anchor}`.

@@ -450,7 +450,7 @@ argocd app get linkerd -ojson | \
   jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value'
 ```

-![Override mTLS trust anchor](/docs/images/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor')
+![Override mTLS trust anchor](/images/docs/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor')

 Synchronize the `linkerd` application:

@@ -464,7 +464,7 @@ Check that Linkerd is ready:
 linkerd check
 ```

-![Synchronize Linkerd](/docs/images/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd')
+![Synchronize Linkerd](/images/docs/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd')

 ### Test with emojivoto

@@ -482,7 +482,7 @@ for deploy in "emoji" "vote-bot" "voting" "web" ; \
 done
 ```

-![Synchronize emojivoto](/docs/images/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto')
+![Synchronize emojivoto](/images/docs/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto')

 ### Upgrade Linkerd to 2.11.1

diff --git a/linkerd.io/content/2.11/tasks/multicluster.md b/linkerd.io/content/2.11/tasks/multicluster.md
index 55992f4f2a..8296e8b91f 100644
--- a/linkerd.io/content/2.11/tasks/multicluster.md
+++ b/linkerd.io/content/2.11/tasks/multicluster.md
@@ -13,6 +13,7 @@ between services that live on different clusters.

 At a high level, you will:

+
 1. [Install Linkerd and Linkerd Viz](#install-linkerd-and-linkerd-viz) on two
    clusters with a shared trust anchor.
 1. [Prepare](#preparing-your-cluster) the clusters.
@@ -21,7 +22,7 @@ At a high level, you will:
 1. [Export](#exporting-the-services) the demo services, to control visibility.
 1. [Verify](#security) the security of your clusters.
 1. [Split traffic](#traffic-splitting) from pods on the source cluster (`west`)
-   to the target cluster (`east`)
+to the target cluster (`east`)

 ## Prerequisites

@@ -52,7 +53,7 @@ At a high level, you will:

 ## Install Linkerd and Linkerd Viz

-![install](/docs/images/multicluster/install.svg "Two Clusters")
+![install](/images/docs/multicluster/install.svg 'Two Clusters')

 Linkerd requires a shared
 [trust anchor](generate-certificates/#trust-anchor-certificate) to exist between
@@ -131,7 +132,7 @@ done

 ## Preparing your cluster

-![preparation](/docs/images/multicluster/prep-overview.svg "Preparation")
+![preparation](/images/docs/multicluster/prep-overview.svg 'Preparation')

 In order to route traffic between clusters, Linkerd leverages Kubernetes
 services so that your application code does not need to change and there is
@@ -154,7 +155,7 @@ for ctx in west east; do
 done
 ```

-![install](/docs/images/multicluster/components.svg "Components")
+![install](/images/docs/multicluster/components.svg 'Components')

 Installed into the `linkerd-multicluster` namespace, the gateway is a simple
 [pause container](https://github.com/linkerd/linkerd2/blob/main/multicluster/charts/linkerd-multicluster/templates/gateway.yaml#L3)
@@ -193,7 +194,7 @@ mirroring services. We'll want to link the clusters together now!
 ## Linking the clusters

-![link-clusters](/docs/images/multicluster/link-flow.svg "Link")
+![link-clusters](/images/docs/multicluster/link-flow.svg 'Link')

 For `west` to mirror services from `east`, the `west` cluster needs to have
 credentials so that it can watch for services in `east` to be exported. You'd
@@ -251,7 +252,7 @@ use the `--api-server-address` flag for `link`.

 ## Installing the test services

-![test-services](/docs/images/multicluster/example-topology.svg "Topology")
+![test-services](/images/docs/multicluster/example-topology.svg 'Topology')

 It is time to test this all out! The first step is to add some services that we
 can mirror. To add these to both clusters, you can run:
@@ -278,7 +279,7 @@ To see what it looks like from the `west` cluster right now, you can run:
 kubectl --context=west -n test port-forward svc/frontend 8080
 ```

-![west-podinfo](/docs/images/multicluster/west-podinfo.gif "West Podinfo")
+![west-podinfo](/images/docs/multicluster/west-podinfo.gif 'West Podinfo')

 With the podinfo landing page available at
 [http://localhost:8080](http://localhost:8080), you can see how it looks in the
@@ -369,7 +370,7 @@ We also provide a grafana dashboard to get a feel for what's going on here.
 You can get to it by running `linkerd --context=west viz dashboard` and going to
 [http://localhost:50750/grafana/](http://localhost:50750/grafana/d/linkerd-multicluster/linkerd-multicluster?orgId=1&refresh=1m)

-![grafana-dashboard](/docs/images/multicluster/grafana-dashboard.png "Grafana")
+![grafana-dashboard](/images/docs/multicluster/grafana-dashboard.png 'Grafana')

 ## Security

@@ -408,7 +409,7 @@ kubectl --context=west -n test run -it --rm --image=alpine:3 test -- \

 ## Traffic Splitting

-![with-split](/docs/images/multicluster/with-split.svg "Traffic Split")
+![with-split](/images/docs/multicluster/with-split.svg 'Traffic Split')

 It is pretty useful to have services automatically show up in clusters and be
 able to explicitly address them, however that only covers one use case for
@@ -452,7 +453,7 @@ both clusters.Alternatively, for the command line approach, `curl localhost:8080`
 will give you a message that greets from `west` and `east`.

-![podinfo-split](/docs/images/multicluster/split-podinfo.gif "Cross Cluster Podinfo")
+![podinfo-split](/images/docs/multicluster/split-podinfo.gif 'Cross Cluster Podinfo')

 You can also watch what's happening with metrics. To see the source side of
 things (`west`), you can run:
@@ -473,7 +474,7 @@ linkerd --context=east -n test viz stat \

 There's even a dashboard! Run `linkerd viz dashboard` and send your browser to
 [localhost:50750](http://localhost:50750/namespaces/test/trafficsplits/podinfo).
-![podinfo-split](/docs/images/multicluster/ts-dashboard.png "Cross Cluster Podinfo")
+![podinfo-split](/images/docs/multicluster/ts-dashboard.png 'Cross Cluster Podinfo')

 ## Cleanup

diff --git a/linkerd.io/content/2.11/tasks/using-a-private-docker-repository.md b/linkerd.io/content/2.11/tasks/using-a-private-docker-repository.md
index fea82817c8..d0b7c5f960 100644
--- a/linkerd.io/content/2.11/tasks/using-a-private-docker-repository.md
+++ b/linkerd.io/content/2.11/tasks/using-a-private-docker-repository.md
@@ -32,7 +32,7 @@ image: prom/prometheus:v2.11.1
 ```

 All of the Linkerd images are publicly available in the
-[Linkerd Google Container Repository](https://console.cloud.google.com/gcr/docs/images/linkerd-io/GLOBAL/)
+[Linkerd Google Container Repository](https://console.cloud.google.com/gcr/images/linkerd-io/GLOBAL/)

 Stable images are named using the convention `stable-` and the edge images use
 the convention `edge-..`.
diff --git a/linkerd.io/content/2.12/_index.md b/linkerd.io/content/2.12/_index.md
index 6c28dc82e3..11f8979d57 100644
--- a/linkerd.io/content/2.12/_index.md
+++ b/linkerd.io/content/2.12/_index.md
@@ -2,10 +2,12 @@ title: Docs
 cascade:
   type: docs
+  params:
+    noIndex: true

 # Redirect
 type: _default
 layout: redirect
 params:
-  redirect: ./overview
+  redirect: ./getting-started
 ---
diff --git a/linkerd.io/content/2.12/checks/index.md b/linkerd.io/content/2.12/checks/index.md
index fc6ac87de5..5f2ae8ee80 100644
--- a/linkerd.io/content/2.12/checks/index.md
+++ b/linkerd.io/content/2.12/checks/index.md
@@ -6,5 +6,5 @@ type: _default
 layout: redirect
 params:
   unlisted: true
-  redirect: /2/tasks/troubleshooting/
+  redirect: /docs/tasks/troubleshooting/
 ---
diff --git a/linkerd.io/content/2.12/features/dashboard.md b/linkerd.io/content/2.12/features/dashboard.md
index d66a157726..c69f941be4 100644
--- a/linkerd.io/content/2.12/features/dashboard.md
+++ b/linkerd.io/content/2.12/features/dashboard.md
@@ -49,7 +49,7 @@ health of specific service routes. One way to pull it up is by running
 `linkerd viz dashboard` from the command line.

-![Top Line Metrics](/docs/images/architecture/stat.png "Top Line Metrics")
+![Top Line Metrics](/images/docs/architecture/stat.png 'Top Line Metrics')

 ## Grafana

@@ -96,13 +96,13 @@ linkerd -n emojivoto viz tap deploy/web

 All of this functionality is also available in the dashboard, if you would like
 to use your browser instead:

-![Top Line Metrics](/docs/images/getting-started/stat.png "Top Line Metrics")
+![Top Line Metrics](/images/docs/getting-started/stat.png 'Top Line Metrics')

-![Deployment Detail](/docs/images/getting-started/inbound-outbound.png "Deployment Detail")
+![Deployment Detail](/images/docs/getting-started/inbound-outbound.png 'Deployment Detail')

-![Top](/docs/images/getting-started/top.png "Top")
+![Top](/images/docs/getting-started/top.png 'Top')

-![Tap](/docs/images/getting-started/tap.png "Tap")
+![Tap](/images/docs/getting-started/tap.png 'Tap')

 ## Futher reading

diff --git a/linkerd.io/content/2.12/features/distributed-tracing.md b/linkerd.io/content/2.12/features/distributed-tracing.md
index 2d111828f9..0a8833fe64 100644
--- a/linkerd.io/content/2.12/features/distributed-tracing.md
+++ b/linkerd.io/content/2.12/features/distributed-tracing.md
@@ -26,13 +26,13 @@ For example, Linkerd can display a live topology of all incoming and outgoing
 dependencies for a service, without requiring distributed tracing or any other
 such application modification:

-![The Linkerd dashboard showing an automatically generated topology graph](/docs/images/books/webapp-detail.png "The Linkerd dashboard showing an automatically generated topology graph")
+![The Linkerd dashboard showing an automatically generated topology graph](/images/docs/books/webapp-detail.png 'The Linkerd dashboard showing an automatically generated topology graph')

 Likewise, Linkerd can provide golden metrics per service and per _route_, again
 without requiring distributed tracing or any other such application
 modification:

-![Linkerd dashboard showing an automatically generated route metrics](/docs/images/books/webapp-routes.png "Linkerd dashboard showing an automatically generated route metrics")
+![Linkerd dashboard showing an automatically generated route metrics](/images/docs/books/webapp-routes.png 'Linkerd dashboard showing an automatically generated route metrics')

 ## Using distributed tracing

diff --git a/linkerd.io/content/2.12/features/multicluster.md b/linkerd.io/content/2.12/features/multicluster.md
index 3db632cb92..cdb1df8337 100644
--- a/linkerd.io/content/2.12/features/multicluster.md
+++ b/linkerd.io/content/2.12/features/multicluster.md
@@ -34,7 +34,7 @@ full observability, security and routing features of Linkerd apply uniformly to
 both in-cluster and cluster-calls, and the application does not need to
 distinguish between those situations.

-![Overview](/docs/images/multicluster/feature-overview.svg "Overview")
+![Overview](/images/docs/multicluster/feature-overview.svg 'Overview')

 Linkerd's multi-cluster functionality is implemented by two components: a
 _service mirror_ and a _gateway_. The _service mirror_ component watches a
diff --git a/linkerd.io/content/2.12/features/protocol-detection.md b/linkerd.io/content/2.12/features/protocol-detection.md
index f91c372d91..0f792865ae 100644
--- a/linkerd.io/content/2.12/features/protocol-detection.md
+++ b/linkerd.io/content/2.12/features/protocol-detection.md
@@ -76,7 +76,7 @@ configuration.
 If you are using one of those protocols, follow this decision tree to determine
 which configuration you need to apply.
-![Decision tree](/docs/images/protocol-detection-decision-tree.png)
+![Decision tree](/images/docs/protocol-detection-decision-tree.png)

 ## Marking ports as opaque

diff --git a/linkerd.io/content/2.12/getting-started/_index.md b/linkerd.io/content/2.12/getting-started/_index.md
index 00a6ae8c1e..3651114173 100644
--- a/linkerd.io/content/2.12/getting-started/_index.md
+++ b/linkerd.io/content/2.12/getting-started/_index.md
@@ -243,7 +243,7 @@ linkerd viz dashboard &

 You should see a screen like this:

-![The Linkerd dashboard in action](/docs/images/getting-started/viz-empty-dashboard.png "The Linkerd dashboard in action")
+![The Linkerd dashboard in action](/images/docs/getting-started/viz-empty-dashboard.png 'The Linkerd dashboard in action')

 Click around, explore, and have fun! For extra credit, see if you can find the
 live metrics for each Emojivoto component, and determine which one has a partial
diff --git a/linkerd.io/content/2.12/reference/architecture.md b/linkerd.io/content/2.12/reference/architecture.md
index ce99afc121..a16ce3aee9 100644
--- a/linkerd.io/content/2.12/reference/architecture.md
+++ b/linkerd.io/content/2.12/reference/architecture.md
@@ -16,7 +16,7 @@ with the control plane for configuration.
 Linkerd also provides a **CLI** that can be used to interact with the control
 and data planes.

-![Linkerd's architecture](/docs/images/architecture/control-plane.png "Linkerd's architecture")
+![Linkerd's architecture](/images/docs/architecture/control-plane.png "Linkerd's architecture")

 ## CLI

diff --git a/linkerd.io/content/2.12/reference/iptables.md b/linkerd.io/content/2.12/reference/iptables.md
index 67a7ea89de..a91a6d0d4d 100644
--- a/linkerd.io/content/2.12/reference/iptables.md
+++ b/linkerd.io/content/2.12/reference/iptables.md
@@ -31,7 +31,7 @@ The redirect chain will be configured with two more rules:

 Based on these two rules, there are two possible paths that an inbound packet
 can take, both of which are outlined below.
-![Inbound iptables chain traversal](/docs/images/iptables/iptables-fig2-1.png "Inbound iptables chain traversal")
+![Inbound iptables chain traversal](/images/docs/iptables/iptables-fig2-1.png 'Inbound iptables chain traversal')

 The packet will arrive on the `PREROUTING` chain and will be immediately routed
 to the redirect chain. If its destination port matches any of the inbound ports
@@ -79,7 +79,7 @@ configured:
   been produced by the service, so it should be forwarded to its destination by
   the proxy.

-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-2.png "Outbound iptables chain traversal")
+![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-2.png 'Outbound iptables chain traversal')

 A packet produced by the service will first hit the `OUTPUT` chain; from here,
 it will be sent to our own output chain for processing. The first rule it
@@ -113,7 +113,7 @@ in the pod. This scenario would typically apply when:

 - The destination is a port bound on localhost (regardless of which container
   it belongs to).

-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-3.png "Outbound iptables chain traversal")
+![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-3.png 'Outbound iptables chain traversal')

 When the application targets itself through its pod's IP (or loopback address),
 the packets will traverse the two output chains. The first rule will be skipped,
@@ -138,7 +138,7 @@ inbound side to account for outbound packets that are sent locally.
 is not guaranteed that the destination will be local. The packet follows an
 unusual path, as depicted in the diagram below.
-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-4.png "Outbound iptables chain traversal")
+![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-4.png 'Outbound iptables chain traversal')

 When the packet first traverses the output chains, it will follow the same path
 an outbound packet would normally take. In such a scenario, the packet's
diff --git a/linkerd.io/content/2.12/tasks/books.md b/linkerd.io/content/2.12/tasks/books.md
index 3ff8c5fe97..737c0c621b 100644
--- a/linkerd.io/content/2.12/tasks/books.md
+++ b/linkerd.io/content/2.12/tasks/books.md
@@ -21,7 +21,7 @@ the other services. There are three services:

 For demo purposes, the app comes with a simple traffic generator. The overall
 topology looks like this:

-![Topology](/docs/images/books/topology.png "Topology")
+![Topology](/images/docs/books/topology.png 'Topology')

 ## Prerequisites

@@ -69,7 +69,7 @@ kubectl -n booksapp port-forward svc/webapp 7000 &

 Open [http://localhost:7000/](http://localhost:7000/) in your browser to see
 the frontend.

-![Frontend](/docs/images/books/frontend.png "Frontend")
+![Frontend](/images/docs/books/frontend.png 'Frontend')

 Unfortunately, there is an error in the app: if you click _Add Book_, it will
 fail 50% of the time. This is a classic case of non-obvious, intermittent
@@ -78,7 +78,7 @@ debug.
 Kubernetes itself cannot detect or surface this error. From Kubernetes's
 perspective, it looks like everything's fine, but you know the application is
 returning errors.
-![Failure](/docs/images/books/failure.png "Failure")
+![Failure](/images/docs/books/failure.png 'Failure')

 ## Add Linkerd to the service

@@ -109,7 +109,7 @@ out the Linkerd dashboard, run:
 linkerd viz dashboard &
 ```

-![Dashboard](/docs/images/books/dashboard.png "Dashboard")
+![Dashboard](/images/docs/books/dashboard.png 'Dashboard')

 Select `booksapp` from the namespace dropdown and click on the
 [Deployments](http://localhost:50750/namespaces/booksapp/deployments) workload.
@@ -127,7 +127,7 @@ has two outgoing dependencies: `authors` and `book`. One is the service for
 pulling in author information and the other is the service for pulling in book
 information.

-![Detail](/docs/images/books/webapp-detail.png "Detail")
+![Detail](/images/docs/books/webapp-detail.png 'Detail')

 A failure in a dependent service may be exactly what’s causing the errors that
 `webapp` is returning (and the errors you as a user can see when you click). We
@@ -135,7 +135,7 @@ can see that the `books` service is also failing. Let’s scroll a little furthe
 down the page, we’ll see a live list of all traffic endpoints that `webapp` is
 receiving. This is interesting:

-![Top](/docs/images/books/top.png "Top")
+![Top](/images/docs/books/top.png 'Top')

 Aha! We can see that inbound traffic coming from the `webapp` service going to
 the `books` service is failing a significant percentage of the time. That could
@@ -143,7 +143,7 @@ explain why `webapp` was throwing intermittent failures. Let’s click on the ta
 (🔬) icon and then on the Start button to look at the actual request and
 response stream.

-![Tap](/docs/images/books/tap.png "Tap")
+![Tap](/images/docs/books/tap.png 'Tap')

 Indeed, many of these requests are returning 500’s.
diff --git a/linkerd.io/content/2.12/tasks/canary-release.md b/linkerd.io/content/2.12/tasks/canary-release.md index f45baea6a2..6c1e83a1a2 100644 --- a/linkerd.io/content/2.12/tasks/canary-release.md +++ b/linkerd.io/content/2.12/tasks/canary-release.md @@ -70,7 +70,7 @@ orchestrates it. A load generator simply makes it easier to execute the rollout as there needs to be some kind of active traffic to complete the operation. Together, these components have a topology that looks like: -![Topology](/docs/images/canary/simple-topology.svg "Topology") +![Topology](/images/docs/canary/simple-topology.svg 'Topology') To add these components to your cluster and include them in the Linkerd [data plane](../reference/architecture/#data-plane), run: @@ -173,7 +173,7 @@ podinfo-primary ClusterIP 10.7.249.63 9898/TCP 23m At this point, the topology looks a little like: -![Initialized](/docs/images/canary/initialized.svg "Initialized") +![Initialized](/images/docs/canary/initialized.svg 'Initialized') {{< note >}} @@ -219,7 +219,7 @@ kubectl -n test get ev --watch While an update is occurring, the resources and traffic will look like this at a high level: -![Ongoing](/docs/images/canary/ongoing.svg "Ongoing") +![Ongoing](/images/docs/canary/ongoing.svg 'Ongoing') After the update is complete, this picture will go back to looking just like the figure from the previous section. @@ -268,7 +268,7 @@ For something a little more visual, you can use the dashboard. Start it by running `linkerd viz dashboard` and then look at the detail page for the [podinfo traffic split](http://localhost:50750/namespaces/test/trafficsplits/podinfo). 
-![Dashboard](/docs/images/canary/traffic-split.png "Dashboard") +![Dashboard](/images/docs/canary/traffic-split.png 'Dashboard') ### Browser diff --git a/linkerd.io/content/2.12/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.12/tasks/configuring-per-route-policy.md index 24606035be..955650c548 100644 --- a/linkerd.io/content/2.12/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.12/tasks/configuring-per-route-policy.md @@ -64,7 +64,7 @@ $ kubectl -n booksapp port-forward svc/webapp 7000 & Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png 'Frontend') +![Frontend](/images/docs/books/frontend.png 'Frontend') ## Creating a Server resource @@ -309,7 +309,7 @@ web UI, we may notice that something is amiss. Attempting to delete an author results in a "not found" error in the web UI: -![Not found](/docs/images/books/delete-404.png "Not found") +![Not found](/images/docs/books/delete-404.png 'Not found') and similarly, adding a new author takes us to an error page. @@ -354,7 +354,7 @@ EOF What happens if we try to delete an author _now_? We still see a failure, but a different one: -![Internal server error](/docs/images/books/delete-503.png "Internal server error") +![Internal server error](/images/docs/books/delete-503.png 'Internal server error') This is because we have created a _route_ matching `DELETE`, `PUT`, and `POST` requests, but we haven't _authorized_ requests to that route. 
Running the @@ -411,11 +411,11 @@ in this case, we only authenticate the `webapp` deployment's `ServiceAccount` Now, if we attempt to delete an author in the frontend once again, we can: -![Author deleted](/docs/images/books/delete-ok.png "Author deleted") +![Author deleted](/images/docs/books/delete-ok.png 'Author deleted') Similarly, we can now create a new author successfully, as well: -![Author created](/docs/images/books/create-ok.png "Author created") +![Author created](/images/docs/books/create-ok.png 'Author created') Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: diff --git a/linkerd.io/content/2.12/tasks/debugging-your-service.md b/linkerd.io/content/2.12/tasks/debugging-your-service.md index fa22bef005..d1009cfe76 100644 --- a/linkerd.io/content/2.12/tasks/debugging-your-service.md +++ b/linkerd.io/content/2.12/tasks/debugging-your-service.md @@ -17,12 +17,12 @@ command), you should see all the resources in the `emojivoto` namespace, including the deployments. Each deployment running Linkerd shows success rate, requests per second and latency percentiles. -![Top Level Metrics](/docs/images/debugging/stat.png "Top Level Metrics") +![Top Level Metrics](/images/docs/debugging/stat.png 'Top Level Metrics') That's pretty neat, but the first thing you might notice is that the success rate is well below 100%! Click on `web` and let's dig in. -![Deployment Detail](/docs/images/debugging/octopus.png "Deployment Detail") +![Deployment Detail](/images/docs/debugging/octopus.png 'Deployment Detail') You should now be looking at the Deployment page for the web deployment. The first thing you'll see here is that the web deployment is taking traffic from @@ -38,7 +38,7 @@ returning. Let's scroll a little further down the page, we'll see a live list of all traffic that is incoming to _and_ outgoing from `web`. 
This is interesting: -![Top](/docs/images/debugging/web-top.png "Top") +![Top](/images/docs/debugging/web-top.png 'Top') There are two calls that are not at 100%: the first is vote-bot's call to the `/api/vote` endpoint. The second is the `VoteDoughnut` call from the web @@ -54,7 +54,7 @@ the requests are failing with a is a common error response as you can see from [the code][code]. Linkerd is aware of gRPC's response classification without any other configuration! -![Tap](/docs/images/debugging/web-tap.png "Tap") +![Tap](/images/docs/debugging/web-tap.png 'Tap') At this point, we have everything required to get the endpoint fixed and restore the overall health of our applications. diff --git a/linkerd.io/content/2.12/tasks/distributed-tracing.md b/linkerd.io/content/2.12/tasks/distributed-tracing.md index 7c11e271b6..cfeba415e8 100644 --- a/linkerd.io/content/2.12/tasks/distributed-tracing.md +++ b/linkerd.io/content/2.12/tasks/distributed-tracing.md @@ -21,7 +21,7 @@ To use distributed tracing, you'll need to: In the case of emojivoto, once all these steps are complete there will be a topology that looks like: -![Topology](/docs/images/tracing/tracing-topology.svg "Topology") +![Topology](/images/docs/tracing/tracing-topology.svg 'Topology') ## Prerequisites @@ -101,17 +101,17 @@ up in Jaeger. To get to the UI, run: linkerd jaeger dashboard ``` -![Jaeger](/docs/images/tracing/jaeger-empty.png "Jaeger") +![Jaeger](/images/docs/tracing/jaeger-empty.png 'Jaeger') You can search for any service in the dropdown and click Find Traces. `vote-bot` is a great way to get started. -![Search](/docs/images/tracing/jaeger-search.png "Search") +![Search](/images/docs/tracing/jaeger-search.png 'Search') Clicking on a specific trace will provide all the details, you'll be able to see the spans for every proxy! 
-![Search](/docs/images/tracing/example-trace.png "Search") +![Search](/images/docs/tracing/example-trace.png 'Search') There sure are a lot of `linkerd-proxy` spans in that output. Internally, the proxy has a server and client side. When a request goes through the proxy, it is @@ -127,7 +127,7 @@ meta-data as trace attributes, users can directly jump into related resources traces directly from the linkerd-web dashboard by clicking the Jaeger icon in the Metrics Table, as shown below: -![Linkerd-Jaeger](/docs/images/tracing/linkerd-jaeger-ui.png "Linkerd-Jaeger") +![Linkerd-Jaeger](/images/docs/tracing/linkerd-jaeger-ui.png 'Linkerd-Jaeger') To obtain that functionality you need to install (or upgrade) the Linkerd-Viz extension specifying the service exposing the Jaeger UI. By default, this would diff --git a/linkerd.io/content/2.12/tasks/fault-injection.md b/linkerd.io/content/2.12/tasks/fault-injection.md index 87902b5ff3..d51b774a5b 100644 --- a/linkerd.io/content/2.12/tasks/fault-injection.md +++ b/linkerd.io/content/2.12/tasks/fault-injection.md @@ -14,7 +14,7 @@ or even crazy payloads. The [books demo](books/) is a great way to show off this behavior. The overall topology looks like: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') In this guide, you will split some of the requests from `webapp` to `books`. Most requests will end up at the correct `books` destination, however some of diff --git a/linkerd.io/content/2.12/tasks/gitops.md b/linkerd.io/content/2.12/tasks/gitops.md index 612dcd768d..88f391b3bc 100644 --- a/linkerd.io/content/2.12/tasks/gitops.md +++ b/linkerd.io/content/2.12/tasks/gitops.md @@ -22,7 +22,7 @@ the [auto proxy injection](../features/proxy-injection/) feature into your workflow. Finally, this guide conclude with steps to upgrade Linkerd to a newer version following a GitOps workflow. 
-![Linkerd GitOps workflow](/docs/images/gitops/architecture.png 'Linkerd GitOps workflow') +![Linkerd GitOps workflow](/images/docs/gitops/architecture.png 'Linkerd GitOps workflow') The software and tools used in this guide are selected for demonstration purposes only. Feel free to choose others that are most suited for your @@ -184,7 +184,7 @@ argocd proj get demo On the dashboard: -![New project in Argo CD dashboard](/docs/images/gitops/dashboard-project.png 'New project in Argo CD dashboard') +![New project in Argo CD dashboard](/images/docs/gitops/dashboard-project.png 'New project in Argo CD dashboard') ### Deploy the applications @@ -215,7 +215,7 @@ Sync the `main` application: argocd app sync main ``` -![Synchronize the main application](/docs/images/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') +![Synchronize the main application](/images/docs/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') Notice that only the `main` application is synchronized. @@ -237,7 +237,7 @@ for deploy in "cert-manager" "cert-manager-cainjector" "cert-manager-webhook"; \ done ``` -![Synchronize the cert-manager application](/docs/images/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') +![Synchronize the cert-manager application](/images/docs/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') ### Deploy Sealed Secrets @@ -253,7 +253,7 @@ Confirm that sealed-secrets is running: kubectl -n kube-system rollout status deploy/sealed-secrets ``` -![Synchronize the sealed-secrets application](/docs/images/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') +![Synchronize the sealed-secrets application](/images/docs/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') ### Create mTLS trust anchor @@ -348,7 +348,7 @@ Git server earlier. 
{{< /note >}} -![Synchronize the linkerd-bootstrap application](/docs/images/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') +![Synchronize the linkerd-bootstrap application](/images/docs/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') SealedSecrets should have created a secret containing the decrypted trust anchor. Retrieve the decrypted trust anchor from the secret: @@ -380,7 +380,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Empty default trust anchor](/docs/images/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') +![Empty default trust anchor](/images/docs/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') We will override this parameter in the `linkerd` application with the value of `${trust_anchor}`. @@ -442,7 +442,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Override mTLS trust anchor](/docs/images/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') +![Override mTLS trust anchor](/images/docs/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') Synchronize the `linkerd-crds` and `linkerd-control-plane` applications: @@ -457,7 +457,7 @@ Check that Linkerd is ready: linkerd check ``` -![Synchronize Linkerd](/docs/images/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') +![Synchronize Linkerd](/images/docs/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') ### Test with emojivoto @@ -475,7 +475,7 @@ for deploy in "emoji" "vote-bot" "voting" "web" ; \ done ``` -![Synchronize emojivoto](/docs/images/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') +![Synchronize emojivoto](/images/docs/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') ### Upgrade 
Linkerd to 2.12.1 diff --git a/linkerd.io/content/2.12/tasks/multicluster.md b/linkerd.io/content/2.12/tasks/multicluster.md index 07214f7412..d6f0128145 100644 --- a/linkerd.io/content/2.12/tasks/multicluster.md +++ b/linkerd.io/content/2.12/tasks/multicluster.md @@ -13,6 +13,7 @@ between services that live on different clusters. At a high level, you will: + 1. [Install Linkerd and Linkerd Viz](#install-linkerd-and-linkerd-viz) on two clusters with a shared trust anchor. 1. [Prepare](#preparing-your-cluster) the clusters. @@ -21,7 +22,7 @@ At a high level, you will: 1. [Export](#exporting-the-services) the demo services, to control visibility. 1. [Verify](#security) the security of your clusters. 1. [Split traffic](#traffic-splitting) from pods on the source cluster (`west`) - to the target cluster (`east`) +to the target cluster (`east`) ## Prerequisites @@ -52,7 +53,7 @@ At a high level, you will: ## Install Linkerd and Linkerd Viz -![install](/docs/images/multicluster/install.svg "Two Clusters") +![install](/images/docs/multicluster/install.svg 'Two Clusters') Linkerd requires a shared [trust anchor](generate-certificates/#trust-anchor-certificate) to exist between @@ -138,7 +139,7 @@ done ## Preparing your cluster -![preparation](/docs/images/multicluster/prep-overview.svg "Preparation") +![preparation](/images/docs/multicluster/prep-overview.svg 'Preparation') In order to route traffic between clusters, Linkerd leverages Kubernetes services so that your application code does not need to change and there is @@ -161,7 +162,7 @@ for ctx in west east; do done ``` -![install](/docs/images/multicluster/components.svg "Components") +![install](/images/docs/multicluster/components.svg 'Components') Installed into the `linkerd-multicluster` namespace, the gateway is a simple [pause container](https://github.com/linkerd/linkerd2/blob/main/multicluster/charts/linkerd-multicluster/templates/gateway.yaml#L3) @@ -200,7 +201,7 @@ mirroring services. 
We'll want to link the clusters together now! ## Linking the clusters -![link-clusters](/docs/images/multicluster/link-flow.svg "Link") +![link-clusters](/images/docs/multicluster/link-flow.svg 'Link') For `west` to mirror services from `east`, the `west` cluster needs to have credentials so that it can watch for services in `east` to be exported. You'd @@ -258,7 +259,7 @@ use the `--api-server-address` flag for `link`. ## Installing the test services -![test-services](/docs/images/multicluster/example-topology.svg "Topology") +![test-services](/images/docs/multicluster/example-topology.svg 'Topology') It is time to test this all out! The first step is to add some services that we can mirror. To add these to both clusters, you can run: @@ -285,7 +286,7 @@ To see what it looks like from the `west` cluster right now, you can run: kubectl --context=west -n test port-forward svc/frontend 8080 ``` -![west-podinfo](/docs/images/multicluster/west-podinfo.gif "West Podinfo") +![west-podinfo](/images/docs/multicluster/west-podinfo.gif 'West Podinfo') With the podinfo landing page available at [http://localhost:8080](http://localhost:8080), you can see how it looks in the @@ -377,7 +378,7 @@ the [grafana install instructions](grafana/) first to have a working grafana provisioned with Linkerd dashboards). 
You can get to it by running `linkerd --context=west viz dashboard` and going to -![grafana-dashboard](/docs/images/multicluster/grafana-dashboard.png "Grafana") +![grafana-dashboard](/images/docs/multicluster/grafana-dashboard.png 'Grafana') ## Security @@ -416,7 +417,7 @@ kubectl --context=west -n test run -it --rm --image=alpine:3 test -- \ ## Traffic Splitting -![with-split](/docs/images/multicluster/with-split.svg "Traffic Split") +![with-split](/images/docs/multicluster/with-split.svg 'Traffic Split') It is pretty useful to have services automatically show up in clusters and be able to explicitly address them, however that only covers one use case for @@ -460,7 +461,7 @@ both clusters.Alternatively, for the command line approach, `curl localhost:8080` will give you a message that greets from `west` and `east`. -![podinfo-split](/docs/images/multicluster/split-podinfo.gif "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/split-podinfo.gif 'Cross Cluster Podinfo') You can also watch what's happening with metrics. To see the source side of things (`west`), you can run: @@ -481,7 +482,7 @@ linkerd --context=east -n test viz stat \ There's even a dashboard! Run `linkerd viz dashboard` and send your browser to [localhost:50750](http://localhost:50750/namespaces/test/trafficsplits/podinfo). 
-![podinfo-split](/docs/images/multicluster/ts-dashboard.png "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/ts-dashboard.png 'Cross Cluster Podinfo') ## Cleanup diff --git a/linkerd.io/content/2.13/_index.md b/linkerd.io/content/2.13/_index.md index 6c28dc82e3..11f8979d57 100644 --- a/linkerd.io/content/2.13/_index.md +++ b/linkerd.io/content/2.13/_index.md @@ -2,10 +2,12 @@ title: Docs cascade: type: docs + params: + noIndex: true # Redirect type: _default layout: redirect params: - redirect: ./overview + redirect: ./getting-started --- diff --git a/linkerd.io/content/2.13/checks/index.md b/linkerd.io/content/2.13/checks/index.md index fc6ac87de5..5f2ae8ee80 100644 --- a/linkerd.io/content/2.13/checks/index.md +++ b/linkerd.io/content/2.13/checks/index.md @@ -6,5 +6,5 @@ type: _default layout: redirect params: unlisted: true - redirect: /2/tasks/troubleshooting/ + redirect: /docs/tasks/troubleshooting/ --- diff --git a/linkerd.io/content/2.13/features/dashboard.md b/linkerd.io/content/2.13/features/dashboard.md index d66a157726..c69f941be4 100644 --- a/linkerd.io/content/2.13/features/dashboard.md +++ b/linkerd.io/content/2.13/features/dashboard.md @@ -49,7 +49,7 @@ health of specific service routes. One way to pull it up is by running `linkerd viz dashboard` from the command line. 
-![Top Line Metrics](/docs/images/architecture/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/architecture/stat.png 'Top Line Metrics') ## Grafana @@ -96,13 +96,13 @@ linkerd -n emojivoto viz tap deploy/web All of this functionality is also available in the dashboard, if you would like to use your browser instead: -![Top Line Metrics](/docs/images/getting-started/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/getting-started/stat.png 'Top Line Metrics') -![Deployment Detail](/docs/images/getting-started/inbound-outbound.png "Deployment Detail") +![Deployment Detail](/images/docs/getting-started/inbound-outbound.png 'Deployment Detail') -![Top](/docs/images/getting-started/top.png "Top") +![Top](/images/docs/getting-started/top.png 'Top') -![Tap](/docs/images/getting-started/tap.png "Tap") +![Tap](/images/docs/getting-started/tap.png 'Tap') ## Futher reading diff --git a/linkerd.io/content/2.13/features/distributed-tracing.md b/linkerd.io/content/2.13/features/distributed-tracing.md index 2d111828f9..0a8833fe64 100644 --- a/linkerd.io/content/2.13/features/distributed-tracing.md +++ b/linkerd.io/content/2.13/features/distributed-tracing.md @@ -26,13 +26,13 @@ For example, Linkerd can display a live topology of all incoming and outgoing dependencies for a service, without requiring distributed tracing or any other such application modification: -![The Linkerd dashboard showing an automatically generated topology graph](/docs/images/books/webapp-detail.png "The Linkerd dashboard showing an automatically generated topology graph") +![The Linkerd dashboard showing an automatically generated topology graph](/images/docs/books/webapp-detail.png 'The Linkerd dashboard showing an automatically generated topology graph') Likewise, Linkerd can provide golden metrics per service and per _route_, again without requiring distributed tracing or any other such application modification: -![Linkerd dashboard showing an automatically generated route 
metrics](/docs/images/books/webapp-routes.png "Linkerd dashboard showing an automatically generated route metrics") +![Linkerd dashboard showing an automatically generated route metrics](/images/docs/books/webapp-routes.png 'Linkerd dashboard showing an automatically generated route metrics') ## Using distributed tracing diff --git a/linkerd.io/content/2.13/features/multicluster.md b/linkerd.io/content/2.13/features/multicluster.md index 3db632cb92..cdb1df8337 100644 --- a/linkerd.io/content/2.13/features/multicluster.md +++ b/linkerd.io/content/2.13/features/multicluster.md @@ -34,7 +34,7 @@ full observability, security and routing features of Linkerd apply uniformly to both in-cluster and cluster-calls, and the application does not need to distinguish between those situations. -![Overview](/docs/images/multicluster/feature-overview.svg "Overview") +![Overview](/images/docs/multicluster/feature-overview.svg 'Overview') Linkerd's multi-cluster functionality is implemented by two components: a _service mirror_ and a _gateway_. The _service mirror_ component watches a diff --git a/linkerd.io/content/2.13/features/protocol-detection.md b/linkerd.io/content/2.13/features/protocol-detection.md index f91c372d91..0f792865ae 100644 --- a/linkerd.io/content/2.13/features/protocol-detection.md +++ b/linkerd.io/content/2.13/features/protocol-detection.md @@ -76,7 +76,7 @@ configuration. If you are using one of those protocols, follow this decision tree to determine which configuration you need to apply. 
-![Decision tree](/docs/images/protocol-detection-decision-tree.png) +![Decision tree](/images/docs/protocol-detection-decision-tree.png) ## Marking ports as opaque diff --git a/linkerd.io/content/2.13/getting-started/_index.md b/linkerd.io/content/2.13/getting-started/_index.md index def8d01ff3..2b8572baa6 100644 --- a/linkerd.io/content/2.13/getting-started/_index.md +++ b/linkerd.io/content/2.13/getting-started/_index.md @@ -243,7 +243,7 @@ linkerd viz dashboard & You should see a screen like this: -![The Linkerd dashboard in action](/docs/images/getting-started/viz-empty-dashboard.png "The Linkerd dashboard in action") +![The Linkerd dashboard in action](/images/docs/getting-started/viz-empty-dashboard.png 'The Linkerd dashboard in action') Click around, explore, and have fun! For extra credit, see if you can find the live metrics for each Emojivoto component, and determine which one has a partial diff --git a/linkerd.io/content/2.13/reference/architecture.md b/linkerd.io/content/2.13/reference/architecture.md index ce99afc121..a16ce3aee9 100644 --- a/linkerd.io/content/2.13/reference/architecture.md +++ b/linkerd.io/content/2.13/reference/architecture.md @@ -16,7 +16,7 @@ with the control plane for configuration. Linkerd also provides a **CLI** that can be used to interact with the control and data planes. -![Linkerd's architecture](/docs/images/architecture/control-plane.png "Linkerd's architecture") +![Linkerd's architecture](/images/docs/architecture/control-plane.png "Linkerd's architecture") ## CLI diff --git a/linkerd.io/content/2.13/reference/iptables.md b/linkerd.io/content/2.13/reference/iptables.md index 67a7ea89de..a91a6d0d4d 100644 --- a/linkerd.io/content/2.13/reference/iptables.md +++ b/linkerd.io/content/2.13/reference/iptables.md @@ -31,7 +31,7 @@ The redirect chain will be configured with two more rules: Based on these two rules, there are two possible paths that an inbound packet can take, both of which are outlined below. 
-![Inbound iptables chain traversal](/docs/images/iptables/iptables-fig2-1.png "Inbound iptables chain traversal") +![Inbound iptables chain traversal](/images/docs/iptables/iptables-fig2-1.png 'Inbound iptables chain traversal') The packet will arrive on the `PREROUTING` chain and will be immediately routed to the redirect chain. If its destination port matches any of the inbound ports @@ -79,7 +79,7 @@ configured: been produced by the service, so it should be forwarded to its destination by the proxy. -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-2.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-2.png 'Outbound iptables chain traversal') A packet produced by the service will first hit the `OUTPUT` chain; from here, it will be sent to our own output chain for processing. The first rule it @@ -113,7 +113,7 @@ in the pod. This scenario would typically apply when: - The destination is a port bound on localhost (regardless of which container it belongs to). -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-3.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-3.png 'Outbound iptables chain traversal') When the application targets itself through its pod's IP (or loopback address), the packets will traverse the two output chains. The first rule will be skipped, @@ -138,7 +138,7 @@ inbound side to account for outbound packets that are sent locally. is not guaranteed that the destination will be local. The packet follows an unusual path, as depicted in the diagram below. 
-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-4.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-4.png 'Outbound iptables chain traversal') When the packet first traverses the output chains, it will follow the same path an outbound packet would normally take. In such a scenario, the packet's diff --git a/linkerd.io/content/2.13/tasks/books.md b/linkerd.io/content/2.13/tasks/books.md index 3ff8c5fe97..737c0c621b 100644 --- a/linkerd.io/content/2.13/tasks/books.md +++ b/linkerd.io/content/2.13/tasks/books.md @@ -21,7 +21,7 @@ the other services. There are three services: For demo purposes, the app comes with a simple traffic generator. The overall topology looks like this: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') ## Prerequisites @@ -69,7 +69,7 @@ kubectl -n booksapp port-forward svc/webapp 7000 & Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') Unfortunately, there is an error in the app: if you click _Add Book_, it will fail 50% of the time. This is a classic case of non-obvious, intermittent @@ -78,7 +78,7 @@ debug. Kubernetes itself cannot detect or surface this error. From Kubernetes's perspective, it looks like everything's fine, but you know the application is returning errors. 
-![Failure](/docs/images/books/failure.png "Failure") +![Failure](/images/docs/books/failure.png 'Failure') ## Add Linkerd to the service @@ -109,7 +109,7 @@ out the Linkerd dashboard, run: linkerd viz dashboard & ``` -![Dashboard](/docs/images/books/dashboard.png "Dashboard") +![Dashboard](/images/docs/books/dashboard.png 'Dashboard') Select `booksapp` from the namespace dropdown and click on the [Deployments](http://localhost:50750/namespaces/booksapp/deployments) workload. @@ -127,7 +127,7 @@ has two outgoing dependencies: `authors` and `book`. One is the service for pulling in author information and the other is the service for pulling in book information. -![Detail](/docs/images/books/webapp-detail.png "Detail") +![Detail](/images/docs/books/webapp-detail.png 'Detail') A failure in a dependent service may be exactly what’s causing the errors that `webapp` is returning (and the errors you as a user can see when you click). We @@ -135,7 +135,7 @@ can see that the `books` service is also failing. Let’s scroll a little furthe down the page, we’ll see a live list of all traffic endpoints that `webapp` is receiving. This is interesting: -![Top](/docs/images/books/top.png "Top") +![Top](/images/docs/books/top.png 'Top') Aha! We can see that inbound traffic coming from the `webapp` service going to the `books` service is failing a significant percentage of the time. That could @@ -143,7 +143,7 @@ explain why `webapp` was throwing intermittent failures. Let’s click on the ta (🔬) icon and then on the Start button to look at the actual request and response stream. -![Tap](/docs/images/books/tap.png "Tap") +![Tap](/images/docs/books/tap.png 'Tap') Indeed, many of these requests are returning 500’s. 
diff --git a/linkerd.io/content/2.13/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.13/tasks/configuring-per-route-policy.md index 018f3a706a..84d53e4237 100644 --- a/linkerd.io/content/2.13/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.13/tasks/configuring-per-route-policy.md @@ -64,7 +64,7 @@ $ kubectl -n booksapp port-forward svc/webapp 7000 & Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png 'Frontend') +![Frontend](/images/docs/books/frontend.png 'Frontend') ## Creating a Server resource @@ -309,7 +309,7 @@ web UI, we may notice that something is amiss. Attempting to delete an author results in a "not found" error in the web UI: -![Not found](/docs/images/books/delete-404.png "Not found") +![Not found](/images/docs/books/delete-404.png 'Not found') and similarly, adding a new author takes us to an error page. @@ -354,7 +354,7 @@ EOF What happens if we try to delete an author _now_? We still see a failure, but a different one: -![Internal server error](/docs/images/books/delete-503.png "Internal server error") +![Internal server error](/images/docs/books/delete-503.png 'Internal server error') This is because we have created a _route_ matching `DELETE`, `PUT`, and `POST` requests, but we haven't _authorized_ requests to that route. 
Running the @@ -411,11 +411,11 @@ in this case, we only authenticate the `webapp` deployment's `ServiceAccount` Now, if we attempt to delete an author in the frontend once again, we can: -![Author deleted](/docs/images/books/delete-ok.png "Author deleted") +![Author deleted](/images/docs/books/delete-ok.png 'Author deleted') Similarly, we can now create a new author successfully, as well: -![Author created](/docs/images/books/create-ok.png "Author created") +![Author created](/images/docs/books/create-ok.png 'Author created') Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: diff --git a/linkerd.io/content/2.13/tasks/debugging-your-service.md b/linkerd.io/content/2.13/tasks/debugging-your-service.md index fa22bef005..d1009cfe76 100644 --- a/linkerd.io/content/2.13/tasks/debugging-your-service.md +++ b/linkerd.io/content/2.13/tasks/debugging-your-service.md @@ -17,12 +17,12 @@ command), you should see all the resources in the `emojivoto` namespace, including the deployments. Each deployment running Linkerd shows success rate, requests per second and latency percentiles. -![Top Level Metrics](/docs/images/debugging/stat.png "Top Level Metrics") +![Top Level Metrics](/images/docs/debugging/stat.png 'Top Level Metrics') That's pretty neat, but the first thing you might notice is that the success rate is well below 100%! Click on `web` and let's dig in. -![Deployment Detail](/docs/images/debugging/octopus.png "Deployment Detail") +![Deployment Detail](/images/docs/debugging/octopus.png 'Deployment Detail') You should now be looking at the Deployment page for the web deployment. The first thing you'll see here is that the web deployment is taking traffic from @@ -38,7 +38,7 @@ returning. Let's scroll a little further down the page, we'll see a live list of all traffic that is incoming to _and_ outgoing from `web`. 
This is interesting: -![Top](/docs/images/debugging/web-top.png "Top") +![Top](/images/docs/debugging/web-top.png 'Top') There are two calls that are not at 100%: the first is vote-bot's call to the `/api/vote` endpoint. The second is the `VoteDoughnut` call from the web @@ -54,7 +54,7 @@ the requests are failing with a is a common error response as you can see from [the code][code]. Linkerd is aware of gRPC's response classification without any other configuration! -![Tap](/docs/images/debugging/web-tap.png "Tap") +![Tap](/images/docs/debugging/web-tap.png 'Tap') At this point, we have everything required to get the endpoint fixed and restore the overall health of our applications. diff --git a/linkerd.io/content/2.13/tasks/distributed-tracing.md b/linkerd.io/content/2.13/tasks/distributed-tracing.md index 9527f5d825..ae409acb27 100644 --- a/linkerd.io/content/2.13/tasks/distributed-tracing.md +++ b/linkerd.io/content/2.13/tasks/distributed-tracing.md @@ -21,7 +21,7 @@ To use distributed tracing, you'll need to: In the case of emojivoto, once all these steps are complete there will be a topology that looks like: -![Topology](/docs/images/tracing/tracing-topology.svg "Topology") +![Topology](/images/docs/tracing/tracing-topology.svg 'Topology') ## Prerequisites @@ -101,17 +101,17 @@ up in Jaeger. To get to the UI, run: linkerd jaeger dashboard ``` -![Jaeger](/docs/images/tracing/jaeger-empty.png "Jaeger") +![Jaeger](/images/docs/tracing/jaeger-empty.png 'Jaeger') You can search for any service in the dropdown and click Find Traces. `vote-bot` is a great way to get started. -![Search](/docs/images/tracing/jaeger-search.png "Search") +![Search](/images/docs/tracing/jaeger-search.png 'Search') Clicking on a specific trace will provide all the details, you'll be able to see the spans for every proxy! 
-![Search](/docs/images/tracing/example-trace.png "Search") +![Search](/images/docs/tracing/example-trace.png 'Search') There sure are a lot of `linkerd-proxy` spans in that output. Internally, the proxy has a server and client side. When a request goes through the proxy, it is @@ -127,7 +127,7 @@ meta-data as trace attributes, users can directly jump into related resources traces directly from the linkerd-web dashboard by clicking the Jaeger icon in the Metrics Table, as shown below: -![Linkerd-Jaeger](/docs/images/tracing/linkerd-jaeger-ui.png "Linkerd-Jaeger") +![Linkerd-Jaeger](/images/docs/tracing/linkerd-jaeger-ui.png 'Linkerd-Jaeger') To obtain that functionality you need to install (or upgrade) the Linkerd-Viz extension specifying the service exposing the Jaeger UI. By default, this would diff --git a/linkerd.io/content/2.13/tasks/fault-injection.md b/linkerd.io/content/2.13/tasks/fault-injection.md index c1797c2c8b..6f49cfc97c 100644 --- a/linkerd.io/content/2.13/tasks/fault-injection.md +++ b/linkerd.io/content/2.13/tasks/fault-injection.md @@ -12,7 +12,7 @@ return whatever responses you want - 500s, timeouts or even crazy payloads. The [books demo](books/) is a great way to show off this behavior. The overall topology looks like: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') In this guide, you will split some of the requests from `webapp` to `books`. Most requests will end up at the correct `books` destination, however some of diff --git a/linkerd.io/content/2.13/tasks/flagger.md b/linkerd.io/content/2.13/tasks/flagger.md index 862a77363a..0dccc00bb4 100644 --- a/linkerd.io/content/2.13/tasks/flagger.md +++ b/linkerd.io/content/2.13/tasks/flagger.md @@ -70,7 +70,7 @@ orchestrates it. A load generator simply makes it easier to execute the rollout as there needs to be some kind of active traffic to complete the operation. 
Together, these components have a topology that looks like: -![Topology](/docs/images/canary/simple-topology.svg "Topology") +![Topology](/images/docs/canary/simple-topology.svg 'Topology') To add these components to your cluster and include them in the Linkerd [data plane](../reference/architecture/#data-plane), run: @@ -173,7 +173,7 @@ podinfo-primary ClusterIP 10.7.249.63 9898/TCP 23m At this point, the topology looks a little like: -![Initialized](/docs/images/canary/initialized.svg "Initialized") +![Initialized](/images/docs/canary/initialized.svg 'Initialized') {{< note >}} @@ -219,7 +219,7 @@ kubectl -n test get ev --watch While an update is occurring, the resources and traffic will look like this at a high level: -![Ongoing](/docs/images/canary/ongoing.svg "Ongoing") +![Ongoing](/images/docs/canary/ongoing.svg 'Ongoing') After the update is complete, this picture will go back to looking just like the figure from the previous section. @@ -268,7 +268,7 @@ For something a little more visual, you can use the dashboard. Start it by running `linkerd viz dashboard` and then look at the detail page for the [podinfo traffic split](http://localhost:50750/namespaces/test/trafficsplits/podinfo). -![Dashboard](/docs/images/canary/traffic-split.png "Dashboard") +![Dashboard](/images/docs/canary/traffic-split.png 'Dashboard') ### Browser diff --git a/linkerd.io/content/2.13/tasks/gitops.md b/linkerd.io/content/2.13/tasks/gitops.md index a1a96f1dd3..715eda20a4 100644 --- a/linkerd.io/content/2.13/tasks/gitops.md +++ b/linkerd.io/content/2.13/tasks/gitops.md @@ -22,7 +22,7 @@ the [auto proxy injection](../features/proxy-injection/) feature into your workflow. Finally, this guide conclude with steps to upgrade Linkerd to a newer version following a GitOps workflow. 
-![Linkerd GitOps workflow](/docs/images/gitops/architecture.png 'Linkerd GitOps workflow') +![Linkerd GitOps workflow](/images/docs/gitops/architecture.png 'Linkerd GitOps workflow') The software and tools used in this guide are selected for demonstration purposes only. Feel free to choose others that are most suited for your @@ -184,7 +184,7 @@ argocd proj get demo On the dashboard: -![New project in Argo CD dashboard](/docs/images/gitops/dashboard-project.png 'New project in Argo CD dashboard') +![New project in Argo CD dashboard](/images/docs/gitops/dashboard-project.png 'New project in Argo CD dashboard') ### Deploy the applications @@ -215,7 +215,7 @@ Sync the `main` application: argocd app sync main ``` -![Synchronize the main application](/docs/images/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') +![Synchronize the main application](/images/docs/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') Notice that only the `main` application is synchronized. @@ -237,7 +237,7 @@ for deploy in "cert-manager" "cert-manager-cainjector" "cert-manager-webhook"; \ done ``` -![Synchronize the cert-manager application](/docs/images/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') +![Synchronize the cert-manager application](/images/docs/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') ### Deploy Sealed Secrets @@ -253,7 +253,7 @@ Confirm that sealed-secrets is running: kubectl -n kube-system rollout status deploy/sealed-secrets ``` -![Synchronize the sealed-secrets application](/docs/images/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') +![Synchronize the sealed-secrets application](/images/docs/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') ### Create mTLS trust anchor @@ -348,7 +348,7 @@ Git server earlier. 
{{< /note >}} -![Synchronize the linkerd-bootstrap application](/docs/images/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') +![Synchronize the linkerd-bootstrap application](/images/docs/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') SealedSecrets should have created a secret containing the decrypted trust anchor. Retrieve the decrypted trust anchor from the secret: @@ -380,7 +380,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Empty default trust anchor](/docs/images/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') +![Empty default trust anchor](/images/docs/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') We will override this parameter in the `linkerd` application with the value of `${trust_anchor}`. @@ -442,7 +442,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Override mTLS trust anchor](/docs/images/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') +![Override mTLS trust anchor](/images/docs/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') Synchronize the `linkerd-crds` and `linkerd-control-plane` applications: @@ -457,7 +457,7 @@ Check that Linkerd is ready: linkerd check ``` -![Synchronize Linkerd](/docs/images/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') +![Synchronize Linkerd](/images/docs/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') ### Test with emojivoto @@ -475,7 +475,7 @@ for deploy in "emoji" "vote-bot" "voting" "web" ; \ done ``` -![Synchronize emojivoto](/docs/images/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') +![Synchronize emojivoto](/images/docs/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') ### Upgrade 
Linkerd to 2.13.1 diff --git a/linkerd.io/content/2.13/tasks/multicluster.md b/linkerd.io/content/2.13/tasks/multicluster.md index f1af09cd21..7d871bbad9 100644 --- a/linkerd.io/content/2.13/tasks/multicluster.md +++ b/linkerd.io/content/2.13/tasks/multicluster.md @@ -13,6 +13,7 @@ between services that live on different clusters. At a high level, you will: + 1. [Install Linkerd and Linkerd Viz](#install-linkerd-and-linkerd-viz) on two clusters with a shared trust anchor. 1. [Prepare](#preparing-your-cluster) the clusters. @@ -21,7 +22,7 @@ At a high level, you will: 1. [Export](#exporting-the-services) the demo services, to control visibility. 1. [Verify](#security) the security of your clusters. 1. [Split traffic](#traffic-splitting) from pods on the source cluster (`west`) - to the target cluster (`east`) +to the target cluster (`east`) ## Prerequisites @@ -52,7 +53,7 @@ At a high level, you will: ## Install Linkerd and Linkerd Viz -![install](/docs/images/multicluster/install.svg "Two Clusters") +![install](/images/docs/multicluster/install.svg 'Two Clusters') Linkerd requires a shared [trust anchor](generate-certificates/#trust-anchor-certificate) to exist between @@ -138,7 +139,7 @@ done ## Preparing your cluster -![preparation](/docs/images/multicluster/prep-overview.svg "Preparation") +![preparation](/images/docs/multicluster/prep-overview.svg 'Preparation') In order to route traffic between clusters, Linkerd leverages Kubernetes services so that your application code does not need to change and there is @@ -161,7 +162,7 @@ for ctx in west east; do done ``` -![install](/docs/images/multicluster/components.svg "Components") +![install](/images/docs/multicluster/components.svg 'Components') Installed into the `linkerd-multicluster` namespace, the gateway is a simple [pause container](https://github.com/linkerd/linkerd2/blob/main/multicluster/charts/linkerd-multicluster/templates/gateway.yaml#L3) @@ -200,7 +201,7 @@ mirroring services. 
We'll want to link the clusters together now! ## Linking the clusters -![link-clusters](/docs/images/multicluster/link-flow.svg "Link") +![link-clusters](/images/docs/multicluster/link-flow.svg 'Link') For `west` to mirror services from `east`, the `west` cluster needs to have credentials so that it can watch for services in `east` to be exported. You'd @@ -258,7 +259,7 @@ use the `--api-server-address` flag for `link`. ## Installing the test services -![test-services](/docs/images/multicluster/example-topology.svg "Topology") +![test-services](/images/docs/multicluster/example-topology.svg 'Topology') It is time to test this all out! The first step is to add some services that we can mirror. To add these to both clusters, you can run: @@ -285,7 +286,7 @@ To see what it looks like from the `west` cluster right now, you can run: kubectl --context=west -n test port-forward svc/frontend 8080 ``` -![west-podinfo](/docs/images/multicluster/west-podinfo.gif "West Podinfo") +![west-podinfo](/images/docs/multicluster/west-podinfo.gif 'West Podinfo') With the podinfo landing page available at [http://localhost:8080](http://localhost:8080), you can see how it looks in the @@ -377,7 +378,7 @@ the [grafana install instructions](grafana/) first to have a working grafana provisioned with Linkerd dashboards). 
You can get to it by running `linkerd --context=west viz dashboard` and going to -![grafana-dashboard](/docs/images/multicluster/grafana-dashboard.png "Grafana") +![grafana-dashboard](/images/docs/multicluster/grafana-dashboard.png 'Grafana') ## Security @@ -416,7 +417,7 @@ kubectl --context=west -n test run -it --rm --image=alpine:3 test -- \ ## Traffic Splitting -![with-split](/docs/images/multicluster/with-split.svg "Traffic Split") +![with-split](/images/docs/multicluster/with-split.svg 'Traffic Split') It is pretty useful to have services automatically show up in clusters and be able to explicitly address them, however that only covers one use case for @@ -460,7 +461,7 @@ both clusters.Alternatively, for the command line approach, `curl localhost:8080` will give you a message that greets from `west` and `east`. -![podinfo-split](/docs/images/multicluster/split-podinfo.gif "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/split-podinfo.gif 'Cross Cluster Podinfo') You can also watch what's happening with metrics. To see the source side of things (`west`), you can run: @@ -481,7 +482,7 @@ linkerd --context=east -n test viz stat \ There's even a dashboard! Run `linkerd viz dashboard` and send your browser to [localhost:50750](http://localhost:50750/namespaces/test/trafficsplits/podinfo). 
-![podinfo-split](/docs/images/multicluster/ts-dashboard.png "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/ts-dashboard.png 'Cross Cluster Podinfo') ## Cleanup diff --git a/linkerd.io/content/2.14/_index.md b/linkerd.io/content/2.14/_index.md index 6c28dc82e3..11f8979d57 100644 --- a/linkerd.io/content/2.14/_index.md +++ b/linkerd.io/content/2.14/_index.md @@ -2,10 +2,12 @@ title: Docs cascade: type: docs + params: + noIndex: true # Redirect type: _default layout: redirect params: - redirect: ./overview + redirect: ./getting-started --- diff --git a/linkerd.io/content/2.14/checks/index.md b/linkerd.io/content/2.14/checks/index.md index fc6ac87de5..5f2ae8ee80 100644 --- a/linkerd.io/content/2.14/checks/index.md +++ b/linkerd.io/content/2.14/checks/index.md @@ -6,5 +6,5 @@ type: _default layout: redirect params: unlisted: true - redirect: /2/tasks/troubleshooting/ + redirect: /docs/tasks/troubleshooting/ --- diff --git a/linkerd.io/content/2.14/features/dashboard.md b/linkerd.io/content/2.14/features/dashboard.md index d66a157726..c69f941be4 100644 --- a/linkerd.io/content/2.14/features/dashboard.md +++ b/linkerd.io/content/2.14/features/dashboard.md @@ -49,7 +49,7 @@ health of specific service routes. One way to pull it up is by running `linkerd viz dashboard` from the command line. 
-![Top Line Metrics](/docs/images/architecture/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/architecture/stat.png 'Top Line Metrics') ## Grafana @@ -96,13 +96,13 @@ linkerd -n emojivoto viz tap deploy/web All of this functionality is also available in the dashboard, if you would like to use your browser instead: -![Top Line Metrics](/docs/images/getting-started/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/getting-started/stat.png 'Top Line Metrics') -![Deployment Detail](/docs/images/getting-started/inbound-outbound.png "Deployment Detail") +![Deployment Detail](/images/docs/getting-started/inbound-outbound.png 'Deployment Detail') -![Top](/docs/images/getting-started/top.png "Top") +![Top](/images/docs/getting-started/top.png 'Top') -![Tap](/docs/images/getting-started/tap.png "Tap") +![Tap](/images/docs/getting-started/tap.png 'Tap') ## Futher reading diff --git a/linkerd.io/content/2.14/features/distributed-tracing.md b/linkerd.io/content/2.14/features/distributed-tracing.md index 2d111828f9..0a8833fe64 100644 --- a/linkerd.io/content/2.14/features/distributed-tracing.md +++ b/linkerd.io/content/2.14/features/distributed-tracing.md @@ -26,13 +26,13 @@ For example, Linkerd can display a live topology of all incoming and outgoing dependencies for a service, without requiring distributed tracing or any other such application modification: -![The Linkerd dashboard showing an automatically generated topology graph](/docs/images/books/webapp-detail.png "The Linkerd dashboard showing an automatically generated topology graph") +![The Linkerd dashboard showing an automatically generated topology graph](/images/docs/books/webapp-detail.png 'The Linkerd dashboard showing an automatically generated topology graph') Likewise, Linkerd can provide golden metrics per service and per _route_, again without requiring distributed tracing or any other such application modification: -![Linkerd dashboard showing an automatically generated route 
metrics](/docs/images/books/webapp-routes.png "Linkerd dashboard showing an automatically generated route metrics") +![Linkerd dashboard showing an automatically generated route metrics](/images/docs/books/webapp-routes.png 'Linkerd dashboard showing an automatically generated route metrics') ## Using distributed tracing diff --git a/linkerd.io/content/2.14/features/multicluster.md b/linkerd.io/content/2.14/features/multicluster.md index db9dfd65fe..ef63610e75 100644 --- a/linkerd.io/content/2.14/features/multicluster.md +++ b/linkerd.io/content/2.14/features/multicluster.md @@ -43,7 +43,7 @@ the _Foo_ service as if it were on the local cluster. Linkerd supports two basic forms of multi-cluster communication: hierarchical and flat. -![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png) +![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png) ### Hierarchical networks diff --git a/linkerd.io/content/2.14/features/protocol-detection.md b/linkerd.io/content/2.14/features/protocol-detection.md index f91c372d91..0f792865ae 100644 --- a/linkerd.io/content/2.14/features/protocol-detection.md +++ b/linkerd.io/content/2.14/features/protocol-detection.md @@ -76,7 +76,7 @@ configuration. If you are using one of those protocols, follow this decision tree to determine which configuration you need to apply. 
-![Decision tree](/docs/images/protocol-detection-decision-tree.png) +![Decision tree](/images/docs/protocol-detection-decision-tree.png) ## Marking ports as opaque diff --git a/linkerd.io/content/2.14/getting-started/_index.md b/linkerd.io/content/2.14/getting-started/_index.md index e8904d1b26..818a918b7c 100644 --- a/linkerd.io/content/2.14/getting-started/_index.md +++ b/linkerd.io/content/2.14/getting-started/_index.md @@ -250,7 +250,7 @@ linkerd viz dashboard & You should see a screen like this: -![The Linkerd dashboard in action](/docs/images/getting-started/viz-empty-dashboard.png "The Linkerd dashboard in action") +![The Linkerd dashboard in action](/images/docs/getting-started/viz-empty-dashboard.png 'The Linkerd dashboard in action') Click around, explore, and have fun! For extra credit, see if you can find the live metrics for each Emojivoto component, and determine which one has a partial diff --git a/linkerd.io/content/2.14/reference/architecture.md b/linkerd.io/content/2.14/reference/architecture.md index ce99afc121..a16ce3aee9 100644 --- a/linkerd.io/content/2.14/reference/architecture.md +++ b/linkerd.io/content/2.14/reference/architecture.md @@ -16,7 +16,7 @@ with the control plane for configuration. Linkerd also provides a **CLI** that can be used to interact with the control and data planes. -![Linkerd's architecture](/docs/images/architecture/control-plane.png "Linkerd's architecture") +![Linkerd's architecture](/images/docs/architecture/control-plane.png "Linkerd's architecture") ## CLI diff --git a/linkerd.io/content/2.14/reference/iptables.md b/linkerd.io/content/2.14/reference/iptables.md index 67a7ea89de..a91a6d0d4d 100644 --- a/linkerd.io/content/2.14/reference/iptables.md +++ b/linkerd.io/content/2.14/reference/iptables.md @@ -31,7 +31,7 @@ The redirect chain will be configured with two more rules: Based on these two rules, there are two possible paths that an inbound packet can take, both of which are outlined below. 
-![Inbound iptables chain traversal](/docs/images/iptables/iptables-fig2-1.png "Inbound iptables chain traversal") +![Inbound iptables chain traversal](/images/docs/iptables/iptables-fig2-1.png 'Inbound iptables chain traversal') The packet will arrive on the `PREROUTING` chain and will be immediately routed to the redirect chain. If its destination port matches any of the inbound ports @@ -79,7 +79,7 @@ configured: been produced by the service, so it should be forwarded to its destination by the proxy. -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-2.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-2.png 'Outbound iptables chain traversal') A packet produced by the service will first hit the `OUTPUT` chain; from here, it will be sent to our own output chain for processing. The first rule it @@ -113,7 +113,7 @@ in the pod. This scenario would typically apply when: - The destination is a port bound on localhost (regardless of which container it belongs to). -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-3.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-3.png 'Outbound iptables chain traversal') When the application targets itself through its pod's IP (or loopback address), the packets will traverse the two output chains. The first rule will be skipped, @@ -138,7 +138,7 @@ inbound side to account for outbound packets that are sent locally. is not guaranteed that the destination will be local. The packet follows an unusual path, as depicted in the diagram below. 
-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-4.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-4.png 'Outbound iptables chain traversal') When the packet first traverses the output chains, it will follow the same path an outbound packet would normally take. In such a scenario, the packet's diff --git a/linkerd.io/content/2.14/reference/multicluster.md b/linkerd.io/content/2.14/reference/multicluster.md index 14de05f034..e95ec0217c 100644 --- a/linkerd.io/content/2.14/reference/multicluster.md +++ b/linkerd.io/content/2.14/reference/multicluster.md @@ -16,7 +16,7 @@ gateway): These modes can be mixed and matched. -![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png) +![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png) Hierarchical mode places a bare minimum of requirements on the underlying network, as it only requires that the gateway IP be reachable. However, flat diff --git a/linkerd.io/content/2.14/tasks/books.md b/linkerd.io/content/2.14/tasks/books.md index dfced757e7..0dc3d97763 100644 --- a/linkerd.io/content/2.14/tasks/books.md +++ b/linkerd.io/content/2.14/tasks/books.md @@ -21,7 +21,7 @@ the other services. There are three services: For demo purposes, the app comes with a simple traffic generator. The overall topology looks like this: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') ## Prerequisites @@ -72,7 +72,7 @@ connection" messages for the rest of the exercise.) Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') Unfortunately, there is an error in the app: if you click _Add Book_, it will fail 50% of the time. 
This is a classic case of non-obvious, intermittent @@ -81,7 +81,7 @@ debug. Kubernetes itself cannot detect or surface this error. From Kubernetes's perspective, it looks like everything's fine, but you know the application is returning errors. -![Failure](/docs/images/books/failure.png "Failure") +![Failure](/images/docs/books/failure.png 'Failure') ## Add Linkerd to the service @@ -112,7 +112,7 @@ out the Linkerd dashboard, run: linkerd viz dashboard & ``` -![Dashboard](/docs/images/books/dashboard.png "Dashboard") +![Dashboard](/images/docs/books/dashboard.png 'Dashboard') Select `booksapp` from the namespace dropdown and click on the [Deployments](http://localhost:50750/namespaces/booksapp/deployments) workload. @@ -130,7 +130,7 @@ has two outgoing dependencies: `authors` and `book`. One is the service for pulling in author information and the other is the service for pulling in book information. -![Detail](/docs/images/books/webapp-detail.png "Detail") +![Detail](/images/docs/books/webapp-detail.png 'Detail') A failure in a dependent service may be exactly what’s causing the errors that `webapp` is returning (and the errors you as a user can see when you click). We @@ -138,7 +138,7 @@ can see that the `books` service is also failing. Let’s scroll a little furthe down the page, we’ll see a live list of all traffic endpoints that `webapp` is receiving. This is interesting: -![Top](/docs/images/books/top.png "Top") +![Top](/images/docs/books/top.png 'Top') Aha! We can see that inbound traffic coming from the `webapp` service going to the `books` service is failing a significant percentage of the time. That could @@ -146,7 +146,7 @@ explain why `webapp` was throwing intermittent failures. Let’s click on the ta (🔬) icon and then on the Start button to look at the actual request and response stream. -![Tap](/docs/images/books/tap.png "Tap") +![Tap](/images/docs/books/tap.png 'Tap') Indeed, many of these requests are returning 500’s. 
diff --git a/linkerd.io/content/2.14/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.14/tasks/configuring-per-route-policy.md index a5c8b5c2ef..9a895dd5d5 100644 --- a/linkerd.io/content/2.14/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.14/tasks/configuring-per-route-policy.md @@ -64,7 +64,7 @@ $ kubectl -n booksapp port-forward svc/webapp 7000 & Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') ## Creating a Server resource @@ -330,7 +330,7 @@ web UI, we may notice that something is amiss. Attempting to delete an author results in a "not found" error in the web UI: -![Not found](/docs/images/books/delete-404.png "Not found") +![Not found](/images/docs/books/delete-404.png 'Not found') and similarly, adding a new author takes us to an error page. @@ -375,7 +375,7 @@ EOF What happens if we try to delete an author _now_? We still see a failure, but a different one: -![Internal server error](/docs/images/books/delete-503.png "Internal server error") +![Internal server error](/images/docs/books/delete-503.png 'Internal server error') This is because we have created a _route_ matching `DELETE`, `PUT`, and `POST` requests, but we haven't _authorized_ requests to that route. 
Running the @@ -432,11 +432,11 @@ in this case, we only authenticate the `webapp` deployment's `ServiceAccount` Now, if we attempt to delete an author in the frontend once again, we can: -![Author deleted](/docs/images/books/delete-ok.png "Author deleted") +![Author deleted](/images/docs/books/delete-ok.png 'Author deleted') Similarly, we can now create a new author successfully, as well: -![Author created](/docs/images/books/create-ok.png "Author created") +![Author created](/images/docs/books/create-ok.png 'Author created') Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: diff --git a/linkerd.io/content/2.14/tasks/debugging-your-service.md b/linkerd.io/content/2.14/tasks/debugging-your-service.md index fa22bef005..d1009cfe76 100644 --- a/linkerd.io/content/2.14/tasks/debugging-your-service.md +++ b/linkerd.io/content/2.14/tasks/debugging-your-service.md @@ -17,12 +17,12 @@ command), you should see all the resources in the `emojivoto` namespace, including the deployments. Each deployment running Linkerd shows success rate, requests per second and latency percentiles. -![Top Level Metrics](/docs/images/debugging/stat.png "Top Level Metrics") +![Top Level Metrics](/images/docs/debugging/stat.png 'Top Level Metrics') That's pretty neat, but the first thing you might notice is that the success rate is well below 100%! Click on `web` and let's dig in. -![Deployment Detail](/docs/images/debugging/octopus.png "Deployment Detail") +![Deployment Detail](/images/docs/debugging/octopus.png 'Deployment Detail') You should now be looking at the Deployment page for the web deployment. The first thing you'll see here is that the web deployment is taking traffic from @@ -38,7 +38,7 @@ returning. Let's scroll a little further down the page, we'll see a live list of all traffic that is incoming to _and_ outgoing from `web`. 
This is interesting: -![Top](/docs/images/debugging/web-top.png "Top") +![Top](/images/docs/debugging/web-top.png 'Top') There are two calls that are not at 100%: the first is vote-bot's call to the `/api/vote` endpoint. The second is the `VoteDoughnut` call from the web @@ -54,7 +54,7 @@ the requests are failing with a is a common error response as you can see from [the code][code]. Linkerd is aware of gRPC's response classification without any other configuration! -![Tap](/docs/images/debugging/web-tap.png "Tap") +![Tap](/images/docs/debugging/web-tap.png 'Tap') At this point, we have everything required to get the endpoint fixed and restore the overall health of our applications. diff --git a/linkerd.io/content/2.14/tasks/distributed-tracing.md b/linkerd.io/content/2.14/tasks/distributed-tracing.md index 9527f5d825..ae409acb27 100644 --- a/linkerd.io/content/2.14/tasks/distributed-tracing.md +++ b/linkerd.io/content/2.14/tasks/distributed-tracing.md @@ -21,7 +21,7 @@ To use distributed tracing, you'll need to: In the case of emojivoto, once all these steps are complete there will be a topology that looks like: -![Topology](/docs/images/tracing/tracing-topology.svg "Topology") +![Topology](/images/docs/tracing/tracing-topology.svg 'Topology') ## Prerequisites @@ -101,17 +101,17 @@ up in Jaeger. To get to the UI, run: linkerd jaeger dashboard ``` -![Jaeger](/docs/images/tracing/jaeger-empty.png "Jaeger") +![Jaeger](/images/docs/tracing/jaeger-empty.png 'Jaeger') You can search for any service in the dropdown and click Find Traces. `vote-bot` is a great way to get started. -![Search](/docs/images/tracing/jaeger-search.png "Search") +![Search](/images/docs/tracing/jaeger-search.png 'Search') Clicking on a specific trace will provide all the details, you'll be able to see the spans for every proxy! 
-![Search](/docs/images/tracing/example-trace.png "Search") +![Search](/images/docs/tracing/example-trace.png 'Search') There sure are a lot of `linkerd-proxy` spans in that output. Internally, the proxy has a server and client side. When a request goes through the proxy, it is @@ -127,7 +127,7 @@ meta-data as trace attributes, users can directly jump into related resources traces directly from the linkerd-web dashboard by clicking the Jaeger icon in the Metrics Table, as shown below: -![Linkerd-Jaeger](/docs/images/tracing/linkerd-jaeger-ui.png "Linkerd-Jaeger") +![Linkerd-Jaeger](/images/docs/tracing/linkerd-jaeger-ui.png 'Linkerd-Jaeger') To obtain that functionality you need to install (or upgrade) the Linkerd-Viz extension specifying the service exposing the Jaeger UI. By default, this would diff --git a/linkerd.io/content/2.14/tasks/fault-injection.md b/linkerd.io/content/2.14/tasks/fault-injection.md index 64caf8cb46..00e00598c1 100644 --- a/linkerd.io/content/2.14/tasks/fault-injection.md +++ b/linkerd.io/content/2.14/tasks/fault-injection.md @@ -12,7 +12,7 @@ return whatever responses you want - 500s, timeouts or even crazy payloads. The [books demo](books/) is a great way to show off this behavior. The overall topology looks like: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') In this guide, you will split some of the requests from `webapp` to `books`. Most requests will end up at the correct `books` destination, however some of diff --git a/linkerd.io/content/2.14/tasks/flagger.md b/linkerd.io/content/2.14/tasks/flagger.md index 85c5cc625b..3f775f9b7a 100644 --- a/linkerd.io/content/2.14/tasks/flagger.md +++ b/linkerd.io/content/2.14/tasks/flagger.md @@ -69,7 +69,7 @@ orchestrates it. A load generator simply makes it easier to execute the rollout as there needs to be some kind of active traffic to complete the operation. 
Together, these components have a topology that looks like:
-![Topology](/docs/images/canary/simple-topology.svg "Topology")
+![Topology](/images/docs/canary/simple-topology.svg 'Topology')
To add these components to your cluster and include them in the Linkerd [data plane](../reference/architecture/#data-plane), run:
@@ -192,7 +192,7 @@ metadata: name: podinfo namespace: test spec:
- provider: "smi:v1alpha2"
+ provider: 'smi:v1alpha2'
targetRef: apiVersion: apps/v1 kind: Deployment
@@ -254,7 +254,7 @@ podinfo-primary ClusterIP 10.7.249.63 9898/TCP 23m At this point, the topology looks a little like:
-![Initialized](/docs/images/canary/initialized.svg "Initialized")
+![Initialized](/images/docs/canary/initialized.svg 'Initialized')
{{< note >}}
@@ -300,7 +300,7 @@ kubectl -n test get ev --watch While an update is occurring, the resources and traffic will look like this at a high level:
-![Ongoing](/docs/images/canary/ongoing.svg "Ongoing")
+![Ongoing](/images/docs/canary/ongoing.svg 'Ongoing')
After the update is complete, this picture will go back to looking just like the figure from the previous section.
diff --git a/linkerd.io/content/2.14/tasks/gitops.md b/linkerd.io/content/2.14/tasks/gitops.md
index cb6db15ee0..0a36fe704a 100644
--- a/linkerd.io/content/2.14/tasks/gitops.md
+++ b/linkerd.io/content/2.14/tasks/gitops.md
@@ -22,7 +22,7 @@ the [auto proxy injection](../features/proxy-injection/) feature into your workflow. Finally, this guide concludes with steps to upgrade Linkerd to a newer version following a GitOps workflow.
-![Linkerd GitOps workflow](/docs/images/gitops/architecture.png 'Linkerd GitOps workflow')
+![Linkerd GitOps workflow](/images/docs/gitops/architecture.png 'Linkerd GitOps workflow')
The software and tools used in this guide are selected for demonstration purposes only.
Feel free to choose others that are most suited for your @@ -184,7 +184,7 @@ argocd proj get demo On the dashboard: -![New project in Argo CD dashboard](/docs/images/gitops/dashboard-project.png 'New project in Argo CD dashboard') +![New project in Argo CD dashboard](/images/docs/gitops/dashboard-project.png 'New project in Argo CD dashboard') ### Deploy the applications @@ -215,7 +215,7 @@ Sync the `main` application: argocd app sync main ``` -![Synchronize the main application](/docs/images/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') +![Synchronize the main application](/images/docs/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') Notice that only the `main` application is synchronized. @@ -237,7 +237,7 @@ for deploy in "cert-manager" "cert-manager-cainjector" "cert-manager-webhook"; \ done ``` -![Synchronize the cert-manager application](/docs/images/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') +![Synchronize the cert-manager application](/images/docs/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') ### Deploy Sealed Secrets @@ -253,7 +253,7 @@ Confirm that sealed-secrets is running: kubectl -n kube-system rollout status deploy/sealed-secrets ``` -![Synchronize the sealed-secrets application](/docs/images/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') +![Synchronize the sealed-secrets application](/images/docs/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') ### Create mTLS trust anchor @@ -348,7 +348,7 @@ Git server earlier. 
{{< /note >}} -![Synchronize the linkerd-bootstrap application](/docs/images/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') +![Synchronize the linkerd-bootstrap application](/images/docs/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') SealedSecrets should have created a secret containing the decrypted trust anchor. Retrieve the decrypted trust anchor from the secret: @@ -380,7 +380,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Empty default trust anchor](/docs/images/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') +![Empty default trust anchor](/images/docs/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') We will override this parameter in the `linkerd` application with the value of `${trust_anchor}`. @@ -442,7 +442,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Override mTLS trust anchor](/docs/images/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') +![Override mTLS trust anchor](/images/docs/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') Synchronize the `linkerd-crds` and `linkerd-control-plane` applications: @@ -457,7 +457,7 @@ Check that Linkerd is ready: linkerd check ``` -![Synchronize Linkerd](/docs/images/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') +![Synchronize Linkerd](/images/docs/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') ### Test with emojivoto @@ -475,7 +475,7 @@ for deploy in "emoji" "vote-bot" "voting" "web" ; \ done ``` -![Synchronize emojivoto](/docs/images/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') +![Synchronize emojivoto](/images/docs/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') ### Upgrade 
Linkerd to 2.14.1 diff --git a/linkerd.io/content/2.14/tasks/multicluster.md b/linkerd.io/content/2.14/tasks/multicluster.md index f1af09cd21..7d871bbad9 100644 --- a/linkerd.io/content/2.14/tasks/multicluster.md +++ b/linkerd.io/content/2.14/tasks/multicluster.md @@ -13,6 +13,7 @@ between services that live on different clusters. At a high level, you will: + 1. [Install Linkerd and Linkerd Viz](#install-linkerd-and-linkerd-viz) on two clusters with a shared trust anchor. 1. [Prepare](#preparing-your-cluster) the clusters. @@ -21,7 +22,7 @@ At a high level, you will: 1. [Export](#exporting-the-services) the demo services, to control visibility. 1. [Verify](#security) the security of your clusters. 1. [Split traffic](#traffic-splitting) from pods on the source cluster (`west`) - to the target cluster (`east`) +to the target cluster (`east`) ## Prerequisites @@ -52,7 +53,7 @@ At a high level, you will: ## Install Linkerd and Linkerd Viz -![install](/docs/images/multicluster/install.svg "Two Clusters") +![install](/images/docs/multicluster/install.svg 'Two Clusters') Linkerd requires a shared [trust anchor](generate-certificates/#trust-anchor-certificate) to exist between @@ -138,7 +139,7 @@ done ## Preparing your cluster -![preparation](/docs/images/multicluster/prep-overview.svg "Preparation") +![preparation](/images/docs/multicluster/prep-overview.svg 'Preparation') In order to route traffic between clusters, Linkerd leverages Kubernetes services so that your application code does not need to change and there is @@ -161,7 +162,7 @@ for ctx in west east; do done ``` -![install](/docs/images/multicluster/components.svg "Components") +![install](/images/docs/multicluster/components.svg 'Components') Installed into the `linkerd-multicluster` namespace, the gateway is a simple [pause container](https://github.com/linkerd/linkerd2/blob/main/multicluster/charts/linkerd-multicluster/templates/gateway.yaml#L3) @@ -200,7 +201,7 @@ mirroring services. 
We'll want to link the clusters together now! ## Linking the clusters -![link-clusters](/docs/images/multicluster/link-flow.svg "Link") +![link-clusters](/images/docs/multicluster/link-flow.svg 'Link') For `west` to mirror services from `east`, the `west` cluster needs to have credentials so that it can watch for services in `east` to be exported. You'd @@ -258,7 +259,7 @@ use the `--api-server-address` flag for `link`. ## Installing the test services -![test-services](/docs/images/multicluster/example-topology.svg "Topology") +![test-services](/images/docs/multicluster/example-topology.svg 'Topology') It is time to test this all out! The first step is to add some services that we can mirror. To add these to both clusters, you can run: @@ -285,7 +286,7 @@ To see what it looks like from the `west` cluster right now, you can run: kubectl --context=west -n test port-forward svc/frontend 8080 ``` -![west-podinfo](/docs/images/multicluster/west-podinfo.gif "West Podinfo") +![west-podinfo](/images/docs/multicluster/west-podinfo.gif 'West Podinfo') With the podinfo landing page available at [http://localhost:8080](http://localhost:8080), you can see how it looks in the @@ -377,7 +378,7 @@ the [grafana install instructions](grafana/) first to have a working grafana provisioned with Linkerd dashboards). 
You can get to it by running `linkerd --context=west viz dashboard` and going to
-![grafana-dashboard](/docs/images/multicluster/grafana-dashboard.png "Grafana")
+![grafana-dashboard](/images/docs/multicluster/grafana-dashboard.png 'Grafana')
## Security
@@ -416,7 +417,7 @@ kubectl --context=west -n test run -it --rm --image=alpine:3 test -- \
## Traffic Splitting
-![with-split](/docs/images/multicluster/with-split.svg "Traffic Split")
+![with-split](/images/docs/multicluster/with-split.svg 'Traffic Split')
It is pretty useful to have services automatically show up in clusters and be able to explicitly address them, however that only covers one use case for
@@ -460,7 +461,7 @@ both clusters. Alternatively, for the command line approach, `curl localhost:8080` will give you a message that greets from `west` and `east`.
-![podinfo-split](/docs/images/multicluster/split-podinfo.gif "Cross Cluster Podinfo")
+![podinfo-split](/images/docs/multicluster/split-podinfo.gif 'Cross Cluster Podinfo')
You can also watch what's happening with metrics. To see the source side of things (`west`), you can run:
@@ -481,7 +482,7 @@ linkerd --context=east -n test viz stat \
There's even a dashboard! Run `linkerd viz dashboard` and send your browser to [localhost:50750](http://localhost:50750/namespaces/test/trafficsplits/podinfo).
-![podinfo-split](/docs/images/multicluster/ts-dashboard.png "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/ts-dashboard.png 'Cross Cluster Podinfo') ## Cleanup diff --git a/linkerd.io/content/2.15/_index.md b/linkerd.io/content/2.15/_index.md index 6c28dc82e3..11f8979d57 100644 --- a/linkerd.io/content/2.15/_index.md +++ b/linkerd.io/content/2.15/_index.md @@ -2,10 +2,12 @@ title: Docs cascade: type: docs + params: + noIndex: true # Redirect type: _default layout: redirect params: - redirect: ./overview + redirect: ./getting-started --- diff --git a/linkerd.io/content/2.15/checks/index.md b/linkerd.io/content/2.15/checks/index.md index fc6ac87de5..5f2ae8ee80 100644 --- a/linkerd.io/content/2.15/checks/index.md +++ b/linkerd.io/content/2.15/checks/index.md @@ -6,5 +6,5 @@ type: _default layout: redirect params: unlisted: true - redirect: /2/tasks/troubleshooting/ + redirect: /docs/tasks/troubleshooting/ --- diff --git a/linkerd.io/content/2.15/features/dashboard.md b/linkerd.io/content/2.15/features/dashboard.md index d66a157726..c69f941be4 100644 --- a/linkerd.io/content/2.15/features/dashboard.md +++ b/linkerd.io/content/2.15/features/dashboard.md @@ -49,7 +49,7 @@ health of specific service routes. One way to pull it up is by running `linkerd viz dashboard` from the command line. 
-![Top Line Metrics](/docs/images/architecture/stat.png "Top Line Metrics")
+![Top Line Metrics](/images/docs/architecture/stat.png 'Top Line Metrics')
## Grafana
@@ -96,13 +96,13 @@ linkerd -n emojivoto viz tap deploy/web All of this functionality is also available in the dashboard, if you would like to use your browser instead:
-![Top Line Metrics](/docs/images/getting-started/stat.png "Top Line Metrics")
+![Top Line Metrics](/images/docs/getting-started/stat.png 'Top Line Metrics')
-![Deployment Detail](/docs/images/getting-started/inbound-outbound.png "Deployment Detail")
+![Deployment Detail](/images/docs/getting-started/inbound-outbound.png 'Deployment Detail')
-![Top](/docs/images/getting-started/top.png "Top")
+![Top](/images/docs/getting-started/top.png 'Top')
-![Tap](/docs/images/getting-started/tap.png "Tap")
+![Tap](/images/docs/getting-started/tap.png 'Tap')
## Further reading
diff --git a/linkerd.io/content/2.15/features/distributed-tracing.md b/linkerd.io/content/2.15/features/distributed-tracing.md
index 2d111828f9..0a8833fe64 100644
--- a/linkerd.io/content/2.15/features/distributed-tracing.md
+++ b/linkerd.io/content/2.15/features/distributed-tracing.md
@@ -26,13 +26,13 @@ For example, Linkerd can display a live topology of all incoming and outgoing dependencies for a service, without requiring distributed tracing or any other such application modification:
-![The Linkerd dashboard showing an automatically generated topology graph](/docs/images/books/webapp-detail.png "The Linkerd dashboard showing an automatically generated topology graph")
+![The Linkerd dashboard showing an automatically generated topology graph](/images/docs/books/webapp-detail.png 'The Linkerd dashboard showing an automatically generated topology graph')
Likewise, Linkerd can provide golden metrics per service and per _route_, again without requiring distributed tracing or any other such application modification:
-![Linkerd dashboard showing an automatically generated route
metrics](/docs/images/books/webapp-routes.png "Linkerd dashboard showing an automatically generated route metrics") +![Linkerd dashboard showing an automatically generated route metrics](/images/docs/books/webapp-routes.png 'Linkerd dashboard showing an automatically generated route metrics') ## Using distributed tracing diff --git a/linkerd.io/content/2.15/features/multicluster.md b/linkerd.io/content/2.15/features/multicluster.md index db9dfd65fe..ef63610e75 100644 --- a/linkerd.io/content/2.15/features/multicluster.md +++ b/linkerd.io/content/2.15/features/multicluster.md @@ -43,7 +43,7 @@ the _Foo_ service as if it were on the local cluster. Linkerd supports two basic forms of multi-cluster communication: hierarchical and flat. -![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png) +![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png) ### Hierarchical networks diff --git a/linkerd.io/content/2.15/features/protocol-detection.md b/linkerd.io/content/2.15/features/protocol-detection.md index f91c372d91..0f792865ae 100644 --- a/linkerd.io/content/2.15/features/protocol-detection.md +++ b/linkerd.io/content/2.15/features/protocol-detection.md @@ -76,7 +76,7 @@ configuration. If you are using one of those protocols, follow this decision tree to determine which configuration you need to apply. 
-![Decision tree](/docs/images/protocol-detection-decision-tree.png) +![Decision tree](/images/docs/protocol-detection-decision-tree.png) ## Marking ports as opaque diff --git a/linkerd.io/content/2.15/getting-started/_index.md b/linkerd.io/content/2.15/getting-started/_index.md index 3edac1391c..e0fc522542 100644 --- a/linkerd.io/content/2.15/getting-started/_index.md +++ b/linkerd.io/content/2.15/getting-started/_index.md @@ -245,7 +245,7 @@ linkerd viz dashboard & You should see a screen like this: -![The Linkerd dashboard in action](/docs/images/getting-started/viz-empty-dashboard.png "The Linkerd dashboard in action") +![The Linkerd dashboard in action](/images/docs/getting-started/viz-empty-dashboard.png 'The Linkerd dashboard in action') Click around, explore, and have fun! For extra credit, see if you can find the live metrics for each Emojivoto component, and determine which one has a partial diff --git a/linkerd.io/content/2.15/reference/architecture.md b/linkerd.io/content/2.15/reference/architecture.md index ce99afc121..a16ce3aee9 100644 --- a/linkerd.io/content/2.15/reference/architecture.md +++ b/linkerd.io/content/2.15/reference/architecture.md @@ -16,7 +16,7 @@ with the control plane for configuration. Linkerd also provides a **CLI** that can be used to interact with the control and data planes. -![Linkerd's architecture](/docs/images/architecture/control-plane.png "Linkerd's architecture") +![Linkerd's architecture](/images/docs/architecture/control-plane.png "Linkerd's architecture") ## CLI diff --git a/linkerd.io/content/2.15/reference/iptables.md b/linkerd.io/content/2.15/reference/iptables.md index 67a7ea89de..a91a6d0d4d 100644 --- a/linkerd.io/content/2.15/reference/iptables.md +++ b/linkerd.io/content/2.15/reference/iptables.md @@ -31,7 +31,7 @@ The redirect chain will be configured with two more rules: Based on these two rules, there are two possible paths that an inbound packet can take, both of which are outlined below. 
-![Inbound iptables chain traversal](/docs/images/iptables/iptables-fig2-1.png "Inbound iptables chain traversal") +![Inbound iptables chain traversal](/images/docs/iptables/iptables-fig2-1.png 'Inbound iptables chain traversal') The packet will arrive on the `PREROUTING` chain and will be immediately routed to the redirect chain. If its destination port matches any of the inbound ports @@ -79,7 +79,7 @@ configured: been produced by the service, so it should be forwarded to its destination by the proxy. -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-2.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-2.png 'Outbound iptables chain traversal') A packet produced by the service will first hit the `OUTPUT` chain; from here, it will be sent to our own output chain for processing. The first rule it @@ -113,7 +113,7 @@ in the pod. This scenario would typically apply when: - The destination is a port bound on localhost (regardless of which container it belongs to). -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-3.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-3.png 'Outbound iptables chain traversal') When the application targets itself through its pod's IP (or loopback address), the packets will traverse the two output chains. The first rule will be skipped, @@ -138,7 +138,7 @@ inbound side to account for outbound packets that are sent locally. is not guaranteed that the destination will be local. The packet follows an unusual path, as depicted in the diagram below. 
-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-4.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-4.png 'Outbound iptables chain traversal') When the packet first traverses the output chains, it will follow the same path an outbound packet would normally take. In such a scenario, the packet's diff --git a/linkerd.io/content/2.15/reference/multicluster.md b/linkerd.io/content/2.15/reference/multicluster.md index 14de05f034..e95ec0217c 100644 --- a/linkerd.io/content/2.15/reference/multicluster.md +++ b/linkerd.io/content/2.15/reference/multicluster.md @@ -16,7 +16,7 @@ gateway): These modes can be mixed and matched. -![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png) +![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png) Hierarchical mode places a bare minimum of requirements on the underlying network, as it only requires that the gateway IP be reachable. However, flat diff --git a/linkerd.io/content/2.15/tasks/books.md b/linkerd.io/content/2.15/tasks/books.md index dfced757e7..0dc3d97763 100644 --- a/linkerd.io/content/2.15/tasks/books.md +++ b/linkerd.io/content/2.15/tasks/books.md @@ -21,7 +21,7 @@ the other services. There are three services: For demo purposes, the app comes with a simple traffic generator. The overall topology looks like this: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') ## Prerequisites @@ -72,7 +72,7 @@ connection" messages for the rest of the exercise.) Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') Unfortunately, there is an error in the app: if you click _Add Book_, it will fail 50% of the time. 
This is a classic case of non-obvious, intermittent
@@ -81,7 +81,7 @@ debug. Kubernetes itself cannot detect or surface this error. From Kubernetes's perspective, it looks like everything's fine, but you know the application is returning errors.
-![Failure](/docs/images/books/failure.png "Failure")
+![Failure](/images/docs/books/failure.png 'Failure')
## Add Linkerd to the service
@@ -112,7 +112,7 @@ out the Linkerd dashboard, run:
linkerd viz dashboard &
```
-![Dashboard](/docs/images/books/dashboard.png "Dashboard")
+![Dashboard](/images/docs/books/dashboard.png 'Dashboard')
Select `booksapp` from the namespace dropdown and click on the [Deployments](http://localhost:50750/namespaces/booksapp/deployments) workload.
@@ -130,7 +130,7 @@ has two outgoing dependencies: `authors` and `book`. One is the service for pulling in author information and the other is the service for pulling in book information.
-![Detail](/docs/images/books/webapp-detail.png "Detail")
+![Detail](/images/docs/books/webapp-detail.png 'Detail')
A failure in a dependent service may be exactly what’s causing the errors that `webapp` is returning (and the errors you as a user can see when you click). We
@@ -138,7 +138,7 @@ can see that the `books` service is also failing. Let’s scroll a little further down the page, we’ll see a live list of all traffic endpoints that `webapp` is receiving. This is interesting:
-![Top](/docs/images/books/top.png "Top")
+![Top](/images/docs/books/top.png 'Top')
Aha! We can see that inbound traffic coming from the `webapp` service going to the `books` service is failing a significant percentage of the time. That could
@@ -146,7 +146,7 @@ explain why `webapp` was throwing intermittent failures. Let’s click on the tap (🔬) icon and then on the Start button to look at the actual request and response stream.
-![Tap](/docs/images/books/tap.png "Tap")
+![Tap](/images/docs/books/tap.png 'Tap')
Indeed, many of these requests are returning 500’s.
diff --git a/linkerd.io/content/2.15/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.15/tasks/configuring-per-route-policy.md index a5c8b5c2ef..9a895dd5d5 100644 --- a/linkerd.io/content/2.15/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.15/tasks/configuring-per-route-policy.md @@ -64,7 +64,7 @@ $ kubectl -n booksapp port-forward svc/webapp 7000 & Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') ## Creating a Server resource @@ -330,7 +330,7 @@ web UI, we may notice that something is amiss. Attempting to delete an author results in a "not found" error in the web UI: -![Not found](/docs/images/books/delete-404.png "Not found") +![Not found](/images/docs/books/delete-404.png 'Not found') and similarly, adding a new author takes us to an error page. @@ -375,7 +375,7 @@ EOF What happens if we try to delete an author _now_? We still see a failure, but a different one: -![Internal server error](/docs/images/books/delete-503.png "Internal server error") +![Internal server error](/images/docs/books/delete-503.png 'Internal server error') This is because we have created a _route_ matching `DELETE`, `PUT`, and `POST` requests, but we haven't _authorized_ requests to that route. 
Running the @@ -432,11 +432,11 @@ in this case, we only authenticate the `webapp` deployment's `ServiceAccount` Now, if we attempt to delete an author in the frontend once again, we can: -![Author deleted](/docs/images/books/delete-ok.png "Author deleted") +![Author deleted](/images/docs/books/delete-ok.png 'Author deleted') Similarly, we can now create a new author successfully, as well: -![Author created](/docs/images/books/create-ok.png "Author created") +![Author created](/images/docs/books/create-ok.png 'Author created') Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: diff --git a/linkerd.io/content/2.15/tasks/debugging-your-service.md b/linkerd.io/content/2.15/tasks/debugging-your-service.md index fa22bef005..d1009cfe76 100644 --- a/linkerd.io/content/2.15/tasks/debugging-your-service.md +++ b/linkerd.io/content/2.15/tasks/debugging-your-service.md @@ -17,12 +17,12 @@ command), you should see all the resources in the `emojivoto` namespace, including the deployments. Each deployment running Linkerd shows success rate, requests per second and latency percentiles. -![Top Level Metrics](/docs/images/debugging/stat.png "Top Level Metrics") +![Top Level Metrics](/images/docs/debugging/stat.png 'Top Level Metrics') That's pretty neat, but the first thing you might notice is that the success rate is well below 100%! Click on `web` and let's dig in. -![Deployment Detail](/docs/images/debugging/octopus.png "Deployment Detail") +![Deployment Detail](/images/docs/debugging/octopus.png 'Deployment Detail') You should now be looking at the Deployment page for the web deployment. The first thing you'll see here is that the web deployment is taking traffic from @@ -38,7 +38,7 @@ returning. Let's scroll a little further down the page, we'll see a live list of all traffic that is incoming to _and_ outgoing from `web`. 
This is interesting: -![Top](/docs/images/debugging/web-top.png "Top") +![Top](/images/docs/debugging/web-top.png 'Top') There are two calls that are not at 100%: the first is vote-bot's call to the `/api/vote` endpoint. The second is the `VoteDoughnut` call from the web @@ -54,7 +54,7 @@ the requests are failing with a is a common error response as you can see from [the code][code]. Linkerd is aware of gRPC's response classification without any other configuration! -![Tap](/docs/images/debugging/web-tap.png "Tap") +![Tap](/images/docs/debugging/web-tap.png 'Tap') At this point, we have everything required to get the endpoint fixed and restore the overall health of our applications. diff --git a/linkerd.io/content/2.15/tasks/distributed-tracing.md b/linkerd.io/content/2.15/tasks/distributed-tracing.md index 9527f5d825..ae409acb27 100644 --- a/linkerd.io/content/2.15/tasks/distributed-tracing.md +++ b/linkerd.io/content/2.15/tasks/distributed-tracing.md @@ -21,7 +21,7 @@ To use distributed tracing, you'll need to: In the case of emojivoto, once all these steps are complete there will be a topology that looks like: -![Topology](/docs/images/tracing/tracing-topology.svg "Topology") +![Topology](/images/docs/tracing/tracing-topology.svg 'Topology') ## Prerequisites @@ -101,17 +101,17 @@ up in Jaeger. To get to the UI, run: linkerd jaeger dashboard ``` -![Jaeger](/docs/images/tracing/jaeger-empty.png "Jaeger") +![Jaeger](/images/docs/tracing/jaeger-empty.png 'Jaeger') You can search for any service in the dropdown and click Find Traces. `vote-bot` is a great way to get started. -![Search](/docs/images/tracing/jaeger-search.png "Search") +![Search](/images/docs/tracing/jaeger-search.png 'Search') Clicking on a specific trace will provide all the details, you'll be able to see the spans for every proxy! 
-![Search](/docs/images/tracing/example-trace.png "Search") +![Search](/images/docs/tracing/example-trace.png 'Search') There sure are a lot of `linkerd-proxy` spans in that output. Internally, the proxy has a server and client side. When a request goes through the proxy, it is @@ -127,7 +127,7 @@ meta-data as trace attributes, users can directly jump into related resources traces directly from the linkerd-web dashboard by clicking the Jaeger icon in the Metrics Table, as shown below: -![Linkerd-Jaeger](/docs/images/tracing/linkerd-jaeger-ui.png "Linkerd-Jaeger") +![Linkerd-Jaeger](/images/docs/tracing/linkerd-jaeger-ui.png 'Linkerd-Jaeger') To obtain that functionality you need to install (or upgrade) the Linkerd-Viz extension specifying the service exposing the Jaeger UI. By default, this would diff --git a/linkerd.io/content/2.15/tasks/fault-injection.md b/linkerd.io/content/2.15/tasks/fault-injection.md index 64caf8cb46..00e00598c1 100644 --- a/linkerd.io/content/2.15/tasks/fault-injection.md +++ b/linkerd.io/content/2.15/tasks/fault-injection.md @@ -12,7 +12,7 @@ return whatever responses you want - 500s, timeouts or even crazy payloads. The [books demo](books/) is a great way to show off this behavior. The overall topology looks like: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') In this guide, you will split some of the requests from `webapp` to `books`. Most requests will end up at the correct `books` destination, however some of diff --git a/linkerd.io/content/2.15/tasks/flagger.md b/linkerd.io/content/2.15/tasks/flagger.md index 6f6ab6fcc2..356ac3b38e 100644 --- a/linkerd.io/content/2.15/tasks/flagger.md +++ b/linkerd.io/content/2.15/tasks/flagger.md @@ -69,7 +69,7 @@ orchestrates it. A load generator simply makes it easier to execute the rollout as there needs to be some kind of active traffic to complete the operation. 
Together, these components have a topology that looks like:
-![Topology](/docs/images/canary/simple-topology.svg "Topology")
+![Topology](/images/docs/canary/simple-topology.svg 'Topology')
To add these components to your cluster and include them in the Linkerd [data plane](../reference/architecture/#data-plane), run:
@@ -213,7 +213,7 @@ podinfo-primary ClusterIP 10.7.249.63 9898/TCP 23m At this point, the topology looks a little like:
-![Initialized](/docs/images/canary/initialized.svg "Initialized")
+![Initialized](/images/docs/canary/initialized.svg 'Initialized')
{{< note >}}
@@ -259,7 +259,7 @@ kubectl -n test get ev --watch While an update is occurring, the resources and traffic will look like this at a high level:
-![Ongoing](/docs/images/canary/ongoing.svg "Ongoing")
+![Ongoing](/images/docs/canary/ongoing.svg 'Ongoing')
After the update is complete, this picture will go back to looking just like the figure from the previous section.
diff --git a/linkerd.io/content/2.15/tasks/gitops.md b/linkerd.io/content/2.15/tasks/gitops.md
index 18e90ce73c..51328396a3 100644
--- a/linkerd.io/content/2.15/tasks/gitops.md
+++ b/linkerd.io/content/2.15/tasks/gitops.md
@@ -22,7 +22,7 @@ the [auto proxy injection](../features/proxy-injection/) feature into your workflow. Finally, this guide concludes with steps to upgrade Linkerd to a newer version following a GitOps workflow.
-![Linkerd GitOps workflow](/docs/images/gitops/architecture.png 'Linkerd GitOps workflow')
+![Linkerd GitOps workflow](/images/docs/gitops/architecture.png 'Linkerd GitOps workflow')
The software and tools used in this guide are selected for demonstration purposes only.
Feel free to choose others that are most suited for your @@ -184,7 +184,7 @@ argocd proj get demo On the dashboard: -![New project in Argo CD dashboard](/docs/images/gitops/dashboard-project.png 'New project in Argo CD dashboard') +![New project in Argo CD dashboard](/images/docs/gitops/dashboard-project.png 'New project in Argo CD dashboard') ### Deploy the applications @@ -215,7 +215,7 @@ Sync the `main` application: argocd app sync main ``` -![Synchronize the main application](/docs/images/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') +![Synchronize the main application](/images/docs/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') Notice that only the `main` application is synchronized. @@ -237,7 +237,7 @@ for deploy in "cert-manager" "cert-manager-cainjector" "cert-manager-webhook"; \ done ``` -![Synchronize the cert-manager application](/docs/images/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') +![Synchronize the cert-manager application](/images/docs/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') ### Deploy Sealed Secrets @@ -253,7 +253,7 @@ Confirm that sealed-secrets is running: kubectl -n kube-system rollout status deploy/sealed-secrets ``` -![Synchronize the sealed-secrets application](/docs/images/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') +![Synchronize the sealed-secrets application](/images/docs/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') ### Create mTLS trust anchor @@ -348,7 +348,7 @@ Git server earlier. 
{{< /note >}} -![Synchronize the linkerd-bootstrap application](/docs/images/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') +![Synchronize the linkerd-bootstrap application](/images/docs/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') SealedSecrets should have created a secret containing the decrypted trust anchor. Retrieve the decrypted trust anchor from the secret: @@ -380,7 +380,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Empty default trust anchor](/docs/images/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') +![Empty default trust anchor](/images/docs/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') We will override this parameter in the `linkerd` application with the value of `${trust_anchor}`. @@ -395,7 +395,7 @@ Ensure that the multi-line string is indented correctly. 
E.g., source: chart: linkerd-control-plane repoURL: https://helm.linkerd.io/edge - targetRevision: {{% chart-version %}} + targetRevision: {{% chart-version %}} helm: parameters: - name: identityTrustAnchorsPEM @@ -442,7 +442,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Override mTLS trust anchor](/docs/images/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') +![Override mTLS trust anchor](/images/docs/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') Synchronize the `linkerd-crds` and `linkerd-control-plane` applications: @@ -457,7 +457,7 @@ Check that Linkerd is ready: linkerd check ``` -![Synchronize Linkerd](/docs/images/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') +![Synchronize Linkerd](/images/docs/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') ### Test with emojivoto @@ -475,7 +475,7 @@ for deploy in "emoji" "vote-bot" "voting" "web" ; \ done ``` -![Synchronize emojivoto](/docs/images/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') +![Synchronize emojivoto](/images/docs/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') ### Upgrade Linkerd diff --git a/linkerd.io/content/2.15/tasks/multicluster.md b/linkerd.io/content/2.15/tasks/multicluster.md index f1af09cd21..7d871bbad9 100644 --- a/linkerd.io/content/2.15/tasks/multicluster.md +++ b/linkerd.io/content/2.15/tasks/multicluster.md @@ -13,6 +13,7 @@ between services that live on different clusters. At a high level, you will: + 1. [Install Linkerd and Linkerd Viz](#install-linkerd-and-linkerd-viz) on two clusters with a shared trust anchor. 1. [Prepare](#preparing-your-cluster) the clusters. @@ -21,7 +22,7 @@ At a high level, you will: 1. [Export](#exporting-the-services) the demo services, to control visibility. 1. [Verify](#security) the security of your clusters. 1. 
[Split traffic](#traffic-splitting) from pods on the source cluster (`west`) - to the target cluster (`east`) +to the target cluster (`east`) ## Prerequisites @@ -52,7 +53,7 @@ At a high level, you will: ## Install Linkerd and Linkerd Viz -![install](/docs/images/multicluster/install.svg "Two Clusters") +![install](/images/docs/multicluster/install.svg 'Two Clusters') Linkerd requires a shared [trust anchor](generate-certificates/#trust-anchor-certificate) to exist between @@ -138,7 +139,7 @@ done ## Preparing your cluster -![preparation](/docs/images/multicluster/prep-overview.svg "Preparation") +![preparation](/images/docs/multicluster/prep-overview.svg 'Preparation') In order to route traffic between clusters, Linkerd leverages Kubernetes services so that your application code does not need to change and there is @@ -161,7 +162,7 @@ for ctx in west east; do done ``` -![install](/docs/images/multicluster/components.svg "Components") +![install](/images/docs/multicluster/components.svg 'Components') Installed into the `linkerd-multicluster` namespace, the gateway is a simple [pause container](https://github.com/linkerd/linkerd2/blob/main/multicluster/charts/linkerd-multicluster/templates/gateway.yaml#L3) @@ -200,7 +201,7 @@ mirroring services. We'll want to link the clusters together now! ## Linking the clusters -![link-clusters](/docs/images/multicluster/link-flow.svg "Link") +![link-clusters](/images/docs/multicluster/link-flow.svg 'Link') For `west` to mirror services from `east`, the `west` cluster needs to have credentials so that it can watch for services in `east` to be exported. You'd @@ -258,7 +259,7 @@ use the `--api-server-address` flag for `link`. ## Installing the test services -![test-services](/docs/images/multicluster/example-topology.svg "Topology") +![test-services](/images/docs/multicluster/example-topology.svg 'Topology') It is time to test this all out! The first step is to add some services that we can mirror. 
To add these to both clusters, you can run: @@ -285,7 +286,7 @@ To see what it looks like from the `west` cluster right now, you can run: kubectl --context=west -n test port-forward svc/frontend 8080 ``` -![west-podinfo](/docs/images/multicluster/west-podinfo.gif "West Podinfo") +![west-podinfo](/images/docs/multicluster/west-podinfo.gif 'West Podinfo') With the podinfo landing page available at [http://localhost:8080](http://localhost:8080), you can see how it looks in the @@ -377,7 +378,7 @@ the [grafana install instructions](grafana/) first to have a working grafana provisioned with Linkerd dashboards). You can get to it by running `linkerd --context=west viz dashboard` and going to -![grafana-dashboard](/docs/images/multicluster/grafana-dashboard.png "Grafana") +![grafana-dashboard](/images/docs/multicluster/grafana-dashboard.png 'Grafana') ## Security @@ -416,7 +417,7 @@ kubectl --context=west -n test run -it --rm --image=alpine:3 test -- \ ## Traffic Splitting -![with-split](/docs/images/multicluster/with-split.svg "Traffic Split") +![with-split](/images/docs/multicluster/with-split.svg 'Traffic Split') It is pretty useful to have services automatically show up in clusters and be able to explicitly address them, however that only covers one use case for @@ -460,7 +461,7 @@ both clusters.Alternatively, for the command line approach, `curl localhost:8080` will give you a message that greets from `west` and `east`. -![podinfo-split](/docs/images/multicluster/split-podinfo.gif "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/split-podinfo.gif 'Cross Cluster Podinfo') You can also watch what's happening with metrics. To see the source side of things (`west`), you can run: @@ -481,7 +482,7 @@ linkerd --context=east -n test viz stat \ There's even a dashboard! Run `linkerd viz dashboard` and send your browser to [localhost:50750](http://localhost:50750/namespaces/test/trafficsplits/podinfo). 
-![podinfo-split](/docs/images/multicluster/ts-dashboard.png "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/ts-dashboard.png 'Cross Cluster Podinfo') ## Cleanup diff --git a/linkerd.io/content/2.16/_index.md b/linkerd.io/content/2.16/_index.md index 6c28dc82e3..11f8979d57 100644 --- a/linkerd.io/content/2.16/_index.md +++ b/linkerd.io/content/2.16/_index.md @@ -2,10 +2,12 @@ title: Docs cascade: type: docs + params: + noIndex: true # Redirect type: _default layout: redirect params: - redirect: ./overview + redirect: ./getting-started --- diff --git a/linkerd.io/content/2.16/checks/index.md b/linkerd.io/content/2.16/checks/index.md index fc6ac87de5..5f2ae8ee80 100644 --- a/linkerd.io/content/2.16/checks/index.md +++ b/linkerd.io/content/2.16/checks/index.md @@ -6,5 +6,5 @@ type: _default layout: redirect params: unlisted: true - redirect: /2/tasks/troubleshooting/ + redirect: /docs/tasks/troubleshooting/ --- diff --git a/linkerd.io/content/2.16/features/dashboard.md b/linkerd.io/content/2.16/features/dashboard.md index d66a157726..c69f941be4 100644 --- a/linkerd.io/content/2.16/features/dashboard.md +++ b/linkerd.io/content/2.16/features/dashboard.md @@ -49,7 +49,7 @@ health of specific service routes. One way to pull it up is by running `linkerd viz dashboard` from the command line. 
-![Top Line Metrics](/docs/images/architecture/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/architecture/stat.png 'Top Line Metrics') ## Grafana @@ -96,13 +96,13 @@ linkerd -n emojivoto viz tap deploy/web All of this functionality is also available in the dashboard, if you would like to use your browser instead: -![Top Line Metrics](/docs/images/getting-started/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/getting-started/stat.png 'Top Line Metrics') -![Deployment Detail](/docs/images/getting-started/inbound-outbound.png "Deployment Detail") +![Deployment Detail](/images/docs/getting-started/inbound-outbound.png 'Deployment Detail') -![Top](/docs/images/getting-started/top.png "Top") +![Top](/images/docs/getting-started/top.png 'Top') -![Tap](/docs/images/getting-started/tap.png "Tap") +![Tap](/images/docs/getting-started/tap.png 'Tap') ## Futher reading diff --git a/linkerd.io/content/2.16/features/distributed-tracing.md b/linkerd.io/content/2.16/features/distributed-tracing.md index 2d111828f9..0a8833fe64 100644 --- a/linkerd.io/content/2.16/features/distributed-tracing.md +++ b/linkerd.io/content/2.16/features/distributed-tracing.md @@ -26,13 +26,13 @@ For example, Linkerd can display a live topology of all incoming and outgoing dependencies for a service, without requiring distributed tracing or any other such application modification: -![The Linkerd dashboard showing an automatically generated topology graph](/docs/images/books/webapp-detail.png "The Linkerd dashboard showing an automatically generated topology graph") +![The Linkerd dashboard showing an automatically generated topology graph](/images/docs/books/webapp-detail.png 'The Linkerd dashboard showing an automatically generated topology graph') Likewise, Linkerd can provide golden metrics per service and per _route_, again without requiring distributed tracing or any other such application modification: -![Linkerd dashboard showing an automatically generated route 
metrics](/docs/images/books/webapp-routes.png "Linkerd dashboard showing an automatically generated route metrics") +![Linkerd dashboard showing an automatically generated route metrics](/images/docs/books/webapp-routes.png 'Linkerd dashboard showing an automatically generated route metrics') ## Using distributed tracing diff --git a/linkerd.io/content/2.16/features/multicluster.md b/linkerd.io/content/2.16/features/multicluster.md index db9dfd65fe..ef63610e75 100644 --- a/linkerd.io/content/2.16/features/multicluster.md +++ b/linkerd.io/content/2.16/features/multicluster.md @@ -43,7 +43,7 @@ the _Foo_ service as if it were on the local cluster. Linkerd supports two basic forms of multi-cluster communication: hierarchical and flat. -![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png) +![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png) ### Hierarchical networks diff --git a/linkerd.io/content/2.16/features/protocol-detection.md b/linkerd.io/content/2.16/features/protocol-detection.md index def058b7da..150535351a 100644 --- a/linkerd.io/content/2.16/features/protocol-detection.md +++ b/linkerd.io/content/2.16/features/protocol-detection.md @@ -84,7 +84,7 @@ configuration. If you are using one of those protocols, follow this decision tree to determine which configuration you need to apply. 
-![Decision tree](/docs/images/protocol-detection-decision-tree.png) +![Decision tree](/images/docs/protocol-detection-decision-tree.png) ## Marking ports as opaque diff --git a/linkerd.io/content/2.16/getting-started/_index.md b/linkerd.io/content/2.16/getting-started/_index.md index 3edac1391c..e0fc522542 100644 --- a/linkerd.io/content/2.16/getting-started/_index.md +++ b/linkerd.io/content/2.16/getting-started/_index.md @@ -245,7 +245,7 @@ linkerd viz dashboard & You should see a screen like this: -![The Linkerd dashboard in action](/docs/images/getting-started/viz-empty-dashboard.png "The Linkerd dashboard in action") +![The Linkerd dashboard in action](/images/docs/getting-started/viz-empty-dashboard.png 'The Linkerd dashboard in action') Click around, explore, and have fun! For extra credit, see if you can find the live metrics for each Emojivoto component, and determine which one has a partial diff --git a/linkerd.io/content/2.16/reference/architecture.md b/linkerd.io/content/2.16/reference/architecture.md index f3a4bcdfca..8dc18491ef 100644 --- a/linkerd.io/content/2.16/reference/architecture.md +++ b/linkerd.io/content/2.16/reference/architecture.md @@ -16,7 +16,7 @@ with the control plane for configuration. Linkerd also provides a **CLI** that can be used to interact with the control and data planes. -![Linkerd's architecture](/docs/images/architecture/control-plane.png "Linkerd's architecture") +![Linkerd's architecture](/images/docs/architecture/control-plane.png "Linkerd's architecture") ## CLI diff --git a/linkerd.io/content/2.16/reference/iptables.md b/linkerd.io/content/2.16/reference/iptables.md index 67a7ea89de..a91a6d0d4d 100644 --- a/linkerd.io/content/2.16/reference/iptables.md +++ b/linkerd.io/content/2.16/reference/iptables.md @@ -31,7 +31,7 @@ The redirect chain will be configured with two more rules: Based on these two rules, there are two possible paths that an inbound packet can take, both of which are outlined below. 
-![Inbound iptables chain traversal](/docs/images/iptables/iptables-fig2-1.png "Inbound iptables chain traversal") +![Inbound iptables chain traversal](/images/docs/iptables/iptables-fig2-1.png 'Inbound iptables chain traversal') The packet will arrive on the `PREROUTING` chain and will be immediately routed to the redirect chain. If its destination port matches any of the inbound ports @@ -79,7 +79,7 @@ configured: been produced by the service, so it should be forwarded to its destination by the proxy. -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-2.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-2.png 'Outbound iptables chain traversal') A packet produced by the service will first hit the `OUTPUT` chain; from here, it will be sent to our own output chain for processing. The first rule it @@ -113,7 +113,7 @@ in the pod. This scenario would typically apply when: - The destination is a port bound on localhost (regardless of which container it belongs to). -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-3.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-3.png 'Outbound iptables chain traversal') When the application targets itself through its pod's IP (or loopback address), the packets will traverse the two output chains. The first rule will be skipped, @@ -138,7 +138,7 @@ inbound side to account for outbound packets that are sent locally. is not guaranteed that the destination will be local. The packet follows an unusual path, as depicted in the diagram below. 
-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-4.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-4.png 'Outbound iptables chain traversal') When the packet first traverses the output chains, it will follow the same path an outbound packet would normally take. In such a scenario, the packet's diff --git a/linkerd.io/content/2.16/reference/multicluster.md b/linkerd.io/content/2.16/reference/multicluster.md index 14de05f034..e95ec0217c 100644 --- a/linkerd.io/content/2.16/reference/multicluster.md +++ b/linkerd.io/content/2.16/reference/multicluster.md @@ -16,7 +16,7 @@ gateway): These modes can be mixed and matched. -![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png) +![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png) Hierarchical mode places a bare minimum of requirements on the underlying network, as it only requires that the gateway IP be reachable. However, flat diff --git a/linkerd.io/content/2.16/tasks/books.md b/linkerd.io/content/2.16/tasks/books.md index 91a284a805..e1911221d1 100644 --- a/linkerd.io/content/2.16/tasks/books.md +++ b/linkerd.io/content/2.16/tasks/books.md @@ -21,7 +21,7 @@ the other services. There are three services: For demo purposes, the app comes with a simple traffic generator. The overall topology looks like this: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') ## Prerequisites @@ -71,7 +71,7 @@ connection" messages for the rest of the exercise.) Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') Unfortunately, there is an error in the app: if you click _Add Book_, it will fail 50% of the time. 
This is a classic case of non-obvious, intermittent @@ -80,7 +80,7 @@ debug. Kubernetes itself cannot detect or surface this error. From Kubernetes's perspective, it looks like everything's fine, but you know the application is returning errors. -![Failure](/docs/images/books/failure.png "Failure") +![Failure](/images/docs/books/failure.png 'Failure') ## Add Linkerd to the service diff --git a/linkerd.io/content/2.16/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.16/tasks/configuring-per-route-policy.md index a5c8b5c2ef..9a895dd5d5 100644 --- a/linkerd.io/content/2.16/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.16/tasks/configuring-per-route-policy.md @@ -64,7 +64,7 @@ $ kubectl -n booksapp port-forward svc/webapp 7000 & Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') ## Creating a Server resource @@ -330,7 +330,7 @@ web UI, we may notice that something is amiss. Attempting to delete an author results in a "not found" error in the web UI: -![Not found](/docs/images/books/delete-404.png "Not found") +![Not found](/images/docs/books/delete-404.png 'Not found') and similarly, adding a new author takes us to an error page. @@ -375,7 +375,7 @@ EOF What happens if we try to delete an author _now_? We still see a failure, but a different one: -![Internal server error](/docs/images/books/delete-503.png "Internal server error") +![Internal server error](/images/docs/books/delete-503.png 'Internal server error') This is because we have created a _route_ matching `DELETE`, `PUT`, and `POST` requests, but we haven't _authorized_ requests to that route. 
Running the @@ -432,11 +432,11 @@ in this case, we only authenticate the `webapp` deployment's `ServiceAccount` Now, if we attempt to delete an author in the frontend once again, we can: -![Author deleted](/docs/images/books/delete-ok.png "Author deleted") +![Author deleted](/images/docs/books/delete-ok.png 'Author deleted') Similarly, we can now create a new author successfully, as well: -![Author created](/docs/images/books/create-ok.png "Author created") +![Author created](/images/docs/books/create-ok.png 'Author created') Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: diff --git a/linkerd.io/content/2.16/tasks/debugging-your-service.md b/linkerd.io/content/2.16/tasks/debugging-your-service.md index fa22bef005..d1009cfe76 100644 --- a/linkerd.io/content/2.16/tasks/debugging-your-service.md +++ b/linkerd.io/content/2.16/tasks/debugging-your-service.md @@ -17,12 +17,12 @@ command), you should see all the resources in the `emojivoto` namespace, including the deployments. Each deployment running Linkerd shows success rate, requests per second and latency percentiles. -![Top Level Metrics](/docs/images/debugging/stat.png "Top Level Metrics") +![Top Level Metrics](/images/docs/debugging/stat.png 'Top Level Metrics') That's pretty neat, but the first thing you might notice is that the success rate is well below 100%! Click on `web` and let's dig in. -![Deployment Detail](/docs/images/debugging/octopus.png "Deployment Detail") +![Deployment Detail](/images/docs/debugging/octopus.png 'Deployment Detail') You should now be looking at the Deployment page for the web deployment. The first thing you'll see here is that the web deployment is taking traffic from @@ -38,7 +38,7 @@ returning. Let's scroll a little further down the page, we'll see a live list of all traffic that is incoming to _and_ outgoing from `web`. 
This is interesting: -![Top](/docs/images/debugging/web-top.png "Top") +![Top](/images/docs/debugging/web-top.png 'Top') There are two calls that are not at 100%: the first is vote-bot's call to the `/api/vote` endpoint. The second is the `VoteDoughnut` call from the web @@ -54,7 +54,7 @@ the requests are failing with a is a common error response as you can see from [the code][code]. Linkerd is aware of gRPC's response classification without any other configuration! -![Tap](/docs/images/debugging/web-tap.png "Tap") +![Tap](/images/docs/debugging/web-tap.png 'Tap') At this point, we have everything required to get the endpoint fixed and restore the overall health of our applications. diff --git a/linkerd.io/content/2.16/tasks/distributed-tracing.md b/linkerd.io/content/2.16/tasks/distributed-tracing.md index 9527f5d825..ae409acb27 100644 --- a/linkerd.io/content/2.16/tasks/distributed-tracing.md +++ b/linkerd.io/content/2.16/tasks/distributed-tracing.md @@ -21,7 +21,7 @@ To use distributed tracing, you'll need to: In the case of emojivoto, once all these steps are complete there will be a topology that looks like: -![Topology](/docs/images/tracing/tracing-topology.svg "Topology") +![Topology](/images/docs/tracing/tracing-topology.svg 'Topology') ## Prerequisites @@ -101,17 +101,17 @@ up in Jaeger. To get to the UI, run: linkerd jaeger dashboard ``` -![Jaeger](/docs/images/tracing/jaeger-empty.png "Jaeger") +![Jaeger](/images/docs/tracing/jaeger-empty.png 'Jaeger') You can search for any service in the dropdown and click Find Traces. `vote-bot` is a great way to get started. -![Search](/docs/images/tracing/jaeger-search.png "Search") +![Search](/images/docs/tracing/jaeger-search.png 'Search') Clicking on a specific trace will provide all the details, you'll be able to see the spans for every proxy! 
-![Search](/docs/images/tracing/example-trace.png "Search") +![Search](/images/docs/tracing/example-trace.png 'Search') There sure are a lot of `linkerd-proxy` spans in that output. Internally, the proxy has a server and client side. When a request goes through the proxy, it is @@ -127,7 +127,7 @@ meta-data as trace attributes, users can directly jump into related resources traces directly from the linkerd-web dashboard by clicking the Jaeger icon in the Metrics Table, as shown below: -![Linkerd-Jaeger](/docs/images/tracing/linkerd-jaeger-ui.png "Linkerd-Jaeger") +![Linkerd-Jaeger](/images/docs/tracing/linkerd-jaeger-ui.png 'Linkerd-Jaeger') To obtain that functionality you need to install (or upgrade) the Linkerd-Viz extension specifying the service exposing the Jaeger UI. By default, this would diff --git a/linkerd.io/content/2.16/tasks/fault-injection.md b/linkerd.io/content/2.16/tasks/fault-injection.md index 64caf8cb46..00e00598c1 100644 --- a/linkerd.io/content/2.16/tasks/fault-injection.md +++ b/linkerd.io/content/2.16/tasks/fault-injection.md @@ -12,7 +12,7 @@ return whatever responses you want - 500s, timeouts or even crazy payloads. The [books demo](books/) is a great way to show off this behavior. The overall topology looks like: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') In this guide, you will split some of the requests from `webapp` to `books`. Most requests will end up at the correct `books` destination, however some of diff --git a/linkerd.io/content/2.16/tasks/flagger.md b/linkerd.io/content/2.16/tasks/flagger.md index 6f6ab6fcc2..356ac3b38e 100644 --- a/linkerd.io/content/2.16/tasks/flagger.md +++ b/linkerd.io/content/2.16/tasks/flagger.md @@ -69,7 +69,7 @@ orchestrates it. A load generator simply makes it easier to execute the rollout as there needs to be some kind of active traffic to complete the operation. 
Together, these components have a topology that looks like: -![Topology](/docs/images/canary/simple-topology.svg "Topology") +![Topology](/images/docs/canary/simple-topology.svg 'Topology') To add these components to your cluster and include them in the Linkerd [data plane](../reference/architecture/#data-plane), run: @@ -213,7 +213,7 @@ podinfo-primary ClusterIP 10.7.249.63 9898/TCP 23m At this point, the topology looks a little like: -![Initialized](/docs/images/canary/initialized.svg "Initialized") +![Initialized](/images/docs/canary/initialized.svg 'Initialized') {{< note >}} @@ -259,7 +259,7 @@ kubectl -n test get ev --watch While an update is occurring, the resources and traffic will look like this at a high level: -![Ongoing](/docs/images/canary/ongoing.svg "Ongoing") +![Ongoing](/images/docs/canary/ongoing.svg 'Ongoing') After the update is complete, this picture will go back to looking just like the figure from the previous section. diff --git a/linkerd.io/content/2.16/tasks/gitops.md b/linkerd.io/content/2.16/tasks/gitops.md index 18e90ce73c..51328396a3 100644 --- a/linkerd.io/content/2.16/tasks/gitops.md +++ b/linkerd.io/content/2.16/tasks/gitops.md @@ -22,7 +22,7 @@ the [auto proxy injection](../features/proxy-injection/) feature into your workflow. Finally, this guide conclude with steps to upgrade Linkerd to a newer version following a GitOps workflow. -![Linkerd GitOps workflow](/docs/images/gitops/architecture.png 'Linkerd GitOps workflow') +![Linkerd GitOps workflow](/images/docs/gitops/architecture.png 'Linkerd GitOps workflow') The software and tools used in this guide are selected for demonstration purposes only. 
Feel free to choose others that are most suited for your @@ -184,7 +184,7 @@ argocd proj get demo On the dashboard: -![New project in Argo CD dashboard](/docs/images/gitops/dashboard-project.png 'New project in Argo CD dashboard') +![New project in Argo CD dashboard](/images/docs/gitops/dashboard-project.png 'New project in Argo CD dashboard') ### Deploy the applications @@ -215,7 +215,7 @@ Sync the `main` application: argocd app sync main ``` -![Synchronize the main application](/docs/images/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') +![Synchronize the main application](/images/docs/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') Notice that only the `main` application is synchronized. @@ -237,7 +237,7 @@ for deploy in "cert-manager" "cert-manager-cainjector" "cert-manager-webhook"; \ done ``` -![Synchronize the cert-manager application](/docs/images/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') +![Synchronize the cert-manager application](/images/docs/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') ### Deploy Sealed Secrets @@ -253,7 +253,7 @@ Confirm that sealed-secrets is running: kubectl -n kube-system rollout status deploy/sealed-secrets ``` -![Synchronize the sealed-secrets application](/docs/images/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') +![Synchronize the sealed-secrets application](/images/docs/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') ### Create mTLS trust anchor @@ -348,7 +348,7 @@ Git server earlier. 
{{< /note >}} -![Synchronize the linkerd-bootstrap application](/docs/images/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') +![Synchronize the linkerd-bootstrap application](/images/docs/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') SealedSecrets should have created a secret containing the decrypted trust anchor. Retrieve the decrypted trust anchor from the secret: @@ -380,7 +380,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Empty default trust anchor](/docs/images/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') +![Empty default trust anchor](/images/docs/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') We will override this parameter in the `linkerd` application with the value of `${trust_anchor}`. @@ -395,7 +395,7 @@ Ensure that the multi-line string is indented correctly. 
E.g., source: chart: linkerd-control-plane repoURL: https://helm.linkerd.io/edge - targetRevision: {{% chart-version %}} + targetRevision: {{% chart-version %}} helm: parameters: - name: identityTrustAnchorsPEM @@ -442,7 +442,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Override mTLS trust anchor](/docs/images/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') +![Override mTLS trust anchor](/images/docs/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') Synchronize the `linkerd-crds` and `linkerd-control-plane` applications: @@ -457,7 +457,7 @@ Check that Linkerd is ready: linkerd check ``` -![Synchronize Linkerd](/docs/images/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') +![Synchronize Linkerd](/images/docs/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') ### Test with emojivoto @@ -475,7 +475,7 @@ for deploy in "emoji" "vote-bot" "voting" "web" ; \ done ``` -![Synchronize emojivoto](/docs/images/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') +![Synchronize emojivoto](/images/docs/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') ### Upgrade Linkerd diff --git a/linkerd.io/content/2.16/tasks/multicluster.md b/linkerd.io/content/2.16/tasks/multicluster.md index f1af09cd21..7d871bbad9 100644 --- a/linkerd.io/content/2.16/tasks/multicluster.md +++ b/linkerd.io/content/2.16/tasks/multicluster.md @@ -13,6 +13,7 @@ between services that live on different clusters. At a high level, you will: + 1. [Install Linkerd and Linkerd Viz](#install-linkerd-and-linkerd-viz) on two clusters with a shared trust anchor. 1. [Prepare](#preparing-your-cluster) the clusters. @@ -21,7 +22,7 @@ At a high level, you will: 1. [Export](#exporting-the-services) the demo services, to control visibility. 1. [Verify](#security) the security of your clusters. 1. 
[Split traffic](#traffic-splitting) from pods on the source cluster (`west`) - to the target cluster (`east`) +to the target cluster (`east`) ## Prerequisites @@ -52,7 +53,7 @@ At a high level, you will: ## Install Linkerd and Linkerd Viz -![install](/docs/images/multicluster/install.svg "Two Clusters") +![install](/images/docs/multicluster/install.svg 'Two Clusters') Linkerd requires a shared [trust anchor](generate-certificates/#trust-anchor-certificate) to exist between @@ -138,7 +139,7 @@ done ## Preparing your cluster -![preparation](/docs/images/multicluster/prep-overview.svg "Preparation") +![preparation](/images/docs/multicluster/prep-overview.svg 'Preparation') In order to route traffic between clusters, Linkerd leverages Kubernetes services so that your application code does not need to change and there is @@ -161,7 +162,7 @@ for ctx in west east; do done ``` -![install](/docs/images/multicluster/components.svg "Components") +![install](/images/docs/multicluster/components.svg 'Components') Installed into the `linkerd-multicluster` namespace, the gateway is a simple [pause container](https://github.com/linkerd/linkerd2/blob/main/multicluster/charts/linkerd-multicluster/templates/gateway.yaml#L3) @@ -200,7 +201,7 @@ mirroring services. We'll want to link the clusters together now! ## Linking the clusters -![link-clusters](/docs/images/multicluster/link-flow.svg "Link") +![link-clusters](/images/docs/multicluster/link-flow.svg 'Link') For `west` to mirror services from `east`, the `west` cluster needs to have credentials so that it can watch for services in `east` to be exported. You'd @@ -258,7 +259,7 @@ use the `--api-server-address` flag for `link`. ## Installing the test services -![test-services](/docs/images/multicluster/example-topology.svg "Topology") +![test-services](/images/docs/multicluster/example-topology.svg 'Topology') It is time to test this all out! The first step is to add some services that we can mirror. 
To add these to both clusters, you can run: @@ -285,7 +286,7 @@ To see what it looks like from the `west` cluster right now, you can run: kubectl --context=west -n test port-forward svc/frontend 8080 ``` -![west-podinfo](/docs/images/multicluster/west-podinfo.gif "West Podinfo") +![west-podinfo](/images/docs/multicluster/west-podinfo.gif 'West Podinfo') With the podinfo landing page available at [http://localhost:8080](http://localhost:8080), you can see how it looks in the @@ -377,7 +378,7 @@ the [grafana install instructions](grafana/) first to have a working grafana provisioned with Linkerd dashboards). You can get to it by running `linkerd --context=west viz dashboard` and going to -![grafana-dashboard](/docs/images/multicluster/grafana-dashboard.png "Grafana") +![grafana-dashboard](/images/docs/multicluster/grafana-dashboard.png 'Grafana') ## Security @@ -416,7 +417,7 @@ kubectl --context=west -n test run -it --rm --image=alpine:3 test -- \ ## Traffic Splitting -![with-split](/docs/images/multicluster/with-split.svg "Traffic Split") +![with-split](/images/docs/multicluster/with-split.svg 'Traffic Split') It is pretty useful to have services automatically show up in clusters and be able to explicitly address them, however that only covers one use case for @@ -460,7 +461,7 @@ both clusters.Alternatively, for the command line approach, `curl localhost:8080` will give you a message that greets from `west` and `east`. -![podinfo-split](/docs/images/multicluster/split-podinfo.gif "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/split-podinfo.gif 'Cross Cluster Podinfo') You can also watch what's happening with metrics. To see the source side of things (`west`), you can run: @@ -481,7 +482,7 @@ linkerd --context=east -n test viz stat \ There's even a dashboard! Run `linkerd viz dashboard` and send your browser to [localhost:50750](http://localhost:50750/namespaces/test/trafficsplits/podinfo). 
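The cross-cluster split shown in that dashboard is driven by an SMI `TrafficSplit` resource. A minimal sketch for the podinfo example follows — the service names and even 50/50 weights are assumptions based on the walkthrough, not taken verbatim from the repo:

```yaml
# Hedged sketch: split traffic for the local `podinfo` service in the
# `test` namespace between the local backend and the mirrored
# `podinfo-east` service created by the multicluster link.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: podinfo
  namespace: test
spec:
  service: podinfo
  backends:
    - service: podinfo
      weight: 50
    - service: podinfo-east
      weight: 50
```

Applied on the source cluster (e.g. with `kubectl --context=west -n test apply -f -`), roughly half of the requests from `west` should be answered from `east`.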
-![podinfo-split](/docs/images/multicluster/ts-dashboard.png "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/ts-dashboard.png 'Cross Cluster Podinfo') ## Cleanup diff --git a/linkerd.io/content/2.17/_index.md b/linkerd.io/content/2.17/_index.md index 6c28dc82e3..11f8979d57 100644 --- a/linkerd.io/content/2.17/_index.md +++ b/linkerd.io/content/2.17/_index.md @@ -2,10 +2,12 @@ title: Docs cascade: type: docs + params: + noIndex: true # Redirect type: _default layout: redirect params: - redirect: ./overview + redirect: ./getting-started --- diff --git a/linkerd.io/content/2.17/checks/index.md b/linkerd.io/content/2.17/checks/index.md index fc6ac87de5..5f2ae8ee80 100644 --- a/linkerd.io/content/2.17/checks/index.md +++ b/linkerd.io/content/2.17/checks/index.md @@ -6,5 +6,5 @@ type: _default layout: redirect params: unlisted: true - redirect: /2/tasks/troubleshooting/ + redirect: /docs/tasks/troubleshooting/ --- diff --git a/linkerd.io/content/2.17/features/dashboard.md b/linkerd.io/content/2.17/features/dashboard.md index d66a157726..c69f941be4 100644 --- a/linkerd.io/content/2.17/features/dashboard.md +++ b/linkerd.io/content/2.17/features/dashboard.md @@ -49,7 +49,7 @@ health of specific service routes. One way to pull it up is by running `linkerd viz dashboard` from the command line. 
-![Top Line Metrics](/docs/images/architecture/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/architecture/stat.png 'Top Line Metrics') ## Grafana @@ -96,13 +96,13 @@ linkerd -n emojivoto viz tap deploy/web All of this functionality is also available in the dashboard, if you would like to use your browser instead: -![Top Line Metrics](/docs/images/getting-started/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/getting-started/stat.png 'Top Line Metrics') -![Deployment Detail](/docs/images/getting-started/inbound-outbound.png "Deployment Detail") +![Deployment Detail](/images/docs/getting-started/inbound-outbound.png 'Deployment Detail') -![Top](/docs/images/getting-started/top.png "Top") +![Top](/images/docs/getting-started/top.png 'Top') -![Tap](/docs/images/getting-started/tap.png "Tap") +![Tap](/images/docs/getting-started/tap.png 'Tap') ## Futher reading diff --git a/linkerd.io/content/2.17/features/distributed-tracing.md b/linkerd.io/content/2.17/features/distributed-tracing.md index 2d111828f9..0a8833fe64 100644 --- a/linkerd.io/content/2.17/features/distributed-tracing.md +++ b/linkerd.io/content/2.17/features/distributed-tracing.md @@ -26,13 +26,13 @@ For example, Linkerd can display a live topology of all incoming and outgoing dependencies for a service, without requiring distributed tracing or any other such application modification: -![The Linkerd dashboard showing an automatically generated topology graph](/docs/images/books/webapp-detail.png "The Linkerd dashboard showing an automatically generated topology graph") +![The Linkerd dashboard showing an automatically generated topology graph](/images/docs/books/webapp-detail.png 'The Linkerd dashboard showing an automatically generated topology graph') Likewise, Linkerd can provide golden metrics per service and per _route_, again without requiring distributed tracing or any other such application modification: -![Linkerd dashboard showing an automatically generated route 
metrics](/docs/images/books/webapp-routes.png "Linkerd dashboard showing an automatically generated route metrics") +![Linkerd dashboard showing an automatically generated route metrics](/images/docs/books/webapp-routes.png 'Linkerd dashboard showing an automatically generated route metrics') ## Using distributed tracing diff --git a/linkerd.io/content/2.17/features/multicluster.md b/linkerd.io/content/2.17/features/multicluster.md index 4a0551d226..f3313eb928 100644 --- a/linkerd.io/content/2.17/features/multicluster.md +++ b/linkerd.io/content/2.17/features/multicluster.md @@ -43,7 +43,7 @@ the _Foo_ service as if it were on the local cluster. Linkerd supports three basic forms of multi-cluster communication: hierarchical, flat, and federated. -![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png) +![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png) ### Hierarchical networks diff --git a/linkerd.io/content/2.17/features/protocol-detection.md b/linkerd.io/content/2.17/features/protocol-detection.md index def058b7da..150535351a 100644 --- a/linkerd.io/content/2.17/features/protocol-detection.md +++ b/linkerd.io/content/2.17/features/protocol-detection.md @@ -84,7 +84,7 @@ configuration. If you are using one of those protocols, follow this decision tree to determine which configuration you need to apply. 
-![Decision tree](/docs/images/protocol-detection-decision-tree.png) +![Decision tree](/images/docs/protocol-detection-decision-tree.png) ## Marking ports as opaque diff --git a/linkerd.io/content/2.17/getting-started/_index.md b/linkerd.io/content/2.17/getting-started/_index.md index 3edac1391c..e0fc522542 100644 --- a/linkerd.io/content/2.17/getting-started/_index.md +++ b/linkerd.io/content/2.17/getting-started/_index.md @@ -245,7 +245,7 @@ linkerd viz dashboard & You should see a screen like this: -![The Linkerd dashboard in action](/docs/images/getting-started/viz-empty-dashboard.png "The Linkerd dashboard in action") +![The Linkerd dashboard in action](/images/docs/getting-started/viz-empty-dashboard.png 'The Linkerd dashboard in action') Click around, explore, and have fun! For extra credit, see if you can find the live metrics for each Emojivoto component, and determine which one has a partial diff --git a/linkerd.io/content/2.17/reference/architecture.md b/linkerd.io/content/2.17/reference/architecture.md index f3a4bcdfca..8dc18491ef 100644 --- a/linkerd.io/content/2.17/reference/architecture.md +++ b/linkerd.io/content/2.17/reference/architecture.md @@ -16,7 +16,7 @@ with the control plane for configuration. Linkerd also provides a **CLI** that can be used to interact with the control and data planes. -![Linkerd's architecture](/docs/images/architecture/control-plane.png "Linkerd's architecture") +![Linkerd's architecture](/images/docs/architecture/control-plane.png "Linkerd's architecture") ## CLI diff --git a/linkerd.io/content/2.17/reference/iptables.md b/linkerd.io/content/2.17/reference/iptables.md index 67a7ea89de..a91a6d0d4d 100644 --- a/linkerd.io/content/2.17/reference/iptables.md +++ b/linkerd.io/content/2.17/reference/iptables.md @@ -31,7 +31,7 @@ The redirect chain will be configured with two more rules: Based on these two rules, there are two possible paths that an inbound packet can take, both of which are outlined below. 
-![Inbound iptables chain traversal](/docs/images/iptables/iptables-fig2-1.png "Inbound iptables chain traversal") +![Inbound iptables chain traversal](/images/docs/iptables/iptables-fig2-1.png 'Inbound iptables chain traversal') The packet will arrive on the `PREROUTING` chain and will be immediately routed to the redirect chain. If its destination port matches any of the inbound ports @@ -79,7 +79,7 @@ configured: been produced by the service, so it should be forwarded to its destination by the proxy. -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-2.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-2.png 'Outbound iptables chain traversal') A packet produced by the service will first hit the `OUTPUT` chain; from here, it will be sent to our own output chain for processing. The first rule it @@ -113,7 +113,7 @@ in the pod. This scenario would typically apply when: - The destination is a port bound on localhost (regardless of which container it belongs to). -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-3.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-3.png 'Outbound iptables chain traversal') When the application targets itself through its pod's IP (or loopback address), the packets will traverse the two output chains. The first rule will be skipped, @@ -138,7 +138,7 @@ inbound side to account for outbound packets that are sent locally. is not guaranteed that the destination will be local. The packet follows an unusual path, as depicted in the diagram below. 
-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-4.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-4.png 'Outbound iptables chain traversal') When the packet first traverses the output chains, it will follow the same path an outbound packet would normally take. In such a scenario, the packet's diff --git a/linkerd.io/content/2.17/reference/multicluster.md b/linkerd.io/content/2.17/reference/multicluster.md index 78505d107d..b5ecf7d1a8 100644 --- a/linkerd.io/content/2.17/reference/multicluster.md +++ b/linkerd.io/content/2.17/reference/multicluster.md @@ -18,7 +18,7 @@ modes: hierarchical (using a gateway), flat (without a gateway), and federated. These modes can be mixed and matched. -![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png) +![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png) Hierarchical mode places a bare minimum of requirements on the underlying network, as it only requires that the gateway IP be reachable. However, flat diff --git a/linkerd.io/content/2.17/tasks/books.md b/linkerd.io/content/2.17/tasks/books.md index b881bcbeaa..6f9460c53f 100644 --- a/linkerd.io/content/2.17/tasks/books.md +++ b/linkerd.io/content/2.17/tasks/books.md @@ -21,7 +21,7 @@ the other services. There are three services: For demo purposes, the app comes with a simple traffic generator. The overall topology looks like this: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') ## Prerequisites @@ -71,7 +71,7 @@ connection" messages for the rest of the exercise.) Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. 
-![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') Unfortunately, there is an error in the app: if you click _Add Book_, it will fail 50% of the time. This is a classic case of non-obvious, intermittent @@ -80,7 +80,7 @@ debug. Kubernetes itself cannot detect or surface this error. From Kubernetes's perspective, it looks like everything's fine, but you know the application is returning errors. -![Failure](/docs/images/books/failure.png "Failure") +![Failure](/images/docs/books/failure.png 'Failure') ## Add Linkerd to the service diff --git a/linkerd.io/content/2.17/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.17/tasks/configuring-per-route-policy.md index a5c8b5c2ef..9a895dd5d5 100644 --- a/linkerd.io/content/2.17/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.17/tasks/configuring-per-route-policy.md @@ -64,7 +64,7 @@ $ kubectl -n booksapp port-forward svc/webapp 7000 & Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') ## Creating a Server resource @@ -330,7 +330,7 @@ web UI, we may notice that something is amiss. Attempting to delete an author results in a "not found" error in the web UI: -![Not found](/docs/images/books/delete-404.png "Not found") +![Not found](/images/docs/books/delete-404.png 'Not found') and similarly, adding a new author takes us to an error page. @@ -375,7 +375,7 @@ EOF What happens if we try to delete an author _now_? We still see a failure, but a different one: -![Internal server error](/docs/images/books/delete-503.png "Internal server error") +![Internal server error](/images/docs/books/delete-503.png 'Internal server error') This is because we have created a _route_ matching `DELETE`, `PUT`, and `POST` requests, but we haven't _authorized_ requests to that route. 
Running the @@ -432,11 +432,11 @@ in this case, we only authenticate the `webapp` deployment's `ServiceAccount` Now, if we attempt to delete an author in the frontend once again, we can: -![Author deleted](/docs/images/books/delete-ok.png "Author deleted") +![Author deleted](/images/docs/books/delete-ok.png 'Author deleted') Similarly, we can now create a new author successfully, as well: -![Author created](/docs/images/books/create-ok.png "Author created") +![Author created](/images/docs/books/create-ok.png 'Author created') Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: diff --git a/linkerd.io/content/2.17/tasks/debugging-your-service.md b/linkerd.io/content/2.17/tasks/debugging-your-service.md index fa22bef005..d1009cfe76 100644 --- a/linkerd.io/content/2.17/tasks/debugging-your-service.md +++ b/linkerd.io/content/2.17/tasks/debugging-your-service.md @@ -17,12 +17,12 @@ command), you should see all the resources in the `emojivoto` namespace, including the deployments. Each deployment running Linkerd shows success rate, requests per second and latency percentiles. -![Top Level Metrics](/docs/images/debugging/stat.png "Top Level Metrics") +![Top Level Metrics](/images/docs/debugging/stat.png 'Top Level Metrics') That's pretty neat, but the first thing you might notice is that the success rate is well below 100%! Click on `web` and let's dig in. -![Deployment Detail](/docs/images/debugging/octopus.png "Deployment Detail") +![Deployment Detail](/images/docs/debugging/octopus.png 'Deployment Detail') You should now be looking at the Deployment page for the web deployment. The first thing you'll see here is that the web deployment is taking traffic from @@ -38,7 +38,7 @@ returning. Let's scroll a little further down the page, we'll see a live list of all traffic that is incoming to _and_ outgoing from `web`. 
This is interesting: -![Top](/docs/images/debugging/web-top.png "Top") +![Top](/images/docs/debugging/web-top.png 'Top') There are two calls that are not at 100%: the first is vote-bot's call to the `/api/vote` endpoint. The second is the `VoteDoughnut` call from the web @@ -54,7 +54,7 @@ the requests are failing with a is a common error response as you can see from [the code][code]. Linkerd is aware of gRPC's response classification without any other configuration! -![Tap](/docs/images/debugging/web-tap.png "Tap") +![Tap](/images/docs/debugging/web-tap.png 'Tap') At this point, we have everything required to get the endpoint fixed and restore the overall health of our applications. diff --git a/linkerd.io/content/2.17/tasks/distributed-tracing.md b/linkerd.io/content/2.17/tasks/distributed-tracing.md index d599c6b7eb..5f04b4c720 100644 --- a/linkerd.io/content/2.17/tasks/distributed-tracing.md +++ b/linkerd.io/content/2.17/tasks/distributed-tracing.md @@ -21,7 +21,7 @@ To use distributed tracing, you'll need to: In the case of emojivoto, once all these steps are complete there will be a topology that looks like: -![Topology](/docs/images/tracing/tracing-topology.svg "Topology") +![Topology](/images/docs/tracing/tracing-topology.svg 'Topology') ## Prerequisites @@ -119,17 +119,17 @@ up in Jaeger. To get to the UI, run: linkerd jaeger dashboard ``` -![Jaeger](/docs/images/tracing/jaeger-empty.png "Jaeger") +![Jaeger](/images/docs/tracing/jaeger-empty.png 'Jaeger') You can search for any service in the dropdown and click Find Traces. `vote-bot` is a great way to get started. -![Search](/docs/images/tracing/jaeger-search.png "Search") +![Search](/images/docs/tracing/jaeger-search.png 'Search') Clicking on a specific trace will provide all the details, you'll be able to see the spans for every proxy! 
-![Search](/docs/images/tracing/example-trace.png "Search") +![Search](/images/docs/tracing/example-trace.png 'Search') There sure are a lot of `linkerd-proxy` spans in that output. Internally, the proxy has a server and client side. When a request goes through the proxy, it is @@ -145,7 +145,7 @@ meta-data as trace attributes, users can directly jump into related resources traces directly from the linkerd-web dashboard by clicking the Jaeger icon in the Metrics Table, as shown below: -![Linkerd-Jaeger](/docs/images/tracing/linkerd-jaeger-ui.png "Linkerd-Jaeger") +![Linkerd-Jaeger](/images/docs/tracing/linkerd-jaeger-ui.png 'Linkerd-Jaeger') To obtain that functionality you need to install (or upgrade) the Linkerd-Viz extension specifying the service exposing the Jaeger UI. By default, this would diff --git a/linkerd.io/content/2.17/tasks/fault-injection.md b/linkerd.io/content/2.17/tasks/fault-injection.md index 1108152181..5172a374a0 100644 --- a/linkerd.io/content/2.17/tasks/fault-injection.md +++ b/linkerd.io/content/2.17/tasks/fault-injection.md @@ -12,7 +12,7 @@ return whatever responses you want - 500s, timeouts or even crazy payloads. The [books demo](books/) is a great way to show off this behavior. The overall topology looks like: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') In this guide, you will split some of the requests from `webapp` to `books`. Most requests will end up at the correct `books` destination, however some of diff --git a/linkerd.io/content/2.17/tasks/flagger.md b/linkerd.io/content/2.17/tasks/flagger.md index 6f6ab6fcc2..356ac3b38e 100644 --- a/linkerd.io/content/2.17/tasks/flagger.md +++ b/linkerd.io/content/2.17/tasks/flagger.md @@ -69,7 +69,7 @@ orchestrates it. A load generator simply makes it easier to execute the rollout as there needs to be some kind of active traffic to complete the operation. 
Together, these components have a topology that looks like: -![Topology](/docs/images/canary/simple-topology.svg "Topology") +![Topology](/images/docs/canary/simple-topology.svg 'Topology') To add these components to your cluster and include them in the Linkerd [data plane](../reference/architecture/#data-plane), run: @@ -213,7 +213,7 @@ podinfo-primary ClusterIP 10.7.249.63 9898/TCP 23m At this point, the topology looks a little like: -![Initialized](/docs/images/canary/initialized.svg "Initialized") +![Initialized](/images/docs/canary/initialized.svg 'Initialized') {{< note >}} @@ -259,7 +259,7 @@ kubectl -n test get ev --watch While an update is occurring, the resources and traffic will look like this at a high level: -![Ongoing](/docs/images/canary/ongoing.svg "Ongoing") +![Ongoing](/images/docs/canary/ongoing.svg 'Ongoing') After the update is complete, this picture will go back to looking just like the figure from the previous section. diff --git a/linkerd.io/content/2.17/tasks/gitops.md b/linkerd.io/content/2.17/tasks/gitops.md index 18e90ce73c..51328396a3 100644 --- a/linkerd.io/content/2.17/tasks/gitops.md +++ b/linkerd.io/content/2.17/tasks/gitops.md @@ -22,7 +22,7 @@ the [auto proxy injection](../features/proxy-injection/) feature into your workflow. Finally, this guide conclude with steps to upgrade Linkerd to a newer version following a GitOps workflow. -![Linkerd GitOps workflow](/docs/images/gitops/architecture.png 'Linkerd GitOps workflow') +![Linkerd GitOps workflow](/images/docs/gitops/architecture.png 'Linkerd GitOps workflow') The software and tools used in this guide are selected for demonstration purposes only. 
Feel free to choose others that are most suited for your @@ -184,7 +184,7 @@ argocd proj get demo On the dashboard: -![New project in Argo CD dashboard](/docs/images/gitops/dashboard-project.png 'New project in Argo CD dashboard') +![New project in Argo CD dashboard](/images/docs/gitops/dashboard-project.png 'New project in Argo CD dashboard') ### Deploy the applications @@ -215,7 +215,7 @@ Sync the `main` application: argocd app sync main ``` -![Synchronize the main application](/docs/images/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') +![Synchronize the main application](/images/docs/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') Notice that only the `main` application is synchronized. @@ -237,7 +237,7 @@ for deploy in "cert-manager" "cert-manager-cainjector" "cert-manager-webhook"; \ done ``` -![Synchronize the cert-manager application](/docs/images/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') +![Synchronize the cert-manager application](/images/docs/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') ### Deploy Sealed Secrets @@ -253,7 +253,7 @@ Confirm that sealed-secrets is running: kubectl -n kube-system rollout status deploy/sealed-secrets ``` -![Synchronize the sealed-secrets application](/docs/images/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') +![Synchronize the sealed-secrets application](/images/docs/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') ### Create mTLS trust anchor @@ -348,7 +348,7 @@ Git server earlier. 
{{< /note >}} -![Synchronize the linkerd-bootstrap application](/docs/images/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') +![Synchronize the linkerd-bootstrap application](/images/docs/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') SealedSecrets should have created a secret containing the decrypted trust anchor. Retrieve the decrypted trust anchor from the secret: @@ -380,7 +380,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Empty default trust anchor](/docs/images/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') +![Empty default trust anchor](/images/docs/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') We will override this parameter in the `linkerd` application with the value of `${trust_anchor}`. @@ -395,7 +395,7 @@ Ensure that the multi-line string is indented correctly. 
E.g., source: chart: linkerd-control-plane repoURL: https://helm.linkerd.io/edge - targetRevision: {{% chart-version %}} + targetRevision: { { % chart-version % } } helm: parameters: - name: identityTrustAnchorsPEM @@ -442,7 +442,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Override mTLS trust anchor](/docs/images/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') +![Override mTLS trust anchor](/images/docs/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor') Synchronize the `linkerd-crds` and `linkerd-control-plane` applications: @@ -457,7 +457,7 @@ Check that Linkerd is ready: linkerd check ``` -![Synchronize Linkerd](/docs/images/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') +![Synchronize Linkerd](/images/docs/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd') ### Test with emojivoto @@ -475,7 +475,7 @@ for deploy in "emoji" "vote-bot" "voting" "web" ; \ done ``` -![Synchronize emojivoto](/docs/images/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') +![Synchronize emojivoto](/images/docs/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto') ### Upgrade Linkerd diff --git a/linkerd.io/content/2.17/tasks/multicluster.md b/linkerd.io/content/2.17/tasks/multicluster.md index f102807f30..3416766195 100644 --- a/linkerd.io/content/2.17/tasks/multicluster.md +++ b/linkerd.io/content/2.17/tasks/multicluster.md @@ -13,6 +13,7 @@ between services that live on different clusters. At a high level, you will: + 1. [Install Linkerd and Linkerd Viz](#install-linkerd-and-linkerd-viz) on two clusters with a shared trust anchor. 1. [Prepare](#preparing-your-cluster) the clusters. @@ -21,7 +22,7 @@ At a high level, you will: 1. [Export](#exporting-the-services) the demo services, to control visibility. 1. [Verify](#security) the security of your clusters. 1. 
[Split traffic](#traffic-splitting) from pods on the source cluster (`west`) - to the target cluster (`east`) +to the target cluster (`east`) ## Prerequisites @@ -52,7 +53,7 @@ At a high level, you will: ## Install Linkerd and Linkerd Viz -![install](/docs/images/multicluster/install.svg "Two Clusters") +![install](/images/docs/multicluster/install.svg 'Two Clusters') Linkerd requires a shared [trust anchor](generate-certificates/#trust-anchor-certificate) to exist between @@ -138,7 +139,7 @@ done ## Preparing your cluster -![preparation](/docs/images/multicluster/prep-overview.svg "Preparation") +![preparation](/images/docs/multicluster/prep-overview.svg 'Preparation') In order to route traffic between clusters, Linkerd leverages Kubernetes services so that your application code does not need to change and there is @@ -161,7 +162,7 @@ for ctx in west east; do done ``` -![install](/docs/images/multicluster/components.svg "Components") +![install](/images/docs/multicluster/components.svg 'Components') Installed into the `linkerd-multicluster` namespace, the gateway is a simple [pause container](https://github.com/linkerd/linkerd2/blob/main/multicluster/charts/linkerd-multicluster/templates/gateway.yaml#L3) @@ -200,7 +201,7 @@ mirroring services. We'll want to link the clusters together now! ## Linking the clusters -![link-clusters](/docs/images/multicluster/link-flow.svg "Link") +![link-clusters](/images/docs/multicluster/link-flow.svg 'Link') For `west` to mirror services from `east`, the `west` cluster needs to have credentials so that it can watch for services in `east` to be exported. You'd @@ -258,7 +259,7 @@ use the `--api-server-address` flag for `link`. ## Installing the test services -![test-services](/docs/images/multicluster/example-topology.svg "Topology") +![test-services](/images/docs/multicluster/example-topology.svg 'Topology') It is time to test this all out! The first step is to add some services that we can mirror. 
To add these to both clusters, you can run: @@ -285,7 +286,7 @@ To see what it looks like from the `west` cluster right now, you can run: kubectl --context=west -n test port-forward svc/frontend 8080 ``` -![west-podinfo](/docs/images/multicluster/west-podinfo.gif "West Podinfo") +![west-podinfo](/images/docs/multicluster/west-podinfo.gif 'West Podinfo') With the podinfo landing page available at [http://localhost:8080](http://localhost:8080), you can see how it looks in the @@ -377,7 +378,7 @@ the [grafana install instructions](grafana/) first to have a working grafana provisioned with Linkerd dashboards). You can get to it by running `linkerd --context=west viz dashboard` and going to -![grafana-dashboard](/docs/images/multicluster/grafana-dashboard.png "Grafana") +![grafana-dashboard](/images/docs/multicluster/grafana-dashboard.png 'Grafana') ## Security @@ -416,7 +417,7 @@ kubectl --context=west -n test run -it --rm --image=alpine:3 test -- \ ## Traffic Splitting -![with-split](/docs/images/multicluster/with-split.svg "Traffic Split") +![with-split](/images/docs/multicluster/with-split.svg 'Traffic Split') It is pretty useful to have services automatically show up in clusters and be able to explicitly address them, however that only covers one use case for @@ -460,7 +461,7 @@ both clusters.Alternatively, for the command line approach, `curl localhost:8080` will give you a message that greets from `west` and `east`. -![podinfo-split](/docs/images/multicluster/split-podinfo.gif "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/split-podinfo.gif 'Cross Cluster Podinfo') You can also watch what's happening with metrics. To see the source side of things (`west`), you can run: @@ -481,7 +482,7 @@ linkerd --context=east -n test viz stat \ There's even a dashboard! Run `linkerd viz dashboard` and send your browser to [localhost:50750](http://localhost:50750/namespaces/test/trafficsplits/podinfo). 
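Service export, referenced throughout this walkthrough, is label-driven: the service-mirror controller only mirrors services that carry the export label on the target cluster. A sketch of what the exported service looks like on `east` (this is the default label selector; a link created with a custom `--selector` would differ):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  namespace: test
  labels:
    # Default selector used by `linkerd multicluster link`; services
    # carrying it are mirrored into linked source clusters as
    # `podinfo-east`.
    mirror.linkerd.io/exported: "true"
```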
-![podinfo-split](/docs/images/multicluster/ts-dashboard.png "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/ts-dashboard.png 'Cross Cluster Podinfo') ## Cleanup diff --git a/linkerd.io/content/2.18/_index.md b/linkerd.io/content/2.18/_index.md index 6c28dc82e3..11f8979d57 100644 --- a/linkerd.io/content/2.18/_index.md +++ b/linkerd.io/content/2.18/_index.md @@ -2,10 +2,12 @@ title: Docs cascade: type: docs + params: + noIndex: true # Redirect type: _default layout: redirect params: - redirect: ./overview + redirect: ./getting-started --- diff --git a/linkerd.io/content/2.18/checks/index.md b/linkerd.io/content/2.18/checks/index.md index fc6ac87de5..5f2ae8ee80 100644 --- a/linkerd.io/content/2.18/checks/index.md +++ b/linkerd.io/content/2.18/checks/index.md @@ -6,5 +6,5 @@ type: _default layout: redirect params: unlisted: true - redirect: /2/tasks/troubleshooting/ + redirect: /docs/tasks/troubleshooting/ --- diff --git a/linkerd.io/content/2.18/features/dashboard.md b/linkerd.io/content/2.18/features/dashboard.md index d66a157726..c69f941be4 100644 --- a/linkerd.io/content/2.18/features/dashboard.md +++ b/linkerd.io/content/2.18/features/dashboard.md @@ -49,7 +49,7 @@ health of specific service routes. One way to pull it up is by running `linkerd viz dashboard` from the command line. 
-![Top Line Metrics](/docs/images/architecture/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/architecture/stat.png 'Top Line Metrics') ## Grafana @@ -96,13 +96,13 @@ linkerd -n emojivoto viz tap deploy/web All of this functionality is also available in the dashboard, if you would like to use your browser instead: -![Top Line Metrics](/docs/images/getting-started/stat.png "Top Line Metrics") +![Top Line Metrics](/images/docs/getting-started/stat.png 'Top Line Metrics') -![Deployment Detail](/docs/images/getting-started/inbound-outbound.png "Deployment Detail") +![Deployment Detail](/images/docs/getting-started/inbound-outbound.png 'Deployment Detail') -![Top](/docs/images/getting-started/top.png "Top") +![Top](/images/docs/getting-started/top.png 'Top') -![Tap](/docs/images/getting-started/tap.png "Tap") +![Tap](/images/docs/getting-started/tap.png 'Tap') ## Futher reading diff --git a/linkerd.io/content/2.18/features/distributed-tracing.md b/linkerd.io/content/2.18/features/distributed-tracing.md index 2d111828f9..0a8833fe64 100644 --- a/linkerd.io/content/2.18/features/distributed-tracing.md +++ b/linkerd.io/content/2.18/features/distributed-tracing.md @@ -26,13 +26,13 @@ For example, Linkerd can display a live topology of all incoming and outgoing dependencies for a service, without requiring distributed tracing or any other such application modification: -![The Linkerd dashboard showing an automatically generated topology graph](/docs/images/books/webapp-detail.png "The Linkerd dashboard showing an automatically generated topology graph") +![The Linkerd dashboard showing an automatically generated topology graph](/images/docs/books/webapp-detail.png 'The Linkerd dashboard showing an automatically generated topology graph') Likewise, Linkerd can provide golden metrics per service and per _route_, again without requiring distributed tracing or any other such application modification: -![Linkerd dashboard showing an automatically generated route 
metrics](/docs/images/books/webapp-routes.png "Linkerd dashboard showing an automatically generated route metrics") +![Linkerd dashboard showing an automatically generated route metrics](/images/docs/books/webapp-routes.png 'Linkerd dashboard showing an automatically generated route metrics') ## Using distributed tracing diff --git a/linkerd.io/content/2.18/features/multicluster.md b/linkerd.io/content/2.18/features/multicluster.md index 79a1fe8da6..9c1371847a 100644 --- a/linkerd.io/content/2.18/features/multicluster.md +++ b/linkerd.io/content/2.18/features/multicluster.md @@ -43,7 +43,7 @@ the _Foo_ service as if it were on the local cluster. Linkerd supports three basic forms of multi-cluster communication: hierarchical, flat, and federated. -![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png) +![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png) ### Hierarchical networks diff --git a/linkerd.io/content/2.18/getting-started/_index.md b/linkerd.io/content/2.18/getting-started/_index.md index 13971f80db..1b00784931 100644 --- a/linkerd.io/content/2.18/getting-started/_index.md +++ b/linkerd.io/content/2.18/getting-started/_index.md @@ -252,7 +252,7 @@ linkerd viz dashboard & You should see a screen like this: -![The Linkerd dashboard in action](/docs/images/getting-started/viz-empty-dashboard.png "The Linkerd dashboard in action") +![The Linkerd dashboard in action](/images/docs/getting-started/viz-empty-dashboard.png 'The Linkerd dashboard in action') Click around, explore, and have fun! 
For extra credit, see if you can find the live metrics for each Emojivoto component, and determine which one has a partial diff --git a/linkerd.io/content/2.18/reference/architecture.md b/linkerd.io/content/2.18/reference/architecture.md index f3a4bcdfca..8dc18491ef 100644 --- a/linkerd.io/content/2.18/reference/architecture.md +++ b/linkerd.io/content/2.18/reference/architecture.md @@ -16,7 +16,7 @@ with the control plane for configuration. Linkerd also provides a **CLI** that can be used to interact with the control and data planes. -![Linkerd's architecture](/docs/images/architecture/control-plane.png "Linkerd's architecture") +![Linkerd's architecture](/images/docs/architecture/control-plane.png "Linkerd's architecture") ## CLI diff --git a/linkerd.io/content/2.18/reference/iptables.md b/linkerd.io/content/2.18/reference/iptables.md index 67a7ea89de..a91a6d0d4d 100644 --- a/linkerd.io/content/2.18/reference/iptables.md +++ b/linkerd.io/content/2.18/reference/iptables.md @@ -31,7 +31,7 @@ The redirect chain will be configured with two more rules: Based on these two rules, there are two possible paths that an inbound packet can take, both of which are outlined below. -![Inbound iptables chain traversal](/docs/images/iptables/iptables-fig2-1.png "Inbound iptables chain traversal") +![Inbound iptables chain traversal](/images/docs/iptables/iptables-fig2-1.png 'Inbound iptables chain traversal') The packet will arrive on the `PREROUTING` chain and will be immediately routed to the redirect chain. If its destination port matches any of the inbound ports @@ -79,7 +79,7 @@ configured: been produced by the service, so it should be forwarded to its destination by the proxy. 
-![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-2.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-2.png 'Outbound iptables chain traversal') A packet produced by the service will first hit the `OUTPUT` chain; from here, it will be sent to our own output chain for processing. The first rule it @@ -113,7 +113,7 @@ in the pod. This scenario would typically apply when: - The destination is a port bound on localhost (regardless of which container it belongs to). -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-3.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-3.png 'Outbound iptables chain traversal') When the application targets itself through its pod's IP (or loopback address), the packets will traverse the two output chains. The first rule will be skipped, @@ -138,7 +138,7 @@ inbound side to account for outbound packets that are sent locally. is not guaranteed that the destination will be local. The packet follows an unusual path, as depicted in the diagram below. -![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-4.png "Outbound iptables chain traversal") +![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-4.png 'Outbound iptables chain traversal') When the packet first traverses the output chains, it will follow the same path an outbound packet would normally take. In such a scenario, the packet's diff --git a/linkerd.io/content/2.18/reference/multicluster.md b/linkerd.io/content/2.18/reference/multicluster.md index 013191edba..9c2a032437 100644 --- a/linkerd.io/content/2.18/reference/multicluster.md +++ b/linkerd.io/content/2.18/reference/multicluster.md @@ -18,7 +18,7 @@ modes: hierarchical (using a gateway), flat (without a gateway), and federated. These modes can be mixed and matched. 
-![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png) +![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png) Hierarchical mode places a bare minimum of requirements on the underlying network, as it only requires that the gateway IP be reachable. However, flat diff --git a/linkerd.io/content/2.18/tasks/books.md b/linkerd.io/content/2.18/tasks/books.md index b881bcbeaa..6f9460c53f 100644 --- a/linkerd.io/content/2.18/tasks/books.md +++ b/linkerd.io/content/2.18/tasks/books.md @@ -21,7 +21,7 @@ the other services. There are three services: For demo purposes, the app comes with a simple traffic generator. The overall topology looks like this: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') ## Prerequisites @@ -71,7 +71,7 @@ connection" messages for the rest of the exercise.) Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') Unfortunately, there is an error in the app: if you click _Add Book_, it will fail 50% of the time. This is a classic case of non-obvious, intermittent @@ -80,7 +80,7 @@ debug. Kubernetes itself cannot detect or surface this error. From Kubernetes's perspective, it looks like everything's fine, but you know the application is returning errors. 
-![Failure](/docs/images/books/failure.png "Failure") +![Failure](/images/docs/books/failure.png 'Failure') ## Add Linkerd to the service diff --git a/linkerd.io/content/2.18/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.18/tasks/configuring-per-route-policy.md index 011c10ff9e..9231a8faab 100644 --- a/linkerd.io/content/2.18/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.18/tasks/configuring-per-route-policy.md @@ -64,7 +64,7 @@ $ kubectl -n booksapp port-forward svc/webapp 7000 & Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the frontend. -![Frontend](/docs/images/books/frontend.png "Frontend") +![Frontend](/images/docs/books/frontend.png 'Frontend') ## Creating a Server resource @@ -330,7 +330,7 @@ web UI, we may notice that something is amiss. Attempting to delete an author results in a "not found" error in the web UI: -![Not found](/docs/images/books/delete-404.png "Not found") +![Not found](/images/docs/books/delete-404.png 'Not found') and similarly, adding a new author takes us to an error page. @@ -375,7 +375,7 @@ EOF What happens if we try to delete an author _now_? We still see a failure, but a different one: -![Internal server error](/docs/images/books/delete-503.png "Internal server error") +![Internal server error](/images/docs/books/delete-503.png 'Internal server error') This is because we have created a _route_ matching `DELETE`, `PUT`, and `POST` requests, but we haven't _authorized_ requests to that route. 
Running the @@ -432,11 +432,11 @@ in this case, we only authenticate the `webapp` deployment's `ServiceAccount` Now, if we attempt to delete an author in the frontend once again, we can: -![Author deleted](/docs/images/books/delete-ok.png "Author deleted") +![Author deleted](/images/docs/books/delete-ok.png 'Author deleted') Similarly, we can now create a new author successfully, as well: -![Author created](/docs/images/books/create-ok.png "Author created") +![Author created](/images/docs/books/create-ok.png 'Author created') Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: diff --git a/linkerd.io/content/2.18/tasks/debugging-your-service.md b/linkerd.io/content/2.18/tasks/debugging-your-service.md index fa22bef005..d1009cfe76 100644 --- a/linkerd.io/content/2.18/tasks/debugging-your-service.md +++ b/linkerd.io/content/2.18/tasks/debugging-your-service.md @@ -17,12 +17,12 @@ command), you should see all the resources in the `emojivoto` namespace, including the deployments. Each deployment running Linkerd shows success rate, requests per second and latency percentiles. -![Top Level Metrics](/docs/images/debugging/stat.png "Top Level Metrics") +![Top Level Metrics](/images/docs/debugging/stat.png 'Top Level Metrics') That's pretty neat, but the first thing you might notice is that the success rate is well below 100%! Click on `web` and let's dig in. -![Deployment Detail](/docs/images/debugging/octopus.png "Deployment Detail") +![Deployment Detail](/images/docs/debugging/octopus.png 'Deployment Detail') You should now be looking at the Deployment page for the web deployment. The first thing you'll see here is that the web deployment is taking traffic from @@ -38,7 +38,7 @@ returning. Let's scroll a little further down the page, we'll see a live list of all traffic that is incoming to _and_ outgoing from `web`. 
This is interesting: -![Top](/docs/images/debugging/web-top.png "Top") +![Top](/images/docs/debugging/web-top.png 'Top') There are two calls that are not at 100%: the first is vote-bot's call to the `/api/vote` endpoint. The second is the `VoteDoughnut` call from the web @@ -54,7 +54,7 @@ the requests are failing with a is a common error response as you can see from [the code][code]. Linkerd is aware of gRPC's response classification without any other configuration! -![Tap](/docs/images/debugging/web-tap.png "Tap") +![Tap](/images/docs/debugging/web-tap.png 'Tap') At this point, we have everything required to get the endpoint fixed and restore the overall health of our applications. diff --git a/linkerd.io/content/2.18/tasks/distributed-tracing.md b/linkerd.io/content/2.18/tasks/distributed-tracing.md index 554a6bf07f..391e7d2a31 100644 --- a/linkerd.io/content/2.18/tasks/distributed-tracing.md +++ b/linkerd.io/content/2.18/tasks/distributed-tracing.md @@ -21,7 +21,7 @@ To use distributed tracing, you'll need to: In the case of emojivoto, once all these steps are complete there will be a topology that looks like: -![Topology](/docs/images/tracing/tracing-topology.svg 'Topology') +![Topology](/images/docs/tracing/tracing-topology.svg 'Topology') ## Prerequisites @@ -113,17 +113,17 @@ up in Jaeger. To get to the UI, run: linkerd jaeger dashboard ``` -![Jaeger](/docs/images/tracing/jaeger-empty.png 'Jaeger') +![Jaeger](/images/docs/tracing/jaeger-empty.png 'Jaeger') You can search for any service in the dropdown and click Find Traces. `vote-bot` is a great way to get started. -![Search](/docs/images/tracing/jaeger-search.png 'Search') +![Search](/images/docs/tracing/jaeger-search.png 'Search') Clicking on a specific trace will provide all the details, you'll be able to see the spans for every proxy! 
-![Search](/docs/images/tracing/example-trace.png 'Search') +![Search](/images/docs/tracing/example-trace.png 'Search') There sure are a lot of `linkerd-proxy` spans in that output. Internally, the proxy has a server and client side. When a request goes through the proxy, it is @@ -139,7 +139,7 @@ meta-data as trace attributes, users can directly jump into related resources traces directly from the linkerd-web dashboard by clicking the Jaeger icon in the Metrics Table, as shown below: -![Linkerd-Jaeger](/docs/images/tracing/linkerd-jaeger-ui.png 'Linkerd-Jaeger') +![Linkerd-Jaeger](/images/docs/tracing/linkerd-jaeger-ui.png 'Linkerd-Jaeger') To obtain that functionality you need to install (or upgrade) the Linkerd-Viz extension specifying the service exposing the Jaeger UI. By default, this would diff --git a/linkerd.io/content/2.18/tasks/fault-injection.md b/linkerd.io/content/2.18/tasks/fault-injection.md index 1108152181..5172a374a0 100644 --- a/linkerd.io/content/2.18/tasks/fault-injection.md +++ b/linkerd.io/content/2.18/tasks/fault-injection.md @@ -12,7 +12,7 @@ return whatever responses you want - 500s, timeouts or even crazy payloads. The [books demo](books/) is a great way to show off this behavior. The overall topology looks like: -![Topology](/docs/images/books/topology.png "Topology") +![Topology](/images/docs/books/topology.png 'Topology') In this guide, you will split some of the requests from `webapp` to `books`. Most requests will end up at the correct `books` destination, however some of diff --git a/linkerd.io/content/2.18/tasks/flagger.md b/linkerd.io/content/2.18/tasks/flagger.md index 6f6ab6fcc2..356ac3b38e 100644 --- a/linkerd.io/content/2.18/tasks/flagger.md +++ b/linkerd.io/content/2.18/tasks/flagger.md @@ -69,7 +69,7 @@ orchestrates it. A load generator simply makes it easier to execute the rollout as there needs to be some kind of active traffic to complete the operation. 
Together, these components have a topology that looks like: -![Topology](/docs/images/canary/simple-topology.svg "Topology") +![Topology](/images/docs/canary/simple-topology.svg 'Topology') To add these components to your cluster and include them in the Linkerd [data plane](../reference/architecture/#data-plane), run: @@ -213,7 +213,7 @@ podinfo-primary ClusterIP 10.7.249.63 9898/TCP 23m At this point, the topology looks a little like: -![Initialized](/docs/images/canary/initialized.svg "Initialized") +![Initialized](/images/docs/canary/initialized.svg 'Initialized') {{< note >}} @@ -259,7 +259,7 @@ kubectl -n test get ev --watch While an update is occurring, the resources and traffic will look like this at a high level: -![Ongoing](/docs/images/canary/ongoing.svg "Ongoing") +![Ongoing](/images/docs/canary/ongoing.svg 'Ongoing') After the update is complete, this picture will go back to looking just like the figure from the previous section. diff --git a/linkerd.io/content/2.18/tasks/gitops.md b/linkerd.io/content/2.18/tasks/gitops.md index 18e90ce73c..51328396a3 100644 --- a/linkerd.io/content/2.18/tasks/gitops.md +++ b/linkerd.io/content/2.18/tasks/gitops.md @@ -22,7 +22,7 @@ the [auto proxy injection](../features/proxy-injection/) feature into your workflow. Finally, this guide conclude with steps to upgrade Linkerd to a newer version following a GitOps workflow. -![Linkerd GitOps workflow](/docs/images/gitops/architecture.png 'Linkerd GitOps workflow') +![Linkerd GitOps workflow](/images/docs/gitops/architecture.png 'Linkerd GitOps workflow') The software and tools used in this guide are selected for demonstration purposes only. 
Feel free to choose others that are most suited for your @@ -184,7 +184,7 @@ argocd proj get demo On the dashboard: -![New project in Argo CD dashboard](/docs/images/gitops/dashboard-project.png 'New project in Argo CD dashboard') +![New project in Argo CD dashboard](/images/docs/gitops/dashboard-project.png 'New project in Argo CD dashboard') ### Deploy the applications @@ -215,7 +215,7 @@ Sync the `main` application: argocd app sync main ``` -![Synchronize the main application](/docs/images/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') +![Synchronize the main application](/images/docs/gitops/dashboard-applications-main-sync.png 'Synchronize the main application') Notice that only the `main` application is synchronized. @@ -237,7 +237,7 @@ for deploy in "cert-manager" "cert-manager-cainjector" "cert-manager-webhook"; \ done ``` -![Synchronize the cert-manager application](/docs/images/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') +![Synchronize the cert-manager application](/images/docs/gitops/dashboard-cert-manager-sync.png 'Synchronize the cert-manager application') ### Deploy Sealed Secrets @@ -253,7 +253,7 @@ Confirm that sealed-secrets is running: kubectl -n kube-system rollout status deploy/sealed-secrets ``` -![Synchronize the sealed-secrets application](/docs/images/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') +![Synchronize the sealed-secrets application](/images/docs/gitops/dashboard-sealed-secrets-sync.png 'Synchronize the sealed-secrets application') ### Create mTLS trust anchor @@ -348,7 +348,7 @@ Git server earlier. 
{{< /note >}} -![Synchronize the linkerd-bootstrap application](/docs/images/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') +![Synchronize the linkerd-bootstrap application](/images/docs/gitops/dashboard-linkerd-bootstrap-sync.png 'Synchronize the linkerd-bootstrap application') SealedSecrets should have created a secret containing the decrypted trust anchor. Retrieve the decrypted trust anchor from the secret: @@ -380,7 +380,7 @@ argocd app get linkerd-control-plane -ojson | \ jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value' ``` -![Empty default trust anchor](/docs/images/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') +![Empty default trust anchor](/images/docs/gitops/dashboard-trust-anchor-empty.png 'Empty default trust anchor') We will override this parameter in the `linkerd` application with the value of `${trust_anchor}`. @@ -395,7 +395,7 @@ Ensure that the multi-line string is indented correctly. 
E.g.,
     source:
       chart: linkerd-control-plane
       repoURL: https://helm.linkerd.io/edge
-      targetRevision: {{% chart-version %}}
+      targetRevision: {{% chart-version %}}
       helm:
         parameters:
           - name: identityTrustAnchorsPEM
@@ -442,7 +442,7 @@ argocd app get linkerd-control-plane -ojson | \
   jq -r '.spec.source.helm.parameters[] | select(.name == "identityTrustAnchorsPEM") | .value'
 ```

-![Override mTLS trust anchor](/docs/images/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor')
+![Override mTLS trust anchor](/images/docs/gitops/dashboard-trust-anchor-override.png 'Override mTLS trust anchor')

 Synchronize the `linkerd-crds` and `linkerd-control-plane` applications:

@@ -457,7 +457,7 @@ Check that Linkerd is ready:
 linkerd check
 ```

-![Synchronize Linkerd](/docs/images/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd')
+![Synchronize Linkerd](/images/docs/gitops/dashboard-linkerd-sync.png 'Synchronize Linkerd')

 ### Test with emojivoto

@@ -475,7 +475,7 @@ for deploy in "emoji" "vote-bot" "voting" "web" ; \
   done
 ```

-![Synchronize emojivoto](/docs/images/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto')
+![Synchronize emojivoto](/images/docs/gitops/dashboard-emojivoto-sync.png 'Synchronize emojivoto')

 ### Upgrade Linkerd

diff --git a/linkerd.io/content/2.18/tasks/multicluster.md b/linkerd.io/content/2.18/tasks/multicluster.md
index 3a80b3f3ed..e0c8c81185 100644
--- a/linkerd.io/content/2.18/tasks/multicluster.md
+++ b/linkerd.io/content/2.18/tasks/multicluster.md
@@ -13,6 +13,7 @@ between services that live on different clusters. At a high
 level, you will:

+
 1. [Install Linkerd and Linkerd Viz](#install-linkerd-and-linkerd-viz) on two
    clusters with a shared trust anchor.
 1. [Prepare](#preparing-your-cluster) the clusters.
@@ -21,7 +22,7 @@ At a high level, you will:
 1. [Export](#exporting-the-services) the demo services, to control visibility.
 1. [Verify](#security) the security of your clusters.
 1. 
[Split traffic](#traffic-splitting) from pods on the source cluster (`west`) - to the target cluster (`east`) +to the target cluster (`east`) ## Prerequisites @@ -52,7 +53,7 @@ At a high level, you will: ## Install Linkerd and Linkerd Viz -![install](/docs/images/multicluster/install.svg "Two Clusters") +![install](/images/docs/multicluster/install.svg 'Two Clusters') Linkerd requires a shared [trust anchor](generate-certificates/#trust-anchor-certificate) to exist between @@ -138,7 +139,7 @@ done ## Preparing your cluster -![preparation](/docs/images/multicluster/prep-overview.svg "Preparation") +![preparation](/images/docs/multicluster/prep-overview.svg 'Preparation') In order to route traffic between clusters, Linkerd leverages Kubernetes services so that your application code does not need to change and there is @@ -161,7 +162,7 @@ for ctx in west east; do done ``` -![install](/docs/images/multicluster/components.svg "Components") +![install](/images/docs/multicluster/components.svg 'Components') Installed into the `linkerd-multicluster` namespace, the gateway is a simple [pause container](https://github.com/linkerd/linkerd2/blob/main/multicluster/charts/linkerd-multicluster/templates/gateway.yaml#L3) @@ -200,7 +201,7 @@ mirroring services. We'll want to link the clusters together now! ## Linking the clusters -![link-clusters](/docs/images/multicluster/link-flow.svg "Link") +![link-clusters](/images/docs/multicluster/link-flow.svg 'Link') For `west` to mirror services from `east`, the `west` cluster needs to have credentials so that it can watch for services in `east` to be exported. You'd @@ -275,7 +276,7 @@ use the `--api-server-address` flag for `link`. ## Installing the test services -![test-services](/docs/images/multicluster/example-topology.svg "Topology") +![test-services](/images/docs/multicluster/example-topology.svg 'Topology') It is time to test this all out! The first step is to add some services that we can mirror. 
To add these to both clusters, you can run: @@ -303,7 +304,7 @@ To see what it looks like from the `west` cluster right now, you can run: kubectl --context=west -n test port-forward svc/frontend 8080 ``` -![west-podinfo](/docs/images/multicluster/west-podinfo.gif "West Podinfo") +![west-podinfo](/images/docs/multicluster/west-podinfo.gif 'West Podinfo') With the podinfo landing page available at [http://localhost:8080](http://localhost:8080), you can see how it looks in the @@ -394,7 +395,7 @@ the [grafana install instructions](grafana/) first to have a working grafana provisioned with Linkerd dashboards). You can get to it by running `linkerd --context=west viz dashboard` and going to -![grafana-dashboard](/docs/images/multicluster/grafana-dashboard.png "Grafana") +![grafana-dashboard](/images/docs/multicluster/grafana-dashboard.png 'Grafana') ## Security @@ -433,7 +434,7 @@ kubectl --context=west -n test run -it --rm --image=alpine:3 test -- \ ## Traffic Splitting -![with-split](/docs/images/multicluster/with-split.svg "Traffic Split") +![with-split](/images/docs/multicluster/with-split.svg 'Traffic Split') It is pretty useful to have services automatically show up in clusters and be able to explicitly address them, however that only covers one use case for @@ -477,7 +478,7 @@ both clusters.Alternatively, for the command line approach, `curl localhost:8080` will give you a message that greets from `west` and `east`. -![podinfo-split](/docs/images/multicluster/split-podinfo.gif "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/split-podinfo.gif 'Cross Cluster Podinfo') You can also watch what's happening with metrics. To see the source side of things (`west`), you can run: @@ -498,7 +499,7 @@ linkerd --context=east -n test viz stat \ There's even a dashboard! Run `linkerd viz dashboard` and send your browser to [localhost:50750](http://localhost:50750/namespaces/test/trafficsplits/podinfo). 
-![podinfo-split](/docs/images/multicluster/ts-dashboard.png "Cross Cluster Podinfo") +![podinfo-split](/images/docs/multicluster/ts-dashboard.png 'Cross Cluster Podinfo') ## Cleanup diff --git a/linkerd.io/content/2.19/_index.md b/linkerd.io/content/2.19/_index.md deleted file mode 100644 index 6c28dc82e3..0000000000 --- a/linkerd.io/content/2.19/_index.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Docs -cascade: - type: docs - -# Redirect -type: _default -layout: redirect -params: - redirect: ./overview ---- diff --git a/linkerd.io/content/_index.md b/linkerd.io/content/_index.md index d7cad1558c..22dbd7047b 100644 --- a/linkerd.io/content/_index.md +++ b/linkerd.io/content/_index.md @@ -12,7 +12,7 @@ params: other meshes. 100% open source, CNCF graduated, and written in Rust. buttons: - text: Get Started - href: /2/getting-started/ + href: /docs/getting-started/ variant: primary - text: Get Involved href: /community/get-involved/ @@ -118,7 +118,7 @@ params: Instantly track success rates, latencies, and request volumes for every meshed workload, without changes or config. image: /home/features/bar-chart.svg - url: /2/features/telemetry/ + url: /docs/features/telemetry/ - title: Simpler than any other mesh content: |- Minimalist, Kubernetes-native design. No hidden magic, as little YAML @@ -130,19 +130,19 @@ params: Transparently add mutual TLS to any on-cluster TCP communication with no configuration. image: /home/features/settings.svg - url: /2/features/server-policy/ + url: /docs/features/server-policy/ - title: Designed by engineers, for engineers content: |- Self-contained control plane, incrementally deployable data plane, and lots and lots of diagnostics and debugging tools. 
image: /home/features/startup.svg - url: /2/tasks/debugging-502s/ + url: /docs/tasks/debugging-502s/ - title: Latency-aware load balancing and cross-cluster failover content: |- Instantly add latency-aware load balancing, request retries, timeouts, and blue-green deploys to keep your applications resilient. image: /home/features/balance.svg - url: /2/features/load-balancing/ + url: /docs/features/load-balancing/ - title: State-of-the-art ultralight Rust dataplane content: |- Incredibly small and blazing fast Linkerd2-proxy _micro-proxy_ written diff --git a/linkerd.io/content/blog/2018/0918-announcing-linkerd-2-0/index.md b/linkerd.io/content/blog/2018/0918-announcing-linkerd-2-0/index.md index 371c2a5809..a022fa2024 100644 --- a/linkerd.io/content/blog/2018/0918-announcing-linkerd-2-0/index.md +++ b/linkerd.io/content/blog/2018/0918-announcing-linkerd-2-0/index.md @@ -20,7 +20,7 @@ You can try Linkerd 2.0 on a Kubernetes 1.9+ cluster in 60 seconds by running: curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh ``` -(Or check out the full [Getting Started Guide](/2/getting-started/).) +(Or check out the full [Getting Started Guide](/docs/getting-started/).) The 2.0 release of Linkerd brings some very significant changes. First, we’ve completely rewritten Linkerd to be orders of magnitude faster and smaller than @@ -44,7 +44,7 @@ cluster, you can run Linkerd on your service and get: Even better, installing Linkerd 2.0 on your service requires no configuration or code changes. -![Linkerd overview](cover.png "Linkerd overview") +![Linkerd overview](cover.png 'Linkerd overview') The 2.0 release is something we’re really excited about because it addresses two major problems with the traditional (!) 
service mesh model: diff --git a/linkerd.io/content/blog/2018/1114-grpc-load-balancing-on-kubernetes-without-tears/index.md b/linkerd.io/content/blog/2018/1114-grpc-load-balancing-on-kubernetes-without-tears/index.md index 0f0a5f37ae..66a5c56a1c 100644 --- a/linkerd.io/content/blog/2018/1114-grpc-load-balancing-on-kubernetes-without-tears/index.md +++ b/linkerd.io/content/blog/2018/1114-grpc-load-balancing-on-kubernetes-without-tears/index.md @@ -12,7 +12,7 @@ happens when you take a [simple gRPC Node.js microservices app](https://github.com/sourishkrout/nodevoto) and deploy it on Kubernetes: -![pods](grpc-pods.png "Pods") +![pods](grpc-pods.png 'Pods') While the `voting` service displayed here has several pods, it's clear from Kubernetes's CPU graphs that only one of the pods is actually doing any @@ -41,7 +41,7 @@ it reduces the overhead of connection management. However, it also means that connection is established, there's no more balancing to be done. All requests will get pinned to a single destination pod, as shown below: -![diagram](diagram-1.png "Diagram") +![diagram](diagram-1.png 'Diagram') ## Why doesn't this affect HTTP/1.1? @@ -72,7 +72,7 @@ gRPC load balancing, we need to shift from connection balancing to _request_ balancing. In other words, we need to open an HTTP/2 connection to each destination, and balance _requests_ across these connections, as shown below: -![diagram](diagram-2.png "Diagram") +![diagram](diagram-2.png 'Diagram') In network terms, this means we need to make decisions at L5/L7 rather than L3/L4, i.e. we need to understand the protocol sent over the TCP connections. @@ -107,7 +107,7 @@ service, it adds a tiny, ultra-fast proxy to each pod, and these proxies watch the Kubernetes API and do gRPC load balancing automatically. Our deployment then looks like this: -![multiplex](multiplex.png "Multiplex") +![multiplex](multiplex.png 'Multiplex') Using Linkerd has a couple big advantages. 
First, it works with services written in any language, with any gRPC client, and any deployment model (headless or @@ -129,15 +129,15 @@ impact on system performance will be negligible. ## gRPC Load Balancing in 60 seconds Linkerd is very easy to try. Just follow the steps in the -[Linkerd Getting Started Instructions](/2/getting-started/) — install the CLI on -your laptop, install the control plane on your cluster, and "mesh" your service -(inject the proxies into each pod). You'll have Linkerd running on your service -in no time, and should see proper gRPC balancing immediately. +[Linkerd Getting Started Instructions](/docs/getting-started/) — install the CLI +on your laptop, install the control plane on your cluster, and "mesh" your +service (inject the proxies into each pod). You'll have Linkerd running on your +service in no time, and should see proper gRPC balancing immediately. Let's take a look at our sample `voting` service again, this time after installing Linkerd: -![Voting service](voting-service.png "Voting service") +![Voting service](voting-service.png 'Voting service') As we can see, the CPU graphs for all pods are active, indicating that all pods are now taking traffic—without having to change a line of code. Voila, gRPC load @@ -148,7 +148,7 @@ to guess what's happening from CPU charts any more. Here's a Linkerd graph that's showing the success rate, request volume, and latency percentiles of each pod: -![Pod overview](pod-overview.png "Pod overview") +![Pod overview](pod-overview.png 'Pod overview') We can see that each pod is getting around 5 RPS. 
We can also see that, while we've solved our load balancing problem, we still have some work to do on our diff --git a/linkerd.io/content/blog/2019/0212-announcing-linkerd-2-2/index.md b/linkerd.io/content/blog/2019/0212-announcing-linkerd-2-2/index.md index c0ae7712f8..4cdf7f26e5 100644 --- a/linkerd.io/content/blog/2019/0212-announcing-linkerd-2-2/index.md +++ b/linkerd.io/content/blog/2019/0212-announcing-linkerd-2-2/index.md @@ -35,10 +35,10 @@ With that, on to the features! Linkerd 2.2 can now automatically retry failed requests, improving the overall success rate of your application in the presence of partial failures. Building -on top of the [service profiles](/2/features/service-profiles) model introduced -in 2.1, Linkerd allows you to configure this behavior on a per-route basis. -Here's a [quick screencast](https://asciinema.org/a/227055) of using retries and -timeouts to handle a failing endpoint. +on top of the [service profiles](/docs/features/service-profiles/) model +introduced in 2.1, Linkerd allows you to configure this behavior on a per-route +basis. Here's a [quick screencast](https://asciinema.org/a/227055) of using +retries and timeouts to handle a failing endpoint. In this screencast we can see that the output of `linkerd routes` now includes an ACTUAL_SUCCESS column, measuring success rate of requests on the wire, and an diff --git a/linkerd.io/content/blog/2019/0222-how-we-designed-retries-in-linkerd-2-2/index.md b/linkerd.io/content/blog/2019/0222-how-we-designed-retries-in-linkerd-2-2/index.md index 3e672ef738..61711e3ce0 100644 --- a/linkerd.io/content/blog/2019/0222-how-we-designed-retries-in-linkerd-2-2/index.md +++ b/linkerd.io/content/blog/2019/0222-how-we-designed-retries-in-linkerd-2-2/index.md @@ -20,10 +20,10 @@ automatically retry failed requests. 
 This gives Linkerd the ability to automatically handle partial or transient
 failures in a service, without the application having to be aware: if a request
 fails, Linkerd can just try it again! Combined with Linkerd's
-[request-level load balancing](/2/features/load-balancing/), this also allows
+[request-level load balancing](/docs/features/load-balancing/), this also allows
 Linkerd to handle failures of individual pods. In Linkerd, you specify retries
-as part of a [service profile](/2/features/service-profiles/) (introduced in a
-[previous blog post](/2018/12/08/service-profiles-for-per-route-metrics/)).
+as part of a [service profile](/docs/features/service-profiles/) (introduced in
+a [previous blog post](/2018/12/08/service-profiles-for-per-route-metrics/)).
 Marking a route as retryable is as simple as adding \`isRetryable: true\` to the
 corresponding service profile entry:

@@ -215,7 +215,7 @@ POST /authors.json authors 0.00% 0.0rps

 Success rate looks great but the p95 and p99 latencies have increased. This is
 to be expected because doing retries takes time. However, we can limit this by
-setting a [timeouts](/2/features/retries-and-timeouts/#timeouts) another new
+setting a [timeout](/docs/features/retries-and-timeouts/#timeouts) - another new
 feature of Linkerd 2.x - at the maximum duration that we’re willing to wait. For
 the purposes of this demo, I’ll set a timeout of 25ms. Your results will vary
 depending on the characteristics of your system.
diff --git a/linkerd.io/content/blog/2019/0410-browser-testing-from-scratch-building-quick-and-easy-integration-tests-with-webdriverio-and-saucelabs/index.md b/linkerd.io/content/blog/2019/0410-browser-testing-from-scratch-building-quick-and-easy-integration-tests-with-webdriverio-and-saucelabs/index.md index 6c3b633c4d..28d240bde7 100644 --- a/linkerd.io/content/blog/2019/0410-browser-testing-from-scratch-building-quick-and-easy-integration-tests-with-webdriverio-and-saucelabs/index.md +++ b/linkerd.io/content/blog/2019/0410-browser-testing-from-scratch-building-quick-and-easy-integration-tests-with-webdriverio-and-saucelabs/index.md @@ -17,14 +17,14 @@ One of the coolest things about working on [Linkerd](http://www.linkerd.io) is how excited our users are about our clean, deceptively simple dashboard, built with [React](https://reactjs.org/) and [Material-UI](https://material-ui.com/). (Wanna try it? Get our super-light, open-source service mesh -[up and running in just a few minutes](/2/getting-started/)!) +[up and running in just a few minutes](/docs/getting-started/)!) -![Linkerd dashboard](linkerd-dashboard-screenshot.png "Linkerd dashboard screenshot from edge release 19.3.2.") +![Linkerd dashboard](linkerd-dashboard-screenshot.png 'Linkerd dashboard screenshot from edge release 19.3.2.') And when I say excited, I mean unsolicited-praise excited: we constantly get messages from users like this: -![Tweet](happy-tweet.png "Tweet by a happy Linkerd user!") +![Tweet](happy-tweet.png 'Tweet by a happy Linkerd user!') We want to keep our users happy with a clean, consistent dashboard as we constantly roll out new features and improvements, so recently we built a suite @@ -144,28 +144,28 @@ what to do. 
In your text editor, open `wdio.conf.js` and paste: ```javascript exports.config = { port: 9515, // default for ChromeDriver - path: "/", - services: ["chromedriver"], - runner: "local", - specs: ["./integration/specs/*.js"], + path: '/', + services: ['chromedriver'], + runner: 'local', + specs: ['./integration/specs/*.js'], exclude: [ // 'path/to/excluded/files' ], maxInstances: 10, capabilities: [ - { browserName: "chrome", platform: "OS X 10.13", version: "69.0" }, + { browserName: 'chrome', platform: 'OS X 10.13', version: '69.0' }, ], bail: 0, - baseUrl: "http://localhost", + baseUrl: 'http://localhost', waitforTimeout: 10000, connectionRetryTimeout: 90000, connectionRetryCount: 3, - framework: "mocha", + framework: 'mocha', mochaOpts: { - ui: "bdd", + ui: 'bdd', timeout: 60000, }, -}; +} ``` As you can see, we're specifying that our tests will live in @@ -195,14 +195,14 @@ check that the title of the page is what we expect. Open `first-test.js` and paste the following: ```javascript -const assert = require("assert"); -describe("logo link test", function () { - it("should redirect to the home view if logo is clicked", () => { - browser.url("http://www.linkerd.io"); - const title = browser.getTitle(); - assert.equal(title, "Linkerd - Linkerd"); - }); -}); +const assert = require('assert') +describe('logo link test', function () { + it('should redirect to the home view if logo is clicked', () => { + browser.url('http://www.linkerd.io') + const title = browser.getTitle() + assert.equal(title, 'Linkerd - Linkerd') + }) +}) ``` Right now, you're still in your `specs` directory. Go back up to your `app` @@ -218,7 +218,7 @@ You should see a message in your terminal saying "Starting ChromeDriver on port [Linkerd.io](http://linkerd.io) and close. You should see "1 Passing" in your terminal! 
-![WebdriverIO](terminal-message.png "WebdriverIO success message in terminal: 1 passed, 1 total (100% completed)") +![WebdriverIO](terminal-message.png 'WebdriverIO success message in terminal: 1 passed, 1 total (100% completed)') Awesome, you just successfully ran an integration test with WebdriverIO! 🥳 You can start building these tests out to test your application. I've found the @@ -272,31 +272,31 @@ Open that file and paste in the following: ```javascript exports.config = { - runner: "local", + runner: 'local', user: process.env.SAUCE_USERNAME, key: process.env.SAUCE_ACCESS_KEY, sauceConnect: true, - specs: ["./integration/specs/*.js"], + specs: ['./integration/specs/*.js'], // Patterns to exclude. exclude: [ // 'path/to/excluded/files' ], maxInstances: 10, capabilities: [ - { browserName: "firefox", platform: "Windows 10", version: "60.0" }, - { browserName: "chrome", platform: "OS X 10.13", version: "69.0" }, + { browserName: 'firefox', platform: 'Windows 10', version: '60.0' }, + { browserName: 'chrome', platform: 'OS X 10.13', version: '69.0' }, ], bail: 0, - baseUrl: "http://localhost", + baseUrl: 'http://localhost', waitforTimeout: 10000, connectionRetryTimeout: 90000, connectionRetryCount: 3, - framework: "mocha", + framework: 'mocha', mochaOpts: { - ui: "bdd", + ui: 'bdd', timeout: 60000, }, -}; +} ``` As you can see, we've removed the `port` , `path` and `services` variables and @@ -342,7 +342,7 @@ real-time from the [SauceLabs dashboard](https://app.saucelabs.com/dashboard/tests) (and even take over if you want to manually control where the test goes). -![SauceLabs dashboard](saucelabs-dashboard-screenshot.png "SauceLabs dashboard screenshot showing a report of an integration test") +![SauceLabs dashboard](saucelabs-dashboard-screenshot.png 'SauceLabs dashboard screenshot showing a report of an integration test') If any tests fail, you'll immediately get the URL in your terminal window with a video of the test and information about what happened. 
(Break the test and try diff --git a/linkerd.io/content/blog/2019/0429-design-principles/index.md b/linkerd.io/content/blog/2019/0429-design-principles/index.md index 5e3d6786f9..f9c831ab6e 100644 --- a/linkerd.io/content/blog/2019/0429-design-principles/index.md +++ b/linkerd.io/content/blog/2019/0429-design-principles/index.md @@ -62,7 +62,8 @@ beings are going to spend their time and energy with our project. For us _not_ to minimize the operational cost would do our users a tremendous disservice. For more detail about these principles and some examples of them in action, -check out the [Linkerd design principles documentation](/2/design-principles/). +check out the +[Linkerd design principles documentation](/design-principles/). Linkerd is a community project and is hosted by the [Cloud Native Computing Foundation](https://cncf.io). If you have feature diff --git a/linkerd.io/content/blog/2019/0820-announcing-linkerd-2-5/index.md b/linkerd.io/content/blog/2019/0820-announcing-linkerd-2-5/index.md index 2c6dec6e93..cfd58fc9b4 100644 --- a/linkerd.io/content/blog/2019/0820-announcing-linkerd-2-5/index.md +++ b/linkerd.io/content/blog/2019/0820-announcing-linkerd-2-5/index.md @@ -15,7 +15,7 @@ command to obey Kubernetes RBAC rules, improves Linkerd's CLI to report metrics during traffic splits, allows logging levels to be set dynamically, and much, much more. -Linkerd's [new Helm support](/2/tasks/install-helm/) offers an alternative to +Linkerd's [new Helm support](/docs/tasks/install-helm/) offers an alternative to `linkerd install` for installation. If you're a Helm 2 or Helm 3 user, you can use this install Linkerd with your existing deployment flow. Even if you're not, this method may provide a better mechanism for environments that require lots of @@ -38,7 +38,7 @@ enhancements, and bug fixes, including: - Dynamically configurable proxy logging levels. - A new `linkerd stat trafficsplits` command to show metrics during traffic - split operations (e.g. 
[a canary release](/2/tasks/flagger/)). + split operations (e.g. [a canary release](/docs/tasks/flagger/)). - A new Kubernetes cluster monitoring Grafana dashboard. - Handy new CLI flags like `--as` and `--all-namespaces`. - New pod anti-affinity rules in high availability (HA) mode. @@ -73,8 +73,8 @@ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh ``` Upgrading from a previous release? See our handy -[Linkerd upgrade guide](/2/tasks/upgrade/) for how to use the `linkerd upgrade` -command. +[Linkerd upgrade guide](/docs/tasks/upgrade/) for how to use the +`linkerd upgrade` command. Linkerd is a community project and is hosted by the [Cloud Native Computing Foundation](https://cncf.io/). If you have feature diff --git a/linkerd.io/content/blog/2019/1010-announcing-linkerd-2-6/index.md b/linkerd.io/content/blog/2019/1010-announcing-linkerd-2-6/index.md index 979c283476..a11b93b6ca 100644 --- a/linkerd.io/content/blog/2019/1010-announcing-linkerd-2-6/index.md +++ b/linkerd.io/content/blog/2019/1010-announcing-linkerd-2-6/index.md @@ -45,8 +45,8 @@ which is already causing some excitement in the community: Finally, following up on our Helm work from the previous release, we're happy to announce that **Linkerd now has a public Helm repo**! We've published a -[guide to installing Linkerd with Helm](/2/tasks/install-helm/) from this public -repo. +[guide to installing Linkerd with Helm](/docs/tasks/install-helm/) from this +public repo. Linkerd 2.6 additionally brings a tremendous list of other improvements, performance enhancements, and bug fixes, including: @@ -95,8 +95,8 @@ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh ``` Upgrading from a previous release? See our handy -[Linkerd upgrade guide](/2/tasks/upgrade/) for how to use the `linkerd upgrade` -command. +[Linkerd upgrade guide](/docs/tasks/upgrade/) for how to use the +`linkerd upgrade` command. 
Linkerd is a community project and is hosted by the [Cloud Native Computing Foundation](https://cncf.io/). If you have feature diff --git a/linkerd.io/content/blog/2020/0210-announcing-linkerd-2-7/index.md b/linkerd.io/content/blog/2020/0210-announcing-linkerd-2-7/index.md index f688fac21c..700b8eb505 100644 --- a/linkerd.io/content/blog/2020/0210-announcing-linkerd-2-7/index.md +++ b/linkerd.io/content/blog/2020/0210-announcing-linkerd-2-7/index.md @@ -66,7 +66,7 @@ and fixed many other smaller issues. ## And lots more Linkerd 2.7 brings some big improvements to Linkerd's Helm charts (though with -[some breaking changes](/2/tasks/upgrade/#upgrade-notice-stable-270)): we've +[some breaking changes](/docs/tasks/upgrade/#upgrade-notice-stable-270)): we've split the CNI template into a separate chart, fixed several issues, and generally updated the chart to follow community best practices. Linkerd 2.7 also has a tremendous list of other improvements, performance enhancements, and bug @@ -107,10 +107,10 @@ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh ``` Using Helm? See our -[guide to installing Linkerd with Helm](/2/tasks/install-helm/). Upgrading from -a previous release? We've got you covered: see our -[Linkerd upgrade guide](/2/tasks/upgrade/) for how to use the `linkerd upgrade` -command. +[guide to installing Linkerd with Helm](/docs/tasks/install-helm/). Upgrading +from a previous release? We've got you covered: see our +[Linkerd upgrade guide](/docs/tasks/upgrade/) for how to use the +`linkerd upgrade` command. 
## Linkerd is for everyone diff --git a/linkerd.io/content/blog/2020/0319-welcome-hema/index.md b/linkerd.io/content/blog/2020/0319-welcome-hema/index.md index b532f988ba..ae0e1bfa58 100644 --- a/linkerd.io/content/blog/2020/0319-welcome-hema/index.md +++ b/linkerd.io/content/blog/2020/0319-welcome-hema/index.md @@ -7,8 +7,8 @@ params: --- 🎉 We are pleased to announce that [Hema Lee](https://github.com/hemakl) is now -a maintainer of the [CNI plugin](/2/features/cni/) for Linkerd. 🎉 She will be -helping improve and address community issues encountered with the plugin, a +a maintainer of the [CNI plugin](/docs/features/cni/) for Linkerd. 🎉 She will +be helping improve and address community issues encountered with the plugin, a requirement for many security conscious organizations. Hema has been part of the team operating Linkerd (including the CNI plugin) in diff --git a/linkerd.io/content/blog/2020/0323-serverless-service-mesh-with-knative/index.md b/linkerd.io/content/blog/2020/0323-serverless-service-mesh-with-knative/index.md index 2fef1f0483..997ebea375 100644 --- a/linkerd.io/content/blog/2020/0323-serverless-service-mesh-with-knative/index.md +++ b/linkerd.io/content/blog/2020/0323-serverless-service-mesh-with-knative/index.md @@ -28,14 +28,14 @@ workloads, but we also get the telemetry that we need to make sure that the components are healthy. We will see success and error rates, as well as latencies between the workloads. -![system-components](system_components.png "Knative System Components") +![system-components](system_components.png 'Knative System Components') At the application level, we'll add those same features to the application running on Knative itself. In this post, we'll be using the sample service that is included in the Knative repository to show off Linkerd's metrics and mTLS functionality. 
-![application-system-components](application_components.png "Application Level Components") +![application-system-components](application_components.png 'Application Level Components') At the end of this example, we'll see each of these components running as workloads, with all traffic proxied by the Linkerd service mesh. @@ -68,11 +68,11 @@ this example, we will use Ambassador as a simple Kubernetes [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) to the KinD cluster. Once deployed, the complete architecture will look like this: -![full-architecture](full_arch_ambassador.png "Full System Architecture") +![full-architecture](full_arch_ambassador.png 'Full System Architecture') Note that Ambassador is necessary because Linkerd doesn't provide an ingress by itself. Instead, Linkerd is designed to work with any ingress solution— -[Linkerd's design principles](/2/design-principles/) focus on simplicity and +[Linkerd's design principles](/design-principles/) focus on simplicity and composability—and Knative already offers five options for the gateway networking layer: Ambassador, Contour, Gloo, Istio, and Kourier. In this case we've chosen Ambassador, but the other choices would work just as well! @@ -166,7 +166,7 @@ operation. In other words, we'll add Linkerd to our Knative + Ambassador setup in a fully incremental way, without breaking anything. Linkerd control plane installation is a two step process. First, we'll install -the [Linkerd CLI](/2/reference/cli/) and then we'll use it to install the +the [Linkerd CLI](/docs/reference/cli/) and then we'll use it to install the Linkerd control plane. #### Get the Linkerd CLI @@ -230,7 +230,7 @@ returns without errors, you're ready to get meshing. 
But first, let's look at the Linkerd control plane architecture that was just deployed and the pods running in the control plane: -![linkerd-control-plane](control-plane.png "Linkerd Control Plane") +![linkerd-control-plane](control-plane.png 'Linkerd Control Plane') To view the control plane pods, run: @@ -260,8 +260,8 @@ linkerd-web-7bc875dc7f-jthxd 2/2 Running 2 4d2h The next step is to "mesh" the components by injecting the Linkerd sidecar proxy into their containers. Linkerd features an -[auto-injection feature](/2/features/proxy-injection/) to make it simple to add -your services to the service mesh. In the next steps, we'll annotate the +[auto-injection feature](/docs/features/proxy-injection/) to make it simple to +add your services to the service mesh. In the next steps, we'll annotate the `default`, `ambassador` and `knative-serving` namespaces to add their components which will instruct the `proxy-injector` to inject the Linkerd proxy. @@ -359,9 +359,9 @@ while true; do curl -H "HOST: helloworld-go.default.example.com" http://localhos The Linkerd CLI has subcommands that you can use to view the rich metrics that Linkerd collects about each of the services that have the Linkerd proxy -injected. For example, the [stat](/2/reference/cli/stat/) command will show you -the high level details of the resources in your cluster. Try running this -command: +injected. For example, the [stat](/docs/reference/cli/viz/#stat) command will +show you the high level details of the resources in your cluster. Try running +this command: ```bash linkerd stat deploy --all-namespaces @@ -401,9 +401,9 @@ linkerd linkerd-tap 1/1 100.00% 0.3rps linkerd linkerd-web 1/1 100.00% 0.3rps 1ms 5ms 5ms 2 ``` -Another example is the [tap](/2/reference/cli/tap/) command, where you can see -real-time requests being sent to resources. 
This command streams the requests -that are being sent to and from the helloworld-go pod in the `default` +Another example is the [tap](/docs/reference/cli/viz/#tap) command, where you +can see real-time requests being sent to resources. This command streams the +requests that are being sent to and from the helloworld-go pod in the `default` namespace: ```bash @@ -478,9 +478,9 @@ resource. For example, this will output all the metrics collected for the linkerd metrics --namespace knative-serving deploy/activator ``` -I encourage you to play with both of the [top](/2/reference/cli/metrics/) and -[edges](/2/reference/cli/edges/) commands to get an idea of how much information -they can provide. +I encourage you to play with both of the [top](/docs/reference/cli/viz/#top) and +[edges](/docs/reference/cli/viz/#edges) commands to get an idea of how much +information they can provide. #### Dashboard @@ -496,7 +496,7 @@ Your browser should open with the dashboard and you'll see that the `linkerd`, `ambassador`, and `knative-serving` namespaces all have deployments that are included in the service mesh: -![linkerd-dashboard](knative-4.png "Linkerd Dashboard") +![linkerd-dashboard](knative-4.png 'Linkerd Dashboard') Clicking on any of the namespace will display the pods and deployments in the namespace, with the metrics for each of them. @@ -518,7 +518,7 @@ running your applications, as well as the applications themselves. ## Summary In this walkthrough, we showed you how to add Linkerd to your Knative deployment -to transparently add [mutual TLS](/2/features/automatic-mtls/), metrics, and +to transparently add [mutual TLS](/docs/features/automatic-mtls/), metrics, and more. One of Linkerd's goals is to fit into the ecosystem and play well with other projects, and we think this is a great example of augmenting both Knative and Kubernetes with functionality that the service mesh can provide. 
We'd love diff --git a/linkerd.io/content/blog/2020/0609-announcing-linkerd-2-8/index.md b/linkerd.io/content/blog/2020/0609-announcing-linkerd-2-8/index.md index 08da519638..afb1537de8 100644 --- a/linkerd.io/content/blog/2020/0609-announcing-linkerd-2-8/index.md +++ b/linkerd.io/content/blog/2020/0609-announcing-linkerd-2-8/index.md @@ -72,7 +72,7 @@ datacenter or VPC, or across the public Internet, Linkerd will establish a connection between clusters that's encrypted and authenticated on both sides with mTLS. -![Linkerd multi-cluster Kubernetes example](multicluster.png "Linkerd multi-cluster Kubernetes example") +![Linkerd multi-cluster Kubernetes example](multicluster.png 'Linkerd multi-cluster Kubernetes example') This new multi-cluster functionality unlocks a whole host of use cases for Linkerd, including failover (transitioning traffic across datacenters or cloud @@ -150,9 +150,9 @@ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh ``` Using Helm? See our -[guide to installing Linkerd with Helm](/2/tasks/install-helm/). Upgrading from -a previous release? We've got you covered: see our -[Linkerd upgrade guide](/2/tasks/upgrade/) for how to use the linkerd upgrade +[guide to installing Linkerd with Helm](/docs/tasks/install-helm/). Upgrading +from a previous release? We've got you covered: see our +[Linkerd upgrade guide](/docs/tasks/upgrade/) for how to use the linkerd upgrade command. 
## Linkerd is for everyone diff --git a/linkerd.io/content/blog/2020/0723-under-the-hood-of-linkerd-s-state-of-the-art-rust-proxy-linkerd2-proxy/index.md b/linkerd.io/content/blog/2020/0723-under-the-hood-of-linkerd-s-state-of-the-art-rust-proxy-linkerd2-proxy/index.md index dc613edef3..baabdf9e5c 100644 --- a/linkerd.io/content/blog/2020/0723-under-the-hood-of-linkerd-s-state-of-the-art-rust-proxy-linkerd2-proxy/index.md +++ b/linkerd.io/content/blog/2020/0723-under-the-hood-of-linkerd-s-state-of-the-art-rust-proxy-linkerd2-proxy/index.md @@ -202,9 +202,9 @@ configuration, while remaining transparent to the meshed application? The first step is _protocol detection._ For zero config to be a reality, when the proxy gets a request, we need to -[determine the protocol](/2/features/protocol-detection/) that's in use. So the -first thing we do is read a couple bytes from the client side of the connection -and ask a few questions: +[determine the protocol](/docs/features/protocol-detection/) that's in use. So +the first thing we do is read a couple bytes from the client side of the +connection and ask a few questions: - "Is this an HTTP request?" - "Is this a TLS @@ -222,11 +222,11 @@ there's nothing else we can do. Similarly, TCP traffic in an unknown protocol will be transparently forwarded to its original destination. On the other hand, what if the encrypted connection _is_ for us, as part of -Linkerd's [automatic mutual TLS](/2/features/automatic-mtls/) feature? Each +Linkerd's [automatic mutual TLS](/docs/features/automatic-mtls/) feature? Each proxy in the mesh has its own unique cryptographic identity, the key material for which is generated by the proxy on startup and never leaves the pod boundary or gets written to disk. 
These identities are -[signed by the control plane's Identity service](/2/features/automatic-mtls/#how-does-it-work) +[signed by the control plane's Identity service](/docs/features/automatic-mtls/#how-does-it-work) to indicate that the proxy is authenticated to serve traffic for the [Kubernetes ServiceAccount](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) of the pod that the proxy is injected into. If the SNI name matches the proxy's @@ -248,17 +248,17 @@ canonical form of that name. Once the proxy knows the request's target authority, it performs service discovery by looking up authority from the Linkerd -[control plane's Destination service](/2/reference/architecture/#destination). +[control plane's Destination service](/docs/reference/architecture/#destination). Whether or not the control plane is consulted is decided based on a set of search suffixes: by default, the proxy is configured to query the Destination service for authorities which are within the default [Kubernetes cluster local domain, `.cluster.local`](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/), but -[this can be overridden for clusters which use a custom domain](/2/tasks/using-custom-domain/). +[this can be overridden for clusters which use a custom domain](/docs/tasks/using-custom-domain/). The Destination service provides the proxy with the addresses of all the endpoints that make up the Kubernetes Service for that authority, along with Linkerd-specific metadata, and the -[Service Profile](/2/features/service-profiles/) that configures retries, +[Service Profile](/docs/features/service-profiles/) that configures retries, timeouts, and other policies. All this data is streamed to the proxy, so if anything changes—e.g. 
if a service is scaled up or down, or the Service Profile configuration is edited—the control plane will push the new state to the proxies @@ -280,8 +280,7 @@ each load balancing decision by picking the less loaded of two randomly-chosen available endpoints. Although it may seem counterintuitive, this has been [mathematically proven](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.57.4019&rep=rep1&type=pdf) to be at least as effective at scale as always picking the least loaded, and it -[avoids the problem of multiple load balancers all sending traffic to the least -loaded replica, overloading it](https://www.nginx.com/blog/nginx-power-of-two-choices-load-balancing-algorithm/). +[avoids the problem of multiple load balancers all sending traffic to the least loaded replica, overloading it](https://www.nginx.com/blog/nginx-power-of-two-choices-load-balancing-algorithm/). Also, this shortcut is significantly more efficient, making for a crucial difference in speed. diff --git a/linkerd.io/content/blog/2020/0902-the-roadmap-for-linkerd-proxy/index.md b/linkerd.io/content/blog/2020/0902-the-roadmap-for-linkerd-proxy/index.md index 745d327f5c..5fd1eca7f8 100644 --- a/linkerd.io/content/blog/2020/0902-the-roadmap-for-linkerd-proxy/index.md +++ b/linkerd.io/content/blog/2020/0902-the-roadmap-for-linkerd-proxy/index.md @@ -46,9 +46,9 @@ mesh use case, it's dramatically simpler to operate than general-purpose proxies such as NGINX and Envoy. This is not a knock on those proxies; it's simply a reflection that by shedding all the non-service-mesh use cases, Linkerd2-proxy gains the leeway to "just work" without tuning or tweaking, through features -like [protocol detection](/2/features/protocol-detection/) and -[Kubernetes-native service discovery](/2/features/load-balancing/). In fact, at -this point Linkerd-proxy doesn't even have a config file. 
+like [protocol detection](/docs/features/protocol-detection/) and +[Kubernetes-native service discovery](/docs/features/load-balancing/). In fact, +at this point Linkerd-proxy doesn't even have a config file. ## The future of the Linkerd2-proxy @@ -65,8 +65,8 @@ and well-contained to the big and hairy. These include: working on extend this to non-HTTP protocols, so that they have the same guarantees of workload identity and confidentiality that Linkerd provides for HTTP traffic today. As an added bonus, this feature will also extend - Linkerd's [seamless multi-cluster capabilities](/2/features/multicluster/) to - non-HTTP traffic. + Linkerd's [seamless multi-cluster capabilities](/docs/features/multicluster/) + to non-HTTP traffic. 2. **Revisiting latency bucketing**. As part of its instrumentation, Linkerd2-proxy records the latency of all traffic that passes through it and reports these values in a set of fixed buckets, with a specific latency range diff --git a/linkerd.io/content/blog/2020/1109-announcing-linkerd-2-9/index.md b/linkerd.io/content/blog/2020/1109-announcing-linkerd-2-9/index.md index 1116091dc5..ace1ec2103 100644 --- a/linkerd.io/content/blog/2020/1109-announcing-linkerd-2-9/index.md +++ b/linkerd.io/content/blog/2020/1109-announcing-linkerd-2-9/index.md @@ -160,9 +160,9 @@ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh ``` Using Helm? See our -[guide to installing Linkerd with Helm](/2/tasks/install-helm/). Upgrading from -a previous release? We've got you covered: see our -[Linkerd upgrade guide](/2/tasks/upgrade/) for how to use the linkerd upgrade +[guide to installing Linkerd with Helm](/docs/tasks/install-helm/). Upgrading +from a previous release? We've got you covered: see our +[Linkerd upgrade guide](/docs/tasks/upgrade/) for how to use the linkerd upgrade command. 
## Linkerd is for everyone diff --git a/linkerd.io/content/blog/2020/1203-why-linkerd-doesnt-use-envoy/index.md b/linkerd.io/content/blog/2020/1203-why-linkerd-doesnt-use-envoy/index.md index c9bbfa8df8..f74117bb40 100644 --- a/linkerd.io/content/blog/2020/1203-why-linkerd-doesnt-use-envoy/index.md +++ b/linkerd.io/content/blog/2020/1203-why-linkerd-doesnt-use-envoy/index.md @@ -260,7 +260,7 @@ libraries that power Linkerd. I never thought you'd ask. You can install Linkerd in about 5 minutes, including mutual TLS, with zero configuration required. Start with our -[getting started guide](/2/getting-started/). +[getting started guide](/docs/getting-started/). ## Linkerd is for everyone diff --git a/linkerd.io/content/blog/2021/0223-protocol-detection-and-opaque-ports/index.md b/linkerd.io/content/blog/2021/0223-protocol-detection-and-opaque-ports/index.md index 6a6b73437e..da856a7a9c 100644 --- a/linkerd.io/content/blog/2021/0223-protocol-detection-and-opaque-ports/index.md +++ b/linkerd.io/content/blog/2021/0223-protocol-detection-and-opaque-ports/index.md @@ -1,7 +1,7 @@ --- date: 2021-02-23T00:00:00Z title: Protocol Detection and Opaque Ports in Linkerd -keywords: [linkerd, proxy, "Protocol Detection"] +keywords: [linkerd, proxy, 'Protocol Detection'] params: author: charles showCover: true @@ -14,11 +14,11 @@ There have been quite a few questions from the Linkerd community in [GitHub](https://github.com/linkerd/linkerd2) about this feature, so this article focuses on one of the most important underlying features that enables Linkerd to perform this feat: -[Protocol Detection](/2/features/protocol-detection/). +[Protocol Detection](/docs/features/protocol-detection/). Protocol detection, as the name suggests, allows Linkerd to automatically detect the protocol in use in a TCP connection. 
One of -[Linkerd's design principles](/2/design-principles/) is to "just work," and +[Linkerd's design principles](/design-principles/) is to "just work," and protocol detection is an important part of how Linkerd achieves that goal. In this article, you'll learn what protocol detection is, how this subtle @@ -30,7 +30,7 @@ will bring to Linkerd. Put simply, _protocol detection is the ability to determine the protocol in use on a TCP connection by inspecting the traffic on the connection._ -![Protocol Detection](protocol_detection_base.png "Protocol Detection") +![Protocol Detection](protocol_detection_base.png 'Protocol Detection') Linkerd uses protocol detection to avoid asking the user to specify the protocol. Rather than requiring the user to configure what protocol each port @@ -124,7 +124,7 @@ port). Similarly, the annotation `config.linkerd.io/skip-inbound-ports: 3306` will write an iptables rule so that the proxy never handles MySQL traffic that is sent to it. -![Skip Ports Configuration](protocol_detection_skip.png "Skip Ports Configuration") +![Skip Ports Configuration](protocol_detection_skip.png 'Skip Ports Configuration') These options provide a workaround for the inability of protocol detection to handle server-speaks-first protocols. However, they have one significant @@ -140,7 +140,7 @@ performing protocol detection. While this approach still requires configuration, marking a port as opaque allows Linkerd to apply mTLS and report TCP-level metrics—a big improvement over skipping it entirely. 
-![Opaque Ports Configuration](protocol_detection_opaque.png "Opaque Ports Configuration") +![Opaque Ports Configuration](protocol_detection_opaque.png 'Opaque Ports Configuration') Linkerd 2.10 will also improve how protocol detection works by making it "fail open": if the protocol detection code sees no client-side bytes after 10 diff --git a/linkerd.io/content/blog/2021/0301-linkerd-210-and-extensions/index.md b/linkerd.io/content/blog/2021/0301-linkerd-210-and-extensions/index.md index 73f0b4c9bd..3913703288 100644 --- a/linkerd.io/content/blog/2021/0301-linkerd-210-and-extensions/index.md +++ b/linkerd.io/content/blog/2021/0301-linkerd-210-and-extensions/index.md @@ -66,9 +66,10 @@ our focus on simplicity. Thus far, we've tackled this in a somewhat ad-hoc manner, including a custom install flow for the multi-cluster components, a specialized -"[Bring Your Own Prometheus](/2/tasks/external-prometheus/)" feature, and so on. -Moving all this machinery to the extensions framework allows for consistency: -each of these feature extensions can now be treated exactly the same way. +"[Bring Your Own Prometheus](/docs/tasks/external-prometheus/)" feature, and so +on. Moving all this machinery to the extensions framework allows for +consistency: each of these feature extensions can now be treated exactly the +same way. Finally, we're excited about the idea of allowing features in Linkerd that feel just like the rest of Linkerd but don't require modifying the core project. 
diff --git a/linkerd.io/content/blog/2021/1228-linkerd-service-account-tokens/index.md b/linkerd.io/content/blog/2021/1228-linkerd-service-account-tokens/index.md
index d0a23ca7d5..725551b449 100644
--- a/linkerd.io/content/blog/2021/1228-linkerd-service-account-tokens/index.md
+++ b/linkerd.io/content/blog/2021/1228-linkerd-service-account-tokens/index.md
@@ -132,7 +132,7 @@ is even wired into Linkerd's metrics: whenever a meshed request is received or
being sent, the relevant metrics also include the service account with which
that peer was associated.

-Here is an example metric from the [emojivoto](/2/getting-started/) example:
+Here is an example metric from the [emojivoto](/docs/getting-started/) example:

```promql
request_total{..., client_id="web.emojivoto.serviceaccount.identity.linkerd.cluster.local", authority="emoji-svc.emojivoto.svc.cluster.local:8080", namespace="emojivoto", pod="emoji-696d9d8f95-5sj4j"} 14532
diff --git a/linkerd.io/content/blog/2021/1229-the-service-mesh-in-2022/index.md b/linkerd.io/content/blog/2021/1229-the-service-mesh-in-2022/index.md
index 5881875afa..62791dbd76 100644
--- a/linkerd.io/content/blog/2021/1229-the-service-mesh-in-2022/index.md
+++ b/linkerd.io/content/blog/2021/1229-the-service-mesh-in-2022/index.md
@@ -57,8 +57,8 @@ meshed pods. Client-side policy covers a vast set of features, including:

- ... and lots more.

Linkerd actually already has a basic form of client-side policy in the form of
-[ServiceProfiles](/2/reference/service-profiles/), which allow you (among other
-things) to control the retry behavior of callers to a service. In 2.12 and
+[ServiceProfiles](/docs/reference/service-profiles/), which allow you (among
+other things) to control the retry behavior of callers to a service. In 2.12 and
beyond, we'll be revisiting ServiceProfiles and tackling this important class
of features in a more systematic way.
diff --git a/linkerd.io/content/blog/2022/0309-linkerd-multi-cluster-failover/index.md b/linkerd.io/content/blog/2022/0309-linkerd-multi-cluster-failover/index.md index c944961923..c80f97ec22 100644 --- a/linkerd.io/content/blog/2022/0309-linkerd-multi-cluster-failover/index.md +++ b/linkerd.io/content/blog/2022/0309-linkerd-multi-cluster-failover/index.md @@ -18,7 +18,7 @@ boundaries separated by the open Internet. Implemented as a Kubernetes operator that can be added to an existing Linkerd deployment, the failover strategy can be applied to a single cluster but is particularly useful for multi-cluster deployments. Linkerd already provides -[powerful cross-cluster communication capabilities](/2/features/multicluster/) +[powerful cross-cluster communication capabilities](/docs/features/multicluster/) that work with any cluster topology, including multi-cloud and hybrid cloud; are completely transparent to the application; are zero-trust compatible; and do not introduce any single points of failure (SPOF) to the system. 
To this feature
@@ -34,10 +34,10 @@ rounds out Linkerd's existing reliability features, providing a complete
solution for ultra-high-reliability deployments that covers:

- Failure of individual nodes: handled via
-  [retries](/2/features/retries-and-timeouts/) and
-  [request balancing](/2/features/load-balancing/)
+  [retries](/docs/features/retries-and-timeouts/) and
+  [request balancing](/docs/features/load-balancing/)
- Failures due to bad code changes: (handled via
-  [canary deployments](/2/features/traffic-split/))
+  [canary deployments](/docs/features/traffic-split/))
- Failures due to service unavailability in general: handled with the failover
  operator
- Failures due to whole-cluster outages: handled with the failover operator
diff --git a/linkerd.io/content/blog/2023/0613-dynamic-request-routing-circuit-breaking/index.md b/linkerd.io/content/blog/2023/0613-dynamic-request-routing-circuit-breaking/index.md
index 4ea5a7bcfd..4789b0c4d4 100644
--- a/linkerd.io/content/blog/2023/0613-dynamic-request-routing-circuit-breaking/index.md
+++ b/linkerd.io/content/blog/2023/0613-dynamic-request-routing-circuit-breaking/index.md
@@ -148,8 +148,8 @@ spec:
  rules:
    - matches:
        - headers:
-            - name: "x-faces-user" # X-Faces-User: testuser goes to smiley2
-              value: "testuser"
+            - name: 'x-faces-user' # X-Faces-User: testuser goes to smiley2
+              value: 'testuser'
      backendRefs:
        - name: smiley2
          port: 80
@@ -165,8 +165,8 @@ mean that you need to be careful to propagate headers through the various
workloads of your application.

You can find more details about
-[dynamic request routing](/2/tasks/configuring-dynamic-request-routing/) in its
-documentation.
+[dynamic request routing](/docs/tasks/configuring-dynamic-request-routing/) in
+its documentation.
## Circuit Breaking @@ -228,8 +228,8 @@ Don’t ever wait more than 120 seconds between retries: balancer.linkerd.io/failure-accrual-consecutive-max-penalty: 120s ``` -More information on [circuit breaking](/2/tasks/circuit-breakers/) is available -in its documentation. +More information on [circuit breaking](/docs/tasks/circuit-breakers/) is +available in its documentation. ## Gotchas diff --git a/linkerd.io/content/blog/2023/0713-linkerd-in-production/index.md b/linkerd.io/content/blog/2023/0713-linkerd-in-production/index.md index 0f2fb0f6d2..596b415740 100644 --- a/linkerd.io/content/blog/2023/0713-linkerd-in-production/index.md +++ b/linkerd.io/content/blog/2023/0713-linkerd-in-production/index.md @@ -7,7 +7,7 @@ keywords: - linkerd - helm - production - - "high availability" + - 'high availability' - debug - debugging - alerts @@ -192,7 +192,7 @@ that. After you've vetted `values-ha.yaml`, you'll run `helm install` with the `-f path/to/your/values-ha.yaml` option. The -[Linkerd documentation on installing with Helm](/2/tasks/install-helm/) goes +[Linkerd documentation on installing with Helm](/docs/tasks/install-helm/) goes into much more detail here. #### Upgrades @@ -202,7 +202,7 @@ and always test in non-production environments. **Upgrade the control plane first** with `helm upgrade`, then gradually roll out data-plane upgrades by restarting workloads and allowing the control plane to inject the new version of the proxy. (There are more details on this process in the -[Linkerd upgrade documentation](/2/tasks/upgrade/)). +[Linkerd upgrade documentation](/docs/tasks/upgrade/)). Order matters here: doing the control plane first is always supported, as the data plane is designed to handle the temporary skew – but **don't skip major @@ -297,7 +297,7 @@ This gets a little complex: Last but not least, there's the Linkerd debug sidecar, which comes equipped with `tshark`, `tcpdump`, `lsof`, and `iproute2`. 
If your runtime allows it, it can be very useful for debugging: check out the
-[documentation on using the debug sidecar](/2/tasks/using-the-debug-container)
+[documentation on using the debug sidecar](/docs/tasks/using-the-debug-container/)
for the details here.

## Linkerd in Production
diff --git a/linkerd.io/content/blog/2023/0720-flat-networks/index.md b/linkerd.io/content/blog/2023/0720-flat-networks/index.md
index 7399f41b5b..db4b7324cf 100644
--- a/linkerd.io/content/blog/2023/0720-flat-networks/index.md
+++ b/linkerd.io/content/blog/2023/0720-flat-networks/index.md
@@ -33,11 +33,11 @@ boundaries that is:

1. **Fully secured**. This means that traffic between clusters is encrypted,
   authenticated, and authorized using mutual TLS, workload identities (not
   network identities!) and Linkerd's fine-grained,
-   [zero-trust authorization policies](/2/features/server-policy/).
+   [zero-trust authorization policies](/docs/features/server-policy/).
2. **Transparent to the application.** This means that the application is
   totally decoupled from cluster topology, which allows the operator to take
   advantage of powerful networking capabilities such as
-   [dynamically failover traffic to other clusters](/2/tasks/automatic-failover/).
+   [dynamically failover traffic to other clusters](/docs/tasks/automatic-failover/).
3. **Observable and reliable**. Linkerd's powerful L7 introspection and
   reliability mechanisms, including golden metrics, retries, timeouts,
   distributed tracing, circuit breaking, and more, are all available to
@@ -45,7 +45,7 @@ boundaries that is:

Linkerd has supported multi-cluster Kubernetes deployments since the release of
Linkerd 2.8 in 2020.
That release introduced -[a simple and elegant design](/2/features/multicluster/) that involves the +[a simple and elegant design](/docs/features/multicluster/) that involves the addition of a service mirror component to handle service discovery, and a multi-cluster gateway component to handle traffic from other clusters. diff --git a/linkerd.io/content/blog/2024/0206-linkerd-certificates-with-vault/index.md b/linkerd.io/content/blog/2024/0206-linkerd-certificates-with-vault/index.md index 2d70757fa6..6654588931 100644 --- a/linkerd.io/content/blog/2024/0206-linkerd-certificates-with-vault/index.md +++ b/linkerd.io/content/blog/2024/0206-linkerd-certificates-with-vault/index.md @@ -3,7 +3,7 @@ date: 2024-02-06T00:00:00Z slug: linkerd-certificates-with-vault title: |- Workshop Recap: Linkerd Certificate Management with Vault -keywords: [linkerd, "2.14", features, vault] +keywords: [linkerd, '2.14', features, vault] params: author: flynn showCover: true @@ -95,7 +95,7 @@ one network, but you do administration from a different network. You'll need several CLI tools for this: -- `linkerd`, from `/2/getting-started/`; +- `linkerd`, from `/docs/getting-started/`; - `kubectl`, from `https://kubernetes.io/docs/tasks/tools/`; - `helm`, from `https://helm.sh/docs/intro/quickstart/`; - `jq`, from `https://jqlang.github.io/jq/download/`; diff --git a/linkerd.io/content/blog/2024/0813-announcing-linkerd-2.16/index.md b/linkerd.io/content/blog/2024/0813-announcing-linkerd-2.16/index.md index 609b9f5c9b..cd13f43eaa 100644 --- a/linkerd.io/content/blog/2024/0813-announcing-linkerd-2.16/index.md +++ b/linkerd.io/content/blog/2024/0813-announcing-linkerd-2.16/index.md @@ -4,7 +4,7 @@ slug: announcing-linkerd-2.16 title: |- Announcing Linkerd 2.16! 
Metrics, retries, and timeouts for HTTP and gRPC routes; IPv6 support; policy audit mode; and lots more -keywords: [linkerd, "2.16", features] +keywords: [linkerd, '2.16', features] params: author: william showCover: true @@ -51,7 +51,7 @@ metadata: namespace: myns annotations: retry.linkerd.io/http: 5xx - retry.linkerd.io/limit: "2" + retry.linkerd.io/limit: '2' retry.linkerd.io/timeout: 300ms spec: parentRefs: @@ -63,14 +63,14 @@ spec: - matches: - path: type: PathPrefix - value: "/foo/" + value: '/foo/' ``` In short, Linkerd's new implementation of per-route metrics, retries, and timeouts are now provided in a principled, future-proof way that is composable with existing features such as circuit breaking, and configured using the Gateway API resources that we believe are the future of service mesh -configuration. [Learn more](/2/features/retries-and-timeouts/). +configuration. [Learn more](/docs/features/retries-and-timeouts/). ## Audit mode for security policies @@ -119,7 +119,7 @@ default. Linkerd 2.16 adds support for IPv6 on IPv6-only and dual-stack clusters. (When enabled on dual stack clusters, Linkerd will only use IPv6 endpoints.) For backwards compatibility, this feature is disabled by default, but enabling it is -a simple boolean. [Learn more](/2/features/ipv6/). +a simple boolean. [Learn more](/docs/features/ipv6/). ## Other noteworthy changes diff --git a/linkerd.io/content/blog/2024/1127-edge-release-roundup/index.md b/linkerd.io/content/blog/2024/1127-edge-release-roundup/index.md index 412fdd1bea..ad83188551 100644 --- a/linkerd.io/content/blog/2024/1127-edge-release-roundup/index.md +++ b/linkerd.io/content/blog/2024/1127-edge-release-roundup/index.md @@ -146,7 +146,7 @@ Installing the latest edge release needs just a single command. curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install-edge | sh ``` -You can also [install edge releases with Helm](/2/tasks/install-helm/). 
+You can also [install edge releases with Helm](/docs/tasks/install-helm/). ## Linkerd is for everyone diff --git a/linkerd.io/content/blog/2024/1205-announcing-linkerd-2.17/index.md b/linkerd.io/content/blog/2024/1205-announcing-linkerd-2.17/index.md index d736af9564..ae9980ee1e 100644 --- a/linkerd.io/content/blog/2024/1205-announcing-linkerd-2.17/index.md +++ b/linkerd.io/content/blog/2024/1205-announcing-linkerd-2.17/index.md @@ -3,7 +3,7 @@ date: 2024-12-05T00:00:00Z slug: announcing-linkerd-2.17 title: |- Announcing Linkerd 2.17: Egress, Rate Limiting, and Federated Services -keywords: [linkerd, "2.17", features] +keywords: [linkerd, '2.17', features] params: author: william showCover: true @@ -68,8 +68,8 @@ See [egress docs](/2.17/features/egress/) for more. Rate limiting is a reliability mechanism that protects services from being overloaded. In contrast to -[Linkerd's circuit breaking feature](/2/reference/circuit-breaking/), which is -client-side behavior designed to protect clients from failing services, rate +[Linkerd's circuit breaking feature](/docs/reference/circuit-breaking/), which +is client-side behavior designed to protect clients from failing services, rate limiting is server-side behavior: it is enforced by the service receiving the traffic and designed to protect it from misbehaving clients. @@ -131,7 +131,7 @@ Linkerd 2.8, was designed for the ad-hoc, pair-to-pair connectivity that was common at the time. However, modern Kubernetes platforms are often much more intentional in their multicluster usage, sometimes ranging into the hundreds or thousands of clusters. Federated services join features such as -[flat network / pod-to-pod multicluster](/2/tasks/pod-to-pod-multicluster/) +[flat network / pod-to-pod multicluster](/docs/tasks/pod-to-pod-multicluster/) (introduced in Linkerd 2.14) in the toolbox for this new class of Kubernetes adoption. 
diff --git a/linkerd.io/content/blog/2025/0110-edge-release-roundup/index.md b/linkerd.io/content/blog/2025/0110-edge-release-roundup/index.md index 21a806252e..691765617c 100644 --- a/linkerd.io/content/blog/2025/0110-edge-release-roundup/index.md +++ b/linkerd.io/content/blog/2025/0110-edge-release-roundup/index.md @@ -153,7 +153,7 @@ Installing the latest edge release needs just a single command. curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install-edge | sh ``` -You can also [install edge releases with Helm](/2/tasks/install-helm/). +You can also [install edge releases with Helm](/docs/tasks/install-helm/). ## Linkerd is for everyone diff --git a/linkerd.io/content/blog/2025/0411-edge-release-roundup/index.md b/linkerd.io/content/blog/2025/0411-edge-release-roundup/index.md index 79394aa386..6dfa3dc701 100644 --- a/linkerd.io/content/blog/2025/0411-edge-release-roundup/index.md +++ b/linkerd.io/content/blog/2025/0411-edge-release-roundup/index.md @@ -78,7 +78,6 @@ Linkerd 2.18: Therefore, starting with [edge-25.3.1], Linkerd will not install Gateway API CRDs for you unless you specifically ask for them and they're not already on your cluster. - - **If you are upgrading** from an earlier version of Linkerd, you shouldn't need to take any action -- Linkerd can tell that you have the CRDs installed and do the right thing for you. @@ -337,7 +336,7 @@ Installing the latest edge release needs just a single command. curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install-edge | sh ``` -You can also [install edge releases with Helm](/2/tasks/install-helm/). +You can also [install edge releases with Helm](/docs/tasks/install-helm/). 
## Linkerd is for everyone diff --git a/linkerd.io/content/blog/2025/0423-announcing-linkerd-2.18/index.md b/linkerd.io/content/blog/2025/0423-announcing-linkerd-2.18/index.md index 7f500d472e..bf26adb3f2 100644 --- a/linkerd.io/content/blog/2025/0423-announcing-linkerd-2.18/index.md +++ b/linkerd.io/content/blog/2025/0423-announcing-linkerd-2.18/index.md @@ -3,7 +3,7 @@ date: 2025-04-23T00:00:00Z slug: announcing-linkerd-2.18 title: |- Announcing Linkerd 2.18: Battlescars, lessons learned, and preliminary Windows support -keywords: [linkerd, "2.18", features] +keywords: [linkerd, '2.18', features] params: author: william showCover: true @@ -89,7 +89,7 @@ hundreds or thousands of clusters, and will typically manage their infrastructure via GitOps. In Linkerd 2.14, we took a first step to supporting new multicluster patterns by -introducing [pod-to-pod multicluster](/2/tasks/pod-to-pod-multicluster/) for +introducing [pod-to-pod multicluster](/docs/tasks/pod-to-pod-multicluster/) for platforms with flat networks. In Linkerd 2.17, we introduced federated services, a new model for services that span many clusters. In Linkerd 2.18, we've further improved multicluster by allowing the creation of all Link resources in a @@ -111,7 +111,7 @@ Linkerd 2.18 will be the last release that installs Gateway API types by default. In this release, we've bumped the installed versions of these types; we've added support for Gateway API version 1.2.1, the latest available; and we've improved our documentation with recommendations for how users should -[manage the Gateway API](/2/features/gateway-api/#managing-the-gateway-api) +[manage the Gateway API](/docs/features/gateway-api/#managing-the-gateway-api) across Linkerd and other projects. 
### Experimental Windows build diff --git a/linkerd.io/content/blog/2025/0424-linkerd-benchmarks/index.md b/linkerd.io/content/blog/2025/0424-linkerd-benchmarks/index.md index 4d0be4713c..a39f0cc303 100644 --- a/linkerd.io/content/blog/2025/0424-linkerd-benchmarks/index.md +++ b/linkerd.io/content/blog/2025/0424-linkerd-benchmarks/index.md @@ -27,7 +27,7 @@ are a good option for some use cases, sidecars are here to stay. Zero trust security is more important than ever and has been one of the major topics of the past year. We can now mesh our non-Kubernetes workloads running on VMs and even bare metal (shoutout to Linkerd’s -[mesh expansion](/2/tasks/adding-non-kubernetes-workloads/) feature). The +[mesh expansion](/docs/tasks/adding-non-kubernetes-workloads/) feature). The service mesh market has a 41.3% compound annual growth rate[^1], which continues to drive innovation and development in the service mesh domain. Most importantly, service mesh adoption is now at an all-time high, with 70% of the @@ -94,7 +94,7 @@ Linkerd consistently showed better performance, especially at 200 and 2000 RPS, even outperforming Ambient in some cases. While Ambient showed strong performance, Linkerd maintained a lead. As shown in the results above, the performance gap has narrowed compared to 2021, making additional features—such -as [Multi-cluster Federated Services](/2/tasks/federated-services/), +as [Multi-cluster Federated Services](/docs/tasks/federated-services/), [FIPS compliance](https://www.buoyant.io/linkerd-enterprise), and the upcoming Windows support in Linkerd’s case—increasingly important for gaining an advantage over other service meshes. 
diff --git a/linkerd.io/content/blog/2025/0604-edge-release-roundup/index.md b/linkerd.io/content/blog/2025/0604-edge-release-roundup/index.md index 31a5e8de2b..b514368e66 100644 --- a/linkerd.io/content/blog/2025/0604-edge-release-roundup/index.md +++ b/linkerd.io/content/blog/2025/0604-edge-release-roundup/index.md @@ -69,7 +69,7 @@ default port for tracing is the OpenTelemetry port (4317) rather than the OpenCensus port (55678). [April 2025 Roundup]: /2025/04/11/linkerd-edge-release-roundup/ -[Linkerd Gateway API documentation]: /2/features/gateway-api/ +[Linkerd Gateway API documentation]: /docs/features/gateway-api/ [edge-25.4.1]: https://github.com/linkerd/linkerd2/releases/tag/edge-25.4.1 [edge-25.4.3]: https://github.com/linkerd/linkerd2/releases/tag/edge-25.4.3 [edge-25.4.4]: https://github.com/linkerd/linkerd2/releases/tag/edge-25.4.4 @@ -147,7 +147,7 @@ Installing the latest edge release needs just a single command. curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install-edge | sh ``` -You can also [install edge releases with Helm](/2/tasks/install-helm/). +You can also [install edge releases with Helm](/docs/tasks/install-helm/). ## Linkerd is for everyone diff --git a/linkerd.io/content/blog/2025/0725-tilt-linkerd-nginx-part-2/index.md b/linkerd.io/content/blog/2025/0725-tilt-linkerd-nginx-part-2/index.md index dc79544aa8..f8c8fb10c5 100644 --- a/linkerd.io/content/blog/2025/0725-tilt-linkerd-nginx-part-2/index.md +++ b/linkerd.io/content/blog/2025/0725-tilt-linkerd-nginx-part-2/index.md @@ -258,7 +258,7 @@ To enable server-level authorization for `foo > baz` communication, we first need to identify the gRPC routes on the baz service. This requires creating both inbound and outbound route definitions. For a detailed explanation of the differences between these route types, see the -[official documentation](/2/reference/grpcroute/). +[official documentation](/docs/reference/grpcroute/). 
##### Outbound @@ -311,7 +311,7 @@ metadata: namespace: default spec: identities: - - "foo.default.serviceaccount.identity.linkerd.cluster.local" + - 'foo.default.serviceaccount.identity.linkerd.cluster.local' --- apiVersion: policy.linkerd.io/v1alpha1 kind: AuthorizationPolicy @@ -354,7 +354,7 @@ metadata: namespace: default spec: identities: - - "bar.default.serviceaccount.identity.linkerd.cluster.local" + - 'bar.default.serviceaccount.identity.linkerd.cluster.local' --- apiVersion: policy.linkerd.io/v1alpha1 kind: AuthorizationPolicy diff --git a/linkerd.io/content/blog/2025/0801-edge-release-roundup/index.md b/linkerd.io/content/blog/2025/0801-edge-release-roundup/index.md index 6dbd4b7716..6c57f091b4 100644 --- a/linkerd.io/content/blog/2025/0801-edge-release-roundup/index.md +++ b/linkerd.io/content/blog/2025/0801-edge-release-roundup/index.md @@ -73,7 +73,6 @@ As always, we have a couple of breaking changes to note: - Also in [edge-25.6.3], we updated the port names used in many Linkerd deployments to avoid Kubernetes 1.33's new warnings if port names aren't unique across all containers in the same pod: - - In `destination`, `grpc` becomes `dest-grpc` and `admin-http` becomes `dest-admin` - In `sp-validator`, `admin-http` becomes `spval-admin` @@ -227,7 +226,7 @@ Installing the latest edge release needs just a single command. curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install-edge | sh ``` -You can also [install edge releases with Helm](/2/tasks/install-helm/). +You can also [install edge releases with Helm](/docs/tasks/install-helm/). 
## Linkerd is for everyone diff --git a/linkerd.io/content/blog/2025/0905-edge-release-roundup/index.md b/linkerd.io/content/blog/2025/0905-edge-release-roundup/index.md index 42235e95b3..a199c9d7c3 100644 --- a/linkerd.io/content/blog/2025/0905-edge-release-roundup/index.md +++ b/linkerd.io/content/blog/2025/0905-edge-release-roundup/index.md @@ -53,7 +53,6 @@ As always, we have a couple of breaking changes to note: default, rather than with the older `iptables-legacy`. If your nodes don't support `iptables-nft`, you can revert to using `iptables-legacy` by setting the `iptablesMode` value: - - If you're using the init container, set `proxyInit.iptablesMode: legacy` in the `linkerd2-control-plane` chart. @@ -125,7 +124,7 @@ Installing the latest edge release needs just a single command. curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install-edge | sh ``` -You can also [install edge releases with Helm](/2/tasks/install-helm/). +You can also [install edge releases with Helm](/docs/tasks/install-helm/). ## Linkerd is for everyone diff --git a/linkerd.io/content/blog/2025/0909-linkerd-with-opentelemetry/index.md b/linkerd.io/content/blog/2025/0909-linkerd-with-opentelemetry/index.md index 2103bfc532..9278fdb2af 100644 --- a/linkerd.io/content/blog/2025/0909-linkerd-with-opentelemetry/index.md +++ b/linkerd.io/content/blog/2025/0909-linkerd-with-opentelemetry/index.md @@ -121,8 +121,8 @@ And you're in! ## Installing Linkerd Installing Linkerd is pretty straightforward. Just follow the -[Getting Started Guide](/2/getting-started/), and you'll have Linkerd running in -~2–5 minutes; you can skip the linkerd-viz part—we won't need that. +[Getting Started Guide](/docs/getting-started/), and you'll have Linkerd running +in ~2–5 minutes; you can skip the linkerd-viz part—we won't need that. 
Next, inject Linkerd into your workloads (Deployments) by running (use the next command to inject it into an entire namespace): @@ -190,9 +190,9 @@ kind: ClusterRole metadata: name: otel-collector-read rules: - - apiGroups: [""] - resources: ["pods", "endpoints", "services", "namespaces", "nodes"] - verbs: ["get", "list", "watch"] + - apiGroups: [''] + resources: ['pods', 'endpoints', 'services', 'namespaces', 'nodes'] + verbs: ['get', 'list', 'watch'] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding @@ -363,7 +363,7 @@ spec: containers: - name: otel-collector image: otel/opentelemetry-collector:latest - args: ["--config=/etc/otelcol/config.yaml"] + args: ['--config=/etc/otelcol/config.yaml'] volumeMounts: - name: otel-config mountPath: /etc/otelcol @@ -394,7 +394,7 @@ exporters: otlp/openobserve: endpoint: openobserve-openobserve-standalone.openobserve.svc.cluster.local:5081 headers: - Authorization: "Basic cm9vdEBleGFtcGxlLmNvbTpDb21wbGV4cGFzcyMxMjM=" + Authorization: 'Basic cm9vdEBleGFtcGxlLmNvbTpDb21wbGV4cGFzcyMxMjM=' organization: default stream-name: default tls: @@ -406,7 +406,7 @@ exporters: The token: ```yaml -Authorization: "Basic cm9vdEBleGFtcGxlLmNvbTpDb21wbGV4cGFzcyMxMjM=" +Authorization: 'Basic cm9vdEBleGFtcGxlLmNvbTpDb21wbGV4cGFzcyMxMjM=' ``` Is just base64 encoding of the default username and password we used above. @@ -438,10 +438,10 @@ how you can get Linkerd's metrics and ingest them in your favorite solution. ## Bonus: Linkerd's Auth policy Linkerd comes with a handy feature called -[Authorization Policies](/2/reference/authorization-policy/). With it, you can -enable or prevent workloads from talking to each other (think firewall) based on -different parameters. For example, which services, meshed or unmeshed, on the -same or separate clusters, which port is being used, etc.? +[Authorization Policies](/docs/reference/authorization-policy/). 
With it, you +can enable or prevent workloads from talking to each other (think firewall) +based on different parameters. For example, which services, meshed or unmeshed, +on the same or separate clusters, which port is being used, etc.? It's worth noting that right now, all the data is visible to anyone in the cluster, which isn't great in many cases. In production, you might want to diff --git a/linkerd.io/content/blog/2025/1020-hands-off-linkerd-certificate-rotation/index.md b/linkerd.io/content/blog/2025/1020-hands-off-linkerd-certificate-rotation/index.md index f315ed26cb..ca02753772 100644 --- a/linkerd.io/content/blog/2025/1020-hands-off-linkerd-certificate-rotation/index.md +++ b/linkerd.io/content/blog/2025/1020-hands-off-linkerd-certificate-rotation/index.md @@ -1,7 +1,7 @@ --- date: 2025-10-20T00:00:00Z title: Hands off Linkerd certificate rotation -keywords: [linkerd, "Cert Manager", automation] +keywords: [linkerd, 'Cert Manager', automation] params: author: name: Matthew McLane @@ -27,7 +27,7 @@ service mesh, and I’d like to take you along for the ride. Linkerd largely manages its own certificates, but it needs a trusted foundation: a root anchor and an identity issuer certificate. Linkerd’s own documentation on -**“[Automatically Rotating Control Plane TLS Credentials](/2/tasks/automatically-rotating-control-plane-tls-credentials/)”** +**“[Automatically Rotating Control Plane TLS Credentials](/docs/tasks/automatically-rotating-control-plane-tls-credentials/)”** explains this in detail. My goal was to build a completely automated solution for our clusters, bypassing the need for manual `kubectl` commands. I wanted to leverage our existing ArgoCD infrastructure to handle everything, including @@ -38,7 +38,7 @@ intervention. The first step in my solution was to create a simple **Helm chart** to lay down the required certificates. 
Following the
-[documentation](/2/tasks/automatically-rotating-control-plane-tls-credentials/),
+[documentation](/docs/tasks/automatically-rotating-control-plane-tls-credentials/),
this chart creates three key certificates in the namespace using cert-manager:
`linkerd-trust-root-issuer`, `linkerd-trust-anchor`, and
`linkerd-identity-issuer`.
@@ -66,7 +66,7 @@ As stated in the documentation:

I really didn’t want to rely on anything with manual intervention. The solution
to this problem was fairly simple to work out. All the heavy lifting was
provided in the
-[documentation](/2/tasks/automatically-rotating-control-plane-tls-credentials/)!
+[documentation](/docs/tasks/automatically-rotating-control-plane-tls-credentials/)!

First I started by creating a set of shell scripts. The first is a script to
rotate the certificates:
diff --git a/linkerd.io/content/blog/2025/1208-edge-release-roundup/index.md b/linkerd.io/content/blog/2025/1208-edge-release-roundup/index.md
index cc39ee8669..6325bc34d2 100644
--- a/linkerd.io/content/blog/2025/1208-edge-release-roundup/index.md
+++ b/linkerd.io/content/blog/2025/1208-edge-release-roundup/index.md
@@ -13,22 +13,20 @@ params:
  images: [social.jpg] # Open graph image
---

-Welcome to the excessively-large December 2025 Edge Release Roundup
-posts, where we dive into the most recent edge releases to help keep
-everyone up to date on the latest and greatest! This post covers edge
-releases from September through November 2025 (the runup to KubeCon was
-hectic around here).
+Welcome to the excessively-large December 2025 Edge Release Roundup post, where
+we dive into the most recent edge releases to help keep everyone up to date on
+the latest and greatest! This post covers edge releases from September through
+November 2025 (the run-up to KubeCon was hectic around here).
## How to give feedback Edge releases are a snapshot of our current development work on `main`; by definition, they always have the most recent features but they may have incomplete features, features that end up getting rolled back later, or (like -all software) even bugs. That said, edge releases _are_ intended for -production use, and go through a rigorous set of automated and manual tests -before being released. Once released, we also document whether the release is -recommended for broad use -- and when needed, we go back and update the -recommendations. +all software) even bugs. That said, edge releases _are_ intended for production +use, and go through a rigorous set of automated and manual tests before being +released. Once released, we also document whether the release is recommended for +broad use -- and when needed, we go back and update the recommendations. We would be delighted to hear how these releases work out for you! You can open [a GitHub issue](https://github.com/linkerd/linkerd2/issues/) or @@ -39,59 +37,54 @@ reach us. ## Recommendations and breaking changes -**Spoiler alert**: if you're looking at edge releases from the latter -chunk of 2025, we recommend you skip straight to [edge-25.11.3] to take -full advantage of fixes along the way. However, any RECOMMENDED release -is fair game. +**Spoiler alert**: if you're looking at edge releases from the latter chunk of +2025, we recommend you skip straight to [edge-25.11.3] to take full advantage of +fixes along the way. However, any RECOMMENDED release is fair game. As usual, we have some breaking changes to call out: -* As of [edge-25.11.3], the `proxy-init` image has been merged with the - `proxy` image to simplify image management. Since we no longer ship a - separate `proxy-init` image, if you were explicitly referencing that - image you'll need to update your references to use the `proxy` image - instead. 
+- As of [edge-25.11.3], the `proxy-init` image has been merged with the `proxy` + image to simplify image management. Since we no longer ship a separate + `proxy-init` image, if you were explicitly referencing that image you'll need + to update your references to use the `proxy` image instead. -* As of [edge-25.10.3], the `ip_port_subscribers` metric has been removed - and replaced with the lower-cardinality `workload_subscribers` metric. - This change is intended to reduce the cardinality of metrics and - improve performance. +- As of [edge-25.10.3], the `ip_port_subscribers` metric has been removed and + replaced with the lower-cardinality `workload_subscribers` metric. This change + is intended to reduce the cardinality of metrics and improve performance. -* As of [edge-25.10.2], support for the (long-deprecated) OpenCensus - trace protocol has been removed. The OpenTelemetry protocol is now the - only supported tracing protocol. +- As of [edge-25.10.2], support for the (long-deprecated) OpenCensus trace + protocol has been removed. The OpenTelemetry protocol is now the only + supported tracing protocol. -* As of [edge-25.10.1], the `linkerd-jaeger` extension has been removed: +- As of [edge-25.10.1], the `linkerd-jaeger` extension has been removed: instead, Linkerd supports directly configuring OpenTelemetry tracing by setting `controller.tracing.enable` and `controller.tracing.collector.endpoint` when installing Linkerd. -* As of [edge-25.9.4], the `linkerd-crds` Helm chart will no longer - install the Gateway API CRDs by default. **This change may require - attention when upgrading**; see below for details. +- As of [edge-25.9.4], the `linkerd-crds` Helm chart will no longer install the + Gateway API CRDs by default. **This change may require attention when + upgrading**; see below for details. 
-Also as of [edge-25.10.2], native sidecar support moves to beta, adding -the `config.beta.linkerd.io/proxy-enable-native-sidecar` annotation and -deprecating the alpha annotation (although that will continue to -function). +Also as of [edge-25.10.2], native sidecar support moves to beta, adding the +`config.beta.linkerd.io/proxy-enable-native-sidecar` annotation and deprecating +the alpha annotation (although that will continue to function). ### Gateway API and Upgrading -As of [edge-25.9.4], the `linkerd-crds` Helm chart will no longer install -the Gateway API CRDs by default. To force the chart to install the -Gateway API CRDs, set `installGatewayAPI=true` when installing the chart. +As of [edge-25.9.4], the `linkerd-crds` Helm chart will no longer install the +Gateway API CRDs by default. To force the chart to install the Gateway API CRDs, +set `installGatewayAPI=true` when installing the chart. -If you're upgrading from a previous release of Linkerd, and you -originally used the `linkerd-crds` chart to install the Gateway API CRDs, -you _may_ need to take extra action: +If you're upgrading from a previous release of Linkerd, and you originally used +the `linkerd-crds` chart to install the Gateway API CRDs, you _may_ need to take +extra action: -* If you're already running Linkerd 2.18/edge-25.4.4 or higher, you're - good to go. The Gateway API CRDs that you originally installed with Helm - will stay on the cluster when you do the upgrade. +- If you're already running Linkerd 2.18/edge-25.4.4 or higher, you're good to + go. The Gateway API CRDs that you originally installed with Helm will stay on + the cluster when you do the upgrade. -* If you're running something older, you'll need to set `--reuse-values` - when upgrading, to make sure that the existing Gateway API CRDs stay - installed. +- If you're running something older, you'll need to set `--reuse-values` when + upgrading, to make sure that the existing Gateway API CRDs stay installed. 
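The two upgrade paths above might look like the following sketch. The chart and release names are the common defaults and are assumptions here; only the `installGatewayAPI=true` value and the `--reuse-values` flag come directly from the notes above:

```bash
# Sketch of the two paths described above; release/chart names are the
# common defaults and may differ in your setup.

# Fresh install, or upgrade where you want Helm to keep managing the
# Gateway API CRDs (edge-25.9.4+ no longer installs them by default):
helm upgrade --install linkerd-crds linkerd/linkerd-crds \
  --namespace linkerd --create-namespace \
  --set installGatewayAPI=true

# Upgrading from something older than 2.18/edge-25.4.4: keep the existing
# values (and so the existing Gateway API CRDs) with --reuse-values:
helm upgrade linkerd-crds linkerd/linkerd-crds \
  --namespace linkerd --reuse-values
```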
[edge-25.11.3]: https://github.com/linkerd/linkerd2/releases/tag/edge-25.11.3 [edge-25.10.7]: https://github.com/linkerd/linkerd2/releases/tag/edge-25.10.7 @@ -108,73 +101,68 @@ list here. You can find them in the full release notes for each release. ### edge-25.11.3 (November 27, 2025) -This release merges the `proxy-init` image into the `proxy` image, -simplifying image management and reducing overall image size -- make sure -to update any explicit references to the `proxy-init` image. It also -correctly honors the `timeouts.request` value of HTTPRoutes in the -`gateway.networking.k8s.io` API group, to match the behavior of -`policy.linkerd.io` HTTPRoutes. +This release merges the `proxy-init` image into the `proxy` image, simplifying +image management and reducing overall image size -- make sure to update any +explicit references to the `proxy-init` image. It also correctly honors the +`timeouts.request` value of HTTPRoutes in the `gateway.networking.k8s.io` API +group, to match the behavior of `policy.linkerd.io` HTTPRoutes. ### edge-25.11.2 (November 20, 2025) _This release is **not recommended**; use [edge-25.11.3] instead._ -This release fixes broken documentation URLs in CLI commands (thanks, -[beza]!) and corrects a typo in the EgressNetwork and ExternalWorkload -CRD definitions (thanks, [YY]!). Unfortunately, it also fails to -correctly handle SPIRE when using mesh expansion; we recommend using -[edge-25.11.3] instead. +This release fixes broken documentation URLs in CLI commands (thanks, [beza]!) +and corrects a typo in the EgressNetwork and ExternalWorkload CRD definitions +(thanks, [YY]!). Unfortunately, it also fails to correctly handle SPIRE when +using mesh expansion; we recommend using [edge-25.11.3] instead. 
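The `timeouts.request` fix called out under [edge-25.11.3] above applies to HTTPRoutes in the `gateway.networking.k8s.io` API group, for example a route like this hypothetical one (all names and the namespace are illustrative):

```bash
# Hypothetical HTTPRoute exercising the `timeouts.request` field that
# edge-25.11.3 now honors for gateway.networking.k8s.io HTTPRoutes,
# matching the existing policy.linkerd.io behavior.
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
  namespace: default
spec:
  parentRefs:
    - name: example-svc
      kind: Service
      group: core
      port: 8080
  rules:
    - timeouts:
        request: 5s
EOF
```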
[beza]: https://github.com/bezarsnba [YY]: https://github.com/yy-ofms ### edge-25.11.1 (November 06, 2025) -This release correctly includes pod metadata in OpenTelemetry traces, -fixes the `workload_subscribers` metric to correctly track the total -number of subscribers across all IP and port combinations (rather than -only the most recent combination), and prevents a task leak when using -federated services. +This release correctly includes pod metadata in OpenTelemetry traces, fixes the +`workload_subscribers` metric to correctly track the total number of subscribers +across all IP and port combinations (rather than only the most recent +combination), and prevents a task leak when using federated services. ### edge-25.10.7 (October 29, 2025) -This release includes more guardrails for tracing configuration in the -Helm chart, notably including fixing a possible crash if the tracing -collector endpoint is not set. +This release includes more guardrails for tracing configuration in the Helm +chart, notably including fixing a possible crash if the tracing collector +endpoint is not set. ### edge-25.10.6 (October 23, 2025) _This release is **not recommended**; use [edge-25.10.7] instead, or just go straight to [edge-25.11.3]._ -This release includes more guardrails for tracing configuration in the -Helm chart, but we recommend skipping it in favor of [edge-25.10.7] or -[edge-25.11.3] to avoid a possible crash if the tracing collector -endpoint is not set. +This release includes more guardrails for tracing configuration in the Helm +chart, but we recommend skipping it in favor of [edge-25.10.7] or [edge-25.11.3] +to avoid a possible crash if the tracing collector endpoint is not set. 
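To see the renamed gauge described above on a running cluster, something like the following should work; this assumes the `linkerd diagnostics controller-metrics` subcommand is available in your CLI version:

```bash
# Sketch: dump control-plane metrics and look for the lower-cardinality
# gauge that replaced ip_port_subscribers. Requires a running Linkerd
# control plane and the linkerd CLI.
linkerd diagnostics controller-metrics | grep workload_subscribers
```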
### edge-25.10.5 (October 20, 2025) _This release is **not recommended**; use [edge-25.10.7] instead, or just go straight to [edge-25.11.3]._ -This release includes more guardrails for tracing configuration in the -Helm chart, but we recommend skipping it in favor of [edge-25.10.7] or -[edge-25.11.3] to avoid a possible crash if the tracing collector -endpoint is not set. +This release includes more guardrails for tracing configuration in the Helm +chart, but we recommend skipping it in favor of [edge-25.10.7] or [edge-25.11.3] +to avoid a possible crash if the tracing collector endpoint is not set. ### edge-25.10.4 (October 16, 2025) _This release is **not recommended**; use [edge-25.10.7] instead, or just go straight to [edge-25.11.3]._ -This release prevents a possible crash when tracing isn't configured at -all, but we recommend skipping it in favor of [edge-25.10.7] or -[edge-25.11.3] to avoid a possible crash if the tracing is configured but -the tracing collector endpoint is not set. +This release prevents a possible crash when tracing isn't configured at all, but +we recommend skipping it in favor of [edge-25.10.7] or [edge-25.11.3] to avoid a +possible crash if the tracing is configured but the tracing collector endpoint +is not set. It also adds semantic convention labels `user-agent.original`, -`http.request.header.content-length`, `http.request.header.content-type`, -and `http.request.header.l5d-orig-proto` to OpenTelemetry spans. +`http.request.header.content-length`, `http.request.header.content-type`, and +`http.request.header.l5d-orig-proto` to OpenTelemetry spans. ### edge-25.10.3 (October 13, 2025) @@ -183,79 +171,75 @@ straight to [edge-25.11.3]._ This release removes the `linkerd.io/proxy-root-parent` and `linkerd.io/proxy-root-parent-kind` labels added to injected pods in -[edge-25.10.2], and also fixes a potential deadlock that could result in -leaked tasks and unneeded memory consumption. 
However, we recommend -skipping it in favor of [edge-25.10.7] or [edge-25.11.3] to avoid a -possible crash if the tracing collector endpoint is not set. - -This release includes one breaking change: it removes the -`ip_port_subscribers` metric and replaces it with the lower-cardinality -`workload_subscribers` metric, and it also allows configuring the -destination controller's stream queue capacity. The default remains 100 -for the moment; lower values may be better for improving responsiveness -to readiness and liveness issues. +[edge-25.10.2], and also fixes a potential deadlock that could result in leaked +tasks and unneeded memory consumption. However, we recommend skipping it in +favor of [edge-25.10.7] or [edge-25.11.3] to avoid a possible crash if the +tracing collector endpoint is not set. + +This release includes one breaking change: it removes the `ip_port_subscribers` +metric and replaces it with the lower-cardinality `workload_subscribers` metric, +and it also allows configuring the destination controller's stream queue +capacity. The default remains 100 for the moment; lower values may be better for +improving responsiveness to readiness and liveness issues. ### edge-25.10.2 (October 09, 2025) _This release is **not recommended**; use [edge-25.10.7] instead, or just go straight to [edge-25.11.3]._ -This release drops support for the (long-deprecated) OpenCensus trace -protocol. Additionally, it adds the `linkerd.io/proxy-root-parent` and -`linkerd.io/proxy-root-parent-kind` labels to injected pods, but this -change was reverted in edge-25.10.3 due to unforeseen issues. We -recommend skipping this release and going straight to [edge-25.10.7] or -[edge-25.11.3]. - -Additionally, this release adds support for setting OpenTelemetry tracing -values via `resource.opentelemetry.io/