38 changes: 18 additions & 20 deletions linkerd.io/README.md
@@ -153,8 +153,8 @@ list page. If a cover image is not present, or you would like to use a different
image than the cover image, you can name it `feature` and place it in the blog
post folder.

**Note:** When a blog post is featured on the blog listing it will be
cropped into a 4x1 ratio.
**Note:** When a blog post is featured on the blog listing it will be cropped
into a 4x1 ratio.

#### Thumbnail images

@@ -289,31 +289,29 @@ bash {class=disable-copy}
### Creating a new major version

To create a new major version for the Linkerd docs, follow the steps below. As
an example we suppose the latest major is `2.18` and we'd like to create docs
for the upcoming `2.19` version, that will appear at `https://linkerd.io/2.19`.
an example we suppose the latest major is `2.19` and we'd like to create docs
for the upcoming `2.20` version.

- Clone the `https://github.com/linkerd/website` repo
- Create a new branch `yourusername/2.19`
- Create a new branch `yourusername/2.20`
- Update the latest version in `linkerd.io/config/_default/params.yaml`:
`latestMajorVersion: "2.19"`
- Update the `docs` menu in `linkerd.io/config/_default/menu.yaml` to include a
menu item for `2.19`.
`latestMajorVersion: "2.20"`
- Update the `docs` menu in `linkerd.io/config/_default/menu.yaml` changing the
2.19 `pageRef` to `/2.19/`. Then include a new menu item for `2.20` with
`pageRef` set to `/docs/`.
- Make sure all the links in the edge version (`2-edge`) are relative and don't
have the version hard-coded. E.g. `(/../cli/install/#)` instead of
`(/2-edge/reference/cli/install/#)`.
- Add a row to the Supported Kubernetes Versions table for `2.19` in
- Add a row to the Supported Kubernetes Versions table for `2.20` in
`linkerd.io/content/2-edge/reference/k8s-versions.md`.
- Create an entire new directory, copying the edge docs:
`cp -r linkerd.io/content/2-edge linkerd.io/content/2.19`. Any upcoming doc
changes pertaining to `2.19` should be pushed against that new directory and
- Add a row to the Gateway API compatibility table for `2.20` in
`linkerd.io/content/2-edge/features/gateway-api.md`.
- Rename the `docs` directory `2.19`.
Comment thread (travisbeckham marked this conversation as resolved):
- Create a new directory, copying the edge docs:
`cp -r linkerd.io/content/2-edge linkerd.io/content/docs`. Any upcoming doc
changes pertaining to `2.20` should be pushed against that new directory and
the `2-edge` directory.
- Generate the CLI docs with `linkerd doc > linkerd.io/data/cli/2-19.yaml`. Just
- Generate the CLI docs with `linkerd doc > linkerd.io/data/cli/2-20.yaml`. Just
to make sure the edge data is up to date, copy the contents from this newly
generated file to `linkerd.io/data/cli/2-edge.yaml`.
- Push, and hold the merge till after `2.19` is out.
- After merging, update the Cloudflare redirection rule so `/2` points to
`/2.19`:
- Click on the `linkerd.io` site
- Click on the `Rules` section
- Update the rule `https://linkerd.io/2/*` so that it points to
`https://linkerd.io/2.19/$1`
- Push, and hold the merge till after `2.20` is out.
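Taken together, the file-level parts of the new steps can be sketched as a shell session. This is an illustrative sketch, not part of the diff: the stand-in fixture layout, the `yourusername` branch name, and the commented-out `git`/`linkerd` commands are assumptions; the menu, k8s-versions, and gateway-api table edits are manual and omitted here.

```shell
# Runnable sketch of the file operations above, against a throwaway
# stand-in for a linkerd/website checkout (paths match the real repo;
# git branching and `linkerd doc` are shown as comments only).
NEW="2.20"; OLD="2.19"

# Stand-in layout; in a real checkout these files already exist.
mkdir -p linkerd.io/config/_default linkerd.io/content/2-edge \
         linkerd.io/content/docs linkerd.io/data/cli
printf 'latestMajorVersion: "%s"\n' "$OLD" > linkerd.io/config/_default/params.yaml
echo '# edge index' > linkerd.io/content/2-edge/_index.md

# git checkout -b "yourusername/$NEW"

# Bump the latest major version in the site params.
sed -i.bak "s/latestMajorVersion: \"$OLD\"/latestMajorVersion: \"$NEW\"/" \
  linkerd.io/config/_default/params.yaml

# Rename the current `docs` directory to the outgoing version, then
# copy the edge docs in as the new `docs` directory.
mv linkerd.io/content/docs "linkerd.io/content/$OLD"
cp -r linkerd.io/content/2-edge linkerd.io/content/docs

# linkerd doc > linkerd.io/data/cli/2-20.yaml  # then copy to 2-edge.yaml
```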
Member:
Is there a step we need in here to make the per-file 2.20 -> docs redirects? Or is that only needed as a one-time thing for 2.19?

@travisbeckham (Collaborator, Author), May 6, 2026:
All of the {latestVersion} -> docs redirects are generated at build time. You can see this in alias-latest-docs.html

(or am I not understanding your question?)

13 changes: 13 additions & 0 deletions linkerd.io/assets/alias.html
@@ -0,0 +1,13 @@
<!DOCTYPE html>
<html lang="{{ site.Language.LanguageCode }}">
<head>
<title>{{ .Permalink }}</title>
<link rel="canonical" href="{{ .Permalink }}">
<meta name="robots" content="noindex">
<meta charset="utf-8">
<meta http-equiv="refresh" content="0; url={{ .Permalink }}">
<script>
window.location.href = '{{ .Permalink }}' + window.location.hash;
</script>
</head>
</html>
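For context, this follows the shape of Hugo's alias template: Hugo renders one such redirect page per entry in a page's `aliases` front matter, with `.Permalink` pointing at the page's real URL. The front matter below is a hypothetical illustration, not part of this diff:

```yaml
# Hypothetical front matter for a relocated docs page; each alias
# becomes a standalone redirect page rendered from alias.html.
title: Getting Started
aliases:
  - /2.19/getting-started/
```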
3 changes: 1 addition & 2 deletions linkerd.io/config/_default/menu.yaml
@@ -40,7 +40,7 @@ docs:
pageRef: /2-edge/
weight: 99
- name: Linkerd 2.19
pageRef: /2.19/
pageRef: /docs/
weight: 19
- name: Linkerd 2.18
pageRef: /2.18/
@@ -108,7 +108,6 @@ community:
image: logos/forum.png

follow:

- name: Linkedin
url: https://www.linkedin.com/company/linkerd/
weight: 1
2 changes: 1 addition & 1 deletion linkerd.io/content/2-edge/_index.md
@@ -7,5 +7,5 @@ cascade:
type: _default
layout: redirect
params:
redirect: ./overview
redirect: ./getting-started
---
2 changes: 1 addition & 1 deletion linkerd.io/content/2-edge/checks/index.md
@@ -6,5 +6,5 @@ type: _default
layout: redirect
params:
unlisted: true
redirect: /2/tasks/troubleshooting/
redirect: /docs/tasks/troubleshooting/
---
10 changes: 5 additions & 5 deletions linkerd.io/content/2-edge/features/dashboard.md
@@ -49,7 +49,7 @@ health of specific service routes.
One way to pull it up is by running `linkerd viz dashboard` from the command
line.

![Top Line Metrics](/docs/images/architecture/stat.png "Top Line Metrics")
![Top Line Metrics](/images/docs/architecture/stat.png 'Top Line Metrics')

## Grafana

@@ -96,13 +96,13 @@ linkerd -n emojivoto viz tap deploy/web
All of this functionality is also available in the dashboard, if you would like
to use your browser instead:

![Top Line Metrics](/docs/images/getting-started/stat.png "Top Line Metrics")
![Top Line Metrics](/images/docs/getting-started/stat.png 'Top Line Metrics')

![Deployment Detail](/docs/images/getting-started/inbound-outbound.png "Deployment Detail")
![Deployment Detail](/images/docs/getting-started/inbound-outbound.png 'Deployment Detail')

![Top](/docs/images/getting-started/top.png "Top")
![Top](/images/docs/getting-started/top.png 'Top')

![Tap](/docs/images/getting-started/tap.png "Tap")
![Tap](/images/docs/getting-started/tap.png 'Tap')

## Further reading

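The image-path moves in this file and the ones below are mechanical (`/docs/images/...` becomes `/images/docs/...`), so a one-shot rewrite could look like the sketch below. This is an assumption about how the change might be applied, not the PR's actual tooling; it assumes GNU sed's `-i`, the fixture file stands in for the real content tree, and the double-to-single quote changes on image titles (a formatter artifact) are not covered.

```shell
# Sketch of the bulk image-path migration shown in these diffs:
#   /docs/images/...  ->  /images/docs/...
mkdir -p content/2-edge/features   # stand-in for linkerd.io/content
cat > content/2-edge/features/dashboard.md <<'EOF'
![Top Line Metrics](/docs/images/architecture/stat.png "Top Line Metrics")
EOF

# Rewrite every markdown file under the content tree in place.
find content -name '*.md' -exec sed -i 's#/docs/images/#/images/docs/#g' {} +
```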
4 changes: 2 additions & 2 deletions linkerd.io/content/2-edge/features/distributed-tracing.md
@@ -26,13 +26,13 @@ For example, Linkerd can display a live topology of all incoming and outgoing
dependencies for a service, without requiring distributed tracing or any other
such application modification:

![The Linkerd dashboard showing an automatically generated topology graph](/docs/images/books/webapp-detail.png "The Linkerd dashboard showing an automatically generated topology graph")
![The Linkerd dashboard showing an automatically generated topology graph](/images/docs/books/webapp-detail.png 'The Linkerd dashboard showing an automatically generated topology graph')

Likewise, Linkerd can provide golden metrics per service and per _route_, again
without requiring distributed tracing or any other such application
modification:

![Linkerd dashboard showing an automatically generated route metrics](/docs/images/books/webapp-routes.png "Linkerd dashboard showing an automatically generated route metrics")
![Linkerd dashboard showing an automatically generated route metrics](/images/docs/books/webapp-routes.png 'Linkerd dashboard showing an automatically generated route metrics')

## Using distributed tracing

2 changes: 1 addition & 1 deletion linkerd.io/content/2-edge/features/multicluster.md
@@ -43,7 +43,7 @@ the _Foo_ service as if it were on the local cluster.
Linkerd supports three basic forms of multi-cluster communication: hierarchical,
flat, and federated.

![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png)
![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png)

### Hierarchical networks

2 changes: 1 addition & 1 deletion linkerd.io/content/2-edge/getting-started/_index.md
@@ -255,7 +255,7 @@ linkerd viz dashboard &

You should see a screen like this:

![The Linkerd dashboard in action](/docs/images/getting-started/viz-empty-dashboard.png "The Linkerd dashboard in action")
![The Linkerd dashboard in action](/images/docs/getting-started/viz-empty-dashboard.png 'The Linkerd dashboard in action')

Click around, explore, and have fun! For extra credit, see if you can find the
live metrics for each Emojivoto component, and determine which one has a partial
2 changes: 1 addition & 1 deletion linkerd.io/content/2-edge/reference/architecture.md
@@ -16,7 +16,7 @@ with the control plane for configuration.
Linkerd also provides a **CLI** that can be used to interact with the control
and data planes.

![Linkerd's architecture](/docs/images/architecture/control-plane.png "Linkerd's architecture")
![Linkerd's architecture](/images/docs/architecture/control-plane.png "Linkerd's architecture")

## CLI

8 changes: 4 additions & 4 deletions linkerd.io/content/2-edge/reference/iptables.md
@@ -31,7 +31,7 @@ The redirect chain will be configured with two more rules:
Based on these two rules, there are two possible paths that an inbound packet
can take, both of which are outlined below.

![Inbound iptables chain traversal](/docs/images/iptables/iptables-fig2-1.png "Inbound iptables chain traversal")
![Inbound iptables chain traversal](/images/docs/iptables/iptables-fig2-1.png 'Inbound iptables chain traversal')

The packet will arrive on the `PREROUTING` chain and will be immediately routed
to the redirect chain. If its destination port matches any of the inbound ports
@@ -79,7 +79,7 @@ configured:
been produced by the service, so it should be forwarded to its destination by
the proxy.

![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-2.png "Outbound iptables chain traversal")
![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-2.png 'Outbound iptables chain traversal')

A packet produced by the service will first hit the `OUTPUT` chain; from here,
it will be sent to our own output chain for processing. The first rule it
@@ -113,7 +113,7 @@ in the pod. This scenario would typically apply when:
- The destination is a port bound on localhost (regardless of which container it
belongs to).

![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-3.png "Outbound iptables chain traversal")
![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-3.png 'Outbound iptables chain traversal')

When the application targets itself through its pod's IP (or loopback address),
the packets will traverse the two output chains. The first rule will be skipped,
@@ -138,7 +138,7 @@ inbound side to account for outbound packets that are sent locally.
is not guaranteed that the destination will be local. The packet follows an
unusual path, as depicted in the diagram below.

![Outbound iptables chain traversal](/docs/images/iptables/iptables-fig2-4.png "Outbound iptables chain traversal")
![Outbound iptables chain traversal](/images/docs/iptables/iptables-fig2-4.png 'Outbound iptables chain traversal')

When the packet first traverses the output chains, it will follow the same path
an outbound packet would normally take. In such a scenario, the packet's
2 changes: 1 addition & 1 deletion linkerd.io/content/2-edge/reference/multicluster.md
@@ -18,7 +18,7 @@ modes: hierarchical (using a gateway), flat (without a gateway), and federated.

These modes can be mixed and matched.

![Architectural diagram comparing hierarchical and flat network modes](/docs/images/multicluster/flat-network.png)
![Architectural diagram comparing hierarchical and flat network modes](/images/docs/multicluster/flat-network.png)

Hierarchical mode places a bare minimum of requirements on the underlying
network, as it only requires that the gateway IP be reachable. However, flat
6 changes: 3 additions & 3 deletions linkerd.io/content/2-edge/tasks/books.md
@@ -21,7 +21,7 @@ the other services. There are three services:
For demo purposes, the app comes with a simple traffic generator. The overall
topology looks like this:

![Topology](/docs/images/books/topology.png "Topology")
![Topology](/images/docs/books/topology.png 'Topology')

## Prerequisites

@@ -71,7 +71,7 @@ connection" messages for the rest of the exercise.)
Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the
frontend.

![Frontend](/docs/images/books/frontend.png "Frontend")
![Frontend](/images/docs/books/frontend.png 'Frontend')

Unfortunately, there is an error in the app: if you click _Add Book_, it will
fail 50% of the time. This is a classic case of non-obvious, intermittent
@@ -80,7 +80,7 @@ debug. Kubernetes itself cannot detect or surface this error. From Kubernetes's
perspective, it looks like everything's fine, but you know the application is
returning errors.

![Failure](/docs/images/books/failure.png "Failure")
![Failure](/images/docs/books/failure.png 'Failure')

## Add Linkerd to the service

10 changes: 5 additions & 5 deletions linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md
@@ -64,7 +64,7 @@ $ kubectl -n booksapp port-forward svc/webapp 7000 &
Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the
frontend.

![Frontend](/docs/images/books/frontend.png "Frontend")
![Frontend](/images/docs/books/frontend.png 'Frontend')

## Creating a Server resource

@@ -330,7 +330,7 @@ web UI, we may notice that something is amiss.

Attempting to delete an author results in a "not found" error in the web UI:

![Not found](/docs/images/books/delete-404.png "Not found")
![Not found](/images/docs/books/delete-404.png 'Not found')

and similarly, adding a new author takes us to an error page.

@@ -375,7 +375,7 @@ EOF
What happens if we try to delete an author _now_? We still see a failure, but a
different one:

![Internal server error](/docs/images/books/delete-503.png "Internal server error")
![Internal server error](/images/docs/books/delete-503.png 'Internal server error')

This is because we have created a _route_ matching `DELETE`, `PUT`, and `POST`
requests, but we haven't _authorized_ requests to that route. Running the
@@ -432,11 +432,11 @@ in this case, we only authenticate the `webapp` deployment's `ServiceAccount`

Now, if we attempt to delete an author in the frontend once again, we can:

![Author deleted](/docs/images/books/delete-ok.png "Author deleted")
![Author deleted](/images/docs/books/delete-ok.png 'Author deleted')

Similarly, we can now create a new author successfully, as well:

![Author created](/docs/images/books/create-ok.png "Author created")
![Author created](/images/docs/books/create-ok.png 'Author created')

Running the `linkerd viz authz` command one last time, we now see that all
traffic is authorized:
8 changes: 4 additions & 4 deletions linkerd.io/content/2-edge/tasks/debugging-your-service.md
@@ -17,12 +17,12 @@ command), you should see all the resources in the `emojivoto` namespace,
including the deployments. Each deployment running Linkerd shows success rate,
requests per second and latency percentiles.

![Top Level Metrics](/docs/images/debugging/stat.png "Top Level Metrics")
![Top Level Metrics](/images/docs/debugging/stat.png 'Top Level Metrics')

That's pretty neat, but the first thing you might notice is that the success
rate is well below 100%! Click on `web` and let's dig in.

![Deployment Detail](/docs/images/debugging/octopus.png "Deployment Detail")
![Deployment Detail](/images/docs/debugging/octopus.png 'Deployment Detail')

You should now be looking at the Deployment page for the web deployment. The
first thing you'll see here is that the web deployment is taking traffic from
@@ -38,7 +38,7 @@ returning.
Let's scroll a little further down the page, we'll see a live list of all
traffic that is incoming to _and_ outgoing from `web`. This is interesting:

![Top](/docs/images/debugging/web-top.png "Top")
![Top](/images/docs/debugging/web-top.png 'Top')

There are two calls that are not at 100%: the first is vote-bot's call to the
`/api/vote` endpoint. The second is the `VoteDoughnut` call from the web
@@ -54,7 +54,7 @@ the requests are failing with a
is a common error response as you can see from [the code][code]. Linkerd is
aware of gRPC's response classification without any other configuration!

![Tap](/docs/images/debugging/web-tap.png "Tap")
![Tap](/images/docs/debugging/web-tap.png 'Tap')

At this point, we have everything required to get the endpoint fixed and restore
the overall health of our applications.
10 changes: 6 additions & 4 deletions linkerd.io/content/2-edge/tasks/distributed-tracing.md
@@ -23,7 +23,7 @@ To use distributed tracing, you'll need to:
In the case of emojivoto, once all these steps are complete there will be a
topology that looks like this:

![Topology](/docs/images/tracing/tracing-topology.svg "Topology")
![Topology](/images/docs/tracing/tracing-topology.svg 'Topology')

{{< warning >}}

@@ -180,20 +180,22 @@ kubectl port-forward -n jaeger-system svc/jaeger-query 16686
```

<!-- markdownlint-disable MD034 -->

Then, open http://127.0.0.1:16686 in your browser.

<!-- markdownlint-enable MD034 -->

![Jaeger](/docs/images/tracing/jaeger-empty.png "Jaeger")
![Jaeger](/images/docs/tracing/jaeger-empty.png 'Jaeger')

You can search for any service in the dropdown and click Find Traces. `vote-bot`
is a great way to get started.

![Search](/docs/images/tracing/jaeger-search.png "Search")
![Search](/images/docs/tracing/jaeger-search.png 'Search')

Clicking on a specific trace will provide all the details, you'll be able to see
the spans for every proxy!

![Search](/docs/images/tracing/example-trace.png "Search")
![Search](/images/docs/tracing/example-trace.png 'Search')

Note the large number of `linkerd-proxy` spans in the output. Internally, the
proxy has a server and client side. When a request goes through the proxy, it is
2 changes: 1 addition & 1 deletion linkerd.io/content/2-edge/tasks/fault-injection.md
@@ -12,7 +12,7 @@ return whatever responses you want - 500s, timeouts or even crazy payloads.
The [books demo](books/) is a great way to show off this behavior. The overall
topology looks like:

![Topology](/docs/images/books/topology.png "Topology")
![Topology](/images/docs/books/topology.png 'Topology')

In this guide, you will split some of the requests from `webapp` to `books`.
Most requests will end up at the correct `books` destination, however some of
6 changes: 3 additions & 3 deletions linkerd.io/content/2-edge/tasks/flagger.md
@@ -69,7 +69,7 @@ orchestrates it. A load generator simply makes it easier to execute the rollout
as there needs to be some kind of active traffic to complete the operation.
Together, these components have a topology that looks like:

![Topology](/docs/images/canary/simple-topology.svg "Topology")
![Topology](/images/docs/canary/simple-topology.svg 'Topology')

To add these components to your cluster and include them in the Linkerd
[data plane](../reference/architecture/#data-plane), run:
@@ -213,7 +213,7 @@ podinfo-primary ClusterIP 10.7.249.63 <none> 9898/TCP 23m

At this point, the topology looks a little like:

![Initialized](/docs/images/canary/initialized.svg "Initialized")
![Initialized](/images/docs/canary/initialized.svg 'Initialized')

{{< note >}}

@@ -259,7 +259,7 @@ kubectl -n test get ev --watch
While an update is occurring, the resources and traffic will look like this at a
high level:

![Ongoing](/docs/images/canary/ongoing.svg "Ongoing")
![Ongoing](/images/docs/canary/ongoing.svg 'Ongoing')

After the update is complete, this picture will go back to looking just like the
figure from the previous section.