GH#1102: Fix spelling errors #1383

Merged
merged 54 commits on Mar 27, 2024
54 commits
82217fc
spelling: access
jsoref Nov 15, 2023
aa4e91a
spelling: account
jsoref Nov 15, 2023
2f5d6fc
spelling: administrators
jsoref Nov 15, 2023
0cc0d4e
whitespace: aggregation
jsoref Nov 15, 2023
04a4279
spelling: anymore
jsoref Nov 15, 2023
8ab36f7
spelling: associate
jsoref Nov 15, 2023
1a8d5c0
spelling: calicoctl
jsoref Nov 15, 2023
acdac19
spelling: cannot
jsoref Nov 15, 2023
334fe34
spelling: case-insensitive
jsoref Nov 15, 2023
8c03635
spelling: conntrack
jsoref Nov 15, 2023
75f4397
spelling: dashboard
jsoref Mar 21, 2024
6c3bbf3
spelling: default
jsoref Mar 21, 2024
a26f218
spelling: detection
jsoref Mar 21, 2024
9eb20a4
whitespace: detection
jsoref Nov 15, 2023
d89f3be
spelling: docusaurus
jsoref Nov 15, 2023
57a0163
spelling: drop
jsoref Mar 21, 2024
6fc96cb
spelling: enterprise
jsoref Nov 15, 2023
989ee03
spelling: env
jsoref Mar 21, 2024
2dc6646
spelling: examined
jsoref Mar 21, 2024
1b2896d
spelling: explicitly
jsoref Nov 15, 2023
1d12d6b
spelling: github
jsoref Nov 15, 2023
8fb45c7
spelling: hotspots
jsoref Nov 16, 2023
23d08f7
whitespace: include
jsoref Nov 16, 2023
2136315
whitespace: individually
jsoref Nov 16, 2023
d91c55a
spelling: install
jsoref Mar 21, 2024
7ec9b6c
spelling: integration
jsoref Mar 21, 2024
b72f4e8
spelling: investigate
jsoref Mar 21, 2024
3dcd449
spelling: is disconnected
jsoref Mar 21, 2024
7f62582
spelling: its
jsoref Mar 21, 2024
c65009c
spelling: kubernetes
jsoref Nov 16, 2023
2a4dd9f
spelling: loadbalancer
jsoref Nov 16, 2023
455b0dd
spelling: macos
jsoref Nov 15, 2023
3f02bd8
spelling: maintenance
jsoref Nov 16, 2023
a67d610
spelling: nelljerram
jsoref Nov 16, 2023
24914e2
spelling: nonexistent
jsoref Nov 15, 2023
c8201f5
spelling: openstack
jsoref Mar 21, 2024
11bb674
spelling: overridden
jsoref Nov 16, 2023
1d8d338
spelling: partner
jsoref Nov 16, 2023
61fdd2f
spelling: policies
jsoref Mar 21, 2024
ca42697
spelling: preexisting
jsoref Mar 21, 2024
83047fd
spelling: prefix
jsoref Mar 21, 2024
30ca110
spelling: rapidly
jsoref Mar 21, 2024
d94dc67
spelling: running
jsoref Nov 16, 2023
78c9ef1
spelling: separately
jsoref Nov 16, 2023
835b640
spelling: specified
jsoref Mar 21, 2024
a925726
spelling: style
jsoref Nov 16, 2023
e8fde2f
spelling: support
jsoref Mar 21, 2024
0d8cafd
whitespace: successfully
jsoref Nov 16, 2023
b0618f9
spelling: suspicious
jsoref Mar 21, 2024
85db462
spelling: than
jsoref Nov 15, 2023
1b470a7
spelling: this
jsoref Mar 21, 2024
bd468fd
spelling: typescript
jsoref Nov 15, 2023
0b289bb
whitespace: unknown
jsoref Nov 16, 2023
d0e3ae7
spelling: updating
jsoref Nov 16, 2023
4 changes: 2 additions & 2 deletions Makefile
@@ -157,9 +157,9 @@ run-update-cloud-image-list:

# This allow generating the components version for a specific product
# NOTE: currently only implemented for calico-enterprise; there is validation in the script to check this
# If you want to use a different product branch from the dafault, specify GIT_VERSION_REF
# If you want to use a different product branch from the default, specify GIT_VERSION_REF
# e.g. for new versions of v3.18.0-1, GIT_VERSION_REF=3.18-1
# If you want to use a different doc folder from the dafault, specify DOCS_VERSION_STREAM
# If you want to use a different doc folder from the default, specify DOCS_VERSION_STREAM
# e.g. for new versions of v3.18.0-2, DOCS_VERSION_STREAM=3.18-2
# If the version to updates is the latest version for the product, specify IS_LATEST=true
# e.g. if 3,18,1 is the latest version, IS_LATEST=true
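As a rough illustration of how those variables combine, the invocation below is a hypothetical sketch only: the actual make target sits below the visible part of this hunk, so `<target>` is a placeholder, and the example values are taken from the comments above.

```shell
# Hypothetical sketch -- <target> is a placeholder for the real Makefile target,
# which is not visible in this hunk; variable names and example values come
# from the comments above.
make <target> \
  GIT_VERSION_REF=3.18-1 \
  DOCS_VERSION_STREAM=3.18-2 \
  IS_LATEST=true
```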
@@ -19,7 +19,7 @@
- In BPF dataplane mode, Felix now handles single-block IPAM pools. Previously single-block pools resulted in a collision when programming the dataplane routes. [felix #2245](https://github.com/projectcalico/felix/pull/2245) (@fasaxc)
- None required [felix #2233](https://github.com/projectcalico/felix/pull/2233) (@tomastigera)
- None required [felix #2232](https://github.com/projectcalico/felix/pull/2232) (@tomastigera)
- [Openstack] Allow DHCP from the workload, on kernels where rp_filter doesn't already [felix #2231](https://github.com/projectcalico/felix/pull/2231) (@neiljerram)
- [OpenStack] Allow DHCP from the workload, on kernels where rp_filter doesn't already [felix #2231](https://github.com/projectcalico/felix/pull/2231) (@nelljerram)
- all-interfaces host endpoints now supports normal network policy in addition to pre-dnat policy [felix #2228](https://github.com/projectcalico/felix/pull/2228) (@lmm)
- Add FelixConfiguration option for setting route information source [libcalico-go #1222](https://github.com/projectcalico/libcalico-go/pull/1222) (@caseydavenport)
- Added Wireguard configuration. [libcalico-go #1215](https://github.com/projectcalico/libcalico-go/pull/1215) (@realgaurav)
@@ -34,7 +34,7 @@
- auto host endpoints have a default allow profile [kube-controllers #470](https://github.com/projectcalico/kube-controllers/pull/470) (@lmm)
- Fix IPAM garbage collection in etcd mode on clusters where node name does not match Kubernetes node name. [kube-controllers #467](https://github.com/projectcalico/kube-controllers/pull/467) (@caseydavenport)
- Use KubeControllersConfiguration resource for config [kube-controllers #464](https://github.com/projectcalico/kube-controllers/pull/464) (@spikecurtis)
- Fix kube-controllers attempting to clean up non-existent node resources [kube-controllers #461](https://github.com/projectcalico/kube-controllers/pull/461) (@fcuello-fudo)
- Fix kube-controllers attempting to clean up nonexistent node resources [kube-controllers #461](https://github.com/projectcalico/kube-controllers/pull/461) (@fcuello-fudo)
- kube-controllers can now automatically provision host endpoints for nodes in the cluster [kube-controllers #458](https://github.com/projectcalico/kube-controllers/pull/458) (@lmm)
- Kubernetes network tutorials updated for v1.18. [calico #3447](https://github.com/projectcalico/calico/pull/3447) (@tmjd)
- With OpenShift install time resources can be created. This means Calico resources can be created before the Calico components are started. [calico #3338](https://github.com/projectcalico/calico/pull/3338) (@tmjd)
@@ -46,7 +46,7 @@ Calico now supports BGP communities! Check out the BGP configuration resource [r
- In BPF mode, Felix now rate-limits stale BPF map cleanup to save CPU. [felix #2428](https://github.com/projectcalico/felix/pull/2428) (@fasaxc)
- In BPF mode, Felix now detects BPF support on Red Hat kernels with backports as well as generic kernels. [felix #2409](https://github.com/projectcalico/felix/pull/2409) (@sridhartigera)
- In BPF mode, Felix now uses a more efficient algorithm to resync the Kubernetes services with the dataplane. This speeds up the initial sync (especially with large numbers of services). [felix #2401](https://github.com/projectcalico/felix/pull/2401) (@tomastigera)
- eBPF dataplane support for encryption via Wireguard [felix #2389](https://github.com/projectcalico/felix/pull/2389) (@neiljerram)
- eBPF dataplane support for encryption via Wireguard [felix #2389](https://github.com/projectcalico/felix/pull/2389) (@nelljerram)
- Reject connections to services with no backends [felix #2380](https://github.com/projectcalico/felix/pull/2380) (@sridhartigera)
- Implementation to handle setting source-destination-check for AWS EC2 instances. [felix #2381](https://github.com/projectcalico/felix/pull/2381) (@realgaurav)
- In BPF mode, Felix now applies policy updates without reapplying the BPF programs; this gives a performance boost and closes a window where traffic was not policed. [felix #2363](https://github.com/projectcalico/felix/pull/2363) (@fasaxc)
@@ -3,7 +3,7 @@
### Bug fixes

- Fix population of etcd certificates in CNI config [cni-plugin #949](https://github.com/projectcalico/cni-plugin/pull/949) (@caseydavenport)
- Resolves an issue on nodes whose Kubernetes node name does not exactly match the system hostname [cni-plugin #943](https://github.com/projectcalico/cni-plugin/pull/943) (@neiljerram)
- Resolves an issue on nodes whose Kubernetes node name does not exactly match the system hostname [cni-plugin #943](https://github.com/projectcalico/cni-plugin/pull/943) (@nelljerram)
- Fix flannel migration issues when running on Rancher [kube-controllers #506](https://github.com/projectcalico/kube-controllers/pull/506) (@songjiang)
- Fix `kubectl exec` format for migration controller [kube-controllers #504](https://github.com/projectcalico/kube-controllers/pull/504) (@songjiang)
- Fix flannel migration for clusters with multiple control plane nodes. [kube-controllers #503](https://github.com/projectcalico/kube-controllers/pull/503) (@caseydavenport)
2 changes: 1 addition & 1 deletion calico-cloud/get-started/connect/operator-checklist.mdx
@@ -404,7 +404,7 @@ kubectl get tigerastatus
| 2 | calico | TRUE | FALSE | FALSE | 11m |
| 3 | cloud-core | TRUE | FALSE | FALSE | 11m |
| 4 | compliance | TRUE | FALSE | FALSE | 9m39s |
| 5 | intrusion-detection | TRUE | FALSE | FALSE | 9m49s |
| 5 | intrusion-detection | TRUE | FALSE | FALSE | 9m49s |
| 6 | log-collector | TRUE | FALSE | FALSE | 9m29s |
| 7 | management-cluster-connection | TRUE | FALSE | FALSE | 9m54s |
| 8 | monitor | TRUE | FALSE | FALSE | 11m |
4 changes: 2 additions & 2 deletions calico-cloud/image-assurance/scanners/pipeline-scanner.mdx
@@ -52,7 +52,7 @@ If you change the name of above heading, open a ticket to update the hardcoded C
curl -Lo tigera-scanner {{clouddownloadbase}}/tigera-scanner/{{cloudversion}}/image-assurance-scanner-cli-linux-amd64
```

**MacOS**
**macOS**

```shell
curl -Lo tigera-scanner {{clouddownloadbase}}/tigera-scanner/{{cloudversion}}/image-assurance-scanner-cli-darwin-amd64
@@ -77,7 +77,7 @@ You must download and set the executable flag each time you get a new version of
```
### Integrate the scanner into your build pipeline

You can include the CLI scanner in your CI/CD pipelines (for example, Jenkins, Github actions). Ensure the following:
You can include the CLI scanner in your CI/CD pipelines (for example, Jenkins, GitHub actions). Ensure the following:

- Download the CLI scanner binary onto your CI runner
- If you are running an ephemeral environment in the pipeline, include the download, and update the executable steps in your pipeline to download the scanner on every execution
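As a sketch of what those bullet points might look like in practice, the fragment below strings the documented download and executable-flag steps together with a scan step. The `scan` subcommand, its exit-code behaviour, and the `IMAGE` variable are assumptions rather than documented behaviour, so verify them against the CLI's `--help` output.

```shell
# Hedged sketch of a CI stage (Jenkins step, GitHub Actions `run:` block, etc.).
# The `scan` subcommand and its behaviour are assumptions -- check `./tigera-scanner --help`.
set -euo pipefail

IMAGE="registry.example.com/my-app:latest"   # hypothetical image built earlier in the pipeline

# Ephemeral runners: download the scanner and set the executable flag on every run.
curl -Lo tigera-scanner {{clouddownloadbase}}/tigera-scanner/{{cloudversion}}/image-assurance-scanner-cli-linux-amd64
chmod +x ./tigera-scanner

# Scan the freshly built image before pushing it.
./tigera-scanner scan "$IMAGE"
```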
@@ -49,7 +49,7 @@ metadata:
name: allow-tcp-port-6379
```

Because global network policies use **kind: GlobalNetworkPolicy**, they are grouped seperately from **kind: NetworkPolicy**. For example, global network policies will not be returned from `kubectl get networkpolicy.p`, and are rather returned from `kubectl get globalnetworkpolicy`.
Because global network policies use **kind: GlobalNetworkPolicy**, they are grouped separately from **kind: NetworkPolicy**. For example, global network policies will not be returned from `kubectl get networkpolicy.p`, and are rather returned from `kubectl get globalnetworkpolicy`.
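The two commands named in that sentence make the distinction easy to see side by side; this is just a hedged illustration of the paragraph above, with `--all-namespaces` added for convenience.

```shell
# Namespaced Calico NetworkPolicy resources -- GlobalNetworkPolicy objects do not appear here.
kubectl get networkpolicy.p --all-namespaces

# Cluster-wide GlobalNetworkPolicy resources are listed separately.
kubectl get globalnetworkpolicy
```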

### Ingress and egress

@@ -110,7 +110,7 @@ spec:
- 22
```

Save this as allow-ssh-maintenace.yaml.
Save this as allow-ssh-maintenance.yaml.

Apply the policy to the cluster:

2 changes: 1 addition & 1 deletion calico-cloud/operations/ebpf/enabling-ebpf.mdx
@@ -230,7 +230,7 @@ resource to `"BPF"`.
kubectl patch installation.operator.tigera.io default --type merge -p '{"spec":{"calicoNetwork":{"linuxDataplane":"BPF"}}}'
```

When enabling eBPF mode, pre-existing connections continue to use the non-BPF datapath; such connections should
When enabling eBPF mode, preexisting connections continue to use the non-BPF datapath; such connections should
not be disrupted, but they do not benefit from eBPF mode’s advantages.
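If you want to confirm that the patch took effect before relying on it, a read-back of the same field is one option; this query is not part of the original page and simply assumes the default `Installation` resource targeted by the patch above.

```shell
# Read back the dataplane setting written by the patch above.
# Expected output once eBPF mode is enabled: BPF
kubectl get installation.operator.tigera.io default \
  -o jsonpath='{.spec.calicoNetwork.linuxDataplane}'
```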

:::note
@@ -61,7 +61,7 @@ This section provides metrics recommendations for maintaining optimal cluster op
| Metric | <strong>Note</strong>: Syncer (type) is Typha's internal name for a client (type).<br /><strong>Individual syncer values</strong>:<br /><code>(typha_cache_size\{syncer="bgp"\})</code> <br /><code>(typha_cache_size\{syncer="dpi"\})</code><br /><code>typha_cache_size\{syncer="felix"\})</code><br /><code>(typha_cache_size\{syncer="node-status"\})</code><br /><code> (typha_cache_size\{syncer="tunnel-ip-allocation"\})</code><br /><br /><strong>Sum of all syncers</strong>:<br />The sum of all cache sizes (each syncer type has a cache).<br /><code>sum by (instance)</code> <code>(typha_cache_size)</code><br /><br /><strong>Largest syncer</strong>:<br /><code>max by (instance)</code> <code>(typha_cache_size)</code> |
| Example value | Example of: <code>max by (instance)</code> <code>(typha_cache_size\{syncer="felix"\})</code><br /><br /><code>\{instance="10.0.1.20:9093"\} 661</code><br /><code>\{instance="10.0.1.31:9093"\} 661</code> |
| Explanation | The total number of key/value pairs in Typha's in-memory cache.This metric represents the scale of the {{prodname}} datastore as it tracks how many WEPs (pods and services), HEPs (hostendpoints), networksets, globalnetworksets, {{prodname}} Network Policies etc that Typha is aware of across the entire Calico Federation.You can use this metric to monitor individual syncers to Typha (like Felix, BGP etc), or to get a sum of all syncers. We recommend that you monitor the largest syncer but it is completely up to you. This is a good metric to understand how much data is in Typha. <strong>Note</strong>: If all Typhas are in sync then they should have the same value for this metric. |
| Threshold value recommendation | The value of this metric will depend on the scale of the Calico Federation and will always increase as WEPs, {{prodname}} network policie,s and clusters are added. Achieve a baseline first, then monitor for any unexpected increases from the baseline. |
| Threshold value recommendation | The value of this metric will depend on the scale of the Calico Federation and will always increase as WEPs, {{prodname}} network policies and clusters are added. Achieve a baseline first, then monitor for any unexpected increases from the baseline. |
| Threshold breach symptoms | Unexpected increases may indicate memory leaks and performance issues with Typha. |
| Threshold breach recommendations | Check CPU usage on Typha pods and Kubernetes nodes. Increase resources if needed, rollout and restart Typha(s) if needed. |
| Priority level | Optional. |
@@ -261,7 +261,7 @@ The following metrics are applicable only if you have implemented [Cluster mesh]
| Example value | <code>\{instance="10.0.1.20:9093"\} NaN</code> |
| Explanation | The median time to stream the initial datastore snapshot to each client. It is useful to know the time it takes for a client to receive the data when it connects; it does not include time to process the data. |
| Threshold value recommendation | Investigate if this value is moving towards 10s of seconds. |
| Threshold breach symptoms | High values of this metric could indicate that newly-started clients are taking a long time to get the latest snapshot of the datastore, increasing the window of time where networking/policy updates are not being applied to the dataplane during a restart/upgrade. Typha has a write timeout for writing the snapshot; if a client cannot receive the snapshot within that timeout, it isdisconnected. Clients falling behind on information and updates contained in the datastore (for example, {{prodname}} network policy object may not be current). |
| Threshold breach symptoms | High values of this metric could indicate that newly-started clients are taking a long time to get the latest snapshot of the datastore, increasing the window of time where networking/policy updates are not being applied to the dataplane during a restart/upgrade. Typha has a write timeout for writing the snapshot; if a client cannot receive the snapshot within that timeout, it is disconnected. Clients falling behind on information and updates contained in the datastore (for example, {{prodname}} network policy object may not be current). |
| Threshold breach recommendations | Check Typha and calico-node logs and resource usage. Check for network congestion. Investigate why a particular calico-node is slow; it is likely on an overloaded node with insufficient CPU). |
| Priority level | Optional. |

@@ -352,7 +352,7 @@ The following policy metrics are a separate endpoint exposed by Felix that are u
| Metric | <code>rate(process_cpu_seconds_total\{30s\}) \* 100</code> |
| Example value | <code>\{<strong>endpoint</strong>="metrics-port", instance="10.0.1.20:9091", <strong>job</strong>="felix-metrics-svc", namespace="calico-system", <strong>pod</strong>="calico-node-qzpkt", <strong>service=</strong>"felix-metrics-svc"\}3.1197504199664072</code> |
| Explanation | CPU in use by calico-node represented as a percentage of a core. |
| Threshold value recommendation | A spike at startup is normal. It is recommended to first achieve a baseline and then monitor for any unexpected increases from this baseline. Investigage if maintained CPU usage goes above 90%. |
| Threshold value recommendation | A spike at startup is normal. It is recommended to first achieve a baseline and then monitor for any unexpected increases from this baseline. Investigate if maintained CPU usage goes above 90%. |
| Threshold breach symptoms | Unexpected maintained CPU usage could cause Felix to fall behind and could cause delays to policy updates. |
| Threshold breach recommendations | Check CPU usage on Kubernetes nodes. Increase resources if needed, rollout restart calico-node(s) if needed. |
| Priority level | Recommended. |
@@ -459,7 +459,7 @@ The following policy metrics are a separate endpoint exposed by Felix that are u
| Metric | <code>felix_logs_dropped</code> |
| Example value | <code>felix_logs_dropped\{<strong>endpoint</strong>="metrics-port", <strong>instance</strong>="10.0.1.20:9091", <strong>job</strong>="felix-metrics-svc", <strong>namespace</strong>="calico-system", <strong>pod</strong>="calico-node-qzpkt", <strong>service</strong>="felix-metrics-svc"\} 0</code> |
| Explanation | The number of logs Felix has dropped. Note that this metric does not count flow-logs; it counts logs to stdout. |
| Threshold value recommendation | Occasional drops are normal. Investigate if frop counters rapidily rise. |
| Threshold value recommendation | Occasional drops are normal. Investigate if drop counters rapidly rise. |
| Threshold breach symptoms | Felix will drop logs if it cannot keep up with writing them out. These are ordinary code logs, not flow logs. Calico-node may be under resource constraints. |
| Threshold breach recommendations | Check CPU usage on calico-nodes and Kubernetes nodes. Increase resources if needed, and rollout restart calico-node(s) if needed. |
| Priority level | Optional. |
@@ -186,7 +186,7 @@ Each plane would constitute an IP network, so the blue plane would be
orange and red planes would be 2001:db8:3000::/36 and 2001:db8:4000::/36
respectively. [^3]

Each IP network (plane) requires it's own BGP route reflectors. Those
Each IP network (plane) requires its own BGP route reflectors. Those
route reflectors need to be peered with each other within the plane, but
the route reflectors in each plane do not need to be peered with one
another. Therefore, a fabric of four planes would have four route