Commit

fix typos

zhangzujian committed Sep 10, 2021
1 parent b1a61a7 commit 42fed92
Showing 26 changed files with 133 additions and 133 deletions.
2 changes: 1 addition & 1 deletion ARCHITECTURE.MD
@@ -25,7 +25,7 @@ All the OVN components in Kube-OVN are packed into the docker image and can run

#### ovn-central

-It's a deployment that runs OVN management components including `ovn-nb`, `ovn-sb`, and `ovn-norhtd`.
+It's a deployment that runs OVN management components including `ovn-nb`, `ovn-sb`, and `ovn-northd`.
`ovn-nb` stores the logical network and provides API for controllers to manage the logical network.
It's the main components that the Kube-OVN controller will interact with.

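The component being renamed here, `ovn-northd`, is the daemon that translates the logical network in `ovn-nb` into the flows stored in `ovn-sb`. A quick way to confirm all three are healthy — a sketch assuming the stock manifests, which run `ovn-central` in `kube-system` with an `app=ovn-central` label:

```bash
# Sketch: check the ovn-central deployment that hosts ovn-nb/ovn-sb/ovn-northd.
# Namespace and label are assumptions based on the stock Kube-OVN manifests.
kubectl -n kube-system get deployment ovn-central
kubectl -n kube-system get pods -l app=ovn-central -o wide
```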
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -125,7 +125,7 @@
* Restart ovn-controller to force ovn-ic flows update
* Update usingips check when update finalizer for subnet
* Livenessprobe fail if ovn nb/ovn sb not running
-* Release norhtd lock when power off
+* Release northd lock when power off
* Fix chassis check for node
* Pod terminating not recycle ip when controller not ready

2 changes: 1 addition & 1 deletion README.md
@@ -58,7 +58,7 @@ The Switch, Router and Firewall showed in the diagram below are all distributed

## Monitoring Dashboard

-Kube-OVN offers prometheus integration with grafana dashboards to visualise network quality.
+Kube-OVN offers prometheus integration with grafana dashboards to visualize network quality.

![dashboard](docs/pinger-grafana.png)

2 changes: 1 addition & 1 deletion cmd/controller/controller.go
@@ -86,7 +86,7 @@ func checkPermission(config *controller.Configuration) error {
return err
}
if !ssar.Status.Allowed {
return fmt.Errorf("no permission to wath resource %s, %s", res, ssar.Status.Reason)
return fmt.Errorf("no permission to watch resource %s, %s", res, ssar.Status.Reason)
}
}
return nil
4 changes: 2 additions & 2 deletions docs/cluster-interconnection.md
@@ -4,8 +4,8 @@ From v1.4.0, two or more Kubernetes clusters can be connected with each other. P
communicate directly using Pod IP. Kub-OVN uses tunnel to encapsulate traffic between clusters gateways,
only L3 connectivity for gateway nodes is required.

-## Prerequest
-* To use route auto advertise, subnet CIDRs in different clusters *MUST NOT* be overlapped with each other,including ovn-default and join subnets CIDRs. Otherwise, you should disable the auto route and add routes mannually.
+## Prerequisite
+* To use route auto advertise, subnet CIDRs in different clusters *MUST NOT* be overlapped with each other,including ovn-default and join subnets CIDRs. Otherwise, you should disable the auto route and add routes manually.
* The Interconnection Controller *SHOULD* be deployed in a region that every cluster can access by IP.
* Every cluster *SHOULD* have at least one node(work as gateway later) that can access other gateway nodes in different clusters by IP.
* Cluster interconnection network now *CANNOT* work together with SNAT and EIP functions.
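If automatic route advertisement is disabled, as the first prerequisite suggests, routes to the peer cluster have to be added by hand on each side's logical router. A hedged sketch — `ovn-cluster` is Kube-OVN's default router name, the CIDR and next-hop below are placeholders, and the `kubectl ko` plugin is assumed to be installed:

```bash
# Sketch: manually route a peer cluster's pod CIDR (placeholder addresses;
# point the next hop at your interconnection gateway).
kubectl ko nbctl lr-route-add ovn-cluster 10.18.0.0/16 10.100.0.1
```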
4 changes: 2 additions & 2 deletions docs/dpdk.md
@@ -4,7 +4,7 @@ This document describes how to run Kube-OVN with OVS-DPDK.

*Note*: Kube-OVN with OVS-DPDK provides a vhost-user raw socket to container for DPDK applications like Testpmd and L2fwd to manipulate network traffic in userspace. Regular applications like Nginx will not work.

-## Prerequest
+## Prerequisite
- Kubernetes >= 1.11
- Docker >= 1.12.6
- OS: CentOS 7.5/7.6/7.7, Ubuntu 16.04/18.04
@@ -96,7 +96,7 @@ With Multus installed, additional Network interfaces can now be requested within


## Userspace CNI
-There is now a containerized instance of OVS-DPDK running on the node. Kube-OVN can provide all of its regular (kernal) functionality. Multus is in place to enable pods request the additional OVS-DPDK interfaces. However, OVS-DPDK does provide regular Netdev interfaces, but vhost-user sockets. These sockets cannot be attached to a pod in the usual manner where the Netdev is moved to the pod network namespace. These sockets must be mounted into the pod. Kube-OVN (at least currently) does not have this socket-mounting ability. For this functionality we can use the [Userspace CNI Network Plugin](https://github.com/intel/userspace-cni-network-plugin).
+There is now a containerized instance of OVS-DPDK running on the node. Kube-OVN can provide all of its regular (kernel) functionality. Multus is in place to enable pods request the additional OVS-DPDK interfaces. However, OVS-DPDK does provide regular Netdev interfaces, but vhost-user sockets. These sockets cannot be attached to a pod in the usual manner where the Netdev is moved to the pod network namespace. These sockets must be mounted into the pod. Kube-OVN (at least currently) does not have this socket-mounting ability. For this functionality we can use the [Userspace CNI Network Plugin](https://github.com/intel/userspace-cni-network-plugin).


### Download, build and install Userspace CNI
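Since a vhost-user socket cannot be moved into the pod's network namespace the way a Netdev can, it has to reach the pod as a file. In volume terms that amounts to roughly the following — the path is a placeholder, and in practice the Userspace CNI plugin manages this mount:

```yaml
# Sketch: expose a host directory holding vhost-user sockets to a DPDK pod.
# The hostPath is a placeholder; Userspace CNI handles the real location.
volumes:
  - name: vhostuser-sockets
    hostPath:
      path: /var/run/openvswitch   # where OVS-DPDK creates its vhost-user sockets
```

with a matching `volumeMounts` entry in the DPDK application container.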
2 changes: 1 addition & 1 deletion docs/high-available.md → docs/high-availability.md
@@ -1,4 +1,4 @@
-# High available for ovn db
+# High availability for ovn db

OVN support clustered database. If want to use high-available database in kube-ovn,
modify ovn-central deployment in yamls/ovn.yaml.
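In outline, the modification this document describes is: label an odd number of nodes as masters and raise the `ovn-central` replica count to match. A sketch — the `kube-ovn/role=master` label is the one used elsewhere in these docs, while the node-IP list variable is an assumption to verify against your copy of the manifest:

```bash
# Sketch: prepare a three-node clustered OVN DB (node names are placeholders).
kubectl label node node1 node2 node3 kube-ovn/role=master --overwrite
# Then, in yamls/ovn.yaml, set the ovn-central replicas to 3 and point its
# node-IP list (NODE_IPS in current manifests -- verify in your copy) at the
# three labeled nodes before applying.
```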
30 changes: 15 additions & 15 deletions docs/install.md
@@ -4,7 +4,7 @@ Kube-OVN includes two parts:
- Native OVS and OVN components
- Controller and CNI plugins that integrate OVN with Kubernetes

-## Prerequest
+## Prerequisite
- Kubernetes >= 1.11, version 1.16 and later is recommended
- Docker >= 1.12.6
- OS: CentOS 7/8, Ubuntu 16.04/18.04
@@ -14,27 +14,27 @@ Kube-OVN includes two parts:

*NOTE*
1. Users using Ubuntu 16.04 should build the OVS kernel module and replace the built-in one to avoid kernel NAT issues.
-2. CentOS users should make sure kernel version is greater than 3.10.0-898 to avoid a kernel conntrack bug, see [here](https://bugs.launchpad.net/neutron/+bug/1776778)
-3. Kernel must boot with ipv6 enabled, otherwise geneve tunnel will not be established due to a kernel bug, see [here](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1794232)
+2. CentOS users should make sure kernel version is greater than 3.10.0-898 to avoid a kernel conntrack bug, see [here](https://bugs.launchpad.net/neutron/+bug/1776778).
+3. Kernel must boot with IPv6 enabled, otherwise geneve tunnel will not be established due to a kernel bug, see [here](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1794232).

## To Install

### One Script Installer

-Kube-OVN provides a one script install to easily install a high-available, production-ready Kube-OVN
+Kube-OVN provides a one script install to easily install a high-available, production-ready Kube-OVN.

-1. Download the stable release installer scripts
+1. Download the stable release installer scripts.

For Kubernetes version>=1.16
For Kubernetes version>=1.16:
`wget https://raw.githubusercontent.com/alauda/kube-ovn/release-1.8/dist/images/install.sh`

For Kubernetes version<1.16
For Kubernetes version<1.16:
`wget https://raw.githubusercontent.com/alauda/kube-ovn/release-1.8/dist/images/install-pre-1.16.sh`

-If you want to try the latest developing Kube-OVN, try the script below
+If you want to try the latest developing Kube-OVN, try the script below:
`wget https://raw.githubusercontent.com/alauda/kube-ovn/master/dist/images/install.sh`

-2. Use vim to edit the script variables to meet your requirement
+2. Use vim to edit the script variables to meet your requirement:
```bash
REGISTRY="kubeovn"
POD_CIDR="10.16.0.0/16" # Default subnet CIDR, Do NOT overlap with NODE/SVC/JOIN CIDR
@@ -45,7 +45,7 @@ If you want to try the latest developing Kube-OVN, try the script below
VERSION="v1.8.0"
```

-This basic setup works for default overlay network. If you are using default underlay/vlan network, please refer [Vlan/Underlay Support](vlan.md)
+This basic setup works for default overlay network. If you are using default underlay/vlan network, please refer [Vlan/Underlay Support](vlan.md).

3. Run the script

@@ -57,14 +57,14 @@ That's all! You can now create some pods and test connectivity.

The one-script installer is recommended. If you want to change the default options, follow the steps below.

-For Kubernetes version before 1.17 please use the following command to add the node label
+For Kubernetes version before 1.17 please use the following command to add the node label:

`kubectl label no -lbeta.kubernetes.io/os=linux kubernetes.io/os=linux --overwrite`

1. Add the following label to the Node which will host the OVN DB and the OVN Control Plane:

`kubectl label node <Node on which to deploy OVN DB> kube-ovn/role=master`
-2. Install Kube-OVN related CRDs
+2. Install Kube-OVN related CRDs:

`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/release-1.8/yamls/crd.yaml`
3. Get ovn.yaml and replace `$addresses` in the file with IP address of the node that will host the OVN DB and the OVN Control Plane:
@@ -79,7 +79,7 @@ For Kubernetes version before 1.17 please use the following command to add the n

`kubectl apply -f https://raw.githubusercontent.com/alauda/kube-ovn/release-1.8/yamls/kube-ovn.yaml`

-For high-available ovn db, see [high available](high-available.md)
+For high-available ovn db, see [High Availability](high-availability.md).

If you want to enable IPv6 on default subnet and node subnet, please apply https://raw.githubusercontent.com/alauda/kube-ovn/release-1.8/yamls/kube-ovn-ipv6.yaml on Step 3.

@@ -167,13 +167,13 @@ You can use `--default-cidr` flags below to config default Pod CIDR or create a
By default, Kube-OVN uses in-cluster config to init kube client. In this way, Kube-OVN relies on kube-proxy to provide service discovery to connect to Kubernetes apiserver.
To use an external or high available Kubernetes apiserver, users can use self customized kubeconfig to connect to apiserver.

-1. Generate configmap from an existed kubeconfig
+1. Generate configmap from an existing kubeconfig:

```bash
kubectl create -n kube-system configmap admin-conf --from-file=config=admin.conf
```

-1. Edit `kube-ovn-controller`, `kube-ovn-cni` to use the above kubeconfig
+1. Edit `kube-ovn-controller`, `kube-ovn-cni` to use the above kubeconfig:

```yaml
- args:
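The yaml hunk above is cut off in this view. For orientation, a minimal sketch of what the finished edit might look like — the `--kubeconfig` flag name and mount paths are assumptions, not confirmed by this diff:

```yaml
# Sketch of the kube-ovn-controller / kube-ovn-cni container spec
# (flag name and paths are assumptions):
- args:
    - --kubeconfig=/root/.kube/config
  volumeMounts:
    - name: kubeconfig
      mountPath: /root/.kube
# ...and in the pod spec, alongside the existing volumes:
volumes:
  - name: kubeconfig
    configMap:
      name: admin-conf
      items:
        - key: config
          path: config
```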
2 changes: 1 addition & 1 deletion docs/internal-port.md
Expand Up @@ -36,4 +36,4 @@ metadata:

## Some limitation
- The internal port name must be unique on a host and kubelet always checks the `eth0` interface in the Pod. To bypass this issue, Kube-OVN creates a dummy interface named eth0 in the Pod's netns, assigns the same IP address(es), and sets it down. It works well for most scenarios, however if applications rely on network interface name, it will bring confusions.
-- After OVS restarts, internal ports will be deattach from the pod. Pods on the same node with internal-port interfaces should be recreated manually.
+- After OVS restarts, internal ports will be detached from the pod. Pods on the same node with internal-port interfaces should be recreated manually.
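The dummy-interface workaround described in the first limitation boils down to a few iproute2 operations inside the pod's netns — a sketch of the idea, not Kube-OVN's literal code path, with an example address:

```bash
# Sketch: what the dummy-eth0 workaround amounts to inside the pod netns.
ip link add eth0 type dummy          # give kubelet an eth0 to inspect
ip addr add 10.16.0.10/16 dev eth0   # mirror the internal port's IP (example)
ip link set eth0 down                # kept down; traffic uses the OVS internal port
```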
2 changes: 1 addition & 1 deletion docs/snat-and-eip.md
@@ -4,7 +4,7 @@ From v1.5.0, Kube-OVN take use of the L3 gateways from OVN to implement Pod SNAT
By using snat, a group of pods can share one same ip address to communicate with external services.
By using eip, external services can visit a pod with a stable ip and pod will visit external services using the same ip.

-## Prerequest
+## Prerequisite
* To take use of OVN L3 Gateway, a dedicated nic *MUST* be bridged into ovs to act as the gateway between overlay and underlay, ops should use other nics to manage the host server.
* As the nic will emit packets with nat ip directly into underlay network, administrators *MUST* make sure that theses packets will not be denied by security rules.
* SNAT and EIP functions *CANNOT* work together with Cluster interconnection network
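For context on what these prerequisites enable: a pod opts into SNAT or EIP through annotations. The keys below match Kube-OVN's documented pod annotations for this feature, but the addresses are placeholders:

```yaml
# Sketch: pod with a stable external IP via EIP (placeholder addresses).
apiVersion: v1
kind: Pod
metadata:
  name: eip-demo
  annotations:
    ovn.kubernetes.io/eip: 192.168.100.10  # or ovn.kubernetes.io/snat for shared egress
spec:
  containers:
    - name: app
      image: nginx
```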
File renamed without changes
4 changes: 2 additions & 2 deletions docs/vlan.md
@@ -6,7 +6,7 @@ By default, Kube-OVN use Geneve to encapsulate packets between hosts, which will
Kube-OVN also supports underlay Vlan mode networking for better performance and throughput.
In Vlan mode, packets from pods will be sent directly to physical switches with vlan tags.

-![topology](vlan-topolgy.png "vlan network topology")
+![topology](vlan-topology.png "vlan network topology")

To enable Vlan mode, a ~~dedicated~~ network interface is required by container network. Mac address, MTU, IP addresses and routes attached to the interface will be copied/transferred to an OVS bridge named `br-PROVIDER` where `PROVIDER` is name of the provider network.
The related switch port must work in trunk mode to accept 802.1q packets. For underlay network with no vlan tag, you need
Expand All @@ -24,7 +24,7 @@ From v1.7.1 on, Kube-OVN supports dynamic underlay/VLAN networking management.

In the Vlan/Underlay mode, OVS sends origin Pods packets directly to the physical network and uses physical switch/router to transmit the traffic, so it relies on the capabilities of network infrastructure.

-1. For K8s running on VMs provided by OpenStack, `PortSecuriity` of the network ports MUST be `disabled`;
+1. For K8s running on VMs provided by OpenStack, `PortSecurity` of the network ports MUST be `disabled`;
2. For K8s running on VMs provided by VMware, the switch security options `MAC Address Changes`, `Forged Transmits` and `Promiscuous Mode Operation` MUST be `allowed`;
3. The Vlan/Underlay mode can not run on public IaaS providers like AWS/GCE/Alibaba Cloud as their network can not provide the capability to transmit this type packets;
4. When Kube-OVN creates network it checks the connectivity to the subnet gateway through ICMP, so the gateway MUST respond the ICMP messages;
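For point 1 in the list above, port security on an OpenStack port can be switched off with the standard client. The port ID is a placeholder, and security groups must be detached before the flag is accepted:

```bash
# Sketch: disable PortSecurity on the VM port carrying underlay traffic.
openstack port set --no-security-group --disable-port-security <port-id>
```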
4 changes: 2 additions & 2 deletions pkg/controller/client_go_adapter.go
@@ -126,7 +126,7 @@ func registerClientMetrics() {
clientmetrics.Register(opts)
}

-// registerReflectorMetrics sets up reflector (reconile) loop metrics
+// registerReflectorMetrics sets up reflector (reconcile) loop metrics
func registerReflectorMetrics() {
prometheus.MustRegister(listsTotal)
prometheus.MustRegister(listsDuration)
@@ -140,7 +140,7 @@ func registerReflectorMetrics() {
reflectormetrics.SetReflectorMetricsProvider(reflectorMetricsProvider{})
}

-// this section contains adapters, implementations, and other sundry organic, artisinally
+// this section contains adapters, implementations, and other sundry organic, artisanally
// hand-crafted syntax trees required to convince client-go that it actually wants to let
// someone use its metrics.

8 changes: 4 additions & 4 deletions pkg/controller/node.go
@@ -698,25 +698,25 @@ func (c *Controller) checkPodsChangedOnNode(pgName string, ports []string) (bool
return false, err
}

-pordIds := make([]string, 0, len(ports))
+portIds := make([]string, 0, len(ports))
for _, port := range ports {
portId, err := c.ovnClient.ConvertLspNameToUuid(port)
if err != nil {
klog.Errorf("failed to convert lsp name to uuid, %v", err)
continue
}
-pordIds = append(pordIds, portId)
+portIds = append(portIds, portId)
}

-for _, portId := range pordIds {
+for _, portId := range portIds {
if !util.IsStringIn(portId, pgPorts) {
klog.Infof("new added pod %v should add to node port group %v", portId, pgName)
return true, nil
}
}

for _, pgPort := range pgPorts {
-if !util.IsStringIn(pgPort, pordIds) {
+if !util.IsStringIn(pgPort, portIds) {
klog.Infof("can not find match pod for port %v in node port group %v", pgPort, pgName)
return true, nil
}
2 changes: 1 addition & 1 deletion pkg/controller/pod.go
@@ -806,7 +806,7 @@ func getNodeTunlIP(node *v1.Node) ([]net.IP, error) {
var nodeTunlIPAddr []net.IP
nodeTunlIP := node.Annotations[util.IpAddressAnnotation]
if nodeTunlIP == "" {
return nil, fmt.Errorf("node has no tunl ip annotation")
return nil, fmt.Errorf("node has no tunnel ip annotation")
}

for _, ip := range strings.Split(nodeTunlIP, ",") {
2 changes: 1 addition & 1 deletion pkg/controller/subnet.go
@@ -896,7 +896,7 @@ func (c *Controller) reconcileGateway(subnet *kubeovnv1.Subnet) error {
}
nodeIP, err := getNodeTunlIP(node)
if err != nil {
klog.Errorf("failed to get node %s tunl ip, %v", node.Name, err)
klog.Errorf("failed to get node %s tunnel ip, %v", node.Name, err)
return err
}

2 changes: 1 addition & 1 deletion pkg/controller/vpc_nat_gateway.go
@@ -579,7 +579,7 @@ func (c *Controller) handleUpdateVpcDnat(natGwKey string) error {

func (c *Controller) handleUpdateNatGwSubnetRoute(natGwKey string) error {
if vpcNatEnabled != "true" {
return fmt.Errorf("failed to updat subnet route, vpcNatEnabled='%s'", vpcNatEnabled)
return fmt.Errorf("failed to update subnet route, vpcNatEnabled='%s'", vpcNatEnabled)
}
c.vpcNatGwKeyMutex.Lock(natGwKey)
defer c.vpcNatGwKeyMutex.Unlock(natGwKey)
4 changes: 2 additions & 2 deletions pkg/daemon/config.go
@@ -38,7 +38,7 @@ type Configuration struct {
KubeOvnClient clientset.Interface
NodeName string
ServiceClusterIPRange string
-NodeLocalDNSIP string
+NodeLocalDnsIP string
EncapChecksum bool
PprofPort int
NetworkType string
@@ -104,7 +104,7 @@ func ParseFlags() (*Configuration, error) {
PprofPort: *argPprofPort,
NodeName: nodeName,
ServiceClusterIPRange: *argServiceClusterIPRange,
-NodeLocalDNSIP: *argNodeLocalDnsIP,
+NodeLocalDnsIP: *argNodeLocalDnsIP,
EncapChecksum: *argEncapChecksum,
NetworkType: *argsNetworkType,
DefaultProviderName: *argsDefaultProviderName,
22 changes: 11 additions & 11 deletions pkg/daemon/controller.go
@@ -63,8 +63,8 @@ type Controller struct {

recorder record.EventRecorder

-iptable map[string]*iptables.IPTables
-ipset map[string]*ipsets.IPSets
+iptables map[string]*iptables.IPTables
+ipsets map[string]*ipsets.IPSets
ipsetLock sync.Mutex

protocol string
@@ -111,23 +111,23 @@ func NewController(config *Configuration, podInformerFactory informers.SharedInf
}
controller.protocol = util.CheckProtocol(node.Annotations[util.IpAddressAnnotation])

-controller.iptable = make(map[string]*iptables.IPTables)
-controller.ipset = make(map[string]*ipsets.IPSets)
+controller.iptables = make(map[string]*iptables.IPTables)
+controller.ipsets = make(map[string]*ipsets.IPSets)
if controller.protocol == kubeovnv1.ProtocolIPv4 || controller.protocol == kubeovnv1.ProtocolDual {
-iptable, err := iptables.NewWithProtocol(iptables.ProtocolIPv4)
+iptables, err := iptables.NewWithProtocol(iptables.ProtocolIPv4)
if err != nil {
return nil, err
}
-controller.iptable[kubeovnv1.ProtocolIPv4] = iptable
-controller.ipset[kubeovnv1.ProtocolIPv4] = ipsets.NewIPSets(ipsets.NewIPVersionConfig(ipsets.IPFamilyV4, IPSetPrefix, nil, nil))
+controller.iptables[kubeovnv1.ProtocolIPv4] = iptables
+controller.ipsets[kubeovnv1.ProtocolIPv4] = ipsets.NewIPSets(ipsets.NewIPVersionConfig(ipsets.IPFamilyV4, IPSetPrefix, nil, nil))
}
if controller.protocol == kubeovnv1.ProtocolIPv6 || controller.protocol == kubeovnv1.ProtocolDual {
-iptable, err := iptables.NewWithProtocol(iptables.ProtocolIPv6)
+iptables, err := iptables.NewWithProtocol(iptables.ProtocolIPv6)
if err != nil {
return nil, err
}
-controller.iptable[kubeovnv1.ProtocolIPv6] = iptable
-controller.ipset[kubeovnv1.ProtocolIPv6] = ipsets.NewIPSets(ipsets.NewIPVersionConfig(ipsets.IPFamilyV6, IPSetPrefix, nil, nil))
+controller.iptables[kubeovnv1.ProtocolIPv6] = iptables
+controller.ipsets[kubeovnv1.ProtocolIPv6] = ipsets.NewIPSets(ipsets.NewIPVersionConfig(ipsets.IPFamilyV6, IPSetPrefix, nil, nil))
}

providerNetworkInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
@@ -971,7 +971,7 @@ func (c *Controller) handlePod(key string) error {
if err != nil {
return err
}
-// set multis-nic bandwidth
+// set multus-nic bandwidth
attachNets, err := util.ParsePodNetworkAnnotation(pod.Annotations[util.AttachmentNetworkAnnotation], pod.Namespace)
if err != nil {
return err