Perf/libovsdb #1978
Conversation
pkg/controller/controller.go (outdated diff):

```diff
@@ -823,8 +831,6 @@ func (c *Controller) startWorkers(stopCh <-chan struct{}) {
 		go wait.Until(c.CheckNodePortGroup, time.Duration(c.config.NodePgProbeTime)*time.Minute, stopCh)
 	}

-	go wait.Until(c.syncVmLiveMigrationPort, 15*time.Second, stopCh)
```
Why is this deleted?
The purpose of syncVmLiveMigrationPort is to hand the original VM's MAC and IP addresses over to the new VM once live migration completes.
The function only makes sense when read together with CreatePort (renamed CreateLogicalSwitchPort in the new version). There are two scenarios (assume the VM is named test-vm, and the pre- and post-migration vm pods are test-vm-1-xxx and test-vm-2-xxx respectively):
-
When keep-vm-ip = true (i.e. the VM's IP should be preserved), the logical_switch_port of both test-vm-1-xxx and test-vm-2-xxx is named test-vm.namespace.providerName, so when test-vm-2-xxx is created, two pods temporarily share the same lsp (including its MAC and IP). Live-migration testing showed there is no need to set liveMigration=1 on the lsp and then handle it in syncVmLiveMigrationPort.
-
When keep-vm-ip = false (i.e. the VM's IP is not preserved), test-vm-1-xxx and test-vm-2-xxx each get their own IP and MAC, so the process above does not apply.
The new CreateLogicalSwitchPort no longer sets liveMigration=1, so the syncVmLiveMigrationPort function is no longer needed.
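The shared-port behavior described above can be sketched as a tiny helper. This is an illustrative sketch only (the function name and arguments are assumptions, not code from the PR): with keep-vm-ip = true the LSP name is derived from the VM, not from the pod, so both migration pods collide on the same port on purpose.

```go
package main

import "fmt"

// lspName is a hypothetical sketch of how both VM pods resolve to one
// logical_switch_port when keep-vm-ip=true: the name is built from the
// VM name, namespace, and provider, so test-vm-1-xxx and test-vm-2-xxx
// (which belong to the same VM) map to the same port.
func lspName(vm, namespace, provider string) string {
	return fmt.Sprintf("%s.%s.%s", vm, namespace, provider)
}

func main() {
	// Both the pre-migration pod (test-vm-1-xxx) and the post-migration
	// pod (test-vm-2-xxx) belong to VM "test-vm":
	fmt.Println(lspName("test-vm", "default", "ovn")) // test-vm.default.ovn
}
```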
NAT outgoing for centralized subnets failed, caused by a missing logical router policy:

```
❯ kubectl ko nbctl lr-policy-list ovn-cluster
Routing Policies
 31000 ip4.dst == 10.16.0.0/16 allow
 31000 ip4.dst == 100.64.0.0/16 allow
 30000 ip4.dst == 172.20.0.2 reroute 100.64.0.3
 30000 ip4.dst == 172.20.0.3 reroute 100.64.0.2
# missing the following policy:
# 29000 ip4.src == 10.16.0.0/16 reroute 100.64.0.3
```

kube-ovn-controller logs:

```
E1027 07:59:19.851029 10 node.go:983] get logical router policy: not found policy priority 29000 match ip4.src == 10.16.0.0/16 in logical router ovn-cluster
E1027 07:59:19.851044 10 node.go:993] failed to get policy route paras, not found policy priority 29000 match ip4.src == 10.16.0.0/16 in logical router ovn-cluster
E1027 07:59:19.851051 10 subnet.go:1615] check ecmp policy route exist for subnet ovn-default, error not found policy priority 29000 match ip4.src == 10.16.0.0/16 in logical router ovn-cluster
```
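A minimal sketch of how the missing policy can be detected by scanning `lr-policy-list` output for a priority/match pair. The `hasPolicy` helper is hypothetical (not part of kube-ovn), but it parses the exact output format shown above:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// hasPolicy reports whether the lr-policy-list output contains a policy row
// whose first field is the given priority and whose line contains the match
// expression.
func hasPolicy(output string, priority int, match string) bool {
	want := strconv.Itoa(priority)
	for _, line := range strings.Split(output, "\n") {
		fields := strings.Fields(line)
		if len(fields) > 0 && fields[0] == want && strings.Contains(line, match) {
			return true
		}
	}
	return false
}

func main() {
	// The output pasted in the bug report above:
	out := ` 31000 ip4.dst == 10.16.0.0/16 allow
 31000 ip4.dst == 100.64.0.0/16 allow
 30000 ip4.dst == 172.20.0.2 reroute 100.64.0.3
 30000 ip4.dst == 172.20.0.3 reroute 100.64.0.2`
	// false: the priority-29000 reroute policy is missing
	fmt.Println(hasPolicy(out, 29000, "ip4.src == 10.16.0.0/16"))
}
```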
https://github.com/kubeovn/kube-ovn/blob/perf/libovsdb/pkg/ovs/ovn-nb-acl.go#L43
```go
	if lb.Vips == nil {
		lb.Vips = make(map[string]string)
	}

	for vip, backends := range vips {
		lb.Vips[vip] = backends
	}
```
Suggested change:

```diff
-	if lb.Vips == nil {
-		lb.Vips = make(map[string]string)
-	}
-	for vip, backends := range vips {
-		lb.Vips[vip] = backends
-	}
+	updatedVips := make(map[string]string, len(lb.Vips)+len(vips))
+	for vip, backends := range lb.Vips {
+		updatedVips[vip] = backends
+	}
+	for vip, backends := range vips {
+		updatedVips[vip] = backends
+	}
+	lb.Vips = updatedVips
```
Updating the field directly will cause unexpected results.
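The reason the copy matters: a Go map is a reference type, so writing through `lb.Vips` mutates every value that shares the same underlying map, including any cached copy of the row that the client may still hold. A standalone sketch of the aliasing (the `LoadBalancer` type here is a stand-in for the real NB model, not the actual type):

```go
package main

import "fmt"

// LoadBalancer is a stand-in for the OVN NB model type; only the map
// semantics matter for this example.
type LoadBalancer struct {
	Vips map[string]string
}

func main() {
	cached := LoadBalancer{Vips: map[string]string{"10.96.0.1:443": "192.168.0.2:6443"}}
	working := cached // struct copy, but both share one underlying map

	// Updating the field directly also mutates the "cached" copy:
	working.Vips["10.96.0.10:53"] = "10.16.0.5:53"
	fmt.Println(len(cached.Vips)) // 2: the cache was changed behind our back

	// Copy-on-write, as in the suggested change, leaves the cache intact:
	updated := make(map[string]string, len(working.Vips)+1)
	for vip, backends := range working.Vips {
		updated[vip] = backends
	}
	updated["10.96.0.11:9153"] = "10.16.0.5:9153"
	working.Vips = updated
	fmt.Println(len(cached.Vips)) // still 2
}
```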
* 1. replace most of logical_router_port table with libovsdb 2. add ovsdb unit test in target 'ut'
* go fmt
What type of PR is this?
Examples of user-facing changes:
Which issue(s) this PR fixes:
Fixes #1675
replace ovn-nbctl function call with libovsdb