
bpf, sock: support v4-in-v6 mapped addresses #9923

Merged
merged 1 commit into master from pr/v4-in-v6-host-reachable on Jan 22, 2020

Conversation

@borkmann (Member) commented Jan 21, 2020

See commit msg. /cc @brb

Support v4-in-v6 mapped addresses in BPF host reachable services.


@borkmann borkmann added pending-review sig/datapath Impacts bpf/ or low-level forwarding details, including map management and monitor messages. labels Jan 21, 2020
@borkmann borkmann requested review from brb and a team January 21, 2020 21:19
@maintainer-s-little-helper (bot)

Release note label not set, please set the appropriate release note.

@maintainer-s-little-helper maintainer-s-little-helper bot added this to In progress in 1.7.0 Jan 21, 2020
@borkmann borkmann added the release-note/minor This PR changes functionality that users may find relevant to operating Cilium. label Jan 21, 2020
@borkmann (Member, Author): test-me-please

@borkmann (Member, Author): test-me-please

@coveralls commented Jan 21, 2020

Coverage increased (+0.03%) to 46.008% when pulling f50adb2 on pr/v4-in-v6-host-reachable into 3d42090 on master.

Implement support for v4-in-v6 mapped addresses in host reachable services
in order to support v6-only programs that make use of the mapping, as is
the case in e.g. Java applications.
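
For context (this background is not part of the commit message itself): RFC
4291 defines the v4-mapped form as ::ffff:a.b.c.d, i.e. 80 zero bits, then
16 one bits, then the IPv4 address in the last 32 bits. The v6 socket hook
therefore only has to recognize this pattern and hand the embedded v4
address to the existing v4 service lookup. A minimal userspace sketch of
that check (glibc ships the equivalent test as IN6_IS_ADDR_V4MAPPED):

  #include <netinet/in.h>
  #include <stdint.h>
  #include <string.h>

  /* Sketch only, not the Cilium datapath code: return 1 and extract the
   * embedded IPv4 address (network byte order) if the v6 address is of
   * the v4-mapped form ::ffff:a.b.c.d. */
  static int is_v4_mapped(const struct in6_addr *a, uint32_t *v4)
  {
      static const uint8_t prefix[12] = {
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xff, 0xff,
      };

      if (memcmp(a->s6_addr, prefix, sizeof(prefix)))
          return 0;
      memcpy(v4, &a->s6_addr[12], sizeof(*v4));
      return 1;
  }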

  # ./cilium/cilium service list
  ID   Frontend             Service Type   Backend
  1    193.99.144.88:1000   ClusterIP      1 => 193.99.144.80:80

Implement support for TCP and UDP. Example for TCP socket via connect:

  # strace -f ./a.out
  [...]
  socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP) = 3
  connect(3, {sa_family=AF_INET6, sin6_port=htons(1000), inet_pton(AF_INET6, "::ffff:193.99.144.88", &sin6_addr), sin6_flowinfo=htonl(0), sin6_scope_id=0}, 28) = 0
  write(3, "a", 1)                        = 1
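
The traced a.out is not included in the change; a hypothetical minimal
reconstruction that produces the same system-call sequence as the strace
above would be:

  /* Hypothetical reconstruction of the traced a.out (not part of the
   * commit): v6 TCP connect to the v4-mapped service frontend above. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void)
  {
      struct sockaddr_in6 dst;
      int fd = socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP);

      if (fd < 0) {
          perror("socket");
          return 1;
      }
      memset(&dst, 0, sizeof(dst));
      dst.sin6_family = AF_INET6;
      dst.sin6_port = htons(1000);
      inet_pton(AF_INET6, "::ffff:193.99.144.88", &dst.sin6_addr);
      if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
          perror("connect");
          return 1;
      }
      write(fd, "a", 1);  /* matches the traced one-byte write */
      close(fd);
      return 0;
  }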

tcpdump trace from the application showing that bpf_sock selects the right
v4-based backend:

  [...]
  22:08:35.703384 IP 192.168.178.29.46374 > 193.99.144.80.80: Flags [S], seq 3832600274, win 29200, options [mss 1460,sackOK,TS val 712816014 ecr 0,nop,wscale 7], length 0
  22:08:35.715239 IP 193.99.144.80.80 > 192.168.178.29.46374: Flags [S.], seq 2634692580, ack 3832600275, win 4200, options [mss 1400,nop,nop,TS val 3387918079 ecr 712816014,sackOK,eol], length 0
  22:08:35.715255 IP 192.168.178.29.46374 > 193.99.144.80.80: Flags [.], ack 1, win 29200, options [nop,nop,TS val 712816026 ecr 3387918079], length 0
  [...]

DNS / UDP example:

  # ./cilium/cilium service list
  ID   Frontend             Service Type   Backend
  3    193.99.144.90:53     ClusterIP      1 => 8.8.8.8:53

  # dig ipv6.google.com @"::ffff:193.99.144.90"

  ; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> ipv6.google.com @::ffff:193.99.144.90
  ;; global options: +cmd
  ;; Got answer:
  ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26196
  ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

  ;; OPT PSEUDOSECTION:
  ; EDNS: version: 0, flags:; udp: 512
  ;; QUESTION SECTION:
  ;ipv6.google.com.		IN	A

  ;; ANSWER SECTION:
  ipv6.google.com.	21588	IN	CNAME	ipv6.l.google.com.

  ;; AUTHORITY SECTION:
  l.google.com.		48	IN	SOA	ns1.google.com. dns-admin.google.com. 290586609 900 900 1800 60

  ;; Query time: 6 msec
  ;; SERVER: ::ffff:193.99.144.90#53(::ffff:193.99.144.90)
  ;; WHEN: Tue Jan 21 22:15:08 CET 2020
  ;; MSG SIZE  rcvd: 115

  # tcpdump -i any port 53 -n
  [...]
  22:15:08.506594 IP 192.168.178.29.55071 > 8.8.8.8.53: 26196+ [1au] A? ipv6.google.com. (56)
  22:15:08.513540 IP 8.8.8.8.53 > 192.168.178.29.55071: 26196 1/1/1 CNAME ipv6.l.google.com. (115)
  [...]
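
The same rewrite covers datagram sockets (dig above uses UDP). A
connected-UDP sketch against the mapped DNS frontend, again hypothetical
and not part of the commit:

  /* Sketch (not part of the commit): connected UDP socket to the
   * v4-mapped DNS frontend; the destination is rewritten to the v4
   * backend by the BPF hook just as in the TCP case. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void)
  {
      struct sockaddr_in6 dst;
      int fd = socket(AF_INET6, SOCK_DGRAM, IPPROTO_UDP);

      memset(&dst, 0, sizeof(dst));
      dst.sin6_family = AF_INET6;
      dst.sin6_port = htons(53);
      inet_pton(AF_INET6, "::ffff:193.99.144.90", &dst.sin6_addr);
      connect(fd, (struct sockaddr *)&dst, sizeof(dst));
      /* A real client would now send a wire-format DNS query here,
       * e.g. via send(fd, query, query_len, 0). */
      close(fd);
      return 0;
  }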

Reported-by: Osvald Ivarsson via Slack
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
@borkmann (Member, Author): test-me-please

@borkmann borkmann moved this from In progress (1.7) to Done in 1.9 kube-proxy removal & general dp optimization Jan 21, 2020
@borkmann (Member, Author): VM provisioning failed in CI, retrying...

@borkmann (Member, Author): test-me-please

@joestringer (Member): VM provisioning failure again.

@joestringer (Member): Test-me-please

@borkmann (Member, Author): test-me-please

@borkmann (Member, Author): Hit #9903

@borkmann (Member, Author): test-me-please

@borkmann (Member, Author) commented Jan 22, 2020: Same CI flake (code is not involved in the failure), retrying.

@borkmann (Member, Author): test-me-please

@borkmann (Member, Author): Hit same issue.

@borkmann (Member, Author): test-me-please

@aanm (Member) left a review comment:

Are there any tests covering this case so we know we will never regress?

@aanm aanm added this to the 1.8 milestone Jan 22, 2020
@gandro (Member) left a review comment:

I'm no expert in this code, but it looks like a very reasonable approach to me. I did not see any divergence from the current v4 logic. Thanks for the heads up!

bpf/bpf_sock.c (review thread resolved)
@borkmann (Member, Author)

> Are there any tests covering this case so we know we will never regress?

This would depend on full IPv6 support in our CI.

@aanm aanm modified the milestones: 1.8, 1.7 Jan 22, 2020
@borkmann (Member, Author)

> Are there any tests covering this case so we know we will never regress?
>
> This would depend on full IPv6 support in our CI.

Just discussed offline that we should enable the v6 hook in bpf_sock when v4 is enabled, in order to avoid users having to enable v6 as a whole. This also makes it possible to add CI tests. Will add both changes in a follow-up.

@borkmann (Member, Author): test-me-please

@borkmann borkmann merged commit 7f1142e into master Jan 22, 2020
1.7.0 automation moved this from In progress to Merged Jan 22, 2020
@borkmann borkmann deleted the pr/v4-in-v6-host-reachable branch January 22, 2020 15:25
Labels
release-note/minor This PR changes functionality that users may find relevant to operating Cilium. sig/datapath Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
Projects
1.7.0 (Merged)
5 participants