SCTP support for Services, Pod, Endpoint, and NetworkPolicy #614

Closed
janosi opened this issue Sep 11, 2018 · 81 comments
Assignees
Labels
kind/api-change: Categorizes issue or PR as related to adding, removing, or otherwise changing an API
kind/feature: Categorizes issue or PR as related to a new feature.
sig/network: Categorizes an issue or PR as relevant to SIG Network.
stage/stable: Denotes an issue tracking an enhancement targeted for Stable/GA status
Milestone

Comments

@janosi
Contributor

janosi commented Sep 11, 2018

Feature Description
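For context, this enhancement adds SCTP as a supported protocol value (alongside TCP and UDP) in Service, Pod, Endpoint, and NetworkPolicy definitions. A minimal illustrative Service manifest is sketched below; the names and port numbers are hypothetical, and while the feature was alpha it additionally required enabling the SCTPSupport feature gate on the cluster:

```yaml
# Illustrative sketch only: exposes a hypothetical SCTP application as a Service.
apiVersion: v1
kind: Service
metadata:
  name: sctp-demo          # hypothetical name
spec:
  selector:
    app: sctp-demo         # hypothetical label
  ports:
    - name: sctp-port
      protocol: SCTP       # the new protocol value this enhancement adds
      port: 9100           # hypothetical port
      targetPort: 9100
```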

@zparnold
Member

zparnold commented Sep 11, 2018

/milestone v1.12

@k8s-ci-robot k8s-ci-robot added this to the v1.12 milestone Sep 11, 2018
@justaugustus
Member

justaugustus commented Sep 11, 2018

/assign @janosi
/sig network
/kind feature

@k8s-ci-robot k8s-ci-robot added sig/network Categorizes an issue or PR as relevant to SIG Network. kind/feature Categorizes issue or PR as related to a new feature. labels Sep 11, 2018
@justaugustus
Member

justaugustus commented Sep 11, 2018

/stage alpha

@k8s-ci-robot k8s-ci-robot added the stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status label Sep 11, 2018
@justaugustus justaugustus added the tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team label Sep 11, 2018
@ameukam
Member

ameukam commented Oct 5, 2018

Hi folks,
Kubernetes 1.13 is going to be a 'stable' release, since the cycle is only 10 weeks. We encourage no big alpha features, and will only consider adding this feature if you have a high level of confidence it will make Code Slush by 11/09. Are there plans for this enhancement to graduate to alpha/beta/stable within the 1.13 release cycle? If not, can you please remove it from the 1.12 milestone or add it to 1.13?

We are also now encouraging that every new enhancement aligns with a KEP. If a KEP has been created, please link to it in the original post. Please take the opportunity to develop a KEP.

@janosi
Contributor Author

janosi commented Oct 5, 2018

Hello @ameukam,

I am not sure I understand :) The feature's implementation was merged for 1.12 with the PR @justaugustus referenced above, and it is indeed in that release as an alpha feature, including the documentation:
PR: kubernetes/kubernetes#64973
KEP: kubernetes/community#2276
Doc: kubernetes/website#10279

Thanks!

@ameukam
Member

ameukam commented Oct 5, 2018

Hi @janosi, apologies for the confusion. The idea is to identify the target of this enhancement for the next milestone. Do you want to keep it as alpha for v1.13, or graduate it to beta?
As I said earlier, the cycle for 1.13 will be only 10 weeks, so it's up to you to decide whether it can make Code Slush as a beta feature.

@guineveresaenger

guineveresaenger commented Oct 5, 2018

@janosi if you are saying that #64973 has fixed this issue, should this issue be closed?

@justaugustus
Member

justaugustus commented Oct 5, 2018

Feature issues remain open through to GA stage.

@janosi
Contributor Author

janosi commented Oct 6, 2018

@ameukam Thank you for the clarification! I am fine with keeping it as alpha in 1.13, though it would be great to understand how much time (how many releases) I have to mature it to beta/GA before the feature is removed. Thank you!

@kacole2
Member

kacole2 commented Oct 8, 2018

/milestone clear

@k8s-ci-robot k8s-ci-robot removed this from the v1.12 milestone Oct 8, 2018
@kacole2 kacole2 added tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team and removed tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team labels Oct 8, 2018
@fejta-bot

fejta-bot commented Jan 6, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 6, 2019
@ameukam
Member

ameukam commented Jan 6, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 6, 2019
@bowei bowei added this to KEPs in SIG-Network KEPs Jan 30, 2019
@fejta-bot

fejta-bot commented Apr 6, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 6, 2019
@kacole2
Member

kacole2 commented Apr 12, 2019

I'm the Enhancement Lead for 1.15. Is this feature going to be graduating alpha/beta/stable stages in 1.15? Please let me know so it can be tracked properly and added to the spreadsheet.

Once coding begins, please list all relevant k/k PRs in this issue so they can be tracked properly.

@fejta-bot

fejta-bot commented May 12, 2019

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 12, 2019
@palnabarun
Member

palnabarun commented Jun 18, 2020

Hi @janosi -- just wanted to check in about the progress of the enhancement.

I saw that kubernetes/kubernetes#88932 has been merged 🎉. Do you have any other PRs for the graduation of this enhancement? Or are the graduation criteria complete for this cycle?

The release timeline has been revised recently, more details of which can be found here.

Please let me know if you have any questions. 🙂


The revised release schedule is:

  • Thursday, July 9th: Week 13 - Code Freeze
  • Thursday, July 16th: Week 14 - Docs must be completed and reviewed
  • Tuesday, August 25th: Week 20 - Kubernetes v1.19.0 released

@janosi
Contributor Author

janosi commented Jun 22, 2020

Hello @palnabarun. Only the doc PR is left, waiting for further comments or a merge; nothing else. Thank you!

@palnabarun
Member

palnabarun commented Jun 28, 2020

Hi @janosi 👋, thank you for the update. 🙂

@kikisdeliveryservice kikisdeliveryservice removed the tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team label Sep 11, 2020
@kikisdeliveryservice kikisdeliveryservice removed this from the v1.19 milestone Sep 11, 2020
@kikisdeliveryservice
Member

kikisdeliveryservice commented Sep 13, 2020

Hi @janosi

Enhancements Lead here. Are there any plans to graduate this to stable in 1.20?

Thanks!
Kirsten

@danwinship
Contributor

danwinship commented Sep 14, 2020

Yes; all the e2e tests have merged now, so per the KEP we just need to get some passing results from network plugins, and then this can move to GA.
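For readers unfamiliar with what the SCTP NetworkPolicy tests exercise: a policy can select SCTP traffic by setting protocol: SCTP on a port rule, which is what the [Feature:SCTPConnectivity] policy tests verify against each plugin. A sketch, with hypothetical names, labels, and port:

```yaml
# Illustrative sketch only: allows inbound SCTP on a hypothetical port 5060
# from pods labeled role=client, denying other ingress to the selected pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sctp-ingress   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: sctp-server       # hypothetical label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: client   # hypothetical label
      ports:
        - protocol: SCTP
          port: 5060         # hypothetical port
```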

@kikisdeliveryservice kikisdeliveryservice added tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team stage/stable Denotes an issue tracking an enhancement targeted for Stable/GA status and removed stage/beta Denotes an issue tracking an enhancement targeted for Beta status labels Sep 14, 2020
@kikisdeliveryservice
Member

kikisdeliveryservice commented Sep 14, 2020

Great thanks for the update!

/milestone v1.20

@antoninbas

antoninbas commented Sep 25, 2020

@danwinship here are the sonobuoy results for the Antrea plugin when running [Feature:SCTPConnectivity] e2e tests:

https://downloads.antrea.io/tmp/sonobuoy-results/202009252249_sonobuoy_7979918d-3e1b-4fb4-80a1-c56933312b88.tar.gz

with some extra details on how I ran the tests:

> kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:32:58Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

> ./sonobuoy version
Sonobuoy Version: v0.19.0
MinimumKubeVersion: 1.17.0
MaximumKubeVersion: 1.19.99
GitSHA: e03f9ee353717ccc5f58c902633553e34b2fe46a

> kubectl apply -f https://github.com/vmware-tanzu/antrea/releases/download/v0.10.0/antrea.yml

> kubectl get nodes -o wide
NAME                STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-node-master     Ready    master   13m     v1.19.2   192.168.77.100   <none>        Ubuntu 18.04.3 LTS   4.15.0-72-generic   docker://19.3.13
k8s-node-worker-1   Ready    <none>   10m     v1.19.2   192.168.77.101   <none>        Ubuntu 18.04.3 LTS   4.15.0-72-generic   docker://19.3.13
k8s-node-worker-2   Ready    <none>   7m32s   v1.19.2   192.168.77.102   <none>        Ubuntu 18.04.3 LTS   4.15.0-72-generic   docker://19.3.13

> ./sonobuoy run --e2e-focus="SCTPConnectivity" --e2e-skip="" --kube-conformance-image-version=v1.20.0-alpha.1 --wait
INFO[0000] created object                                name=sonobuoy namespace= resource=namespaces
INFO[0000] created object                                name=sonobuoy-serviceaccount namespace=sonobuoy resource=serviceaccounts
INFO[0000] created object                                name=sonobuoy-serviceaccount-sonobuoy namespace= resource=clusterrolebindings
INFO[0000] created object                                name=sonobuoy-serviceaccount-sonobuoy namespace= resource=clusterroles
INFO[0000] created object                                name=sonobuoy-config-cm namespace=sonobuoy resource=configmaps
INFO[0000] created object                                name=sonobuoy-plugins-cm namespace=sonobuoy resource=configmaps
INFO[0001] created object                                name=sonobuoy namespace=sonobuoy resource=pods
INFO[0001] created object                                name=sonobuoy-aggregator namespace=sonobuoy resource=services

> ./sonobuoy retrieve
202009252249_sonobuoy_7979918d-3e1b-4fb4-80a1-c56933312b88.tar.gz

> ./sonobuoy results 202009252249_sonobuoy_7979918d-3e1b-4fb4-80a1-c56933312b88.tar.gz
Plugin: e2e
Status: passed
Total: 5230
Passed: 5
Failed: 0
Skipped: 5225

Plugin: systemd-logs
Status: passed
Total: 3
Passed: 3
Failed: 0
Skipped: 0

I used the v1.20.0-alpha.1 conformance image for sonobuoy to ensure that the SCTP tests, which are not part of 1.19, are picked up correctly. The following tests were run and passed:

    - name: '[sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive]
        NetworkPolicy between server and client using SCTP should enforce policy to
        allow traffic only from a pod in a different namespace based on PodSelector
        and NamespaceSelector [Feature:NetworkPolicy]'
      status: passed
    - name: '[sig-network] Networking Granular Checks: Services should function for
        pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]'
      status: passed
    - name: '[sig-network] Networking should function for pod-pod: sctp [Feature:SCTPConnectivity][Disruptive]'
      status: passed
    - name: '[sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive]
        NetworkPolicy between server and client using SCTP should enforce policy based
        on Ports [Feature:NetworkPolicy]'
      status: passed
    - name: '[sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive]
        NetworkPolicy between server and client using SCTP should support a ''default-deny''
        policy [Feature:NetworkPolicy]'
      status: passed

Let me know if I need to upload these somewhere else. I am happy to provide more information / run additional tests if necessary.

@danwinship
Contributor

danwinship commented Oct 4, 2020

Output of running the current git master e2e.test --ginkgo.focus=SCTP against openshift-sdn is at https://gist.github.com/danwinship/d7e6918bd15cc46f3c4c6181f38a00aa.

{"msg":"Test Suite starting","total":9,"completed":0,"skipped":0,"failed":0}
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]","total":9,"completed":1,"skipped":391,"failed":0}
{"msg":"PASSED [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]","total":9,"completed":2,"skipped":1575,"failed":0}
{"msg":"PASSED [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]","total":9,"completed":3,"skipped":2392,"failed":0}
{"msg":"PASSED [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Feature:SCTP]","total":9,"completed":4,"skipped":3208,"failed":0}
{"msg":"PASSED [sig-network] Networking should function for pod-pod: sctp [Feature:SCTPConnectivity][Disruptive]","total":9,"completed":5,"skipped":3217,"failed":0}
{"msg":"PASSED [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]","total":9,"completed":6,"skipped":4416,"failed":0}
{"msg":"PASSED [sig-network] SCTP [Feature:SCTP] [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints","total":9,"completed":7,"skipped":5080,"failed":0}
{"msg":"Test Suite completed","total":9,"completed":7,"skipped":5219,"failed":0}

(The two SCTP-related tests that got skipped were "should create a Pod with SCTP HostPort", which only works with kubenet, and "should create a ClusterIP Service with SCTP ports", which only works if you're using the default kube-proxy metrics port, which this cluster isn't. But those are both [Feature:SCTP], not [Feature:SCTPConnectivity]; they are covered by the existing SCTP periodic job and aren't part of the "need at least two plugins to pass the SCTPConnectivity tests" requirement.)
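For reference, the skipped HostPort case corresponds to a pod spec along the following lines; the names, image, and port here are hypothetical, not taken from the actual test:

```yaml
# Illustrative sketch only: a pod exposing an SCTP hostPort, the scenario the
# skipped [Feature:SCTP] HostPort test exercises (kubenet-only, per above).
apiVersion: v1
kind: Pod
metadata:
  name: sctp-hostport-demo                        # hypothetical name
spec:
  containers:
    - name: server
      image: registry.example/sctp-server:latest  # hypothetical image
      ports:
        - containerPort: 9100                     # hypothetical port
          hostPort: 9100
          protocol: SCTP
```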

@kinarashah

kinarashah commented Oct 12, 2020

Hi all,

1.20 Enhancement shadow here 👋

Since this Enhancement is scheduled to be in 1.20, please keep in mind these important upcoming dates:
Friday, Nov 6th: Week 8 - Docs Placeholder PR deadline
Thursday, Nov 12th: Week 9 - Code Freeze

As a reminder, please link all of your k/k PR as well as docs PR to this issue so we can track them.

Thank you!

@danwinship
Contributor

danwinship commented Oct 16, 2020

Updated to GA in k/k, kubernetes/kubernetes#95566.
Docs PR is kubernetes/website#24593

@danwinship
Contributor

danwinship commented Oct 21, 2020

So code and docs are merged and I filed a PR (#2107) to update the KEP to "implemented". Should I make that PR be "Closes: #614" or does this issue stay open until the release team is done with it?

@somtochiama
Member

somtochiama commented Oct 21, 2020

Sorry @danwinship! You are all good.

@kikisdeliveryservice
Member

kikisdeliveryservice commented Oct 30, 2020

This is awesome @danwinship !!!

I see the PR (#2107) marking it as GA, which looks good.

Even after that merges, we'll leave this issue open for tracking purposes and close it out when the release is finished.

Thanks again!!

@kikisdeliveryservice
Member

kikisdeliveryservice commented Dec 10, 2020

Still waiting for that PR marking this implemented to merge; just pinged on it, hopefully we can get it in soon :)

@kikisdeliveryservice
Member

kikisdeliveryservice commented Dec 10, 2020

Got the PR merged (#2107), @danwinship feel free to close this issue :)

@danwinship
Contributor

danwinship commented Dec 22, 2020

/close

@k8s-ci-robot
Contributor

k8s-ci-robot commented Dec 22, 2020

@danwinship: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

SIG-Network KEPs automation moved this from KEP to Done Dec 22, 2020
@annajung annajung removed the tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team label Jan 7, 2021