This repository has been archived by the owner on Jun 20, 2024. It is now read-only.

go get github.com/containernetworking/cni@v1.0.1 #3939

Conversation

hswong3i

@hswong3i hswong3i commented Apr 13, 2022

go get github.com/containernetworking/cni@v1.0.1

With the current github.com/containernetworking/cni@v0.5.2 dependency, the
JSON generated by weave-net setup is returned without the newly required
cniVersion field.

With github.com/containernetworking/cni@v1.0.1 this cniVersion is now
required; otherwise it falls back to the default version number 0.1.0.

This PR updates the vendor folder as below:

  • Update dependency with go get github.com/containernetworking/cni@v1.0.1
  • Update vendor directory with go mod vendor
  • Update API interface to the current github.com/containernetworking/cni/pkg/types/100
  • Update QEMU with QEMU_VERSION=v6.1.0-8
  • Update build plan with golang:1.17.9-buster
  • Update base image with alpine:3.15
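The cniVersion fallback described above can be sketched as follows; this is a hypothetical illustration of the spec behaviour (`cniVersionOrDefault` is an invented helper, not Weave's or libcni's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cniVersionOrDefault mimics how a CNI runtime treats a network config that
// omits "cniVersion": the spec says the field defaults to "0.1.0" when absent.
func cniVersionOrDefault(rawConf []byte) (string, error) {
	var conf struct {
		CNIVersion string `json:"cniVersion"`
	}
	if err := json.Unmarshal(rawConf, &conf); err != nil {
		return "", err
	}
	if conf.CNIVersion == "" {
		return "0.1.0", nil // implicit default per the CNI spec
	}
	return conf.CNIVersion, nil
}

func main() {
	// Config as generated by the old dependency: no cniVersion field.
	old := []byte(`{"name": "weave", "type": "weave-net"}`)
	v, _ := cniVersionOrDefault(old)
	fmt.Println(v) // treated as 0.1.0

	// Config with the field explicitly set.
	fixed := []byte(`{"cniVersion": "0.4.0", "name": "weave", "type": "weave-net"}`)
	v, _ = cniVersionOrDefault(fixed)
	fmt.Println(v)
}
```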

Test images could be found from:

See https://github.com/containernetworking/cni/blob/v1.0.1/Documentation/spec-upgrades.md
See containernetworking/cni@27a5b99
See containerd/containerd#6575 (comment)

Fixes #3936
Fixes #3938

Signed-off-by: Wong Hoi Sing Edison hswong3i@pantarei-design.com

@hswong3i
Author

hswong3i commented Apr 13, 2022

Images temporarily uploaded as below:

@Vogtinator

FYI, that's what I tried as well (but just v0.6.0) and the resulting plugin just crashes on start (#3936 (comment))

@hswong3i hswong3i force-pushed the github.com/containernetworking/cni-v1.0.1 branch 9 times, most recently from 5ffad10 to fdf3ff6 on April 13, 2022 16:29
@hswong3i
Author

hswong3i commented Apr 13, 2022

@Vogtinator this fdf3ff6#diff-9681c1c1d0b257403b0997ba56db7dfd7a055cb1f59a15e161b3b3051429ded4R82 works:

diff --git a/prog/kube-utils/main.go b/prog/kube-utils/main.go
index 42414626..9569996c 100644
--- a/prog/kube-utils/main.go
+++ b/prog/kube-utils/main.go
@@ -79,7 +79,7 @@ func isLocalNodeIP(ip string) bool {
                return false
        }
        for _, addr := range addrs {
-               if addr.Peer.IP.String() == ip {
+               if addr.Peer != nil && addr.Peer.IP.String() == ip {
                        return true
                }
        }

And so these 2 images look functional:

  • docker pull alvistack/weave-kube:2.8.1-20220414fdf3ff6e
  • docker pull alvistack/weave-npc:2.8.1-20220414fdf3ff6e

hswong3i added a commit to alvistack/ansible-role-kube_weave that referenced this pull request Apr 13, 2022
@hswong3i hswong3i force-pushed the github.com/containernetworking/cni-v1.0.1 branch from fdf3ff6 to cc1f680 on April 14, 2022 05:27
@hswong3i
Author

With the current github.com/containernetworking/cni/pkg/types/100 we have the following error in a live kubernetes 1.23 cluster:

  Warning  FailedCreatePodSandBox  2m13s (x83 over 20m)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-64897985d-h5wft_kube-system_f701b74c-34d0-4994-bcc9-a75a2054d7e7_0(833caabdd1f92a56a8ef540054216b5d3d06aa811f41f317c9cfca58ea7a07a0): error adding pod kube-system_coredns-64897985d-h5wft to CNI network "weave": plugin type="weave-net" name="weave" failed (add): netplugin failed: "panic: interface conversion: types.Result is *types100.Result, not *types040.Result

I am now trying with the current github.com/containernetworking/cni/pkg/types/040 to see if it matches ;-S
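The panic is a failed Go type assertion: code compiled against the 0.4.0 result type asserts `types.Result` to `*types040.Result`, but the v1.0.1 library returns a `*types100.Result`. A minimal sketch with invented stand-in types (not the real CNI packages) showing why a single-value assertion panics while the comma-ok form degrades gracefully:

```go
package main

import "fmt"

// Hypothetical stand-ins for the CNI result types involved in the panic.
type Result interface{ Version() string }
type Result040 struct{}
type Result100 struct{}

func (Result040) Version() string { return "0.4.0" }
func (Result100) Version() string { return "1.0.0" }

// versionOf uses the comma-ok form, so a newer result type is detected and
// reported instead of panicking the way a bare `res.(Result040)` would.
func versionOf(res Result) string {
	if r, ok := res.(Result040); ok {
		return "legacy " + r.Version()
	}
	return "unexpected result type " + res.Version()
}

func main() {
	fmt.Println(versionOf(Result040{}))
	fmt.Println(versionOf(Result100{})) // the situation behind the panic
}
```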

@hswong3i hswong3i force-pushed the github.com/containernetworking/cni-v1.0.1 branch 3 times, most recently from 6fdd8b2 to 2d571d9 on April 14, 2022 06:53
@Vogtinator

I am now trying with the current github.com/containernetworking/cni/pkg/types/040 to see if it matches ;-S

Yes, that's pretty much API compatible and what I did as well.

@hswong3i
Author

Yes, that's pretty much API compatible and what I did as well.

IMHO the dependency packages should be upgraded to the latest stable release, while we can still pin the API to the legacy version in this PR.

With the current `github.com/containernetworking/cni@v0.5.2` dependency, the
JSON generated by `weave-net setup` is returned without the newly required
`cniVersion`.

With `github.com/containernetworking/cni@v1.0.1` this `cniVersion` is now
required; otherwise it falls back to the default version number `0.1.0`.

This PR updates the `vendor` folder as below:

  - Update dependency with `go get github.com/containernetworking/cni@v1.0.1`
  - Update `vendor` directory with `go mod vendor`
  - Update API interface to the current `github.com/containernetworking/cni/pkg/types/100`
  - Update QEMU with `QEMU_VERSION=v6.1.0-8`
  - Update build plan with `golang:1.17.9-buster`
  - Update base image with `alpine:3.15`

Test images could be found from:

  - https://hub.docker.com/r/alvistack/weave-kube
  - https://hub.docker.com/r/alvistack/weave-npc

See https://github.com/containernetworking/cni/blob/v1.0.1/Documentation/spec-upgrades.md
See containernetworking/cni@27a5b99
See containerd/containerd#6575 (comment)

Fixes weaveworks#3936

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
@hswong3i hswong3i force-pushed the github.com/containernetworking/cni-v1.0.1 branch from 2d571d9 to 6b4cadf on April 14, 2022 08:42
@hswong3i
Author

The images from 6b4cadf look functional:

  • docker pull alvistack/weave-kube:2.8.1-202204146b4cadfc
  • docker pull alvistack/weave-npc:2.8.1-202204146b4cadfc

But with some limitations:

  • PASSED: For a newly installed 3-node k8s cluster, upgrading from 1.21 -> 1.22 -> 1.23 (which originally failed), then running k8s-conformance with sonobuoy run --mode=certified-conformance for each version, is all OK
  • FAILED: For an existing cluster with weave v2.8.1 upgraded to the above 2.8.1-202204146b4cadfc, the CNI mishandles the existing interface and is not able to upgrade it, so all pods stop working...

@hswong3i
Author

hswong3i commented May 3, 2022

For anyone who wants a modern CNI with rich community support, try https://github.com/cilium/cilium with my https://github.com/alvistack/ansible-role-kube_cilium, or simply install as below (extracted from alvistack/ansible-role-kube_cilium@23ecfff):

helm template cilium cilium/cilium \
    --version 1.11.4 \
    --namespace kube-system \
    --set bpf.preallocateMaps=false \
    --set cluster.id=0 \
    --set cluster.name=4e8b0505-4c52-57ab-a7f4-481e7ed3a2e3 \
    --set cni.binPath=/usr/libexec/cni \
    --set cni.chainingMode=none \
    --set cni.exclusive=true \
    --set externalIPs.enabled=true \
    --set hostPort.enabled=true \
    --set hostServices.enabled=true \
    --set hubble.enabled=false \
    --set ipam.mode=cluster-pool \
    --set ipam.operator.clusterPoolIPv4MaskSize=24 \
    --set ipam.operator.clusterPoolIPv4PodCIDRList=10.233.64.0/18 \
    --set ipv4.enabled=true \
    --set ipv6.enabled=false \
    --set kubeProxyReplacement=probe \
    --set nodePort.enabled=true \
    --set nodeinit.enabled=true \
    --set operator.replicas=1 \
    --set tunnel=vxlan \
    > cilium.yml
kubectl apply -Rf cilium.yml

This comes with vxlan + cluster-pool == k8s podCIDR + portmap (hostport) support, most likely portable and similar to a default Flannel or Weave installation. It also passes the CNCF Kubernetes Conformance Test (cncf/k8s-conformance#1958).

P.S. Don't spend your time on a project whose original authors no longer care about it, having shifted their development and business focus to GitOps :-(

@cdemers

cdemers commented May 9, 2022

Looks like the main guy who was working on this project is now at Grafana Labs; unless I'm mistaken, his latest commit on the latest version tag is more than one year old. I don't like it, but on our side I think we have no choice but to consider this project shelved and look into a replacement network overlay. Thanks for your efforts @hswong3i.

@kingdonb
Contributor

I am sorry for the lack of reply on this, I use weave net and I appreciate that you've done this work. It is a mega-PR!

I should be able to complete this by the end of KubeCon next week. I'm only able to promise that I will try what you've contributed and check if it works with the next K8s release for me, I can't promise a release; I want to participate and help but I have limited time that I can spend on Weave net, it is not what I was hired to do here.

As you've correctly noted we are definitely more focused on GitOps as a company at this point. I would expect a lot of people to have noticed that development on Weave net has stopped and for most of those people to have moved on by now, as there haven't been any releases here in over a year.

I really do appreciate you. It will be great to get a release out that supports containerd and K8s 1.24. I cannot promise that, I'm not certain that I'm qualified to certify a next release given the time that has passed and potential that bit-rot will have set in one way, and I cannot comment on the status of the project with respect to being shelved, but if you still have any interest in collaborating on a next release, I can try to connect the dots and get your fix merged if it turns out that it is viable to produce another release. I don't want to lead anyone on in the wrong direction though. If you have already moved on, there will be no hard feelings about it.

Again I'm sorry that no one has been by to look at your contribution here, please do not consider this a personal slight! We have been very busy getting Flux to GA and working on graduation from the CNCF.

@kingdonb kingdonb self-assigned this May 10, 2022
@kingdonb
Contributor

kingdonb commented Jun 1, 2022

Sorry that I haven't been able to give any attention to this yet. I will check in next week with this thread and other relevant threads, to figure out whether a release is again viable. 👍

(It looks like containernetworking/cni#895 has moved ahead, so perhaps a change is no longer necessary?) Still it would be nice to publish a release with fixes for any outstanding CVEs, I cannot commit to it at this time.

@hswong3i
Author

hswong3i commented Jun 2, 2022

containernetworking/cni#896 was merged and https://github.com/containernetworking/cni/releases/tag/v1.1.1 is now released, so ideally the impact on weave may be somewhat improved...

BTW, merging this PR or any additional enhancement with CVE fixes would still be a good idea, especially since weave hasn't received any update since e371215 (2021-09-01...), and the latest official release https://github.com/weaveworks/weave/releases/tag/v2.8.1 dates from 2021-01-25...

Personally I have moved all of my clients to cilium since #3939 (comment)...

@ReillyBrogan

(It looks like containernetworking/cni#895 has moved ahead, so perhaps a change is no longer necessary?) Still it would be nice to publish a release with fixes for any outstanding CVEs, I cannot commit to it at this time.

Honestly, perhaps it's time to consider retiring Weave Net entirely? It doesn't seem like Weaveworks derives much (if any) business value from keeping Weave Net alive by doing requisite maintenance on it (such as updating dependencies, updating to new Golang releases, fixing ecosystem issues and adapting to ecosystem changes), all of which is required before one even looks at adding new features or fixing bugs. Looking into pull requests and issues opened over the last year it doesn't really seem like there is enough community critical mass for the community to take on those tasks either.

I think the best thing that you could do would be to have a discussion internally at Weaveworks whether the company wants to commit dev hours to Weave Net maintenance/development and if not you should make a post in the Github Issues asking if there is any community interest in partially or completely taking over development of the project (at minimum you would need community members with push access, ability to merge PRs, and to make releases) with announcements in whatever other discussion forums you may have (such as Slack, no idea what you are using nowadays) to create as much awareness of that discussion issue as you possibly can. Then after a few months you'll have a much better idea of the state of the community and can make a more informed decision on the future of the project. Who knows, maybe there's a company out there that provides a managed Kubernetes product that's heavily dependent on Weave for whatever reason and they'd be more than happy to take over primary development.

Now if this was a different kind of project (like a personal project or an open source game or whatever) I would have a very different perspective and would probably not be suggesting to end the project, but as a networking plugin Weave Net is a critical part of people's infrastructure and is by nature security critical. Yes, it's a sad day when open source projects die, but it's far better to be explicit about it with a notice and suggested alternatives than to allow it to go a year plus without any signs of life. It would frankly be the wrong decision to fix this PR, merge it and then release a new version if there's no real change in commitment to future development from Weaveworks or the community.

@monadic
Member

monadic commented Jun 6, 2022

Reilly, thanks for your comment. We are interested in moving this project to a 100% community model, and would welcome a bit of help rallying folks around that. I have spent quite a bit of time looking for help from vendors but they would want to see a community model first. What are your thoughts?

alexis

@hswong3i
Author

hswong3i commented Jun 6, 2022

Why not donate Weave Net to the CNCF as a sandbox project, and so ask for community support and maintenance?

@rajch
Contributor

rajch commented Jun 6, 2022

Some ideas for a community model:

  1. Articulate the goals of the project (again).
  2. Document the design and technical decisions that have got us to v2.8.1.
  3. Create a steering committee and invite open membership. Their job is to ensure the project stays true to (1), and to update (2).
  4. Formally hand over control to the steering committee.

I would like to see Weave Net continue. Although I, too, have mostly moved to cilium, Weave remains a favourite because of the absolute ease of installation, and minimum maintenance.

@monadic
Member

monadic commented Jun 6, 2022

@rajch yes - something like this would work well. agree 100%. previously @dholbach shepherded Kured this way. but weave net is bigger and needs more help. I have tried and tried to get the cilium folks to see how much value weave-cilium can add! maybe you could help me convince them

@hswong3i -- needs a plan first

@kingdonb
Contributor

kingdonb commented Jun 6, 2022

Document the design and technical decisions that have got us to v2.8.1.

This would be asking a lot of people who are no longer employed at Weaveworks, the reality is we do not have most of those people on board anymore and would need to continue on without their leadership in order for this project to remain viable.

donate Weave Net to CNCF as sandbox project

This is not really how it works; the CNCF does not make resources available to maintain a project which is not already under their umbrella, and they will not accept projects that are not vibrant and already stocked with active maintainers. I have seen some very helpful PRs from folks who would like to see Weave Net maintained and viable into the future, but so far none of those interested have risen to the level of assuming ownership that we'd expect to see from potential maintainers.

To some extent this may be due to lack of response on our part when PRs are submitted, and I am sorry about that. If there is anyone who would like to make an affirmative commitment to pursue supporting Weave net but feels disenfranchised or blocked by our lack of participation to-date over the past year, please consider this an open invitation to raise PRs to my attention and I will help you get them merged.

@rajch
Contributor

rajch commented Jun 7, 2022

containernetworking/cni#896 was merged and https://github.com/containernetworking/cni/releases/tag/v1.1.1 is now released, so ideally the impact on weave may be somewhat improved...

I ran checks using Kubernetes 1.24, 1.23 and 1.22 with containerd 1.6.6 running on Debian 11.3. Weave once again works, as-is.

Still, I think merging this PR is a good idea.

@monadic
Member

monadic commented Jun 7, 2022

@kingdonb @rajch maybe you two could sync about expanding maintainers.

@kingdonb
Contributor

kingdonb commented Jun 7, 2022

In principle I agree that merging this PR is a great idea, but #3936 seems like a higher priority now if there is a backward-compatible fix for containerd and CNI upstream. I do not have a good understanding of which versions are broken and what components were responsible, @rajch if you know the details of this I'd definitely appreciate it if you can send a PR for the docs.

I'd love to merge a PR that ensures Weave net works on all versions of Kubernetes again (even those running with components that had this backward-incompatible change, since it sounds like the change on our end needed is relatively trivial).

This PR on the other hand is not trivial. Merging it as a drive-by with no maintainer commitment for more releases going forward seems like a poor plan. I really appreciate the work, I'm just not sure who should own it. (I think I can't own it.)

I am happy to R.O. with anyone via CNCF Slack or Weaveworks community slack who is interested in more information about becoming a Weave net maintainer. We've already received at least one private reply about it.

The call-outs in this thread about our priorities with respect to other products are all correct. We haven't prioritized putting resources into Weave net because it's not driving our business. But I find the one-liner install experience with zero config very compelling to this day; I expect that even without a maintainer Weave net will remain very popular until it no longer works, or until a better-maintained alternative can offer a similar experience. My surveying of competitive CNI plugins shows none with a comparably straightforward zero-configuration experience.

My first interest from a triage perspective is updating the docs to reflect the current maintenance status and versions which are known not to work together.

The second priority which is critical before other priorities can be addressed, broadly, is expanding the maintainer pool. (To that end, we should figure out where we are going to rally this cause, somewhere other than inside of this PR thread.)

@cdemers

cdemers commented Jun 8, 2022

If the solution really is just to skip up to containerd v1.6.6, this would be the most anticlimactic resolution I've ever seen.

(even if we are very grateful for it and all the work that was put into it)

@rajch
Contributor

rajch commented Jun 9, 2022

It's a solution, thanks to helpful reporting by a number of people, and fast work over at containernetworking/cni and containerd/containerd. The relevant issues/PRs mention weave by name, so it looks like weave is actively used in a lot of places.

Thank you, @hswong3i. Your work on this PR helped me understand the issue clearly.

@monadic
Member

monadic commented Jun 9, 2022

thank you @rajch

@ReillyBrogan

My recommendation is to move the discussion on the future of the Weave Net project to a separate Github issue (and probably pin it). This PR is not really the place for it and has low visibility. Perhaps amend the Github readme to point it out so it's even more visible. It's been years since I've been a part of it but I think there was a channel for weave net on the Weaveworks Slack? Could probably post an announcement there.

At this point I think the best approach is to try to get as much visibility as possible so as to draw as many people who might be interested in becoming maintainers out of the woodwork as you can.

@monadic
Member

monadic commented Jun 10, 2022

good idea @ReillyBrogan

@kingdonb would you mind doing this (GH+Slack) -- I'm happy to help offline

@hswong3i
Author

hswong3i commented Jun 10, 2022

Shall we get back to the origin of this PR?

We could dream big with community-based support (later...), but at least please prove that someone has the right to accept the PR and create a new release.... Else how could we believe that Weave Net is ready and able to be re-activated?

@rajch
Contributor

rajch commented Jun 10, 2022

My recommendation is to move the discussion on the future of the Weave Net project to a separate Github issue (and probably pin it).

+1

@monadic I have some ideas regarding vendors that can be approached. Cilium is probably not a good idea, as they have a functionally equivalent product using different underlying technology. But let's not hijack this discussion; we'll continue elsewhere.

@rajch
Contributor

rajch commented Jun 10, 2022

Shall we get back to the origin of this PR?

I just ran some tests, as follows:

  • Modified the weave manifest downloaded with the one-line instruction to use the images you provided, listed above.
  • Created two two-node clusters. On one, ensured containerd is at v1.6.4 (where the unmodified weave manifest causes problems). On the other cluster, ensured that containerd is at v1.6.6 (where the weave manifest works unmodified)
  • Applied the modified manifest on both clusters

I can confirm that the modified manifest works on both clusters, with containerd 1.6.4 and 1.6.6. So yes, the changes do help.
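For anyone repeating the test: the manifest modification is just an image swap, which can be scripted. The sketch below assumes the upstream manifest references `weaveworks/weave-kube:2.8.1` and `weaveworks/weave-npc:2.8.1` (an assumption; check your downloaded manifest):

```go
package main

import (
	"fmt"
	"strings"
)

// patchImages swaps the upstream weave images for the test builds from this
// PR. Plain text replacement is enough here because the image strings are
// unique within the manifest.
func patchImages(manifest string) string {
	replacer := strings.NewReplacer(
		"weaveworks/weave-kube:2.8.1", "docker.io/alvistack/weave-kube:2.8.1-202204146b4cadfc",
		"weaveworks/weave-npc:2.8.1", "docker.io/alvistack/weave-npc:2.8.1-202204146b4cadfc",
	)
	return replacer.Replace(manifest)
}

func main() {
	snippet := "image: 'weaveworks/weave-kube:2.8.1'"
	fmt.Println(patchImages(snippet))
}
```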

Here is the modified manifest in case anyone else wants to test.

apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.16/net.yaml",
              "date": "Fri Jun 10 2022 16:07:14 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
      namespace: kube-system
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.16/net.yaml",
              "date": "Fri Jun 10 2022 16:07:14 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
    rules:
      - apiGroups:
          - ''
        resources:
          - pods
          - namespaces
          - nodes
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - networking.k8s.io
        resources:
          - networkpolicies
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ''
        resources:
          - nodes/status
        verbs:
          - patch
          - update
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.16/net.yaml",
              "date": "Fri Jun 10 2022 16:07:14 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
    roleRef:
      kind: ClusterRole
      name: weave-net
      apiGroup: rbac.authorization.k8s.io
    subjects:
      - kind: ServiceAccount
        name: weave-net
        namespace: kube-system
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.16/net.yaml",
              "date": "Fri Jun 10 2022 16:07:14 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
      namespace: kube-system
    rules:
      - apiGroups:
          - ''
        resourceNames:
          - weave-net
        resources:
          - configmaps
        verbs:
          - get
          - update
      - apiGroups:
          - ''
        resources:
          - configmaps
        verbs:
          - create
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.16/net.yaml",
              "date": "Fri Jun 10 2022 16:07:14 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
      namespace: kube-system
    roleRef:
      kind: Role
      name: weave-net
      apiGroup: rbac.authorization.k8s.io
    subjects:
      - kind: ServiceAccount
        name: weave-net
        namespace: kube-system
  - apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.16/net.yaml",
              "date": "Fri Jun 10 2022 16:07:14 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
      namespace: kube-system
    spec:
      minReadySeconds: 5
      selector:
        matchLabels:
          name: weave-net
      template:
        metadata:
          labels:
            name: weave-net
        spec:
          containers:
            - name: weave
              command:
                - /home/weave/launch.sh
              env:
                - name: HOSTNAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: spec.nodeName
                - name: INIT_CONTAINER
                  value: 'true'
              image: 'docker.io/alvistack/weave-kube:2.8.1-202204146b4cadfc'
              readinessProbe:
                httpGet:
                  host: 127.0.0.1
                  path: /status
                  port: 6784
              resources:
                requests:
                  cpu: 50m
                  memory: 100Mi
              securityContext:
                privileged: true
              volumeMounts:
                - name: weavedb
                  mountPath: /weavedb
                - name: dbus
                  mountPath: /host/var/lib/dbus
                - name: machine-id
                  mountPath: /host/etc/machine-id
                  readOnly: true
                - name: xtables-lock
                  mountPath: /run/xtables.lock
            - name: weave-npc
              env:
                - name: HOSTNAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: spec.nodeName
              image: 'docker.io/alvistack/weave-npc:2.8.1-202204146b4cadfc'
              resources:
                requests:
                  cpu: 50m
                  memory: 100Mi
              securityContext:
                privileged: true
              volumeMounts:
                - name: xtables-lock
                  mountPath: /run/xtables.lock
          dnsPolicy: ClusterFirstWithHostNet
          hostNetwork: true
          initContainers:
            - name: weave-init
              command:
                - /home/weave/init.sh
              image: 'docker.io/alvistack/weave-kube:2.8.1-202204146b4cadfc'
              securityContext:
                privileged: true
              volumeMounts:
                - name: cni-bin
                  mountPath: /host/opt
                - name: cni-bin2
                  mountPath: /host/home
                - name: cni-conf
                  mountPath: /host/etc
                - name: lib-modules
                  mountPath: /lib/modules
                - name: xtables-lock
                  mountPath: /run/xtables.lock
          priorityClassName: system-node-critical
          restartPolicy: Always
          securityContext:
            seLinuxOptions: {}
          serviceAccountName: weave-net
          tolerations:
            - effect: NoSchedule
              operator: Exists
            - effect: NoExecute
              operator: Exists
          volumes:
            - name: weavedb
              hostPath:
                path: /var/lib/weave
            - name: cni-bin
              hostPath:
                path: /opt
            - name: cni-bin2
              hostPath:
                path: /home
            - name: cni-conf
              hostPath:
                path: /etc
            - name: dbus
              hostPath:
                path: /var/lib/dbus
            - name: lib-modules
              hostPath:
                path: /lib/modules
            - name: machine-id
              hostPath:
                path: /etc/machine-id
                type: FileOrCreate
            - name: xtables-lock
              hostPath:
                path: /run/xtables.lock
                type: FileOrCreate
      updateStrategy:
        type: RollingUpdate

@ReillyBrogan

We could dream big with community-based support (later...), but at least please prove that someone has the right to accept the PR and create a new release.... Else how could we believe that Weave Net is ready and able to be re-activated?

#3946 was just merged, so rest assured that PRs can be merged. Both @monadic and @kingdonb are part of the Weaveworks Github organization and can presumably create releases too as necessary.

I know you think that creating a release now is more important than figuring out the maintenance going forward, but respectfully I disagree. Right now the containerd issue is resolved by updating to containerd 1.6.6 so for the moment merging this PR isn't as necessary.

Reading a bit between the lines with what @monadic said in response to my earlier comment it sounds like Weaveworks made the decision not to invest any more resources into this project beyond the bare minimum necessary to pass the torch to community maintainers and perhaps a bit extra to help it limp along until then. At this point the project is effectively unmaintained which is unacceptable for security-critical software like this.

And so for the moment the primary focus must be on trying to fix that unmaintained status. Once those new community maintainers are on board it's very likely that their first priority will be to merge this PR and get a new release out.

I think this PR is going to be merged whether new maintainers are found or not, but there's a big difference between the next release being "this release updated all deps, golang, base images etc but weave net is officially being retired so consider this your notification to migrate to another tool" or it being "this release updated all deps, golang, base image etc and weave net has officially restarted development! Expect further releases in the future". I don't think a new release should be made until we know which of those two it's going to be (unless some truly critical CVE is found affecting weave that should be patched ASAP).

@hswong3i
Author

Totally agree with the ongoing roadmap, but sorry that I couldn't spare extra time waiting for it.

I will close the PR. Anyone who would like to follow up, please feel free to reuse my code (no need to credit my contribution).

Wish Weave Net could have a better future :-)

@dholbach
Contributor

Let's continue the conversation here: #3948

@bboreham
Contributor

Document the design and technical decisions that have got us to v2.8.1.

Whilst it’s not a document, I did talk through some of the major design decisions here: https://archive.fosdem.org/2020/schedule/event/weave_net_an_open_source_container_network/

rajch added a commit to rajch/weave that referenced this pull request Mar 22, 2023
Ran `go get github.com/containernetworking/cni@v1.1.2`
Ran `go get github.com/containernetworking/plugins@1.1.2`
modified:   ../../plugin/ipam/cni.go
modified:   ../../plugin/net/cni.go
modified:   ../../prog/kube-utils/main.go
modified:   ../../prog/weaveutil/cni.go
Ran `go mod tidy -v`
Ran `go mod vendor`

This was all work done previously by @hswong3i in
weaveworks#3939

All credit to them
rajch added a commit to rajch/weave that referenced this pull request Mar 29, 2023
This was all work done previously by @hswong3i in
weaveworks#3939

All credit to them