IP ranges no longer work #2221

Closed
8 tasks done
viceice opened this issue Dec 19, 2023 · 11 comments · Fixed by #2264


viceice commented Dec 19, 2023

MetalLB Version

0.13.12

Deployment method

Charts

Main CNI

calico

Kubernetes Version

1.27.8

Cluster Distribution

k3s

Describe the bug

IP ranges no longer work on IP pools

To Reproduce

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: local
  namespace: metallb-system
spec:
  addresses:
  - 172.30.2.70-172.30.2.99
  autoAssign: false

Apply this pool and try to use 172.30.2.70 from that pool; the service gets the "not allowed" error.
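
For completeness, a minimal service requesting that address could look like the sketch below (the service name, selector, and ports are illustrative; the metallb.universe.tf/loadBalancerIPs annotation is the documented way to pin an address on MetalLB 0.13+, and the legacy spec.loadBalancerIP field should behave the same):

apiVersion: v1
kind: Service
metadata:
  name: demo                # illustrative name
  annotations:
    # ask MetalLB for a specific address from the pool above
    metallb.universe.tf/loadBalancerIPs: 172.30.2.70
spec:
  type: LoadBalancer
  selector:
    app: demo               # illustrative selector
  ports:
  - port: 80
    targetPort: 8080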

Expected Behavior

Ranges should work like they did some versions before.

Additional Context

Workaround: use cidr ips

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: local
  namespace: metallb-system
spec:
  addresses:
  - 172.30.2.70/31
  - 172.30.2.72/29
  - 172.30.2.80/28
  autoAssign: false
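
(If my arithmetic is right, these three blocks cover 172.30.2.70-172.30.2.95; matching the original range up to .99 would also need 172.30.2.96/30.)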

I've read and agree with the following

  • I've checked all open and closed issues and my request is not there.
  • I've checked all open and closed pull requests and my request is not there.

I've read and agree with the following

  • I've checked all open and closed issues and my issue is not there.
  • This bug is reproducible when deploying MetalLB from the main branch
  • I have read the troubleshooting guide and I am still not able to make it work
  • I checked the logs and MetalLB is not discarding the configuration as not valid
  • I enabled the debug logs, collected the information required from the cluster using the collect script and will attach them to the issue
  • I will provide the definition of my service and the related endpoint slices and attach them to this issue
viceice added the bug label Dec 19, 2023
viceice (Author) commented Dec 19, 2023

I'll provide logs later if required, but I can't currently access them since I won't destroy our production system again.

superherointj commented Dec 23, 2023

I can confirm the issue: IP ranges weren't working for me either.

The CIDR workaround worked. Thanks for reporting.

My setup:
MetalLB from the Helm chart: metallb@0.13.12.
Host: NixOS, K3s v1.28.4+k3s2, CNI is the K3s default (I believe flannel with VXLAN).

shimritproj (Contributor) commented:
I would like to work on it.

oribon (Member) commented Dec 24, 2023

Thanks! Assigned.

shimritproj (Contributor) commented Jan 15, 2024

You just need to modify the "autoAssign" field, and this will work for you.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: local
  namespace: metallb-system
spec:
  addresses:
  - 172.30.2.70-172.30.2.99
  autoAssign: true

@viceice

viceice (Author) commented Jan 15, 2024

> You just need to modify the "autoAssign" field, and this will work for you. […]

I don't want to enable auto-assign. I've now converted all IP ranges to use CIDR format.

shimritproj (Contributor) commented Jan 16, 2024

> I don't want to enable auto-assign. I've now converted all IP ranges to use CIDR format.

Okay, I will check it and update here. @viceice

shimritproj (Contributor) commented:
Hi, I was able to reproduce the bug on v0.13.12 but not on main, which makes me think it was probably fixed by some commit in between (I'm not totally sure which one). I'll modify the existing tests to include the autoAssign: false case, which I can see is missing, so we won't regress again. Thanks :) @viceice

viceice (Author) commented Jan 25, 2024

Thanks, when can we expect a new release?

fedepaol (Member) commented:
> Thanks, when can we expect a new release?

Sometime next week (I started preparing the release notes, #2261). Please make sure you don't see the problem when deploying from main, if you can.
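
For what it's worth, one way to try main is to apply the manifests straight from the repo. The path below follows the MetalLB install docs, so double-check it against the current repo layout:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/main/config/manifests/metallb-native.yaml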

fedepaol (Member) commented:
Closing this issue based on the above discussion. @viceice feel free to reopen if 0.14.3 (or main) does not work.
