
error="could not find an available IP while allocating VIP" #35

Closed
burtsevyg opened this issue Apr 7, 2020 · 5 comments

@burtsevyg

error="could not find an available IP while allocating VIP"

I get this error every day on my single-server dev environment because all of my "public" services (about 100 of them) use traefik-public.

The docker/ip-util-check script says:

Network traefik-public/n37oijkbobyw has an IP address capacity of 253 and uses 220 addresses spanning over 1 nodes
WARNING: network is using more than the 75% of the total space. Remaining only 32 IPs after upgrade
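
For reference, on a swarm manager the same usage can also be checked directly with docker network inspect (the verbose form only works for swarm-scoped networks, and the exact output varies by Docker version):

docker network inspect traefik-public --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'   # configured subnet
docker network inspect -v traefik-public   # lists the services and the VIPs they hold on this network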

What can I do in this situation?

@burtsevyg
Author

As I understand it, dnsrr solves this problem, but it is not supported with Traefik right now: traefik/traefik#3288

@tiangolo
Owner

Thanks for reporting back and closing the issue 👍

@clintmod

clintmod commented Apr 6, 2021

As I understand it, dnsrr solves this problem, but it is not supported with Traefik right now: traefik/traefik#3288

dnsrr works for me with the latest versions of Traefik and Docker.
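
A minimal sketch of what that looks like, assuming the service sits behind Traefik on the traefik-public network (the service name, host, and port here are placeholders): with endpoint_mode: dnsrr the service gets no VIP, so it no longer consumes an extra address from the network's pool.

version: "3.8"

networks:
  traefik-public:
    external: true

services:
  whoami:
    image: traefik/whoami
    networks:
      - traefik-public
    deploy:
      endpoint_mode: dnsrr   # DNS round-robin: no VIP is allocated for this service
      replicas: 1
      labels:
        - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
        - "traefik.http.services.whoami.loadbalancer.server.port=80"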

@tiger5226

I will just save everyone a TON of time. The default ingress network creation is the problem here. Every service you start up connects both to the network you created and to the ingress network. The default subnet for the ingress network is a /24, which means that if all of your services are exposed on the ingress network, you can create at most 254 services in your swarm. These assignments to the ingress network don't get recycled (a bug, as far as I can tell) until a manager restart.
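
If you want to confirm what your own ingress network was created with, something like this should print the subnet (on recent Docker versions it is typically a /24; older releases used a different default):

docker network inspect ingress --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'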

This can be tested by launching a swarm with docker swarm init --default-addr-pool 10.0.0.0/8 --default-addr-pool-mask-length 28. It creates the ingress network with mask length /28, which means only 14 service connections can be created. You will hit this issue almost immediately. It's a good reproduction scenario.

Keep in mind that if you want to reduce the number of IP addresses assigned by default when a network is created, you can set the swarm's default mask length, then remove the ingress network and recreate it with a subnet of 10.0.0.0/16, which allows 65534 service connections to the ingress, while keeping your smaller default allocation (for example /28) for all other networks created.
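
The recreate step looks roughly like this; note that the ingress network can only be removed while no services are publishing ports through it, so published ports will be briefly unavailable:

docker network rm ingress
docker network create \
  --driver overlay \
  --ingress \
  --subnet 10.0.0.0/16 \
  ingress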

Thumbs up if this helped!

@cadmax

cadmax commented Aug 5, 2021

In my environment we solved this by creating additional networks and attaching them to Traefik, so each extra network gives us another /24 block of addresses to use.
Example:

traefik-docker-compose.yml

version: '3.3'
networks:
  webgateway:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: 192.168.1.0/24
  webgateway_2:
    driver: overlay
    ipam:
      driver: default
  webgateway_3:
    driver: overlay
    ipam:
      driver: default
  webgateway_4:
    driver: overlay
    ipam:
      driver: default

services:
  traefik:
    image: "traefik:v2.1.3"
    command:
      - "--ping=true"
      - "--ping.entryPoint=ping"
      - "--providers.docker.swarmMode=true"
      - "--providers.docker.network=traefik_webgateway"
      - "--providers.docker.network=traefik_webgateway_2"
      - "--providers.docker.network=traefik_webgateway_3"
      - "--providers.docker.network=traefik_webgateway_4"
      - "--providers.file.directory=/configuration"
      - "--providers.file.watch=true"
      - "--entryPoints.web.address=:80"
      - "--entryPoints.web.forwardedHeaders.insecure"
      - "--entryPoints.websecure.address=:443"
      - "--entryPoints.websecure.forwardedHeaders.insecure"
      - "--entryPoints.ping.address=:8082"
      - "--api.dashboard=true"
      - "--api.insecure=true"
      - "--metrics=true"
      - "--metrics.prometheus=true"
      - "--accesslog=true"
    networks:
      - webgateway
      - webgateway_2
      - webgateway_3
      - webgateway_4
    ports:
      - "443:433"
      - "80:80"
      - "8080:8080"
      - "8082:8082"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /root/traefik/configuration/:/configuration/
    deploy:
      restart_policy:
        condition: any
        delay: 5s
      mode: global
      placement:
        constraints:
          - node.role == manager
      labels:
        - traefik.enable=false

my api.yml (services on the first network):

version: "3.8"

networks:
  traefik_webgateway:
    external: true

services:
  web:
    image: myservicename
    command: ["node", "server"]
    environment:
      - TZ=America/Sao_Paulo
    networks:
      - traefik_webgateway
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 5s
      labels:
        - "traefik.docker.network=traefik_webgateway"
        - "traefik.http.routers.myservicename.rule=Host(`service.example.com`)"
        - "traefik.http.routers.myservicename.entrypoints=web"
        - "traefik.http.routers.myservicename.service=myservicename"
        - "traefik.http.services.myservicename.loadbalancer.server.port=3335"

my front.yml (after 256 services, on the second network):

version: "3.8"

networks:
  traefik_webgateway_2:
    external: true

services:
  web:
    image: myfront
    command: ["node", "server"]
    environment:
      - TZ=America/Sao_Paulo
    networks:
      - traefik_webgateway_2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 5s
      labels:
        - "traefik.docker.network=traefik_webgateway_2"
        - "traefik.http.routers.myfront.rule=Host(`myfront.example.com`)"
        - "traefik.http.routers.myfront.entrypoints=web"
        - "traefik.http.routers.myfront.service=myfront"
        - "traefik.http.services.myfront.loadbalancer.server.port=8080"
