
Networking under the new menu #245

Closed · Paraphraser opened this issue Jan 19, 2021 · 4 comments

Comments

@Paraphraser

@Slyke

I would like to open a discussion on "new menu" networking.

I've made no secret of my views on the changes, particularly how they add complexity that mere mortals should not have to deal with. You can relax because I don't propose to re-hash any of that here.

I have just done a clean clone of SensorsIot/IOTstack and stepped through the contents of the .templates folder to build the following table:

[Table: per-container network assignments, compiled from the .templates folder]

Here are the things that jump out at me:

  1. domoticz defines network_mode: bridge.

    I can't find any documentation to support that. See the compose spec.

    Any idea what "bridge" means in this context? Any idea why it actually works when it seems to be an undocumented option? What seems to happen is that domoticz is the only container attached to a network called "bridge" (there's a sketch contrasting the two constructs just after this list).

  2. Containers with a blue highlight have no "networks" clause in their service definitions. This implies that:

    • they get attached to "iotstack_default"; and
    • they have interoperability issues with containers that are attached to "iotstack_nw".

    Interoperability issues seem to be a fairly common complaint on Discord.

  3. The right-most column ("IOTstack_Net_Internal") is not associated with any service definition. It seems to serve no purpose other than to produce warning messages at "up" time.

  4. The rationale for the "IOTstack_NextCloud" network is not clear. I assume it's intended to be a dedicated back-end path for database communications, presumably based on an assumption that it will improve performance.

    Such an assumption might be valid if (a) real switching hardware were involved and (b) the unicast load to/from the database engine were sufficient to swamp the unicast traffic reaching the front end, but I question whether there's likely to be any measurable performance difference in a Docker environment running on a single RPi.

    That said, my only real objection is the way it imposes itself on your attention at "up" time when it isn't in use. At the very least, I think that needs to be fixed. Please keep reading!

  5. I assume PiHole is attached to "IOTstack_VPN" to provide a DMZ-like function such that DNS resolution does not have to reach the internal network. If there's another reason then please elaborate.
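
To illustrate point 1, here's a minimal sketch (the service definitions are hypothetical, not copies from the templates). The first form bypasses any compose-managed network and attaches the container to Docker's pre-existing "bridge" network; the second attaches it to the compose project's own network:

  # joins Docker's built-in "bridge" network, NOT a network
  # defined anywhere in docker-compose.yml
  example-a:
    image: alpine
    network_mode: bridge

  # joins the compose-managed "default" network
  # ("iotstack_default" at runtime)
  example-b:
    image: alpine
    networks:
      - default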

Rolling all that together, here's what I think we should do:

  1. Get rid of "IOTstack_Net_Internal". It isn't doing anything.
  2. Reconsider "IOTstack_NextCloud".
  3. Retain "IOTstack_VPN" if there's a good reason for keeping it.
  4. Remove every single mention of "IOTstack_Net" in every single service definition so that, by default, every container is attached to "iotstack_default". By inference, the exceptions would be containers in host mode and the "IOTstack_NextCloud" and "IOTstack_VPN" cases if they continue.

I just bet your next question is something like:

but how will PiHole work? It can't just be attached to "IOTstack_VPN" alone because then it won't be visible to other containers. It has to be attached to something like "IOTstack_Net" too and that, in turn, forces other containers to adopt "IOTstack_Net".

Not true!

$ cat docker-compose.yml
version: '3.6'

services:

  portainer-ce:
    container_name: portainer-ce
    image: portainer/portainer-ce
    restart: unless-stopped
    ports:
      - "8000:8000"
      - "9002:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./volumes/portainer-ce/data:/data

  nodered:
    container_name: nodered
    build: ./services/nodered/.
    restart: unless-stopped
    user: "0"
    env_file: ./services/nodered/nodered.env
    ports:
      - "1880:1880"
    volumes:
      - ./volumes/nodered/data:/data
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket
    devices:
      - "/dev/ttyAMA0:/dev/ttyAMA0"
      - "/dev/vcio:/dev/vcio"
      - "/dev/gpiomem:/dev/gpiomem"

  influxdb:
    container_name: influxdb
    image: "influxdb:latest"
    restart: unless-stopped
    ports:
      - "8086:8086"
      - "8083:8083"
      - "2003:2003"
    env_file:
      - ./services/influxdb/influxdb.env
    volumes:
      - ./volumes/influxdb/data:/var/lib/influxdb
      - ./backups/influxdb/db:/var/lib/influxdb/backup

  grafana:
    container_name: grafana
    image: grafana/grafana
    restart: unless-stopped
    user: "0"
    ports:
      - "3000:3000"
    env_file:
      - ./services/grafana/grafana.env
    volumes:
      - ./volumes/grafana/data:/var/lib/grafana
      - ./volumes/grafana/log:/var/log/grafana

  mosquitto:
    container_name: mosquitto
    image: eclipse-mosquitto
    restart: unless-stopped
    user: "1883"
    ports:
      - "1883:1883"
    volumes:
      - ./volumes/mosquitto/data:/mosquitto/data
      - ./volumes/mosquitto/log:/mosquitto/log
      - ./volumes/mosquitto/pwfile:/mosquitto/pwfile
      - ./services/mosquitto/mosquitto.conf:/mosquitto/config/mosquitto.conf
      - ./services/mosquitto/filter.acl:/mosquitto/config/filter.acl

  pihole:
    container_name: pihole
    #image: pihole/pihole:latest
    image: pihole/pihole:v5.2
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "8089:80/tcp"
      #- "443:443/tcp"
    env_file:
      - ./services/pihole/pihole.env
    volumes:
      - ./volumes/pihole/etc-pihole/:/etc/pihole/
      - ./volumes/pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/
    dns:
      - 127.0.0.1
      - 1.1.1.1
    # Recommended but not required (DHCP needs NET_ADMIN)
    #   https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
    networks:
      - default
      - vpn_nw

networks:

  vpn_nw: # Network specifically for VPN
    name: IOTstack_VPN
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.77.88.0/24
        # - gateway: 192.18.200.1

What networks does Docker know about?

$ docker network ls
NETWORK ID     NAME               DRIVER    SCOPE
cc8e5a915889   IOTstack_VPN       bridge    local
f888e18ad260   bridge             bridge    local
1f6d0f27cdd2   host               host      local
3601ac431bdc   iotstack_default   bridge    local
61660cd374cd   none               null      local

What's attached to IOTstack_VPN?

$ docker network inspect IOTstack_VPN
[
    {
        "Name": "IOTstack_VPN",
        "Id": "cc8e5a9158893f7678d23543f27755e53f205ac21ec8d114542fcd9456306759",
        "Created": "2021-01-19T22:06:17.905620998+11:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.77.88.0/24"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "04bcb741444f9a7dc3975d5af84ae3c2c73cfb6a50b5e4f2bbc44728721e5c6f": {
                "Name": "pihole",
                "EndpointID": "ada03dbf6504b9ea009a8dba6ffd92709575c130b5c44ca6c5810d5d97f5566f",
                "MacAddress": "02:42:0a:4d:58:02",
                "IPv4Address": "10.77.88.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "IOTstack_VPN",
            "com.docker.compose.project": "iotstack",
            "com.docker.compose.version": "1.27.4"
        }
    }
]

It's connected to PiHole. What about the iotstack_default network?

$ docker network inspect iotstack_default 
[
    {
        "Name": "iotstack_default",
        "Id": "3601ac431bdc691cd7b28dbf94f90a78df69f7bf940a5b25d9efeff389307a3b",
        "Created": "2021-01-19T21:19:53.621745821+11:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "04bcb741444f9a7dc3975d5af84ae3c2c73cfb6a50b5e4f2bbc44728721e5c6f": {
                "Name": "pihole",
                "EndpointID": "dd2c1de50bed33937fcaff10353c19c5f86ff6e3f4d9d10219e0c6fc28ba24c8",
                "MacAddress": "02:42:ac:14:00:09",
                "IPv4Address": "172.20.0.9/16",
                "IPv6Address": ""
            },
            "686743304fc3e1fbcdbb495481ca298ca47239ed18cda28c91d9d4736fcbffa2": {
                "Name": "nodered",
                "EndpointID": "213778b7d307876d65d332503a05db4c2a2b4dd967a79b198f818ece5871a711",
                "MacAddress": "02:42:ac:14:00:02",
                "IPv4Address": "172.20.0.2/16",
                "IPv6Address": ""
            },
            "85cedb39da3f531b787d9686de9f216489c1cdd198339e5afa095ea0d4642bd3": {
                "Name": "portainer-ce",
                "EndpointID": "f3dd82c102bea9de8c86dd0615669a363823dae112e05765cb24227d18432e5b",
                "MacAddress": "02:42:ac:14:00:0a",
                "IPv4Address": "172.20.0.10/16",
                "IPv6Address": ""
            },
            "8658084606c8738ca449c22b6076ec5e2eacd2952128d5cf7b5a1d2f55e7182e": {
                "Name": "influxdb",
                "EndpointID": "69684edd7968fe0d44ad7ddcc79436b2f1da66118e01fd4253f5f3162ce6ea39",
                "MacAddress": "02:42:ac:14:00:04",
                "IPv4Address": "172.20.0.4/16",
                "IPv6Address": ""
            },
            "b60716094645c084f14ef977bb03642ff6ea2b30a9b55aab79a765b273458272": {
                "Name": "grafana",
                "EndpointID": "2d0b656f7a3194d58e54a78257305d804d85772273638854693b45626c7d325d",
                "MacAddress": "02:42:ac:14:00:07",
                "IPv4Address": "172.20.0.7/16",
                "IPv6Address": ""
            },
            "dc52303398b601425aa8f866b17930180a2b2dd9c35ac581d30f91f4f56daafa": {
                "Name": "mosquitto",
                "EndpointID": "c4fafec112f3a1479a1829bbac14d0886461eb15f13f03084db459501e6c4e3d",
                "MacAddress": "02:42:ac:14:00:05",
                "IPv4Address": "172.20.0.5/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "iotstack",
            "com.docker.compose.version": "1.27.4"
        }
    }
]

PiHole is attached to both the special case and default networks.

No smoke, no mirrors, nothing up my sleeve.
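
If you want to check the container-to-container path for yourself, one quick test is to ping one container from another by name (this assumes a ping tool is present inside the image):

$ docker exec nodered ping -c 2 pihole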

The same approach would work for nextcloud if the back-end network is retained.
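
As a minimal sketch (these service definitions are illustrative assumptions, not copies of the actual templates), NextCloud would join both networks while its database joined only the back-end:

  nextcloud:
    image: nextcloud
    restart: unless-stopped
    networks:
      - default       # reachable like every other container
      - nextcloud_nw  # private back-end path to the database

  nextcloud_db:
    image: mariadb
    restart: unless-stopped
    networks:
      - nextcloud_nw  # back-end only; invisible to other containers

networks:

  nextcloud_nw:
    name: IOTstack_NextCloud
    driver: bridge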

Seems to me this solves a bunch of problems:

  1. No more half-in, half-out interoperability issues.
  2. IOTstackers who aren't data-comms gurus can have a quiet life.
  3. New containers can be added without having to think about networking unless it's actually necessary.
  4. The default network goes back to Docker choosing the IP range.

Simplifying it down to just those containers that actually need specialised comms probably also creates the opportunity for you to revisit how network definitions get tacked onto docker-compose.yml, so that they are only included when needed.

Rather than the all-in-one .templates/env.yml, how about something like:

~/IOTstack/.templates
  Network_Definitions
    vpn_nw.yml
    nextcloud_internal.yml

A mention of a network name like "vpn_nw" in a service definition would be the signal to include "vpn_nw.yml" at the end of docker-compose.yml, with the "networks:" header emitted only when at least one extra network is actually in use.
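
As a concrete (assumed) example, "vpn_nw.yml" could contain nothing more than the network stanza from the docker-compose.yml shown earlier, ready to be appended beneath a generated "networks:" header:

vpn_nw: # Network specifically for VPN
  name: IOTstack_VPN
  driver: bridge
  ipam:
    driver: default
    config:
      - subnet: 10.77.88.0/24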

Including an empty "default.yml" would generalise the solution for the PiHole and NextCloud cases, and provide the opportunity to chuck out a warning if a service definition mentioned a network for which there was no corresponding NETWORK_NAME.yml file.

Thoughts?

@senarvi

senarvi commented Jan 28, 2021

I've only used IOTstack for a couple of days so I can't comment on your proposal, but you're right that the networking is confusing. zigbee2mqtt_assistant is configured by default to connect to mosquitto, but it won't find the host because mosquitto is in the iotstack_nw network and zigbee2mqtt_assistant is not. Also, it's not clear whether iotstack_nw ("exposed by your host") is the correct network to use, or whether iotstack_nw_internal ("for interservice communication, no access to outside") would be adequate.

@Paraphraser (Author)

@senarvi short answer: iotstack_nw

The basic tests are:

  • does the process running inside the container need to communicate with the outside world?
  • does something in the outside world need to communicate with the process running inside the container?

If the answer to either/both is "yes" then "iotstack_nw" is appropriate.

Because the vast majority of situations answer "yes" to both questions, it would be better if that were the default. The most appropriate default is to have no networking specified at all. Then docker-compose automatically sets up iotstack_default and attaches all containers to it. Nobody needs to think about networking. It just works.

Then, if there are specific situations where you need specialised networks, handle those case-by-case. That's what the proposal is all about.
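
In compose terms, the suggested default is simply the absence of any networking keys (the service below is a hypothetical placeholder). With no "networks" and no "network_mode" at all, docker-compose creates "iotstack_default" and attaches the container to it automatically:

  example:
    container_name: example
    image: alpine
    restart: unless-stopped
    # no "networks:" and no "network_mode:" - docker-compose
    # attaches this container to iotstack_default by itself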

@Slyke (Collaborator)

Slyke commented Feb 1, 2021

All very good points. Most of the services require communicating with the outside world. IOTstack will allow you to select the networking mode, and the network(s), on a per-container basis. It will not include any unused networks. The reason is that some users may not want some of their services to be accessible only via the VPN and Docker's internal networks.

@mats-nk

mats-nk commented Sep 13, 2021

There is a third option in the evaluation process. Quoting the tests above:

  The basic tests are:

  1. does the process running inside the container need to communicate with the outside world?
  2. does something in the outside world need to communicate with the process running inside the container?

  If the answer to either/both is "yes" then "iotstack_nw" is appropriate.

And that third option is:

  3. Is any multicast needed in the container?

If yes, then network_mode: host is needed for the container.
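
For that third case, a minimal sketch (hypothetical service) looks like this; host mode shares the host's network stack, so multicast traffic reaches the container directly and any "ports:" mappings would be discarded:

  example-multicast:
    container_name: example-multicast
    image: alpine          # hypothetical image
    restart: unless-stopped
    network_mode: host     # shares the host network stack; no "ports:" needed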

Paraphraser added a commit to Paraphraser/IOTstack that referenced this issue Jan 18, 2022
This PR follows on from [Issue 422](SensorsIot#422 (comment)) and the networking scheme proposed therein to support remote WireGuard clients obtaining DNS from ad-blockers (eg PiHole) running in another container on the same RPi as the WireGuard server.

This PR implements:

1. Two internal networks:

	* "default" (`iotstack_default` at runtime).
	* "nextcloud" (`iotstack_nextcloud` at runtime).

2. Docker allocates all IP addressing, dynamically, from 172.16/12 (reverting from 10/8 subnets).
3. NextCloud *explicitly* joins both internal networks.
4. NextCloud_DB *explicitly* joins "nextcloud".
5. All other containers *implicitly* join "default".
6. No networking differences between old and new menus (full harmonisation).
7. Resolves all remaining new-menu inconsistencies first raised in [Issue 245](SensorsIot#245).

Adds `use-container-dns.sh` to WireGuard service template folder to support WireGuard forwarding DNS requests to ad-blockers running on the same RPi. This is based on work done by @ukkopahis. This change is related to the networking changes which deviate from the scheme proposed in Issue 422.

Documentation:

1. Adds "significant change to networking" to main README.md.
2. Updates WireGuard to explain how to forward DNS requests to ad-blockers running on the same RPi.

Signed-off-by: Phill Kelley <pmk.57t49@lgosys.com>