
[Question] VPN between BurmillaOS nodes #96

Closed
pwFoo opened this issue May 22, 2021 · 6 comments
Labels
question Further information is requested

Comments

@pwFoo

pwFoo commented May 22, 2021

Hi,
I have a setup with 3 BurmillaOS servers. One has a static, public IP address, and two APU systems are running behind an ADSL router with a dynamic public IP address.
I am trying to build a Docker Swarm network between all three nodes, and I think I need bidirectional network connections. That won't work with two nodes behind NAT...

How could I create tunneled (VPN) connections between all the nodes as easily as possible? Or is there a way to enable routed traffic between all nodes without a VPN?

Is there an OpenVPN / WireGuard system-service available? One important point: I need to establish the connection client -> server because of the dynamic IP address in front of two of my nodes...

@olljanat
Member

In theory, OpenVPN should work as a system-service and be able to provide VPN connections for those nodes, but there is no ready-made solution for it.

If you don't need an overlay network between those nodes, then Portainer + their Edge Agent might be an easier option: https://www.portainer.io/blog/using-the-portainer-edge-agent-edge-groups-and-edge-stacks-part-1
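
For illustration only, a minimal sketch of what such an OpenVPN client system-service could look like (untested; the image name, the command, and the mounted config path are assumptions, not a ready-made solution):

openvpn-client:
  # Assumption: any image that ships the openvpn binary works here; kylemanna/openvpn is one example
  image: kylemanna/openvpn
  command: openvpn --config /etc/openvpn/client.conf
  labels:
    io.rancher.os.scope: system
  net: host
  privileged: true
  restart: always
  volumes:
    # Host directory holding client.conf, certificates and keys (assumed path)
    - /var/lib/openvpn:/etc/openvpn
    - /dev/net/tun:/dev/net/tun

Such a service could then be enabled with ros service enable / ros service up like any other custom service, but again, this is only a sketch.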

@justledbetter

justledbetter commented Jun 1, 2021

I've used ZeroTier for this purpose with great success. Here's how I plumbed it on my system (YMMV, of course).

Put the following files in the zerotier-containerized directory:

Dockerfile:

## NOTE: to retain configuration, mount a Docker volume or use a bind-mount on /var/lib/zerotier-one

FROM debian:buster-slim as builder

## Supports x86_64, x86, arm, and arm64

RUN apt-get update && apt-get install -y curl gnupg
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 0x1657198823e52a61  && \
    echo "deb http://download.zerotier.com/debian/buster buster main" > /etc/apt/sources.list.d/zerotier.list
RUN apt-get update && apt-get install -y zerotier-one=1.6.4
COPY main.sh /var/lib/zerotier-one/main.sh

FROM debian:buster-slim
LABEL version="1.6.4"
LABEL description="Containerized ZeroTier One for use on CoreOS or other Docker-only Linux hosts."

# ZeroTier relies on UDP port 9993
EXPOSE 9993/udp

RUN mkdir -p /var/lib/zerotier-one
COPY --from=builder /usr/sbin/zerotier-cli /usr/sbin/zerotier-cli
COPY --from=builder /usr/sbin/zerotier-idtool /usr/sbin/zerotier-idtool
COPY --from=builder /usr/sbin/zerotier-one /usr/sbin/zerotier-one
COPY --from=builder /var/lib/zerotier-one/main.sh /main.sh

RUN chmod 0755 /main.sh
ENTRYPOINT ["/main.sh"]
CMD ["zerotier-one"]

main.sh:

#!/bin/sh

export PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin

if [ ! -e /dev/net/tun ]; then
        echo 'FATAL: cannot start ZeroTier One in container: /dev/net/tun not present.'
        exit 1
fi

exec "$@"

Then in that directory, run docker build -t zerotier:(version) .
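
For example, using the version pinned in the Dockerfile above:

docker build -t zerotier:1.6.4 .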

And then kick it off with the following command:

docker run -d --name zt --restart always \
  --network host \
  --cap-add NET_ADMIN --cap-add SYS_ADMIN \
  --device /dev/net/tun \
  -v ZT_LOCAL:/var/lib/zerotier-one \
  zerotier:(version)

Update: For those unfamiliar with ZeroTier, you'll need to log in to my.zerotier.com and create a network before you can make use of it. Once your network is there, run

docker exec zt zerotier-cli join <networkid>

You will have to approve the initial connection in the ZeroTier console. Then, to confirm everything is online:

$ docker exec zt zerotier-cli listnetworks
200 listnetworks <nwid> <name> <mac> <status> <type> <dev> <ZT assigned ips>
200 listnetworks ... OK PRIVATE ztr2qusnnn ...
DC2> ip addr show dev ztr2qusnnn
12: ztr2qusnnn: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2800 qdisc pfifo_fast state UNKNOWN group default qlen 1000
.... bunch of address info

olljanat added the question label on Aug 31, 2021
@olljanat
Member

@tredger included WireGuard in the kernel, which 2.0.0-beta5 will use. However, using it with Swarm is a bit tricky because of the MTU challenges described in moby/libnetwork#2661. If moby/moby#43197 gets merged and backported to the 22.06.x releases, it will simplify this.

On top of that, I'm wondering whether we should support it in other ways, or at least have documentation about it?
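
For reference, one common workaround for the MTU challenge is to create the user-defined overlay network with an explicitly lowered MTU so that the VXLAN-encapsulated traffic fits inside the WireGuard tunnel. The value below is an assumption (WireGuard's default interface MTU is 1420, minus roughly 50 bytes of VXLAN overhead), so adjust it to your actual path MTU:

docker network create \
  --driver overlay \
  --opt com.docker.network.driver.mtu=1370 \
  my-overlay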

@priceaj

priceaj commented Apr 13, 2023

Could you use Tailscale? https://hub.docker.com/r/tailscale/tailscale

It would be nice to see this running as an OS service, maybe with some ros config parameters for the environment variables.

It's possible to host your own instance using https://github.com/juanfont/headscale, so you aren't necessarily relying on a proprietary service.

@olljanat
Member

FYI, moby/moby#43197 is now included in v2.0.0-beta7.

> Could you use Tailscale? https://hub.docker.com/r/tailscale/tailscale

Very interesting option. It looks to be working fine as far as I can see. Here is an example service config:

tailscale:
  image: ollijanatuinen/tailscale:v1.46.1
  environment:
    TS_ACCEPT_DNS: true
    TS_AUTHKEY: <AUTH KEY>
    TS_HOSTNAME: n1
    TS_USERSPACE: false
    TS_STATE_DIR: /var/lib/tailscale/state
    TS_SOCKET: /var/run/tailscale/tailscaled.sock
    TS_EXTRA_ARGS: --accept-routes --advertise-routes 192.168.101.0/24
  labels:
    io.rancher.os.scope: system
    io.rancher.os.before: docker
  net: host
  pid: host
  ipc: host
  uts: host
  privileged: true
  restart: always
  volumes_from:
  - system-volumes
  volumes:
    - /var/lib/tailscale:/var/lib/tailscale
    - /dev/net/tun:/dev/net/tun

and it can be deployed with the following commands (remember to update the auth key first):

ros service enable /var/lib/rancher/conf/tailscale.yml
ros service up tailscale

> It would be nice to see this running as an OS service, maybe with some ros config parameters for the environment variables

It works already with the config above. Once there are more test results from others, we can consider including it as part of the OS services too. Basically, there just need to be parameters for TS_AUTHKEY and TS_EXTRA_ARGS.
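
A possible way to parameterize those (a sketch only, assuming BurmillaOS keeps the RancherOS behaviour where environment keys listed without a value in a service definition are filled in from rancher.environment):

# In the service definition, list the keys without values (untested sketch):
#   environment:
#     - TS_AUTHKEY
#     - TS_EXTRA_ARGS
# Then set the values on each node before starting the service:
ros config set rancher.environment.TS_AUTHKEY <AUTH KEY>
ros config set rancher.environment.TS_EXTRA_ARGS "--accept-routes"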

@pwFoo
Author

pwFoo commented Aug 25, 2023

WireGuard would be nice...
Has anyone tested it with BurmillaOS?
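
No tested setup has been reported in this thread yet. As a starting point only, here is an untested sketch that relies on the in-kernel WireGuard module mentioned above plus standard iproute2 and wireguard-tools userspace utilities; the image name, addresses, and config path are placeholders:

# Sketch: bring up a WireGuard interface from a privileged container with host networking.
# <image-with-wireguard-tools> is assumed to contain iproute2 and wireguard-tools
# (e.g. an Alpine-based image with those packages added).
docker run -d --name wg --restart always \
  --network host \
  --cap-add NET_ADMIN \
  -v /var/lib/wireguard:/etc/wireguard \
  <image-with-wireguard-tools> \
  sh -c 'ip link del wg0 2>/dev/null;
         ip link add wg0 type wireguard &&
         wg setconf wg0 /etc/wireguard/wg0.conf &&
         ip addr add 10.10.0.1/24 dev wg0 &&
         ip link set wg0 up &&
         tail -f /dev/null'

The two nodes behind NAT would list the public node as their peer endpoint (with PersistentKeepalive set), which matches the client -> server requirement from the original question.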

olljanat closed this as not planned on Mar 5, 2024