
Add support for devices with "service create" #1244

Open · flx42 opened this issue Jul 26, 2016 · 57 comments
@flx42 commented Jul 26, 2016

Initially reported: moby/moby#24865, but I realized it actually belongs here. Feel free to close the other one if you want. Content of the original issue copied below.

Related: #1030

Currently, it's not possible to add devices with docker service create; there is no equivalent of docker run --device=/dev/foo.

I'm an author of nvidia-docker with @3XX0, and we need to add device files (the GPUs) and volumes to the starting containers in order to enable GPU apps as services.
See the discussion here: moby/moby#23917 (comment) (summarized below).

We figured out how to add a volume provided by a volume plugin:

$ docker service create --mount type=volume,source=nvidia_driver_367.35,target=/usr/local/nvidia,volume-driver=nvidia-docker [...]

But there is no solution for devices. @cpuguy83 and @justincormack suggested using --mount type=bind, but it doesn't seem to work; it's probably like doing a mknod without the proper device cgroup whitelisting.

$ docker service create --mount type=bind,source=/dev/nvidiactl,target=/dev/nvidiactl ubuntu:14.04 sh -c 'echo foo > /dev/nvidiactl'
$ docker logs stupefied_kilby.1.2445ld28x6ooo0rjns26ezsfg
sh: 1: cannot create /dev/nvidiactl: Operation not permitted

It's probably equivalent to this:

$ docker run -ti ubuntu:14.04                      
root@76d4bb08b07c:/# mknod -m 666 /dev/nvidiactl c 195 255
root@76d4bb08b07c:/# echo foo > /dev/nvidiactl
bash: /dev/nvidiactl: Operation not permitted

Whereas the following works (the invalid-argument error is expected; the point is that there is no permission error):

$ docker run -ti --device /dev/nvidiactl ubuntu:14.04
root@ea53a1b96226:/# echo foo > /dev/nvidiactl
bash: echo: write error: Invalid argument
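
For context, a bind mount only makes the device node visible inside the container; --device additionally whitelists the node in the container's device cgroup. A rough hand-rolled equivalent of --device /dev/nvidiactl, as a sketch only (it assumes the cgroup v1 devices controller with the cgroupfs layout, and reuses the 195:255 major:minor numbers from the mknod example above):

$ CID=$(docker run -d ubuntu:14.04 sleep infinity)
$ docker exec $CID mknod -m 666 /dev/nvidiactl c 195 255
$ # On the host, as root: allow this container's cgroup to use the character device
$ echo 'c 195:255 rwm' | sudo tee /sys/fs/cgroup/devices/docker/$CID/devices.allow
$ docker exec $CID sh -c 'echo foo > /dev/nvidiactl'
$ # expected to fail with an invalid-argument error rather than "Operation not permitted",
$ # matching the --device behaviour shown above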
@stevvooe (Contributor) commented Jul 26, 2016

@flx42 For the container runtime, devices require special handling (a mknod syscall), so mounts won't work. We'll probably have to add some sort of support for this. (cc @crosbymichael)

Ideally, we'd like to be able to schedule over devices, as well.

@cpuguy83 (Contributor) commented Jul 26, 2016

@stevvooe Already have device support in the runtime, just not exposed in swarm.

@flx42 (Author) commented Jul 26, 2016

Ideally, we'd like to be able to schedule over devices, as well.

This question was raised here: moby/moby#24750
But the discussion was redirected here: moby/moby#23917, in order to have a single discussion thread.

@flx42 (Author) commented Jul 28, 2016

@stevvooe I quickly hacked together a solution; it's not too difficult:
flx42@a82b9fb
This is not a PR yet; would you be interested if I filed one? Or are swarmkit features frozen right now, before 1.12?
The next step would be to also modify the engine API.

@flx42 (Author) commented Jul 28, 2016

Forgot to mention that I can now run GPU containers by mimicking what nvidia-docker does:

./bin/swarmctl service create --device /dev/nvidia-uvm --device /dev/nvidiactl --device /dev/nvidia0 --bind /var/lib/nvidia-docker/volumes/nvidia_driver/367.35:/usr/local/nvidia --image nvidia/digits:4.0 --name digits
@stevvooe (Contributor) commented Jul 28, 2016

@flx42 I took a quick peek and the PR looks like a decent start. I am not sure about representing these as cluster-level resources for container startup. From an orchestration perspective, we have to match these up with announced resources at the node level, which might be okay. It might be better on ContainerSpec, but I'm not sure yet.

Go ahead and file as a [WIP] PR.

@flx42 (Author) commented Jul 28, 2016

@stevvooe Yeah, that's the biggest discussion point for sure.

In engine-api, devices are resources:
https://github.com/docker/engine-api/blob/master/types/container/host_config.go#L249

But in swarmkit, resources are so far "fungible" objects like CPU shares and memory, with a base value and a limit. A device doesn't really fit that definition. For GPU apps we have devices that must be shared (/dev/nvidiactl) and devices that could be exclusively acquired (like /dev/nvidia0).

I decided to initially put devices into resources because there is already a function in swarmkit that creates an engine-api Resource object from a swarm Resource object:
https://github.com/docker/swarmkit/blob/master/agent/exec/container/container.go#L301-L324
This method would also need to access the container spec.

I will file a PR soon to continue the discussion.

@stevvooe (Contributor) commented Jul 28, 2016

@flx42 Great!

We really aren't planning on following the same resource model from HostConfig for SwarmKit. In this case, we are instructing the container to mount these devices, which is specific to a container runtime. Other runtimes may not have a container or devices. Thus, I would err on the side of ContainerSpec.

Now, I would like to see scheduling of fungible GPUs, but that might be a wholly separate flow, keeping the initial support narrow. Such services would require manual constraints and device assignment, but you would still achieve the goal.

Let's discuss this in the context of the PR.

@aluzzardi (Contributor) commented Aug 5, 2016

Thanks @flx42 - I think GPU is definitely something we want to support in the medium term.

/cc @mgoelzer

@flx42 (Author) commented Aug 10, 2016

Thanks @aluzzardi, PR created, it's quite basic.

@mlhales commented Dec 27, 2016

The --device option is really important for my use case too. I am trying to use swarm to manage 50 Raspberry Pis for computer vision, but I need to be able to access /dev/video0 to capture images. Without this option, I'm stuck and have to manage them without swarm, which is painful.

@stevvooe (Contributor) commented Jan 6, 2017

@mlhales We need someone who is willing to work out the issues with --device in a clustered environment and support that solution, rather than just a drive-by PR. If you or a colleague want to take this on, that would be great, but this isn't as simple as adding --device.

@StefanScherer (Member) commented Feb 15, 2017

Using --device=/dev/gpiomem would be great on a RPi swarm to access GPIO on each node without privileged mode.

@nazar-pc commented Feb 20, 2017

Using --device=/dev/fuse would be great for mounting FUSE, which isn't currently possible.

@StefanScherer (Member) commented Feb 20, 2017

We found an easier way for the Blinkt! LED strip: using sysfs. Now we can run Blinkt! in Docker swarm mode without privileges.

@mathiasimmer commented Feb 21, 2017

@StefanScherer is it a proper alternative to using e.g. --device=/dev/mem to access GPIO on an RPi? Would love to see an example if you care to share :)

@StefanScherer (Member) commented Feb 21, 2017

@mathiasimmer For the use case with the Blinkt! LED strip there are only eight RGB LEDs, so using sysfs is not time-critical for those few LEDs. If you want to drive hundreds of them, you still need faster GPIO access to reach a higher clock rate. But for Blinkt! we have forked the Node.js module and adjusted it in this branch: https://github.com/sealsystems/node-blinkt/tree/sysfs.
A sample application can be found there as well, showing how to use this forked module as a dependency in your own package.json.
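
For reference, the sysfs GPIO interface that approach relies on looks roughly like this; a minimal sketch (pin 18 is just an example, and the container needs write access to these paths, e.g. via a bind mount of /sys):

# Export pin 18, configure it as an output, and drive it high
echo 18 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio18/direction
echo 1 > /sys/class/gpio/gpio18/value
# Release the pin when finished
echo 18 > /sys/class/gpio/unexport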

@aluzzardi (Contributor) commented Feb 22, 2017

/cc @cyli

@stevvooe (Contributor) commented Feb 22, 2017

@aluzzardi I think we should resurrect the --device patch. I don't think there is anything in the pipeline that is sophisticated enough to handle proper, cluster-level dynamic resource allocation. Looking back at this issue, there isn't necessarily a model that will work well in all cases (mostly because no one here can seem to enumerate them).

We can always add logic in the scheduler to prevent device contention in the future.

@cyli (Contributor) commented Feb 22, 2017

Attempt to add devices to the container spec and plugin spec here: #1964

I've no objection to the --device flag - cc @diogomonica ?

@diogomonica (Contributor) commented Feb 23, 2017

--device allows any service to escalate privileges. Why would we add this w/out profiles on services?

@cyli (Contributor) commented Feb 23, 2017

@diogomonica I thought profiles mainly covered capabilities, etc?

@diogomonica (Contributor) commented Feb 23, 2017

@cyli Well, if we believe "devices" are easy enough to understand for easy user acceptance, then we might not need them, but we should look critically at adding anything to the command line that allows escalation of a container's privileges before we have a good way of informing the user of everything the service will need from a security perspective.

@brubbel commented Mar 12, 2017

Also following this. Very interested in access to character devices (/dev/bus/usb/...) in a docker swarm.
To help some others until this is supported by docker, a workaround for swarm + usb:

  1. On the (Linux) host(s), create a udev rule which creates a symlink to your device (in my case an FTDI device), e.g. /etc/udev/rules.d/99-libftdi.rules:
    SUBSYSTEMS=="usb", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="xxxx", GROUP="dialout", MODE="0666", SYMLINK+="my_ftdi", RUN+="/usr/bin/setupdockerusb.sh"
    Then reload the udev rules:
    sudo udevadm control --reload-rules
    Upon connect of the USB device, the udev manager will create a symlink /dev/my_ftdi -> /dev/bus/usb/xxx/xxx and execute /usr/bin/setupdockerusb.sh.

  2. The /usr/bin/setupdockerusb.sh script (ref) sets the character device permissions for (the first) container with the given image name:

#!/bin/bash
# Resolve the udev symlink to the real USB device node and read its device numbers (in hex).
USBDEV=$(readlink -f /dev/my_ftdi)
read minor major < <(stat -c '%T %t' "$USBDEV")
if [[ -z $minor || -z $major ]]; then
    echo 'Device not found'
    exit 1
fi
# Convert the hex major/minor numbers to decimal.
dminor=$((0x${minor}))
dmajor=$((0x${major}))
# Find the first running container created from the given image.
CID=$(docker ps --no-trunc -q --filter ancestor=my/imagename | head -1)
if [[ -z $CID ]]; then
    echo 'CID not found'
    exit 1
fi
echo 'Setting permissions'
# Whitelist the character device in the container's device cgroup.
echo "c $dmajor:$dminor rwm" > /sys/fs/cgroup/devices/docker/$CID/devices.allow

  3. Create the docker service with the following options:
    docker service create [...] --mount type=bind,source=/dev/bus/usb,target=/dev/bus/usb [...]

  4. Event listener (systemd service):
    Waits for a container to be started and sets the permissions. Run with root permissions on the host.

#!/bin/bash
# Re-apply the device permissions whenever any container starts.
docker events --filter 'event=start' | \
while read line; do
    /usr/bin/setupdockerusb.sh
done
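
For completeness, the event listener can be installed as a systemd unit; a minimal sketch, assuming the listener script above is saved as /usr/local/bin/dockerusb-listener.sh (both the script path and the unit name are made up for this example):

sudo tee /etc/systemd/system/dockerusb-listener.service <<'EOF'
[Unit]
Description=Whitelist USB devices for newly started containers
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/dockerusb-listener.sh
Restart=always

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now dockerusb-listener.service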
chrisns added a commit to chrisns/clustered_domoticz_zwave that referenced this issue Apr 14, 2017
@allingeek commented Jun 5, 2018

Just to be clear, I think a workaround like @BretFisher mentioned is a perfectly acceptable temporary solution. Especially so for global service use-cases where all worker nodes have the same requisite device attached. @nazar-pc I'm less concerned about the privilege here as the service itself is not processing user-provided input. But maybe that is naive.

@justincormack I think maybe some of the frustration is the lack of clear direction. There was at least some discussion happening in this thread right up until the end of February 2017, but that left the direction ambiguous. Pair that with the time and research investment it takes to contribute to any Docker project at this depth, and whatever effort people put in on this would almost certainly have to be funded. Perhaps I can convince one of my clients to make the investment. I'm just surprised that Docker hasn't funded this issue yet (especially considering the potential security impact).

@diogomonica Can you elaborate on what you think should be put in place re: your last comment? We're not actually preventing escalation risk by not offering devices in services. People aren't skipping the feature; they're just not using services, or they're using a workaround. I'm not advocating for blind addition of the feature without consideration for security, but I think we need some informed vision to get started.

@dperny (Collaborator) commented Jun 6, 2018

There are a lot of desired features for SwarmKit. We don't have the people power right now to chase down very many of them, and it's difficult to say from here what is important. I think the least contribution we'd need in order to work on it would be a design proposal.

Someone has suggested out-of-band that perhaps we provide a power-user field allowing the specification of any flags that Docker supports (including --device), with the understanding that using these flags means your tasks might do weird things or fail in weird ways unless you know what you're doing. This is an ugly and dirty solution which I am not a fan of, but it may be What We Have To Do in order to just make things work.

A more elegant solution would be building out a system for noting what resources (devices) a node has available, and making scheduling decisions in swarmkit to put tasks where resources are available for them. This would be more complicated to build, but would likely end up being easier to use. In fact, some old proto messages that never got implemented (GenericResource) hint that this design was being pursued at some point but was abandoned, likely due to constraints on available human time.
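
For illustration, node-level advertisement plus scheduling of this kind could be surfaced roughly like the engine's generic-resource syntax; a sketch only, with the resource name GPU chosen arbitrarily (treat the daemon key and flag below as assumptions, not something this thread's proposals had implemented):

# Advertise the resource on each node that has it, via the daemon config
# (/etc/docker/daemon.json):
#   { "node-generic-resources": ["GPU=gpu0", "GPU=gpu1"] }
# Then ask the scheduler for one unit of it when creating the service:
$ docker service create --generic-resource "GPU=1" --name digits nvidia/digits:4.0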

We've been working on swarmkit a lot, but the velocity has definitely slowed down since its release. Work has been focused on stabilizing the software, and feature development is much slower because there are fewer people working on it right now.

WDYT?

@vim-zz commented Jun 6, 2018

I would prefer to have something that allows me to use swarmkit and manage my devices than not; I am doing that at the moment with or without swarm anyway. Currently I can't use swarm for my company's use case because of the lack of support for --device.

I think that in its current state, swarm is missing out on a perfect use case for IoT, where I/O from devices is a must, leaving this use case open to alternatives that are a much worse fit in every other respect, solely by not allowing connections to devices.

Bottom line: your suggestion of having something working for power users is much better than having nothing.

@dperny (Collaborator) commented Jun 6, 2018

The biggest drawback of power-user flag-pass-through is that those features might interact with swarm in unpredictable ways, and cause really esoteric issues. I'm worried that doing it too haphazardly may increase the support burden.

@vim-zz commented Jun 6, 2018

@dperny maybe enable this with some kind of user marking that says I-know-what-I-am-doing, like an unsafe block or similar?

@allingeek commented Jun 6, 2018

@cnrmck commented Jun 8, 2018

To me, this is a vital issue for IoT. We can't build the things we need to build without having access to devices. It's further frustrated by the fact that there doesn't exist a way (that I know of) to manage --data-path-addr outside of docker swarm; otherwise, docker-compose could be a simple solution to the issue, at least to manage services deployed on a single device.
Right now, it's a catch-22: I can either manage my data path (so that I can send data through my connected cellular device) or access my devices (like the camera), but not both.
If anyone has a workaround I would greatly appreciate it.
Being able to do all of that through docker swarm would be much better.

@Cinderhaze commented Jun 8, 2018

@cnrmck, the workaround above from @BretFisher is one option - #1244 (comment)

And the option from @brubbel is another... #1244 (comment)

@cnrmck commented Jun 8, 2018

@Cinderhaze Thank you so much for summarizing that. Perhaps your comment should be pinned so that other people can find it. I'll try @BretFisher's solution.

TheHackmeister added a commit to SciFiFarms/TechnoCore that referenced this issue Jun 26, 2018
Add a new PlatformIO service to the docker file, as well as the necessary steps. The actual PlatformIO image will be in a separate repository. Eventually, I'd like to move all of the images to their own repos so that Docker Hub can automatically create and deploy new images.

Docker swarm does not yet support device mounting. To work around this, I had the PlatformIO service actually be a docker container that creates a standalone container (not on swarm) that has /dev/ttyUSB0 mounted as a device. It's pretty hacky and absolutely a security risk... But it works... Mostly. The container won't actually start unless an ESP8266 is plugged in. The swarmkit issue includes a discussion on how to implement device mounting: docker/swarmkit#1244

This also contains a few modifications needed to make standalone containers attachable to swarm networks.

The PlatformIO container will accept MQTT messages on platformio/build/[BOARD=nodemcuv2] that contain the JSON config for the ESP8266. The config will replace $mqtt_username and $mqtt_password with RabbitMQ creds generated in Vault.

BOARD will be passed in the PlatformIO --environment flag to target a specific environment in the PlatformIO build file. Currently, only nodemcuv2 is supported.



@dperny (Collaborator) commented Jul 2, 2018

Hey, it's been a while, but y'all should take a look at #2682, which I just opened; it's a proposal for device support in swarm. Tell me what you think.

TheHackmeister added a commit to SciFiFarms/TechnoCore-Vault that referenced this issue Jul 23, 2018
TheHackmeister added a commit to SciFiFarms/TechnoCore-Home-Assistant that referenced this issue Jul 23, 2018
TheHackmeister added a commit to SciFiFarms/TechnoCore-Node-RED that referenced this issue Jul 23, 2018
@s2100 commented Sep 29, 2020

Any changes yet?

And I really need --device to map an Intel VPU into a docker container.

@allfro commented Jan 22, 2021

This is extremely useful for people developing distributed tunneling solutions, like running OpenVPN on a swarm. Access to /dev/net/tun is easy enough to schedule across a cluster.

@dzobbe commented Mar 11, 2021

quote this

@TeoTN commented Mar 17, 2021

This would probably also be useful for exposing Bluetooth to an IoT manager running in swarm.

@thandal commented Mar 29, 2021

In the vein of not-too-ugly workarounds, see also https://docs.nuvla.io/nuvla/advanced-usage/compose-options.html

@allfro commented Mar 29, 2021

No offence @thandal, but that's a really ugly workaround 😂. It unnecessarily exposes the docker socket to a container. I'm not comfortable doing that 😬

@cjdcordeiro commented Mar 30, 2021

I'm a bit biased here 😛 but exposing the docker socket is actually safe in many cases, and in fact many mainstream de facto tools do use it (cAdvisor, Traefik, etc.). The thing is that it can be dangerous... so it has acquired a bad reputation over time, even if people don't really understand how it can be dangerous. In the example posted by @thandal, I'd agree that it is not the preferred solution, but when it comes to security, it all boils down to the nature of the container you are deploying. The one I wrote in the Nuvla docs complies with the following:

  • it comes from a trusted entity
  • it does not expose the docker socket outside the container
  • it does not make the said container (which is using the docker socket) visible or reachable from outside the host

So in this regard, it is safe.

Now, obviously, if you're building a web application in a multi-tenant infrastructure, then yes, I'd agree with @allfro and you should avoid exposing it.

@allfro commented Mar 30, 2021

We NEED device mapping for swarms. I'd hate to switch over to Kubernetes for something as trivial as mapping common devices such as /dev/tun across a cluster. We beg you, Docker!

@cpuguy83 (Contributor) commented Mar 30, 2021

Maybe stop begging someone else to write features you need?
That is why there is exactly one person working on this repo... in their spare time.

@allfro commented Mar 30, 2021

@cpuguy83 isn't swarmkit developed by Docker Corp, and also sold commercially as part of Docker EE?

@cpuguy83 (Contributor) commented Mar 30, 2021

@allfro No. Docker sold off the EE stuff to Mirantis... but even before then Swarmkit had very little support.
