Rationalise and document management of mqtt.local #1221
@MatthewCroughan could do with your help setting up the OpenBalena server. It needs to be somewhere that doesn't disappear.
NB. The mqtt.local device is built using a Docker compose configuration which I've released here https://github.com/DynamicDevices/does-rpi3-mqtt The name is a bit of a legacy as scope has expanded since I created the initial build. Currently the device runs a number of containers (as can be seen here)
None of the above is secured in any particular way, as it is assumed to be for internal use only.
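As a rough sketch of that kind of compose stack (service names, images and ports below are illustrative assumptions, not the actual contents of the does-rpi3-mqtt repo):

```yaml
# Sketch of a broker/dashboard stack; see the does-rpi3-mqtt repo for the real thing
version: "2"
services:
  mosquitto:
    image: eclipse-mosquitto
    ports: ["1883:1883"]       # MQTT
  nodered:
    image: nodered/node-red
    ports: ["1880:1880"]       # flow editor
  influxdb:
    image: influxdb:1.8
    ports: ["8086:8086"]       # time-series store
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]       # dashboards
```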
The base image ("Balena OS") running on the Raspberry Pi here is a "Prod" image, I think. "Prod" images are locked down, whereas "Dev" images are more open, including for example SSH access from the local network segment. The effect of this is that we can't SSH into the device locally (although it is possible to SSH tunnel to the device through the OpenBalena server; this is to be documented also).
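As a sketch of that tunnelling approach (the device UUID and local port are placeholders; this assumes the balena CLI is configured to point at our OpenBalena server):

```shell
# Open a tunnel from a local port to the device's host-OS SSH port (22222)
# via the OpenBalena VPN; <uuid> is the target device's UUID
balena tunnel <uuid> -p 22222:4321

# In another terminal, SSH to the device's host OS through the tunnel
ssh -p 4321 root@localhost
```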
Are you able to create a team within the DoES GitHub organisation? If so, you could do so for the people managing OpenBalena/MQTT. If not, let me know and I'll do it; then you can create a private repo to host passwords.
OK! We now have a team @DoESLiverpool/openbalena-mqtt
And I've added that team as moderators of this repo, which shouldn't be accessible to Joe the Plumber.
For the record I'll also comment that the web dashboard we use to display data is another OpenBalena-managed RPi device which simply runs a kiosk-mode browser pointed at the Grafana instance on mqtt.local. The build file for this dashboard (which can be repurposed to display more or less any web content) is here
Hi @skos-ninja, so @MatthewCroughan is setting up an OpenBalena instance on a Proxmox VM server within DoES. We're calling it "doesmox". He's currently got it on 10.68.68.68. Can we have a static IP for it please? Also, could we set these domains for that IP please? api.doesliverpool.com Thanks!
Thinking that through, what might be better is if we can have a static IP for the base box which is the first IP of a range of IP addresses that can be used by VMs. Maybe a range of 32 or something, if that can be done? Then the domains would go to the second of the IP addresses in the range rather than the base IP address, if that makes sense?
Login details and passwords are being maintained here https://github.com/DoESLiverpool/openbalena/blob/master/README.md |
@ajlennon To be clear, we should never override records which are public-facing, as this will lead to confusion and can break at any time if DNSSEC is ever enabled for a domain
Fine, make it doesliverpool.cc |
If you just want a DoES domain then we can probably do that the proper way??
OpenBalena needs a set of subdomains, as above. They could be public, in which case the server would need to be public. It's probably better for them to be internally accessible only, and for the server to be internally accessible only.
The domains can be public and use internal IPs. Probably better than using hacks for the domains.
Having internally accessible DNS mappings is a pretty standard thing in my experience. I think publicly accessible DNS mappings which resolve to internal IP addresses are more of a hack, to be honest.
But it sounds like we have to hack them onto our existing setup. Unless we “just” run bind locally.
Yeah, I don't understand why it has to be "hacked" in. It's just internal DNS. Simple, standard stuff.
I suppose we could run bind in a container on the Balena box, serving out a separate domain. Then add the IP address of that bind server to the DHCP-provided DNS servers, which presumably would give us access to that domain?
I always thought the MQTT broker was supposed to have its own network, separate entirely from the DoES network, to lower complexity and offer portability. The idea was that we would have a dedicated SSID (see https://github.com/DoESLiverpool/somebody-should/wiki/MQTT-services). This is also detailed in the mqtt.local section of the wiki.
I'd rather the broker didn't run as an AP until somebody shows it's reliable in this mode.
@ajlennon I have a spare Ubiquiti access point if you wanna set up a proper network.
The less there is to maintain the better, imho.
@ajlennon Well, if we have a separate network where we have access to the pfSense router controlling it, that doesn't need as much maintaining. Whereas for every change made to the DoES network there's a potential for interference with the mqtt setup, and for everything piled on top, the more we have to make sure that configuration is backed up.

It's definitely semantic, we could just make mqtt happen on DoES, but I feel like any of the config changes we want/need would take a while to execute, since they'd have to go through maintainers rather than being something we can change ourselves. I just favor decentralized management so that there's less reliance on a single maintainer (in this case John) to do every bit of management, since it won't just be MQTT we're dealing with on this network; there'll be other things we want to add. Otherwise, what's the point in using Proxmox just to run a single app in a single container?

Every time John updates the Ubiquiti router firmware, or some other downtime is incurred, there will be a one-minute minimum outage on the DoES network, which can be avoided if we set up our own network. There are a lot of small things that go on that aren't problems if you set up your own network.
I'm really not sure, but I think this would also protect us from the loops that occur, for example, if I plug a switch into itself on the DoES network. I think this wouldn't shut down our mqtt network, since the networks aren't physically bridged in any manner.
That's a separate issue. Whatever is happening with the loops really shouldn't be possible.
We can tell the DNS resolver on our equipment to try a local resolver first before it goes external; however, you would need to ensure a reasonable speed and uptime on it, otherwise this will slow down general users' experience at DoES.
Sounds good. Maybe we can run a DNS resolver in another container on the Proxmox box?
@ajlennon If you run dnsmasq on the host, that makes the most sense, since then all subsequent containers will use it, as they use the host's networking config. If the host's networking config uses its own DNS resolver, then the containers will use the host as the DNS resolver. That only applies to containers, though; VMs need to be manually configured to use the host as the resolver, like any other machine.
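If we went the dnsmasq route, a minimal config sketch might look like this (the subdomains, IP and upstream resolver below are assumptions for illustration, not agreed values):

```
# /etc/dnsmasq.conf (sketch)
# Answer locally for our internal subdomains...
address=/api.doesliverpool.cc/10.68.68.69
address=/registry.doesliverpool.cc/10.68.68.69
# ...and forward everything else to an upstream resolver
server=8.8.8.8
```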
As above, given we're not constrained by the public IPv4 address space limitations, my strong preference is for a small subnet of statically assigned IP addresses, so we can simply bridge to containers and not get into worrying about the host networking configuration.
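For the bridged-subnet idea, Proxmox normally defines a Linux bridge in /etc/network/interfaces that VMs and containers attach to; a sketch, with all addresses and the NIC name being assumptions:

```
# /etc/network/interfaces (sketch; addresses and NIC name are assumptions)
auto vmbr0
iface vmbr0 inet static
    address 10.68.68.68
    netmask 255.255.255.224   # a /27, i.e. a range of 32 addresses
    gateway 10.68.68.65
    bridge-ports eno1         # physical NIC bridged to the subnet
    bridge-stp off
    bridge-fd 0
```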
I've ordered an Orange Pi Zero along with the NAS expansion board, in order to attach a hard drive to it. This is my ideal pi-service setup that I'd like to demo by attempting to run the mqtt broker on it for a long period of time. This setup makes things very cheap, very reproducible, very efficient and very scalable, for any purpose. You could have more than one pi, more than one hard drive, network them together and have a load-balancing, decentralized setup. I want to try that out. On it, I'll run one of these to manage LXC containers so we can run multiple small services if we choose: https://github.com/lcherone/lxd-ui If we don't agree for a container on this setup to be the main
The important feature is to have a fully functional stack (mqtt, nodered, influx, grafana) that can be taken to another site/hackspace and easily commissioned. This includes getting any ESPs, Sonoffs and other types of IoT devices functioning in that site quickly. (Putting the access point on the RPi is one way to achieve this, because otherwise the local WiFi connection needs to be flashed into each device. This is not a big deal, but it's a lot less slick.)
I'd love to see an AP running reliably on an RPi if somebody wants to investigate that... :)
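If anyone does want to investigate, a minimal hostapd sketch for running an RPi as an AP might look like this (SSID, passphrase, channel and interface name are all assumptions; soak testing would still be needed):

```
# /etc/hostapd/hostapd.conf (sketch; values are placeholders)
interface=wlan0
driver=nl80211
ssid=DoES-MQTT
hw_mode=g
channel=6
# WPA2-PSK security
wpa=2
wpa_passphrase=changeme123
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```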
For me, the important features are:
There isn't any real reason we can't run a DoESLiverpool AP out in the wild (i.e. at events), is there? Which would mean everything would "just work"(tm)
If we go back to the event we did at Sensor City, we were trying to run an AP on the RPi and it all went tits up. I am pretty sure we don't know why that is. It was running quite well "on the bench" and then, when we tried to connect up various sensors, it stopped working. Is that right @goatchurchprime?

My suspicion is that it's not very good at handling a few sensors or more connected to it, and it's not been tested by anybody running for extended periods of time. They probably just use it for one client to connect briefly, to configure it as a client of another AP. Hence my concerns at trying to make an RPi AP part of DoES infrastructure unless we've done some good soak testing, with tens of devices publishing to it over a couple of weeks.

I hadn't thought of @amcewen's point about changing networks but it's an excellent one. I don't want to be fiddling about on different networks either, so if we did get to a point where an RPi AP was reliable enough for a sensor AP network we'd also need to bridge it to the local network. Maybe easy enough via the wired network interface, but it needs consideration.
That said, there is a part of me that would like the sensor data to be published on a different WiFi network on a different channel, so it doesn't have any impact on what day-to-day DoES'ers are doing. More thoughts... Does our existing DoES network AP infrastructure have the capability of running another independent AP SSID, @skos-ninja? Could this be on another channel? That might be an intermediate route to separate sensor data from everything else...
The broker running on mqtt.local seems to be in use for wider purposes than just @goatchurchprime's and my power monitoring project, which is great news.
As such it perhaps needs to
A few things need to be done to achieve this
- we need to ensure data stored is archived in some manner, see Need to backup InfluxDB server database #1213
- we need to address any "funnies" with failures as seen in mqtt.local went down and didn't come back up cleanly after reboot #1210
- we need to ensure the uSD card doesn't fail any time soon because the flash wears out (I'm not too concerned about this as long as the data is backed up and the image is easily rebuildable), which brings me to...
- we need to re-organise and document the OpenBalena instance which manages devices.
OpenBalena is, loosely speaking, Docker for embedded Linux devices: an open-source platform for deploying containerised applications to fleets of devices. We're using it to manage the mqtt.local broker.
There's a getting started guide I've been using to set up servers
https://www.balena.io/open/docs/getting-started/
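For context, the flow in that guide is roughly as below (server URL, credentials, app name and image path are placeholders, and exact CLI syntax may vary between versions):

```shell
# Point the balena CLI at our OpenBalena instance and log in
export BALENARC_BALENA_URL=doesliverpool.cc
balena login --credentials --email <email> --password <password>

# Create an application and configure a balenaOS image for it
balena app create mqtt-broker --type raspberrypi3
balena os configure balena-os.img --app mqtt-broker

# Flash the configured image to an SD card, boot the device,
# then deploy the compose project to the application
balena deploy mqtt-broker --logs
```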
Currently the server running the mqtt.local broker is outside the scope of DoES.
Thus
Then I need to
This will support management of the MQTT broker by DoES organisers and act as an enabler for anybody wanting to set up other OpenBalena-managed devices.