
Rationalise and document management of mqtt.local #1221

Open
ajlennon opened this issue Aug 13, 2019 · 37 comments

@ajlennon
Contributor

ajlennon commented Aug 13, 2019

The broker running on mqtt.local seems to be in use for wider purposes than just @goatchurchprime and my power monitoring project, which is great news.

As such it perhaps needs to

  • run more reliably
  • be manageable by a number of people rather than just me

A few things need to be done to achieve this

OpenBalena is an open-source platform for deploying and managing fleets of Docker-container-based embedded Linux devices. We're using it to manage the mqtt.local broker.

There's a getting started guide I've been using to set up servers

https://www.balena.io/open/docs/getting-started/

Currently the server running the mqtt.local broker is outside the scope of DoES.

Thus

  • there needs to be an OpenBalena server running within / accessible to DoES network infrastructure

Then I need to

  • document how to log into that server and the credentials
  • document the application(s) configured on, and device(s) connected to, that server

This will support management of the MQTT broker by DoES organisers and act as an enabler for anybody wanting to set up other OpenBalena-managed devices.
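For reference, the server-side setup in that guide boiled down to roughly the following at the time. Treat the script names and flags here as an assumption from memory and check the linked docs for the authoritative steps:

```shell
# Approximate openBalena server bring-up (check the getting-started
# guide; the quickstart flags here are an assumption, not verified).
git clone https://github.com/balena-io/open-balena.git
cd open-balena

# Generate configuration and admin credentials for the new instance
./scripts/quickstart -U admin@example.com -P changeme

# Start the openBalena services
./scripts/compose up -d
```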

@ajlennon ajlennon self-assigned this Aug 13, 2019
@ajlennon
Contributor Author

@MatthewCroughan could do with your help setting up the OpenBalena server. It needs to be somewhere that doesn't disappear

@ajlennon
Contributor Author

ajlennon commented Aug 13, 2019

NB. The mqtt.local device is built using a Docker Compose configuration which I've released here

https://github.com/DynamicDevices/does-rpi3-mqtt

The name is a bit of a legacy as scope has expanded since I created the initial build.

Currently the device runs a number of containers:

  • Node-RED flows running on http://mqtt.local:1880
  • InfluxDB time-series database server running on port 8086
  • Grafana graphing web dashboard running on http://mqtt.local
  • Mosquitto MQTT broker running on port 1883

None of the above is particularly secured in any way, as it is assumed to be for internal use only.
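For illustration, a Compose file for that container set would look something like the sketch below. This is a hand-written approximation, not the actual file from the does-rpi3-mqtt repo; the image names, port mappings and versions are assumptions (on a Raspberry Pi the ARM image variants would be needed):

```yaml
version: "2"
services:
  mosquitto:
    image: eclipse-mosquitto        # MQTT broker
    ports:
      - "1883:1883"
  nodered:
    image: nodered/node-red         # flows at http://mqtt.local:1880
    ports:
      - "1880:1880"
  influxdb:
    image: influxdb                 # time-series store on port 8086
    ports:
      - "8086:8086"
  grafana:
    image: grafana/grafana          # dashboards at http://mqtt.local
    ports:
      - "80:3000"
```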

@ajlennon
Contributor Author

The base image running on the Raspberry Pi here, balenaOS, is a "Prod" image I think. "Prod" images are locked down, whereas "Dev" images are more open, allowing for example SSH access from the local network segment.

The effect of this is that we can't SSH into the device locally (although it is possible to SSH-tunnel to the device through the OpenBalena server; this is to be documented too)
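For the record, that tunnelled SSH access looks roughly like this with the balena CLI (the device UUID is a placeholder, and 22222 is the usual balenaOS SSH port; to be confirmed when this gets documented properly):

```shell
# Open a tunnel from a local port to the device's SSH port
balena tunnel <device-uuid> -p 22222:22222

# In another terminal, SSH in through the tunnel
ssh -p 22222 root@localhost
```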

@johnmckerrell
Member

johnmckerrell commented Aug 13, 2019 via email

@ajlennon
Contributor Author

OK! We now have a team @DoESLiverpool/openbalena-mqtt

@ajlennon
Contributor Author

And I've added that team as moderators of this repo which shouldn't be accessible to Joe the Plumber

https://github.com/DoESLiverpool/openbalena

@ajlennon
Contributor Author

For the record I'll also note that the web dashboard we use to display data is another OpenBalena-managed RPi device which simply runs a kiosk-mode browser pointed at the Grafana instance on mqtt.local

The build file for this dashboard (which can be repurposed to display more or less any web content) is here

https://github.com/DynamicDevices/does-rpi3-dash

@ajlennon
Contributor Author

Hi @skos-ninja, so @MatthewCroughan is setting up an OpenBalena instance on a Proxmox VM server within DoES.

We're calling it "doesmox". He's currently got it on 10.68.68.68

Can we have a static IP for it please?

Also could we set these domains for that IP please?

api.doesliverpool.com
registry.doesliverpool.com
vpn.doesliverpool.com
s3.doesliverpool.com

Thanks!

@ajlennon
Contributor Author

ajlennon commented Aug 14, 2019

Thinking that through, what might be better is a static IP for the base box as the first of a range of IP addresses that can be used by VMs. Maybe a range of 32, if that can be done? Then the domains would point to the second IP in the range rather than the base address, if that makes sense.

@ajlennon
Contributor Author

Login details and passwords are being maintained here

https://github.com/DoESLiverpool/openbalena/blob/master/README.md

@skos-ninja
Member

@ajlennon To be clear, we should never override records which are public-facing, as this will lead to confusion and can break at any time if DNSSEC is ever enabled for a domain

@ajlennon
Contributor Author

Fine, make it doesliverpool.cc

@johnmckerrell
Member

johnmckerrell commented Aug 15, 2019 via email

@ajlennon
Contributor Author

OpenBalena needs a set of subdomains, as above.

They could be public, in which case the server would need to be public.

It's probably better for both the subdomains and the server to be internally accessible only

@johnmckerrell
Member

johnmckerrell commented Aug 16, 2019 via email

@ajlennon
Contributor Author

Having internally accessible DNS mappings is a pretty standard thing in my experience.

I think publicly accessible DNS mappings which resolve to internal IP addresses are more of a hack, to be honest.

@johnmckerrell
Member

johnmckerrell commented Aug 16, 2019 via email

@ajlennon
Contributor Author

Yeah, I don't understand why it has to be "hacked" in. It's just internal DNS: simple, standard stuff

@ajlennon
Contributor Author

I suppose we could run BIND in a container on the Balena box, serving out a separate domain, then add the IP address of that BIND server to the DHCP-provided DNS servers, which presumably would give us access to that domain?

@MatthewCroughan
Member

MatthewCroughan commented Aug 16, 2019

I always thought the MQTT broker was supposed to have its own network, entirely separate from the DoES network, to lower complexity and offer portability. The idea was that we would have an SSID named DoESLiverpool-MQTT, and all of the configuration discussed here would be applied internally to that network. The SSID would allow portability, so we could provision or clone that network setup (especially if it were implemented with a Pi-like device) and then take it and the sensors on the road. @goatchurchprime has more details.

It also means that things aren't centralised and liable to go down if anything were to happen during a firmware upgrade of the router, or to any other element of the network. And it makes sense to segregate it, since it doesn't need to interact with other machines on the main DoES network, apart from exposing the broker.

https://github.com/DoESLiverpool/somebody-should/wiki/MQTT-services

This is also detailed in the mqtt.local section of the wiki.

@ajlennon
Contributor Author

I'd rather the broker didn't run as an AP until somebody shows it's reliable in this mode

@MatthewCroughan
Member

@ajlennon I have a spare Ubiquiti access point if you wanna set up a proper network.

@ajlennon
Contributor Author

The less there is to maintain the better imho

@MatthewCroughan
Member

MatthewCroughan commented Aug 16, 2019

@ajlennon Well, if we have a separate network where we have access to the pfSense router controlling it, that doesn't need as much maintaining. Whereas for every change made to the DoES network, there's potential for interference with the MQTT setup, and for everything piled on top, there's more configuration we have to make sure is backed up. It's definitely semantic; we could just make MQTT happen on DoES, but I feel like any of the config changes we want/need would take a while to execute, since they'd have to go through maintainers rather than being something we can change ourselves.

I just favor decentralized management so that there's less reliance on a single maintainer (in this case John) to do every bit of management. It won't just be MQTT we're dealing with on this network; there'll be other things we want to add. Otherwise, what's the point in using Proxmox just to run a single app in a single container?

Every time John updates the Ubiquiti router firmware, or some other downtime is incurred, there's a one-minute minimum outage on the DoES network, which can be avoided if we set up our own. There are a lot of small things that go on that aren't problems if you set up your own network.

@MatthewCroughan
Member

I'm really not sure, but I think this would also protect us from the loopbacks that occur. For example, if I plug a switch into itself on the DoES network, I think this wouldn't shut down our MQTT network, since the networks aren't physically bridged in any manner.

@ajlennon
Contributor Author

That's a separate issue. Whatever is happening with the loopbacks really shouldn't be possible

@skos-ninja
Member

We can tell the DNS resolver on our equipment to try a local resolver first before going external; however, you would need to ensure reasonable speed and uptime on it, otherwise this will slow down general users' experience at DoES

@ajlennon
Contributor Author

Sounds good. Maybe we can run a DNS resolver in another container on the ProxMox box ?

@MatthewCroughan
Member

MatthewCroughan commented Aug 16, 2019

@ajlennon If you run dnsmasq on the host, that makes the most sense, since then all containers will access it: they use the host's networking config, so if the host uses its own DNS resolver, the containers will too. That only applies to containers, though; VMs need to be manually configured to use the host as the resolver, like any other machine.
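As a sketch of what that might look like, a dnsmasq config pinning the openBalena subdomains to an internal address could be as simple as the fragment below (the doesliverpool.cc names are from earlier in the thread; the IPs are illustrative):

```
# /etc/dnsmasq.conf (fragment) - answer the openBalena names locally
address=/api.doesliverpool.cc/10.68.68.70
address=/registry.doesliverpool.cc/10.68.68.70
address=/vpn.doesliverpool.cc/10.68.68.70
address=/s3.doesliverpool.cc/10.68.68.70

# Forward everything else to an upstream resolver
server=8.8.8.8
```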

@ajlennon
Contributor Author

As above, given we're not constrained by public IPv4 address space limitations, my strong preference is for a small subnet of statically assigned IP addresses, so we can simply bridge to containers and not get into worrying about the host networking configuration

@MatthewCroughan
Member

MatthewCroughan commented Aug 20, 2019


I've ordered an Orange Pi Zero along with the NAS expansion board, in order to attach a hard drive to it.

This is my ideal Pi-service setup, which I'd like to demo by running the MQTT broker on it for a long period of time. This setup makes things very cheap, reproducible, efficient and scalable, for any purpose. You could have more than one Pi and more than one hard drive, network them together, and have a load-balancing, decentralized setup. I want to try that out.

On it, I'll run one of these to manage LXC containers so we can run multiple small services if we choose.

https://github.com/lcherone/lxd-ui
https://github.com/AdaptiveScale/lxdui
https://lxc-webpanel.github.io/screenshots.html

If we don't agree for a container on this setup to be the main mqtt.local, then when I have it set up it'll be available at mqtt-backup.local

@goatchurchprime
Member

The important feature is to have a fully functional stack (MQTT, Node-RED, InfluxDB, Grafana) that can be taken to another site/hackspace and easily commissioned. This includes getting any ESPs, Sonoffs and other types of IoT devices functioning in that site quickly. (Putting the access point on the RPi is one way to achieve this, because otherwise the local WiFi credentials need to be flashed into each device. This is not a big deal, but it's a lot less slick.)
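To give a feel for the commissioning side, a sensor publishing into such a stack can be as little as the sketch below. The topic layout and payload fields are illustrative assumptions, not the project's actual conventions, and it needs the paho-mqtt package:

```python
import json
import time

def make_reading(sensor_id, watts, ts=None):
    """Build a JSON payload for a power reading (illustrative format)."""
    return json.dumps({
        "sensor": sensor_id,
        "watts": watts,
        "ts": ts if ts is not None else int(time.time()),
    })

def publish_reading(sensor_id, watts, host="mqtt.local", port=1883):
    """Publish one reading to the (unsecured, internal-only) broker."""
    import paho.mqtt.client as mqtt  # pip install paho-mqtt
    client = mqtt.Client()
    client.connect(host, port)
    client.publish("power/%s" % sensor_id, make_reading(sensor_id, watts))
    client.disconnect()

# e.g. publish_reading("monitor1", 42.5)
```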

@ajlennon
Contributor Author

I'd love to see an AP running reliably on an RPi if somebody wants to investigate that... :)

@amcewen
Member

amcewen commented Aug 27, 2019

For me, the important features are:

  • the infrastructure doesn't randomly disappear, causing things like the Dinky Occupancy monitor to stop working
  • I can connect to monitor it and debug things without having to have my computer on a different WiFi network

@ajlennon
Contributor Author

ajlennon commented Aug 27, 2019

There isn't any real reason we can't run a DoESLiverpool AP out in the wild (i.e. at events), is there? That would mean everything would "just work"(tm)

@ajlennon
Contributor Author

ajlennon commented Aug 27, 2019

If we go back to the event we did at Sensor City, we were trying to run an AP on the RPi and it all went tits up. I'm pretty sure we don't know why. It was running quite well "on the bench", and then when we tried to connect up various sensors it stopped working. Is that right @goatchurchprime ?

My suspicion is that it's not very good at handling more than a few sensors connected to it, and it hasn't been tested by anybody running for extended periods of time. People probably just use it for one client to connect briefly, to configure the Pi as a client of another AP.

Hence my concerns at trying to make an RPi AP part of DoES infrastructure unless we've done some good soak testing, with tens of devices publishing to it over a couple of weeks.

I hadn't thought of @amcewen 's point about changing networks, but it's an excellent one. I don't want to be fiddling about on different networks either, so if we did get to a point where an RPi AP was reliable enough for a sensor network, we'd also need to bridge it to the local network. That's maybe easy enough via the wired network interface, but it needs consideration

@ajlennon
Contributor Author

That said, there is a part of me that would like the sensor data to be published on a different WiFi network, on a different channel, so it doesn't have any impact on what day-to-day DoES'ers are doing.

More thoughts... Does our existing DoES network AP infrastructure have the capability of running another independent SSID, @skos-ninja ? Could this be on another channel? That might be an intermediate route to separating sensor data from everything else...
