Build a complete logging system

This page describes the full process of setting up a complete power logging system. This is based on the system I (Jack) used to log power data from my own home, as well as the homes of several Imperial MSc students.

Shopping list

Atom PC with WiFi

For a complete shopping list for building an Atom PC, see this blog post. To be honest, a cheap netbook would probably be an easier option in many respects.

Nanode

(RS Components stock numbers in brackets)

Sensor hardware

Software Installation

  1. Install latest firmware for DN2800MT
  2. Configure the BIOS so that, if the system loses and then regains power, it reverts to its previous power state. This means that after a power cut, or if someone accidentally unplugs the logging PC, it will resume its logging duties as soon as power is restored.
  3. Install Ubuntu Server 64-bit. Install the OpenSSH server (this can be done towards the end of the Ubuntu Server installation).
  4. The PowerVR-based GMA 3600 graphics on the Atom D2700 / N2800 is poorly supported on Linux. To get the terminal to work we need to disable LVDS (taken from here):
    1. SSH into your new Atom box.
    2. sudo nano /etc/default/grub
      1. GRUB_CMDLINE_LINUX_DEFAULT="video=LVDS-1:d" (delete splash and quiet).
      2. GRUB_HIDDEN_TIMEOUT=5
      3. GRUB_HIDDEN_TIMEOUT_QUIET=true
      4. GRUB_TIMEOUT=0
    3. Uncomment GRUB_TERMINAL=console
    4. Save and exit nano
    5. sudo update-grub
    6. sudo reboot
  5. Packages to install:
    1. sudo apt-get update && sudo apt-get upgrade
    2. sudo apt-get install git python-matplotlib emacs24-nox avrdude wireless-tools wpasupplicant ntp screen (matplotlib is only required if you plan to use powerstats; emacs is entirely optional; avrdude is only required if you want to be able to program your Nanode from your logging box)
  6. We'll use a new unix group called data whose members will be able to write data and update our repositories:
    1. sudo groupadd data
    2. Add yourself to the data group: sudo adduser <USERNAME> data (log out and back in for the new group membership to take effect)
    3. sudo mkdir /data /usr/local/logging
    4. sudo chgrp data /data /usr/local/logging
    5. sudo chmod g+w /data /usr/local/logging

Enable WiFi

Add your WiFi details to /etc/network/interfaces

For example, mine looks like:

 # This file describes the network interfaces available on your system
 # and how to activate them. For more information, see interfaces(5).
 
 # The loopback network interface
 auto lo
 iface lo inet loopback
 
 # WIRELESS
 auto wlan0
 iface wlan0 inet dhcp
      wpa-ssid <SSID>
      wpa-psk  <PASSWORD>
 
 # WIRED
 # auto em0
 iface em0 inet dhcp
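
After saving the file you can either reboot or, assuming the standard ifupdown tools that Ubuntu Server uses by default, bring the wireless interface up by hand:

  sudo ifup wlan0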

Set udev rules to allow access to Nanode

Create a udev rules file (sudo nano /etc/udev/rules.d/nanode.rules) and copy and paste this text into it:

 # the following rule should be all on one line
 ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", MODE="660", GROUP:="plugdev"
 
 LABEL="nanode_rules_end"

Unplug your Nanode and plug it back in and you should have access to it. If this doesn't work then you might need to modify the idVendor and idProduct numbers. You can find the correct numbers by running lsusb.
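
For example, the USB-to-serial chip matched by the rule above (idVendor 10c4, idProduct ea60, i.e. a Silicon Labs CP210x) typically appears in the lsusb output like this; the two hex numbers after "ID" are idVendor:idProduct, while the bus/device numbers and description text will differ on your system:

 Bus 001 Device 004: ID 10c4:ea60 Cygnal Integrated Products, Inc. CP210x UART Bridge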

Install rfm_edf_ecomanager on Nanode (if necessary)

This is the embedded C++ code which must be installed on the Nanode for logging purposes.

  1. cd /usr/local/logging
  2. git clone git://github.com/JackKelly/rfm_edf_ecomanager.git
  3. Upload pre-compiled hex file
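
As a rough guide, uploading the hex file with avrdude (installed earlier) looks something like the command below. The hex filename, serial device and baud rate here are assumptions (the Nanode uses an ATmega328 with an Arduino-style bootloader); check the rfm_edf_ecomanager documentation for the exact upload instructions.

  # hex filename, serial port and baud rate are illustrative only
  avrdude -p atmega328p -c arduino -P /dev/ttyUSB0 -b 57600 -U flash:w:rfm_edf_ecomanager.hex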

Setup rfm_ecomanager_logger

  1. cd /usr/local/logging
  2. git clone git://github.com/JackKelly/rfm_ecomanager_logger.git

Setup environment variables to use lm script

rfm_ecomanager_logger provides a bash script called lm which gives a handy way to update all the associated git repositories, start and stop rfm_ecomanager_logger and babysitter, print data, etc. For this script to work, we must install all the packages described on this page and also set some environment variables, as follows:

sudo emacs /etc/environment

Add the following lines to the top of the file:

  LOGGER_BASE_DIR="/usr/local/logging"
  DATA_DIR="/data"

And add the rfm_ecomanager_logger directory to PATH so that PATH looks something like this:

  PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/logging/rfm_ecomanager_logger"
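
These variables are only read at login, so log out and back in (or reboot) before relying on them. A quick sanity check, assuming the paths above:

  echo $LOGGER_BASE_DIR    # should print /usr/local/logging
  which lm                 # should print /usr/local/logging/rfm_ecomanager_logger/lm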

Train the system to recognise each of your sensors

To pair the logging system to each of your sensors, see the manual for rfm_ecomanager_logger.

Optional: setup babysitter to monitor logger

Optional. babysitter keeps an eye on rfm_ecomanager_logger, sends an email if things go wrong, and also sends a daily heartbeat.

  1. cd /usr/local/logging
  2. git clone git://github.com/JackKelly/babysitter.git
Now create a /usr/local/logging/babysitter/email_config.py file with your email server details, as per the comments at the top of babysitter/babysitter/babysitter.py.

Optional: setup powerstats

Optional. powerstats is used for creating useful statistics describing the data logged by rfm_ecomanager_logger and for drawing graphs.

  1. cd /usr/local/logging
  2. git clone git://github.com/JackKelly/powerstats.git

Optional: start logging when system restarts

Edit root's crontab by running sudo crontab -e

Add the following line:

  @reboot /usr/local/logging/rfm_ecomanager_logger/lm start >> /usr/local/logging/rfm_ecomanager_logger/cron.log 2>&1

Or, alternatively, if you would like the system to update the codebase on every restart then add:

  @reboot /usr/local/logging/rfm_ecomanager_logger/lm boot >> /usr/local/logging/rfm_ecomanager_logger/cron.log 2>&1

If you start logging at boot then the log files and processes will be owned by root, so if you want to start or stop logging after boot you will need to use sudo. But on Ubuntu sudo doesn't inherit the PATH environment variable, so it won't be able to find the lm script. It is therefore convenient to make lm accessible via sudo: run sudo visudo and append :/usr/local/logging/rfm_ecomanager_logger to the Defaults secure_path line.
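
After that edit, the secure_path line should look something like this (the default set of paths may differ slightly between Ubuntu releases):

 Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/logging/rfm_ecomanager_logger"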

Optional: send data to a central server using rsync

Client config

  mkdir $LOGGER_BASE_DIR/rsync

Create a $LOGGER_BASE_DIR/rsync/rsync.sh file:

  #!/bin/bash

  # Add timestamp to log file
  date +%Y-%m-%d\ %k:%M:%S > /usr/local/logging/rsync/rsync_cron.log

  # Run rsync
  rsync -vzutr --password-file=/usr/local/logging/rsync/pass.txt $DATA_DIR/ rsync://USER@SERVER:PORT/USER
  # -v verbose
  # -z compress file data during transfer
  # -u update (skip files that are newer on the receiver)
  # -t preserve modification times
  # -r recursive

chmod a+x rsync.sh (make the script executable)

Create a pass.txt file with your rsync password.
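
rsync will refuse to use a password file that other users can read, so restrict its permissions:

  chmod 600 $LOGGER_BASE_DIR/rsync/pass.txt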

Finally, add a cron rule to run rsync.sh:

  sudo crontab -e

Add the following line (this will run the script at 6:15am every morning):

  15 6 * * * /usr/local/logging/rsync/rsync.sh >> /usr/local/logging/rsync/rsync_cron.log 2>&1

Server config

You'll need to create a ~/.rsyncd/rsyncd.conf file which looks something like this:

 max connections = 6
 log file = /HOME/DIR/.rsyncd/rsyncd.log
 lock file = /HOME/DIR/.rsyncd/rsyncd.lock
 pid file = /HOME/DIR/.rsyncd/rsyncd.pid
 timeout = 300
 use chroot = no
 port = <PORT>
 filter = + *.dat
 transfer logging = true
 refuse options = delete
 
 read only = no
 list = yes
 secrets file = /HOME/DIR/.rsyncd/PASSWORDS_FILE
 
 [USER1]
 path = /DATA/PATH/ON/SERVER/USER1
 auth users = USER1
 
 [USER2]
 path = /DATA/PATH/ON/SERVER/USER2
 auth users = USER2

Then create and populate the /HOME/DIR/.rsyncd/PASSWORDS_FILE.
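
The secrets file is plain text with one USER:PASSWORD pair per line (matching the auth users entries above), and with rsync's default "strict modes" setting it must not be readable by other users (chmod 600 it):

 USER1:PASSWORD1
 USER2:PASSWORD2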

Finally, start the server with

 rsync --daemon --config=/HOME/DIR/.rsyncd/rsyncd.conf
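
To check that the daemon is reachable, you can list a module from the client machine; this is just a sanity check, using the same placeholders as above, and it will prompt for the user's rsync password:

  rsync rsync://USER1@SERVER:PORT/USER1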

Cloning this hard disk for multiple systems

Clonezilla is a free, linux-based disk cloning tool. You can clone an entire hard disk to an image and then copy that image to as many hard disks as you wish.

  1. Prior to cloning run sudo lm reset to delete your data, radioIDs.pkl file and logs.
  2. Create a Clonezilla bootable USB stick
  3. Use this to create an image file onto an external USB hard disk
  4. Copy this to as many hard disks as you need (it's best to use EXACTLY the same hard disks as the source disk; not just the same total size but also the same geometry).

Tweaks for each PC after cloning

Modify the following files:

  1. $LOGGER_BASE_DIR/babysitter/email_config.py (send babysitter emails to the correct person / people)
  2. $LOGGER_BASE_DIR/rsync/rsync.sh (modify destination and user)
  3. $LOGGER_BASE_DIR/rsync/pass.txt (modify password)
  4. /etc/network/interfaces (add user's WiFi details)
  5. /etc/hostname (I used computer names in the form <USER>-logger e.g. "jack-logger")
  6. /etc/hosts (modify the hostname which maps to 127.0.1.1, this needs to be the same as in /etc/hostname)
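
For example, assuming the hostname jack-logger, /etc/hostname would contain just the single line jack-logger, and the matching line in /etc/hosts would be:

 127.0.1.1       jack-logger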