
Particle (Spark) Core / Photon / Electron Remote Temperature and Humidity Logger

By Nic Jansma

This is a remote temperature and humidity sensor that logs data to a number of optional services, including:

  • Adafruit.io
  • ThingSpeak
  • Amazon DynamoDB
  • Any HTTP POST endpoint
  • Particle events (used by Particle Webhooks and SmartThings)

I am currently using this on my kegerator (keezer) to monitor its temperature.

Hardware

For hardware, I'm using a Particle Photon, a $19 Arduino-compatible development board with built-in WiFi. It's hooked up to an AM2302 (DHT22) temperature and humidity sensor. The Photon can be wrapped in a case to protect it.

Build list:

  • Particle Core / Photon / Electron
  • AM2302 (DHT22) temperature and humidity sensor
  • (Optional) case

Total build cost: $24.00 - $74.00

In my case, an AM2302 is hooked up to the Photon as follows:

  • Red (power) to VIN
  • Black (ground) to GND
  • Yellow (data) to D3
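In firmware terms, this wiring maps onto the configuration defines described in the Firmware section below. As a sketch (the define names are the ones dht-logger.ino uses; DHT22 and AM2302 are the same sensor):

```cpp
// The wiring above, expressed as the firmware's configuration defines:
#define DHTTYPE DHT22   // AM2302 sensor
#define DHTPIN  D3      // yellow (data) wire on digital pin D3
```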


My Photon is then wrapped in a case and taped to my keezer.

Firmware

The firmware/ directory contains all of the code you will need for a Particle device.

The main code is in dht-logger.ino. You will also need DHT.cpp and DHT.h, which are libraries that help decode the temperature and humidity data from the sensor.

If you want to log to Adafruit.io, you will also need the 4 Adafruit library files:

  • Adafruit_IO_Arduino.cpp
  • Adafruit_IO_Arduino.h
  • Adafruit_IO_Client.cpp
  • Adafruit_IO_Client.h

You can paste the contents of all of these files into the Particle Build interface.

The firmware has several configuration options, all in dht-logger.ino:

  • DEVICE_NAME: A friendly name for this device, used when logging to DynamoDB and the HTTP end-points
  • DHTTYPE: Which sensor type, such as DHT11, DHT22, DHT21, or AM2301
  • DHTPIN: Which digital pin the DHT is connected to
  • LEDPIN: Which pin has an LED (which blinks each time a reading is taken)
  • USE_FARENHEIT: Whether to use Fahrenheit instead of Celsius
  • MIN_TEMPERATURE, MAX_TEMPERATURE, MIN_HUMIDITY and MAX_HUMIDITY: I've found that my DHT22 sensor sometimes gives wildly inaccurate readings (such as -300°F). These min/max values help weed out incorrect readings.
  • SEND_INTERVAL: How often to send data to the logging services
  • ADA_FRUIT_ENABLED: Whether or not to send data to Adafruit.io
    • ADAFRUIT_API_KEY: Your Adafruit.io API key
    • ADAFRUIT_FEED_*: Which Adafruit.io feed to use for each data-point
  • THINGSPEAK_ENABLED: Whether or not to send data to ThingSpeak
    • THINGSPEAK_CHANNEL: Which channel to log to
    • THINGSPEAK_API_KEY: ThingSpeak API key
  • HTTP_POST: Whether or not to send data to an HTTP endpoint
    • HTTP_POST_HOST: Host to send to
    • HTTP_POST_PORT: Port to send to
    • HTTP_POST_PATH: Path to send to
  • PARTICLE_EVENT: Whether or not to send data via a Particle event
    • PARTICLE_EVENT_NAME: Which event name to use

Many of these options are explained more in the Logging section below.

Logging

This firmware supports sending log data natively to several services.

It currently logs to:

  • Adafruit.io
  • ThingSpeak
  • Amazon DynamoDB (via a Particle Webhook)
  • Any HTTP POST endpoint

Adafruit.io

Adafruit.io provides an easy way to log your IoT data, and has a nice dashboard interface for visualizing it.

To use Adafruit.io, you will need to create 3 feeds, for temperature, humidity and heat index. Put these feed names into the ADAFRUIT_FEED_* defines in dht-logger.ino.

Your API Key can be found via the Your secret AIO Key button. It should go into ADAFRUIT_API_KEY.

ThingSpeak

ThingSpeak also has an easy interface for logging your IoT data, and their dashboards let you visualize it.

You will need to create a single Channel. In that channel, you should use 3 Fields, for temperature, humidity and heat index (in that order). The Channel ID should go into THINGSPEAK_CHANNEL.

Your API Key can be found under API Keys. It should go into THINGSPEAK_API_KEY.

DynamoDB

DynamoDB is a NoSQL database from Amazon Web Services, and is a great place to log your IoT data. DynamoDB does not provide any visualizations like Adafruit or ThingSpeak, but once you have the data logged to DynamoDB, you can do whatever you want with it. It provides a great long-term storage option for your IoT data. DynamoDB is free for your first 25 GB.

Here's the DynamoDB dashboard showing my logged temperature data.

Here's the challenge: it's actually somewhat inconvenient to get data from a Photon into DynamoDB, primarily because the Photon does not (yet) have SSL/TLS libraries.

Amazon IoT and Amazon API Gateway would both be useful here, but they only offer SSL endpoints. So we're going to need a bridge that can take data from the Photon and POST it to an SSL endpoint for us. Luckily, Particle Webhooks can do this for us.

Here's how we're going to get all of these services working for us to be able to log to DynamoDB:

  1. Create a Particle Webhook that listens for an event with our data
  2. Have the Particle Webhook POST this data to an Amazon API Gateway endpoint
  3. Have the Amazon API Gateway run an Amazon Lambda function
  4. Have the Lambda function create rows in our DynamoDB table

It sounds a bit complicated, but the configuration for this should be relatively straightforward.

Amazon DynamoDB

First, you'll need to create a DynamoDB table to log your data. You can do this via the Amazon AWS Console:

  1. Open the Amazon AWS Console
  2. Select DynamoDB from the list of services
  3. Click on Tables and click Create table
  4. The Partition key should be device, a String, and Sort key should be time, a Number
  5. You can Use default settings if you wish. No other indexes are required. I reduced my read/write capacity units to 1/1, since I only have a single device logging to this table once every 10 seconds. You get up to 25 GB and 25 read/write capacity units free as part of the AWS Free Tier.


Your DynamoDB table is now configured!

Amazon Lambda

Amazon Lambda is a handy service from Amazon that lets you run small code snippets in the cloud when triggered by various events. We're going to create an Amazon API Gateway endpoint next that triggers our function. The Lambda function will be responsible for adding our data into the DynamoDB database. Amazon Lambda is free for the first 1 million requests per month.

Here are the steps required to create a Lambda function:

  1. Open the Amazon AWS Console
  2. Select Lambda from the list of services
  3. Click Create a Lambda function
  4. You can Skip the blueprint
  5. Name your Lambda function something like iot-dynamodb, and select Node.js as the Runtime
  6. For the code, paste in the code from lambda.js
  7. Edit the tableName variable to be the DynamoDB table name you selected
  8. Leave Handler as index.handler and change Role to Basic with DynamoDB. This will pop up a new window for you to create a new IAM Role.
  9. You can probably leave the Advanced settings at their defaults, with 128 Memory (MB) and a 3 second Timeout
  10. Hit Create and you're all set


Amazon API Gateway

Next, we're going to configure an Amazon API Gateway endpoint. The API Gateway endpoint gives us a convenient web URL that, when called, runs our Lambda function. The Lambda function interface gives you a convenient way to set this up as well.

Here are the steps to set up the API Gateway:

  1. Open the Lambda function we just created
  2. Click on API endpoints
  3. Click Add API endpoint
  4. For API endpoint type, select API Gateway
  5. You can use whatever name for the API you want, but it defaults to LambdaMicroservice
  6. The Resource name can be whatever you want, and defaults to the name of the Lambda function
  7. Change Method to POST
  8. Change Security to Open with access key (so we can secure the endpoint with a secret key)
  9. Click Submit


Note the new API endpoint URL for later use.

  1. Next, switch to the Amazon API Gateway service so we can get an API key
  2. Change to the API Keys dropdown
  3. Click on Create API Key
  4. Name it whatever you want, select Enabled, and click Create
  5. Change Select API to LambdaMicroservice, and Stage to prod
  6. Click Add
  7. Note your new API key for later
  8. Click Save


Phew! We now have an Amazon Lambda function that will create rows in our Amazon DynamoDB table by hitting an Amazon API Gateway endpoint URL.

Particle Webhook

A Particle Webhook will be the final bridge that gathers data from the Particle and sends it to an SSL endpoint (which the Photon doesn't yet natively support).

A Webhook is a simple command that instructs Particle to send data to another place when a Particle event occurs.

  1. First, configure the PARTICLE_EVENT_NAME in your dht-logger.ino to be whatever you wish, such as dht-logger-log.

  2. Ensure PARTICLE_EVENT is 1 in the same file

  3. Next, install the particle-cli Node.js module:

    npm install -g particle-cli

  4. Copy particle-webhook.json into a local file

  5. Edit particle-webhook.json and replace YOURENDPOINT with the Amazon API Gateway endpoint and YOURAPIKEY with the Amazon API Gateway API key you created

  6. Run this command to install the webhook:

    particle webhook create particle-webhook.json

That should be it!

Now, your Particle will emit an event that triggers a webhook that hits an API Gateway that runs a Lambda that inserts a row into DynamoDB.

HTTP POST

The final service that the dht-logger can log to is any other HTTP POST URL.

Note that HTTPS is not supported (yet) by the Photon.

To configure the HTTP POST, modify the HTTP_POST and HTTP_POST_* variables in dht-logger.ino.

The payload for the HTTP POST will be the same as the DynamoDB data:

{
  "device": "[device name]",
  "temperature": 72.4,
  "humidity": 50.0,
  "heatIndex": 74.6
}

SmartThings

I also have this device monitored via my Samsung SmartThings app.

To configure this, you'll want to use the Device Handler in my smart-things project. You can install it via the SmartThings API Console.

Thanks

This project was built on top of a lot of work done by others, including: