NOTE: This is a work in progress. It still needs the specific JSON for device discovery and configuration, and some examples of using the protocol to talk to some fins. For now, see the TinyG JSON specification for how this would work.
"So let me get this straight. You are running JSON directly over the circuit board traces? That's gutsy." -- MakerFaire Bay Area 2013 Attendee
Kinen uses SPI channels to talk from Kinen masters to Kinen slaves. However, the protocol is designed to be transport agnostic, so other types of device communication are easily substituted, including USB, RS-485, Ethernet, HTTP, 802.15.4 stacks, and direct function calls (string passing via pointers). For details of how the SPI channels work, see the Kinen SPI Specification.
Kinen treats devices as RESTful resources, albeit running over SPI or USB serial and not necessarily HTTP. Kinen uses JSON as the resource representation. Using JSON provides a simple way to expose and transfer system state and other internals, which makes the embedded system much easier to manipulate using modern programming techniques. Data is treated as resources and state is transferred in and out. Unlike RESTful HTTP, methods are implied by convention, as there is no request header in which to declare them. See Kinen RESTful Hardware Conventions for more information.
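As a rough sketch of how methods can be implied by convention, the handler below treats a null value as a read (GET) and a concrete value as a write (SET), with the response echoing the resulting state. The token names (`fv`, `xvm`) and the null-means-read rule are illustrative assumptions, not part of this specification:

```python
import json

# Hypothetical resource store for a device; token names are illustrative.
resources = {"fv": 1.0, "xvm": 12000}

def handle(message: str) -> str:
    """Apply RESTful-by-convention semantics: a null value reads a
    resource, a non-null value writes it. The response echoes the
    resulting state, since there is no HTTP-style status line."""
    request = json.loads(message)
    for name, value in request.items():
        if value is not None:          # SET: state transferred in
            resources[name] = value
    # Response: state transferred out for every requested name
    return json.dumps({name: resources[name] for name in request})

print(handle('{"fv": null}'))    # read: → {"fv": 1.0}
print(handle('{"xvm": 15000}'))  # write: → {"xvm": 15000}
```

The same wire form serves both directions, which is what lets a single parser cover discovery, configuration, and data transfer.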
Kinen uses a dumbed-down version of JSON for device communications. Don't puke just yet; it's actually more efficient than you think (or can be made to be), and it's extremely flexible. The advantage is that JSON inherently treats communication endpoints as REST resources. This is a very general form, capable of expressing simple devices all the way through complex, multi-channel and multi-layered devices, all in the same protocol. It also maps well to modern programming structures (objects), much more naturally than a stack of registers on a garden-variety MCU (or a DEC PDP-11). This makes programming drivers easier. See "The Channel Efficiency Question" on this page for further discussion.
Kinen uses a subset of JSON with the following limitations:
* Each message is terminated by a single <LF> at the end of the line (broken lines are not supported)
Q: How much efficiency do we need? "Aren't JSON and serial I/O a really expensive way to communicate? I only have limited serial bandwidth and RAM for all those strings."
Part of the answer is that the hardware we have to work with for embedded systems is advancing; an ARM chip can be had for less than $1 now. But let's not ignore the huge (and growing) base of ATmega328P devices. This has to work on that platform as well.
Here are some timings and sizings we did on platforms we've implemented. Please note that the FLASH and RAM footprints include the JSON parser and serializer, a dispatch-table structure and code for executing functions from JSON names (tokens), and the floating point libs necessary to convert float-to-ascii and ascii-to-float.
| Measurement | Platform 1 | Platform 2 | Platform 3 | Notes |
|---|---|---|---|---|
| Commands/sec | ~1000 | ~1000 | ~1000 | assumes USB or other serial at 115,200 baud |
| strtof() time | <70 uSec | <25 uSec | <7.5 uSec | time to marshal a floating point value in or out |
| FLASH footprint | ~4-6 Kbytes | ~8 Kbytes | ~8 Kbytes | the floating point lib is about 3K of that |
| RAM footprint | ~500-800 bytes | ~2 Kbytes | ~2 Kbytes | atmega328 may limit buffer and other sizes |
| malloc() | no | no | no | dynamic memory allocation is not used |
Optimize when needed, not before. Here are some options.
Baseline JSON: Suppose the SPI channel runs at 500 Kbits/sec, which works out to byte-level interrupts about every 20 uSec (once slave selects and framing are accounted for), or about 50 Kbytes/second. While many SPI protocols run faster, this is about the upper limit of what a little 8 MHz CPU can handle. A simple single-valued JSON message is about 12-24 chars long, so at a transfer rate of 50 Kbytes/sec this is about 2,500 20-char messages per second. That should be enough for many applications, but not all.
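The arithmetic behind those baseline figures can be checked directly. The 10 bit-times per byte is an assumption here, standing in for the slave-select and framing overhead the text mentions:

```python
# Back-of-envelope check of the baseline throughput figures.
bits_per_sec = 500_000               # assumed SPI clock
bits_per_byte = 10                   # 8 data bits plus assumed framing overhead
bytes_per_sec = bits_per_sec // bits_per_byte   # one byte every ~20 uSec
msg_len = 20                         # typical single-valued JSON message, chars
msgs_per_sec = bytes_per_sec // msg_len

print(bytes_per_sec)   # 50000 bytes/sec
print(msgs_per_sec)    # 2500 messages/sec
```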
Bear in mind that communication with controllers is typically bursty and relatively infrequent (the exception being data transfer devices). The slave CPU needs to keep up with buffering serial input. There are a variety of progressive optimizations that can be applied if needed:
Relaxed JSON (First Optimization): Since we are in a well-known environment, we can relax the ASCII wire form much the way many JSON serializers for HLLs do (for example, by omitting quotes around names and dropping whitespace):
This gets the byte count down, but it's all still ASCII.
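One plausible relaxation is sketched below: names lose their quotes and whitespace is dropped, while values stay as standard JSON. The exact relaxation rules are set by the environment, not by this sketch, and the `relaxed` helper and the `xvm` token are assumptions for illustration:

```python
import json

def relaxed(obj: dict) -> str:
    """Serialize a flat dict in a relaxed JSON wire form: quotes
    around names are omitted and no whitespace is emitted."""
    parts = (f"{name}:{json.dumps(value)}" for name, value in obj.items())
    return "{" + ",".join(parts) + "}"

strict = json.dumps({"xvm": 12000.0})
loose = relaxed({"xvm": 12000.0})
print(strict)   # {"xvm": 12000.0}  (16 chars)
print(loose)    # {xvm:12000.0}     (13 chars)
```

The savings are modest per message, but they compound at thousands of messages per second, and the receiving parser gets simpler as well.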
Binary Form (Second Optimization): Introduce a binary form such as Google Protocol Buffers (or some subset of it) as the binary wire form. This requires more interpretation at both ends, but may be worth it in some cases. Protocol Buffers map naturally to JSON-style resource definitions.
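To make the savings concrete, here is a sketch of the base-128 varint encoding that Protocol Buffers uses for integers: 7 payload bits per byte, with the high bit set on every byte except the last. This is only the integer building block, not a full wire format:

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf-style base-128
    varint: 7 payload bits per byte, continuation bit on all but
    the last byte, least-significant group first."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # more bytes follow
        else:
            out.append(byte)          # final byte, high bit clear
            return bytes(out)

# The value 12000 costs five ASCII digits plus a delimiter in JSON,
# but only two bytes as a varint.
print(encode_varint(12000).hex())   # e05d
```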
Filtered Mode: We've started using a filter on responses: only send the data that has changed since the last inquiry. For example, a Gcode model inquiry (or status report) can have a dozen or so elements in it, including positions, velocities, line numbers, motion modes, selected plane, what units the system is in, etc. In filtered mode only those elements that have changed since the last report are communicated. This also lightens the load on the upstream host, which now only needs to parse, marshal and execute those items that are meaningful.
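The filtering itself is just a diff against the last transmitted state. A minimal sketch, with hypothetical status-report tokens standing in for the real model elements:

```python
def filtered_report(current: dict, last_sent: dict) -> dict:
    """Return only the elements that changed since the last report.
    The caller merges the result into last_sent after transmitting."""
    return {name: value for name, value in current.items()
            if last_sent.get(name) != value}

# Hypothetical status elements: positions, velocity, line number.
last = {"posx": 0.0, "posy": 0.0, "vel": 0.0, "line": 0}
now  = {"posx": 10.5, "posy": 0.0, "vel": 500.0, "line": 35}

report = filtered_report(now, last)
print(report)   # only posx, vel and line are sent; posy is unchanged
```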
Arrays: Any of these optimizations can also take advantage of position-dependent, array-encoded data. Instead of tag-value pairs, data is encoded in an array where the position is the "tag" (just like the bad old days of ASN.1, a rather flawed encoding standard if there ever was one). The first value of the array should always be the version of the array encoding, and the second value should be the total number of values in the array, including the version code. The rest of the data can be packed, without any separators in the case of binary-encoded data. New versions should always add data elements to the end of the array for backwards compatibility.
Of course, this must be thought through relative to filtered-mode use, as arrays are individual elements that may be filtered or not. Filtering within an array is probably not practical.
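The version-plus-count convention described above can be sketched as a pair of helpers. The three-axis position payload is a hypothetical example; only the element-0 version and element-1 total count follow from the text:

```python
def encode_array(version: int, values: list) -> list:
    """Pack values into a position-dependent array: element 0 is the
    encoding version, element 1 is the total number of values in the
    array (including the version code), data follows in fixed order."""
    body = [version, 0] + values
    body[1] = len(body)
    return body

def decode_array(arr: list):
    """Validate the count and split off version and data. A decoder
    built for an older version simply ignores trailing elements."""
    version, count = arr[0], arr[1]
    assert count == len(arr), "length mismatch"
    return version, arr[2:]

# Hypothetical position report: [x, y, z] in a fixed order.
wire = encode_array(1, [10.5, 0.0, 42.0])
print(wire)   # [1, 5, 10.5, 0.0, 42.0]
```

Because positions are fixed, a receiver never sees per-element tags, which is exactly why per-element filtering inside the array does not fit this encoding.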