F Prime Data Tool is a command line utility that can be used to decode F Prime data in binary form. This includes the output of `ComLogger` objects and the binary `recv.bin` logs generated by `fprime-gds`. It includes different output options that are conducive to data analysis.
The tool is purposefully written as a stand-alone Python script with no Python packaging definitions. It can be copied to somewhere in your `$PATH` (or even to a system location such as `/usr/local/bin`), or you can simply copy it next to your data and run it locally (`./fpdt.py`).
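For example, a system-level install could look like the following sketch (any directory on your `$PATH` works equally well):

```
# Copy the script onto the PATH and mark it executable.
# /usr/local/bin is just one option; adjust to taste.
sudo cp fpdt.py /usr/local/bin/fpdt.py
sudo chmod +x /usr/local/bin/fpdt.py
```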
The processing loop of the tool is simple: given some top-level type (known as the record type), try to decode it from the input stream repeatedly until the input stream ends. This simple operation, combined with the ability to select different top-level types, is where the tool's versatility comes from. Four record types in particular are useful for parsing F Prime data (minimal invocations are sketched after this list):
- Set `--record-type` to `ComLoggerRecord` to decode the `*.com` logs generated by `ComLogger`.
- Set `--record-type` to `FprimeGdsRecord` to decode the `recv.bin` log generated by `fprime-gds`.
- Set `--record-type` to `FprimeGdsStream` to decode the data stream between `fprime-gds` and FSW. Using tools like `socat`, `tail -f`, and process substitution, this can be used to decode the stream as a man-in-the-middle.
- Set `--record-type` to `PrmDbRecord` to decode `PrmDb.dat` files.
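As a quick sketch of how the record type is selected, the invocations below reuse the dictionary and log file names that appear elsewhere in this document; substitute your own paths. (`FprimeGdsStream` is exercised in the man-in-the-middle example further below.)

```
# Decode a ComLogger *.com file, a GDS recv.bin log, and a PrmDb.dat file.
# File names here are the illustrative ones used throughout this document.
fpdt.py -d fsw-dictionary.xml --record-type ComLoggerRecord myComLoggerLog.com
fpdt.py -d fsw-dictionary.xml --record-type FprimeGdsRecord recv.bin
fpdt.py -d fsw-dictionary.xml --record-type PrmDbRecord -F json PrmDb.dat
```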
Notably, the only difference between `ComLoggerRecord` and `FprimeGdsRecord` is that the former uses a `uint16_t` for the packet size while the latter uses a `uint32_t`.
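A quick way to see this for yourself is to peek at the leading bytes of each log, where the first record's size field lives (this assumes the file begins on a record boundary, as the decoding loop above expects):

```
# The first record's packet-size field is 2 bytes wide in a ComLogger *.com
# log and 4 bytes wide in recv.bin (file names are illustrative).
xxd -l 8 myComLoggerLog.com
xxd -l 8 recv.bin
```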
The default output format is a whitespace-delimited tabular format conducive to quick analysis with vnlog tools such as `vnl-filter` and `feedgnuplot`. For example, you can plot a crude timeline of all events:
```
fpdt.py \
-d fsw-dictionary.xml \
--record-type FprimeGdsRecord \
recv.bin \
| vnl-filter \
-p 'event_time,event_topology_name,event_id' \
'event_component!="-"' \
| feedgnuplot --domain --dataid --autolegend
```
This format only works for the `ComLoggerRecord` and `FprimeGdsRecord` record types.
The `tsv` output format (`-F tsv`) is particularly useful for ingestion into visidata. In fact, the tool will not support a CSV output format directly because you can just use visidata to do the conversion:
```
fpdt.py \
-F tsv \
-d fsw-dictionary.xml \
--record-type FprimeGdsRecord \
recv.bin \
| visidata - -b -o recv.csv
```
This format only works for the `ComLoggerRecord` and `FprimeGdsRecord` record types.
For complex payloads, where values and arguments are composed of custom arrays or custom serializables, the tabular formats cannot easily capture the structure. The JSON format (`-F json`), however, can, and it is very useful for deep inspection of the data, especially when paired with tools like `jq`.
For example, to print a JSON object of time and value for all telemetry from the `navigation` component's `PosePosition` channel:
```
fpdt.py \
-F json \
-d fsw-dictionary.xml \
--record-type FprimeGdsRecord \
gds-log2/recv.bin \
| jq 'select(.packet.payload.topology_name == "navigation.PosePosition") | {"time": .packet.payload.time.value, "value": .packet.payload.value}' -c
```
```
# Count how many imu.SaturationError events appear in a ComLogger log.
fpdt.py \
-d fsw-dictionary.xml \
--record-type ComLoggerRecord \
myComLoggerLog.com \
| vnl-filter 'event_topology_name=="imu.SaturationError"' \
| wc -l
```
```
# Show all WARNING_HI events from recv.bin as an aligned, scrollable table.
fpdt.py \
-d fsw-dictionary.xml \
--record-type FprimeGdsRecord \
recv.bin \
| vnl-filter -p 'event' 'event_severity=="WARNING_HI"' \
| vnl-align \
| less -S
```
```
# Convert recv.bin to CSV by way of the tsv output format and visidata.
fpdt.py \
-F tsv \
-d fsw-dictionary.xml \
--record-type FprimeGdsRecord \
recv.bin \
| visidata - -b -o recv.csv
```
```
# Print the UTC timestamps of all telemetry from the fpga component.
fpdt.py \
-d fsw-dictionary.xml \
--record-type ComLoggerRecord \
-F json \
myComLoggerLog.com \
| jq 'select(.packet.type == "TELEM") | select(.packet.payload.component == "fpga") | .packet.payload.time.utc_iso8601'
```
```
# Dump the globalPlanner component's parameters from a PrmDb.dat file as JSON.
fpdt.py \
-d fsw-dictionary.xml \
--record-type PrmDbRecord \
-F json \
PrmDb.dat \
| jq 'select(.component == "globalPlanner")'
```
First run your `fprime-gds` instance, taking note of what your `--ip-address` and `--ip-port` are.
```
fprime-gds --dictionary fpdt.py/fsw-dictionary.xml --no-app --ip-address 0.0.0.0 --ip-port 50000
```
Now use netcat (`nc`) to connect to the `fprime-gds` server and pipe the received uplink stream to `fpdt.py` for parsing and logging. The first `tee` will save a binary log. The second `tee` will save the unformatted JSON. The `jq` at the end is just for pretty printing.
```
nc localhost 50000 \
| stdbuf -o0 tee sent.bin \
| fpdt.py \
-R FprimeGdsStream \
-d fsw-dictionary.xml \
-F json \
| tee sent.json \
| jq .
```
Example output:
```
{
  "packet_size": 22,
  "packet": {
    "type": "COMMAND",
    "payload": {
      "opcode": 1281,
      "arguments_raw": "0x000c68656c6c6f20776f726c6421",
      "opcode_hex": "0x501",
      "topology_name": "cmdDisp.CMD_NO_OP_STRING",
      "component": "cmdDisp",
      "mnemonic": "CMD_NO_OP_STRING",
      "arguments": [
        {
          "name": "arg1",
          "type": "string",
          "value": {
            "length": 12,
            "string": "hello world!"
          }
        }
      ]
    }
  },
  "offset": null
}
{
  "packet_size": 54,
  "packet": {
    "type": "FILE",
    "payload": {
      "type": "START",
      "sequence_index": 0,
      "payload": {
        "file_size": 12,
        "source_path": {
          "length": 28,
          "value": "/tmp/fprime-uplink/hello.txt"
        },
        "destination_path": {
          "length": 11,
          "value": "./hello.txt"
        }
      }
    }
  },
  "offset": null
}
{
  "packet_size": 27,
  "packet": {
    "type": "FILE",
    "payload": {
      "type": "DATA",
      "sequence_index": 1,
      "payload": {
        "byte_offset": 0,
        "data_size": 12,
        "data": "0x68656c6c6f20776f726c640a",
        "data_str": "b'hello world\\n'"
      }
    }
  },
  "offset": null
}
{
  "packet_size": 13,
  "packet": {
    "type": "FILE",
    "payload": {
      "type": "END",
      "sequence_index": 2,
      "payload": {
        "checksum": 1240614885
      }
    }
  },
  "offset": null
}
```