Add post-processing capabilities #6
One reader shared some helpful suggestions about some post-processing techniques that can be used to obtain other information from the accelerometer data.
Copy/pasted rough notes from the reader:
Integration tends to be a smoothing operation. When acceleration is integrated to velocity, the noise in the "a" data is (kinda sorta) averaged out and generally results in a smoother curve; the same goes for integrating velocity to position. Differentiation tends to exaggerate every little bump or bit of noise, so going from position to velocity generally produces a noticeably noisier curve, and things get a lot worse with the second derivative, from velocity to acceleration. At least that's how I remember it from a long time ago.
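The reader's point is easy to see numerically. The sketch below (not from the original thread) feeds pure sensor noise through a finite difference and through a simple running average (the averaging that integration effectively performs) and compares the noise levels:

```python
import random
import statistics

random.seed(42)
dt = 0.01  # assumed 100 Hz sample rate
noise = [random.gauss(0.0, 0.5) for _ in range(1000)]  # sensor noise only

# Differentiation: a finite difference divided by dt amplifies the noise
# (two independent noise samples are subtracted, then scaled by 1/dt).
diff = [(noise[i + 1] - noise[i]) / dt for i in range(len(noise) - 1)]

# Integration behaves like a running sum, so a 10-sample moving average
# stands in here for its smoothing effect (noise shrinks roughly by sqrt(10)).
smooth = [sum(noise[i:i + 10]) / 10 for i in range(len(noise) - 10)]

print(statistics.stdev(noise))   # baseline noise level
print(statistics.stdev(diff))    # much larger: differentiation exaggerates noise
print(statistics.stdev(smooth))  # much smaller: averaging smooths it out
```

The exact numbers depend on the noise level and sample rate, but the ordering always comes out the same way the reader describes: differentiated noise is far rougher than the raw data, averaged noise far smoother.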
Actually I came here to post my code for converting a file full of structs to a .csv file, but the above was a nice trip down memory lane.
I'm sure you know that opening a file, reading out one dataset, closing the file, converting that dataset to csv, opening a csv file, writing that one dataset, and then closing the csv file (then repeating all those steps until done) would be inefficiency bordering on processor abuse. The following script reads 490 data sets from the file into an array. Each row in the array has to have the same struct as when the data was written into the file. Then each struct in the array is unpacked and put into a string which is stored in the csv file. This is repeated 490 data sets at a time until done.
Two things I learned were the datafile.seek() and datafile.position() functions, which are used to keep track of where to start reading the next batch of data sets. Knowing when to quit also took a bit of thought.
Obviously this will go a lot faster if you comment out the