This is some slightly-cleaned-up code that forms the backbone of my data processing pipeline for my two-photon imaging data (for lab members: this is the scope in J-159, not J-157, so it runs ScanImage, not PrairieView). It won't run as-is on your data, since plenty of small details differ between setups, but it should be a good starting point.
The scripts are:
- `run_suite2p.py`: runs Suite2p on raw imaging tiffs to detect ROIs
- `classify_cells.py`: classifies ROIs as likely to be cells or not, using classifiers trained on my hand-labeled data
- `process_data.py`: takes the traces output by `run_suite2p.py` and the raw voltage signals of the photodiode tracking trial starts (and facecam frames) and lines them all up, with the actual functionality in `utils.py`
- `convert_vids.py`: handles dropped frames from the face camera and encodes the present frames into video so FaceMap can be run
- `run_cascade.py`: runs CASCADE for spike inference on dF/F traces
- `label_red_cells.py`: implements the mostly-manual method the lab has settled on for labeling inhibitory cells expressing tdTomato
These scripts are best run roughly in the order listed above, though that can sometimes be rearranged (e.g., labeling red cells before running CASCADE, so a separate inhibitory-tuned CASCADE model can be run on the red cells).
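The trial-alignment step in `process_data.py` boils down to finding the rising edges of the photodiode voltage trace. A minimal sketch of that idea (the function name, threshold, and sample rate are illustrative, not the actual code in `utils.py`):

```python
import numpy as np

def find_trial_starts(voltage, threshold=2.5, fs=10000.0):
    """Return times (in seconds) where the photodiode signal crosses
    `threshold` upward. `threshold` and `fs` are placeholders -- use
    your DAQ's sample rate and your photodiode's voltage range."""
    above = voltage > threshold
    # A rising edge: this sample is above threshold, the previous was not.
    rising = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return rising / fs
```

Once you have trial-start times in seconds, they can be matched to imaging-frame times with `np.searchsorted`.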
Then there's neural_data_object.py, which defines a class that holds all the processed data for a recording session and provides a bunch of commonly used functions on it. My data analysis scripts basically all use this class to interface with the data, so I don't have to rewrite much code. It's been a pretty functional setup and I'd recommend doing something like it (just, maybe, organized a bit more cleanly).
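For flavor, here is a minimal sketch of what such a session-container class can look like. The attribute and method names are illustrative guesses, not the actual interface of neural_data_object.py:

```python
import numpy as np

class NeuralDataObject:
    """Toy session container: holds processed traces plus timing info
    and offers one commonly needed operation (trial alignment)."""

    def __init__(self, dff, frame_times, trial_starts):
        self.dff = np.asarray(dff)                  # (n_cells, n_frames)
        self.frame_times = np.asarray(frame_times)  # seconds, one per frame
        self.trial_starts = np.asarray(trial_starts)

    def trial_aligned(self, pre=1.0, post=2.0):
        """Cut dF/F into (n_trials, n_cells, n_frames_per_trial) snippets
        spanning `pre` s before to `post` s after each trial start."""
        fs = 1.0 / np.median(np.diff(self.frame_times))
        n_pre, n_post = int(pre * fs), int(post * fs)
        snippets = []
        for t in self.trial_starts:
            i = np.searchsorted(self.frame_times, t)
            # Skip trials whose window falls off either end of the recording.
            if i - n_pre >= 0 and i + n_post <= self.dff.shape[1]:
                snippets.append(self.dff[:, i - n_pre:i + n_post])
        return np.stack(snippets)
```

The payoff of this pattern is that every analysis script starts from the same loading code and the same handful of methods, so fixes propagate everywhere at once.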
There's some missing logic around the dF/F computation, since I'm using a method from the Allen Institute's pipeline (found here), and I need to figure out the licensing for that code.
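In the meantime, a generic stand-in for the missing step is a rolling-percentile baseline. To be clear, this is not the Allen Institute method referenced above, just a common alternative you could drop in until the licensing is sorted out:

```python
import numpy as np
from scipy.ndimage import percentile_filter

def dff_rolling_percentile(f, window=301, pct=10):
    """Generic dF/F: baseline F0 is a rolling low percentile of each
    cell's fluorescence trace. `f` is (n_cells, n_frames); `window`
    (frames) and `pct` are tuning knobs, not values from any pipeline."""
    f0 = percentile_filter(f, percentile=pct, size=(1, window))
    return (f - f0) / f0
```

A low percentile (here the 10th) tracks slow baseline drift while ignoring transients, which is why this family of baselines is popular for calcium imaging.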