
On-line neuroimaging resources

I try to list here links to software, databases, tutorials, blogs and other resource lists that I or others have found relevant for learning about neuroimaging or for performing neuroimaging analysis. Most of the things listed here are for fMRI, but feel free to point towards EEG, MEG or TMS things too. Feel free to add things: see the How to contribute section below.

This document is mostly meant for me to be able to quickly find things without having to google them or browse through my bookmarks, pocket, github stars and repos. But if it can help others, that's great.

Also, I am by no means an expert, and I have not used or done all the things I list... But I wish I had, and I wish someone had told me about some of those things 5 years ago.

I am also working on a companion [reading list] ( ??? ).


How to use this document

Most people don't use a map by starting in the upper left corner and scanning horizontally till they end up in the bottom right corner (or however it is that people read in the region of the world you are in at the moment). Similarly, this document is obviously not meant to be read from top to bottom. The best way to use it is to browse the table of contents below and jump to the sections that interest you. For that reason there is some redundancy in the content. This also means that this document is not a cookbook: I just try to list things that could apply to a wide variety of topics and contexts, but in many cases only a handful of those will be relevant to you.

Note also that some of the sectioning is a bit arbitrary: I try to put cross-links where useful.


How to contribute

Feel free to add your own resources or any material you have found useful. Send me a pull request to this repository or raise an issue. Or, if you don't know how to do that, you can reach me on twitter: https://twitter.com/RemiGau.

You can check the Looking for section right below to see which sections of this document need populating. I have also tried to flag with ??? (in the table of contents and in the main document) the areas where I am pretty sure I have missed existing gems.


Looking for

  • Material on the BOLD signal: origin and biophysics
  • Material on preprocessing, denoising
  • Material on statistical inference in neuroimaging: peak, voxel, cluster based
  • Material on multiple comparison correction in neuroimaging
  • Material on DTI, ASL
  • Material on connectivity: PPI, DCM, granger causality

To add to the list

http://cbs.fas.harvard.edu/science/core-facilities/neuroimaging/information-investigators/MRphysicsfaq

https://emmarobinson01.com/2016/10/07/forget-weak-statistics-fmri-studies-suffer-from-oversimplified-assumptions-made-during-pre-processing/


specific talks from mumfordbrainstats, OHBM conference and other video series


Table of contents


Metalist

There are tons of on-line resources for neuroimaging data analysis so the following list is not meant to be exhaustive. There are also similar lists here and there that might partly overlap with this one, so here is a list of lists.

Neuroimaging Informatics Tools and Resources Clearinghouse

The most obvious place where everything is centralized is the Neuroimaging Informatics Tools and Resources Clearinghouse. Many tools, atlases and courses are there, but if your favorite isn't, make sure to add it.

Lab guides and lab wikis

If your lab does not have a lab guide/wiki, it is well worth the time to make one. Lab wikis can save a lot of time for newcomers to get set up and started (rather than reinvent the wheel or take time from other lab members), while lab guides will also help PIs, PhD students and post-docs know what to expect from each other and promote a healthier lab culture. Mariam Aly explains that well here.

If your lab does not have a guide and/or wiki, here is a list you can use to create your own. But I suggest that you go beyond a copy-paste, as the ones you find in there might not be tailored to your lab's needs.

And here are some neuroimaging oriented lab wikis:

Others

Online courses

Math and linear algebra courses

Khan Academy is a great free resource for all sorts of topics.

  • Their series on linear algebra is particularly useful and relevant to our needs.
  • The Fourier series and the statistics videos may also prove useful (h/t [Sam Jones] ( ??? )).

If you feel that your background in mathematics and signal processing is a bit weak, please have a look at these slides. This file was put together by Joana Leitao and covers several topics that are important to be familiar with in neuroimaging:

  • basic linear algebra
  • the ordinary least squares solution for the general linear model (see the sketch after this list)
  • the BOLD response and convolution: what a linear time invariant system is and why it matters when doing an fMRI study
  • how to do t-tests and ANOVAs within a general linear model
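
As a companion to those topics, a minimal numpy sketch of the ordinary least squares GLM solution and a t-test on a contrast (simulated data, not tied to any particular toolbox):

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 100

# Design matrix X: one condition regressor plus an intercept.
condition = rng.binomial(1, 0.5, n_scans).astype(float)
X = np.column_stack([condition, np.ones(n_scans)])

# Simulated voxel time course: true condition effect of 2.0 plus noise.
y = X @ np.array([2.0, 10.0]) + rng.normal(0, 1, n_scans)

# Ordinary least squares solution of the GLM: beta = (X'X)^-1 X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y

# t-test on the condition effect via the contrast vector c.
c = np.array([1.0, 0.0])
residuals = y - X @ beta
df = n_scans - np.linalg.matrix_rank(X)
sigma2 = residuals @ residuals / df
t = (c @ beta) / np.sqrt(sigma2 * (c @ XtX_inv @ c))
print(beta, t)
```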

MRI courses ( ??? )

If you need to dust off your knowledge about MRI.

There are also blog post series on practicalfmri (and on its companion winnower account) that cover these topics, for example:

http://technicalfmri.blogspot.com/2018/02/physiological-monitoring-and-recording.html?m=1

fMRI courses ( ??? )

There are quite a few courses for fMRI analysis out there that I am aware of.

Machine learning ( ??? )

If you are going to do some multivariate analysis, it is likely you will need to know a fair bit of machine learning. I did find that this class on coursera covered a lot of ground. It is not specific to neuroimaging but gives you a good overview of the basic concepts you need to understand.

Worst case, it will let you understand why John von Neumann said:

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.

Resting state courses ( ??? )

There is one on the rMRI website.

Neurohackademy

Neurohackademy is more than a neuroimaging course: it is broader in scope as it covers reproducibility and open science issues in neuroimaging. It is also very practical and definitely python oriented. To know more, see this post by Tal Yarkoni about the 2018 edition of Neurohackademy.

Software specific ( ??? )

On top of the in-person courses, most of the main analysis packages usually have a video series that works as a course and/or tutorial.

SPM

Freesurfer

FSL

AFNI

Nipype

Nipype is best viewed as a way to create and run software-agnostic preprocessing/analysis pipelines. It becomes very powerful when you need to use different software packages in your analysis.
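
To give a flavor of it, a minimal sketch of a nipype workflow chaining two FSL steps (assuming nipype and FSL are installed; the input file name is a placeholder):

```python
from nipype import Node, Workflow
from nipype.interfaces import fsl

# Two processing steps wrapped as nodes: skull stripping, then smoothing.
skullstrip = Node(fsl.BET(in_file='/data/sub-01_T1w.nii.gz'), name='skullstrip')
smooth = Node(fsl.IsotropicSmooth(fwhm=6), name='smooth')

# The workflow wires the output of one node to the input of the next,
# independently of which software package implements each step.
wf = Workflow(name='minimal_preproc', base_dir='/tmp')
wf.connect(skullstrip, 'out_file', smooth, 'in_file')
wf.run()
```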

Tim Van Mourik and a few other people have developed tools to facilitate building pipelines with nipype:

  • Porcupine stands for "PORcupine Creates Ur PipelINE" which is probably the worst recursive acronym with bad capitalisation and annoying use of slang. This software allows researchers to build pipelines using a GUI and generates the code that is needed to run the pipeline created.
  • Giraffe is a web-based "Graphical Interface for Reproducible Analysis oF workFlow Experiments" that can take advantage of Porcupine to create pipelines.

Others ( ??? )

Statistics courses

Some of those are clearly not specific to neuroimaging but are well worth going through even if you are a PI.

  • If you have no idea what the distribution of p-values would look like if there were only noise in your data (see the simulation sketch below), then the odds are you will learn at least one thing in Daniel Lakens' course on how to improve your statistical inferences. Most likely you will learn more than one thing.

Daniel also has a blog that is a very useful source of stats-related knowledge. Similarly, Guillaume Rousselet has a series of posts on his blog where you can learn more about robust statistics and how to improve your data visualizations.
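
If you want to see for yourself what the p-value distribution looks like when there is only noise, a quick simulation does the trick (made-up data, plain scipy):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# 10000 experiments where the null is true: two groups drawn from the
# same distribution, compared with a two-sample t-test.
p_values = np.array([
    stats.ttest_ind(rng.normal(size=20), rng.normal(size=20)).pvalue
    for _ in range(10_000)
])

# Under the null, p-values are uniformly distributed: about 5% of them
# fall below 0.05, 10% below 0.10, and so on.
print(np.mean(p_values < 0.05))  # ~0.05
```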

Open-science and reproducibility

There is a MOOC on open science that is still under construction but, on top of an insane list of resources, already has its module 5 up and running to teach you how to use github and zenodo to create a time-stamped snapshot of your code to link to in your papers.

Video series

If you run out of things to binge on Netflix, Youtube has some useful channels if you want to learn more about fMRI data analysis. I also list here other repositories of MRI-related videos.

Mumford brainstats

Jeanette Mumford has a fantastic series of videos on neuroimaging analysis on youtube. The channel also has a Facebook group (as well as a tumblr and a twitter account) if you have questions.

Andrew Jahn

Here are the videos, but he also has a blog (see the old version here). He has some very good follow-along 'tutorials' for FSL, Freesurfer and AFNI, amongst other things.

Center for Brains, Minds and Machines

Here

Organization for Human Brain Mapping (OHBM)

The videos of the lectures and workshops from the previous HBM conferences are available online here.

fMRIf summer courses from the NIH

Here

Conference on Cognitive Computational Neuroscience (CCN)

This new conference has the videos from its first edition here.

Blogs

There are many excellent blogs run by neuroscientists where you can find interesting and more or less technical information on neuroimaging analysis. I list a few below but you can find a subsample of my neuroscience blogroll in the file blogroll_a_sample.opml that you can import into your favorite news reader (e.g feedly).

Where to ask for help

If you have a question linked to a specific software package, check the documentation/FAQ/manual/wiki/tutorial for that software first. Then you can turn to the mailing list for that software: but always start by looking through the archives of those mailing lists before sending a question that has already been answered.

But if you have more general questions you can also try:

  • the neurostars forum
  • social media: there are some specialised Facebook groups and good hashtags on twitter that will succeed when your google-fu fails you.
  • the slack channel of brainhack

UNIX command line

Even if you have only used Windows in your life, the odds are that you will at some point have to use a UNIX command line (like the one you can find on a linux computer or a Mac) to do some of your MRI analysis. Best case scenario, you might only need it to explore some folder structure on a server; worst case, you might have to write some scripts to automate some tasks. Either way, having some basic ideas about how to interact with a UNIX shell is a good idea.

Matlab and SPM specific resources

Matlab ( ??? )

  • tutorials: I learnt matlab with a book, by reading others' scripts, and with a lot of coffee, patience, sweat, tears and trial and error. I am sure there are better ways to do it than that, but I don't really know what the best tutorials are these days.

SPM

The python ecosystem ( ??? )

Matlab must still be the most used "language" in neuroimaging (citation needed), but there is a huge neuroscience-oriented python ecosystem out there taking advantage of the scientific python community. On top of the financial aspect (those matlab licenses can be quite expensive), there are many good reasons why you might want to switch, if only because matlab breeds bad coding habits.

Here too there are plenty of generic python courses on datacamp, code academy or kaggle. You can also check things that are more scientific python oriented like the scipy lectures or Jake Vanderplas's jupyter notebooks Whirlwind Tour Of Python and Python Data Science Handbook.

There are also a handbook and a course that might ease the transition from matlab to python.

If you turn to neuroimaging in python, I guess you will first want to check the nipy website and then turn to nibabel, nipype, nilearn, pyMVPA, …

Web apps ( ??? )

R based apps

Even if they are not specific to neuroimaging, many of the R-based web apps from shiny apps and R psychologist can be very useful to help you better understand:

Visualization

  • the bioimagesuite seems like a convenient way to visualize and do some processing of your images on the fly via a web browser. (h/t Renzo)

Anatomy atlases ( ??? )

Some of those might help you learn or revise your neuroanatomy:

BEFORE YOU START: Reproducibility ( ??? )

There are a few options you can investigate to make your analysis more replicable and reproducible. On top of [sharing your data and your code](#Sharing-your-code, data-and-your-results), you can use containers like docker or singularity that allow you to run your analysis in a contained environment that has an operating system, the software you need and all its dependencies.

In practice this means that by using this container:

  • other researchers can reproduce your analysis now on their computer (e.g you can run a linux container with freesurfer on your windows computer),
  • you can reproduce your own analysis in 5 years from now without facing the problem of knowing which version of the software you used.

Neurodocker allows you to easily create a docker container suited to your needs in terms of neuroimaging analysis. There is a nice tutorial here on how to use it.

Code-ocean is a web-based service that relies on docker containers to let you run your analysis online. There is a post by Stephan Heunis describing how he did that with an SPM pipeline.

Another thing you can implement is using notebooks like jupyter, jupyter lab or binder ( ??? ). Here is a fascinating talk by Fernando Perez, one of the people behind the jupyter project.

BEFORE YOU START: Ethics and consent forms

The open brain consent form tries to facilitate neuroimaging data sharing by providing an "out of the box" solution addressing human subjects concerns and consisting of:

  • a widely acceptable consent form allowing deposition of anonymized data to public data archives
  • a collection of tools/pipelines to help with the anonymization of neuroimaging data, making it ready for sharing

BEFORE YOU START: Code and data management ( ??? )

In general I suggest you have a look at some of the courses and material offered by the Carpentries for data and code.

Code management

Version control

For managing your code, if you don't already, I suggest you make version control with GIT part of your everyday workflow. GIT might seem scary and confusing at first, but it is well worth the effort: the good news is that there are plenty of tutorials available (for example: here, there or there). Another advantage of using GIT is that it allows you to collaborate on many projects via github, which already makes a lot of sense even simply at the scale of a lab.

Even though GIT is most powerful when used from the command line, there are also many graphic interfaces that might just be enough for what you need. Plus, a graphic interface can help you get started before you move on to using the command line only. There is no shame in using a GUI: just don't tell the GIT purists this is what you do, otherwise you will never hear the end of it.

Coding style

Another good coding practice to have is a consistent coding style. For python you have the PEP8 standard and some tools like pylint, pycodestyle, or pep8online that help you make sure that your code complies with this standard.

You can also have a look at the code style used by google for many languages (h/t Kelly Garner). You will notice that matlab is not in the list so you might want to check this here.

Avoid selective debugging: unit tests, positive and negative control

Having a bug is annoying. Having your code run but give you an obviously wrong answer is more annoying. Having your code run and give you a plausible but wrong answer is scary (and potentially expensive when it crashes a spaceship onto a planet). Having your code run and give you the answer you want but not the true answer is the worst and keeps me up at night.

Selective debugging happens when we don't check the code that gives us the answer we want but we do check it when it gives us an answer that goes against our expectation. In a way it is a quite insidious form of p-hacking.

There are some recent examples in neuroimaging.

Some things that can be done about it:

  • organize code reviews in your lab: basically make sure that the code has been checked by another person. Pairing a beginner with a more senior member of the lab can also be a way to improve learning and skill transfer in the lab.
  • test your code. These tests can be run automatically for your project by continuous integration services like Travis.
  • test your pipeline with positive and negative controls (see the sketch after this list). A negative control is testing your analysis by running it on random noise or on data that should have no signal in it. The latter was the approach used by Anders Eklund and Tom Nichols in their cluster failure paper series. A positive control is making sure that your analysis can detect VERY obvious things it should detect (e.g motor cortex activation following button presses, classifying responses to auditory versus visual stimuli in V1, …). Jo Etzel has a post about this.
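
A minimal sketch of a negative control on made-up data: feed your pipeline pure noise many times and check that it cries wolf at roughly the nominal rate (here the "pipeline" is just a one-sample t-test standing in for your own analysis function):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def my_analysis(data, alpha=0.05):
    """Stand-in for your real pipeline: here, a one-sample t-test."""
    return stats.ttest_1samp(data, 0).pvalue < alpha

# Negative control: on pure noise the pipeline should "detect" an
# effect in roughly alpha = 5% of runs, no more.
false_positive_rate = np.mean([my_analysis(rng.normal(size=30))
                               for _ in range(2_000)])
assert 0.03 < false_positive_rate < 0.07, f"FPR looks off: {false_positive_rate}"
```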

Data: BIDS, Datalad and YODA

BIDS

If you are going to do some fMRI analysis, you will quickly drown in data if you are not a bit organized, so I highly recommend you use the brain imaging data structure standard (BIDS) to organize your data. The current version of BIDS only covers raw data, but it should soon cover derivatives (e.g preprocessed data) too. In general, BIDS also allows you to more easily share your data and use plenty of analytical tools.

If you would like to use BIDS but have no idea what a JSON file is, or if the length of the specification document scares you, head over to the BIDS starter kit to find tutorials and scripts to help you rearrange your data.
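
Another payoff of BIDS: tools like pybids can then query your dataset programmatically. A minimal sketch (the dataset path and entities are placeholders):

```python
from bids import BIDSLayout  # pip install pybids

# Point pybids at the root of a BIDS-organized dataset.
layout = BIDSLayout('/data/my_bids_dataset')

# List subjects and grab all functional runs for one of them.
print(layout.get_subjects())
bold_files = layout.get(subject='01', suffix='bold',
                        extension='.nii.gz', return_type='filename')
print(bold_files)
```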

Datalad

Datalad is to data what git is to code. It allows curation and version controlling of data, but also lets you crawl databases to explore and download data from them, and it facilitates data sharing. Several of these features are described here with scripts that act as tutorials. There are video presentations of it there.
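
Datalad also exposes a Python API; a minimal sketch of cloning a dataset and fetching some content on demand (the URL and paths are placeholders):

```python
import datalad.api as dl

# Clone a dataset: this fetches the file tree and metadata,
# but not (yet) the large file contents themselves.
ds = dl.clone(source='https://github.com/OpenNeuroDatasets/ds000001',
              path='ds000001')

# Fetch the actual content of only the files you need.
dl.get('ds000001/sub-01', dataset='ds000001')
```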

YODA

Having a standard way to organize not only your data but also your code, the results, the documentation... from the beginning of a project can go a long way to save you a lot of time down the line (when communicating within or outside your lab, or when you have to wrap things up when moving to a new project/job). The YODA template is a folder structure recommended by ReproNim that you can use.

Other good habits:

  • a simple, transparent and systematic file naming scheme is a good start
  • if you have to deal with data in spreadsheets, I think you will enjoy this paper and this cookbook

Documentation ( ??? )

As the saying goes:

Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.

Proper documentation of a project and good commenting of your code will help others to use it or pick it up later. But there are good selfish reasons to document your project and comment your code: it will most likely help future you when you have to respond to reviewers or when you want to check something in that data set or in that function you used 6 months ago.

  • Most likely, you will have to re-run your analysis more than once.
  • In the future, you or a collaborator may have to re-visit part of the project.
  • Your most likely collaborator is your future self, and your past self doesn’t answer emails.

See here for more.

There are plenty of recommendations out there about writing documentation. I did find this one useful, as well as this list and this checklist that are more specific to README files.

In terms of code, I guess the ideal is self-documenting code. Read the docs is a good option that can also be made part of a continuous integration workflow. Python also has this thing called Sphinx that helps create intelligent and beautiful documentation (that alone should make matlab users envious).
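
As a small illustration of what self-documenting code can look like, a sketch of a function with a numpy-style docstring (which Sphinx can render via its napoleon extension):

```python
import numpy as np

def temporal_snr(timeseries):
    """Compute the temporal signal-to-noise ratio of a voxel time course.

    Parameters
    ----------
    timeseries : numpy.ndarray
        1D array of signal values over time.

    Returns
    -------
    float
        Mean of the time course divided by its standard deviation.
    """
    return np.mean(timeseries) / np.std(timeseries)
```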

PLANNING YOUR STUDY

Reusing data

Some of the main databases are:

But there are many databases where you can find raw and/or pre-processed data. Maybe your university or your institute already has a repository of published data (e.g the Donders institute).

The recent Google Dataset Search can also be useful to locate datasets that might be of interest.

There are some tools that help you search through them like the metasearch tool on the Open Neuroimaging Laboratory but this is also where Datalad can become useful to browse or crawl those databases.

Defining your terms and your task

Ontologies

Inigo Montoya: You keep using that word. I don't think it means what you think it means. Ayotnom Ogini: Funny you should say that! I was about to tell you the same thing.

The use of alternate and even competitive terminologies can often impede scientific discoveries.

Piloting ( ??? )

Good piloting is very important, but piloting is not meant to be about finding a hypothesis you want to test: because of the small sample size of pilot studies, anything interesting you see there is very likely to be a fluke. Piloting is more about checking the overall feasibility of the experiment and that you can get high [quality data](#ONCE YOU-HAVE-DATA:-quality-control), judged by criteria that are unrelated to your hypothesis.

Sam Schwarzkopf has a few interesting posts on the topic here and there.

Piloting is usually a phase where it would be good to check with your local MRI physicist and statistician. And you also might already have to make choices about pre-processing and data analysis.

Pre-registration

If your work is not purely exploratory, you might want to consider pre-registering your study. It is a good way to decide in advance how you are going to collect and analyze your data. It helps make it clear to yourself and to others what part of your study was predicted (i.e confirmatory) and which part wasn't (i.e exploratory). This way, pre-registration is a good way to restrict the number of researcher degrees of freedom and limit the possibility of engaging (most often unknowingly) in questionable research practices like procedural overfitting (also known as p-hacking) or HARKing (Hypothesising After the Results are Known). You can also opt for registered reports, where you submit your methods to a journal and get reviews on the protocol before the data collection and analysis is conducted. At the moment there are more than 140 journals that accept registered reports.

For examples of studies that were pre-registered you can search in the zotero libraries curated by the open science framework.

Pre-registering neuroimaging studies can be quite challenging and comes with a whole set of constraints that might be absent in other fields. Jessica Flannery has created a template for pre-registering fMRI studies that you might find useful.

Optimizing your design

Before you run your study there are a few things you can do to optimize your design. Two of them are doing a power analysis and optimizing the efficiency of your fMRI design.

Design efficiency ( ??? )

If you need a reminder about what design efficiency is, check this ( ??? ). When you want to optimize it, you have a few options:
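
Whichever option you pick, the quantity being optimized is usually the efficiency e = 1 / trace(c (X'X)^-1 c') for a contrast c given a design matrix X. A toy numpy sketch (made-up design, and skipping the HRF convolution you would apply for a real fMRI design):

```python
import numpy as np

rng = np.random.default_rng(1)

def efficiency(X, c):
    """Efficiency of contrast c given design matrix X: 1 / trace(c (X'X)^-1 c')."""
    c = np.atleast_2d(c)
    return 1.0 / np.trace(c @ np.linalg.inv(X.T @ X) @ c.T)

# Toy design: two condition regressors plus an intercept.
n = 200
cond_a = rng.binomial(1, 0.5, n).astype(float)
cond_b = rng.binomial(1, 0.5, n).astype(float)
X = np.column_stack([cond_a, cond_b, np.ones(n)])

# Efficiency of the A - B contrast (higher is better).
print(efficiency(X, [1, -1, 0]))
```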

Power

In order to investigate whether an effect exists, one should design an experiment that has a reasonable chance of detecting it. I take this insight as common sense. In statistical language, an experiment should have sufficient statistical power. Yet the null [hypothesis significant testing] ritual knows no statistical power.

Gerd Gigerenzer in Statistical Rituals: The Replication Delusion and How We Got There, DOI: 10.1177/2515245918771329

There is good evidence that the average statistical power has remained low for several decades in psychology, which increases the false negative rate and reduces the positive predictive value of findings (i.e the chance that a significant finding is actually true). Maybe neuroimaging could learn from that mistake, especially since a large majority of neuroimaging studies seem to have even lower statistical power.

fMRI power is a matlab based toolbox to help you run your power analysis.

The website neuropowertools actually offers options to run both your design efficiency optimization and your power analysis. They also have their respective python packages.
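
Not neuroimaging-specific, but to get a feel for the numbers involved, statsmodels can solve standard power equations; a sketch with a made-up effect size:

```python
import math
from statsmodels.stats.power import TTestPower

# Sample size needed for a one-sample (or paired) t-test to detect an
# effect of d = 0.5 with 80% power at alpha = .05 (two-sided).
n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(math.ceil(n))  # ~34 subjects
```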

For MVPA: same analysis approach

If you intend to run an MVPA / classification analysis on your data, there are a few things you can do BEFORE you start collecting data to optimize your design. There is no app/toolbox for that, so I am afraid you will have to read the paper.

Defining your region of interest ( ??? )

If you don't want to run a whole brain analysis, then you will most likely need to define your regions of interest (ROI). This must be done using data that is independent from the data you will use in the end, otherwise you will have a [circularity] ( ??? ) problem (also known as double dipping or [voodoo correlation] ( ??? )).

  • around a coordinate identified in a previous study or in a [meta-analysis](#meta-analysis-( ??? )), or by using Neurosynth (see the sketch after this list)
  • using a localizer
  • or relying on a functional or anatomical atlas.
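
For the first option, a minimal nilearn sketch of building a spherical ROI around a published coordinate and extracting its mean time course (the coordinate and file name are placeholders):

```python
from nilearn.input_data import NiftiSpheresMasker

# 8 mm radius sphere around an MNI coordinate taken from a previous
# study or meta-analysis (placeholder coordinate).
masker = NiftiSpheresMasker(seeds=[(-42, -58, 52)], radius=8)

# Extract the mean time course of that sphere from a functional run.
timeseries = masker.fit_transform('/data/sub-01_task-rest_bold.nii.gz')
```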

Using previous results ( ??? )

Neurosynth can help you run a meta-analysis to create a mask to define your ROI. See for example this if you wanted an ROI for brain regions matching the search term auditory, and see here for a tutorial.

Localizers ( ??? )

A typical example of localizers are retinotopic mappings. Sam Schwarzkopf has a good tutorial for those.

Atlases

There are many atlases you could use to create ROIs. Some ship automatically with some software packages, otherwise you can find lists on the ( ??? )

Some other retinotopic maps are apparently not listed in the above, so here they are:

The problem then becomes which atlas to choose. To help you with this, the Online Brain Atlas Reconciliation Tool can show the overlap that exists between some of those atlases. The links I had to the website (here and there) are broken at the moment, so at least here is a link to the paper.

Some toolboxes out there also allow you to create your own ROI and rely on anatomical / cytoarchitectonic atlases:

Non-standard templates ( ??? )

In case you want to normalize brains of children it might be better to use a pediatric template. Some of them are listed here.

ONCE YOU HAVE DATA: quality control

ONCE YOU HAVE DATA: preprocessing

Pipelines ( ??? )

There are some ready-made pipelines available as BIDS apps that have already been tested. Using them might save you time and make your results more reproducible.

There is also OPPNI: Optimization of Preprocessing Pipelines for NeuroImaging.

Artefact/Noise removal ( ??? )

PCA ( ??? )

ICA ( ??? )

ART ( ??? )

ART repair ( ??? )

Physiological noise ( ??? )

ANALYSIS: general linear model

  • a FAQ article on the GLM by Cyril Pernet with matlab code to go through
  • see the section on percent signal change to better understand how to report results
  • orthogonalization of regressors can be a bit hard to wrap your head around at first, but Jeanette Mumford ( ??? ) has a great paper on the topic with a jupyter notebook (see the sketch after this list)
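
At its core, orthogonalization is just residualizing one regressor against another; a bare numpy sketch with made-up regressors (see Mumford's paper for why this rarely does what people hope):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 0.6 * x1 + rng.normal(size=100)  # correlated with x1

# Orthogonalize x2 with respect to x1: keep only the part of x2
# that x1 cannot explain (the residuals of regressing x2 on x1).
x2_orth = x2 - x1 * (x1 @ x2) / (x1 @ x1)

print(np.corrcoef(x1, x2)[0, 1])       # substantial correlation
print(np.corrcoef(x1, x2_orth)[0, 1])  # ~0
```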

ANALYSIS: Resting state ( ??? )

I know almost nothing about resting state but I have been told this site is worth having a look at.

ANALYSIS: Model selection ( ??? )

Analytical flexibility is a big problem in neuroimaging and most likely the source of a lot of false positive results.

If several analyses are attempted, it can be good to have ways to decide amongst them. There are bad ways to do it, like the one described in the overfitting toolbox.

But there are better ways to do it:

ANALYSIS: Statistical inferences and multiple comparison correction (MCP) ( ??? )

Cluster based inference ( ??? )

Family wise error (FWE) ( ??? )

In case you do not remember how random field theory works to correct for multiple comparison, check this.

False discovery rate (FDR) ( ??? )

Permutation tests ( ??? )

A talk by Carsten Allefeld on permutation test at OHBM 2018: https://www.pathlms.com/ohbm/courses/8246/sections/12542/video_presentations/116074
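
The logic of a one-sample permutation test with sign flipping, as commonly used for group-level contrasts, fits in a few lines; a toy numpy sketch on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
effects = rng.normal(loc=0.3, scale=1.0, size=20)  # one contrast value per subject

# Under the null (no effect, distribution symmetric around 0), the sign
# of each subject's effect is arbitrary: flip signs at random many times
# and compare the observed mean to that null distribution.
observed = effects.mean()
null = [(effects * rng.choice([-1, 1], size=effects.size)).mean()
        for _ in range(10_000)]
p = np.mean(np.abs(null) >= abs(observed))
print(p)
```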

The prevalence test

SnPM ( ??? )

FSL PALM and Randomise ( ??? )

Freesurfer PALM ( ??? )

ANALYSIS: Multivariate analysis ( ??? )

A talk by Pradeep Reedy Raamana at OHBM 2018 on cross-validation: https://www.pathlms.com/ohbm/courses/8246/sections/12542/video_presentations/116075

Neuroimaging toolboxes for representational similarity analysis (RSA), support vector machines (SVM), population receptive fields (pRF), encoding models and others...
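
Whatever the toolbox, the core of a decoding analysis is a cross-validated classifier; a scikit-learn sketch on made-up data (dimensions and run structure are arbitrary):

```python
import numpy as np
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Fake data: 80 trial patterns over 500 voxels, 2 conditions, 8 runs.
X = rng.normal(size=(80, 500))
y = np.tile([0, 1], 40)
runs = np.repeat(np.arange(8), 10)

# Leave-one-run-out cross-validation: train on 7 runs, test on the held
# out one, so the accuracy estimate is independent of the training data.
scores = cross_val_score(SVC(kernel='linear'), X, y,
                         cv=LeaveOneGroupOut(), groups=runs)
print(scores.mean())  # ~0.5 here, since the data are pure noise
```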

Matlab based

TDT

TDT is The Decoding Toolbox.

PRoNTo

PRoNTo is the Pattern Recognition for Neuroimaging Toolbox developed at UCL (UK).

RSA toolbox

PCM toolbox

The pattern components modelling toolbox of the Diedrichsen lab

cvMANOVA

From Carsten Allefeld

SAMSRF

A pRF analysis toolbox called the Seriously Annoying Matlab SuRFer from Sam Schwarzkopf.

Python based

pyMVPA

Intended to ease statistical learning analyses of large datasets.

nilearn

Nilearn is a Python module for fast and easy statistical learning on NeuroImaging data.

Popeye

For pRF analysis.

R based ( ??? )

ANALYSIS: Robustness checks

Non neuroimaging cases

ANALYSIS: Computational neuroscience

This paper, which comes with some material to apply bayesian decoding analysis to neuronal data, can be of interest.

Free energy

As someone said on twitter there is a cottage industry of blog posts trying to understand/explain this:

And a tutorial

Dynamic causal modelling

ANALYSIS: Laminar and high-resolution MRI

Renzo Huber is keeping track of the most recent developments in laminar MRI via twitter but also on his blog. He also curates laminar-fMRI related talks on his Youtube channel and papers in this google spreadsheet.

  • This blog post has a list of most of the software packages related to laminar fMRI.
  • A more recent tool not listed in there for creating equivolumetric surfaces.

ANALYSIS: Meta analysis ( ??? )

a talk by Tom Nichols at OHBM 2018 for an overview

a practical by Camille Maumet at OHBM 2018 on meta-analysis: [slides] ( ??? )

a talk on ALE and brainmap https://www.pathlms.com/ohbm/courses/8246/sections/12542/video_presentations/116066

NiMARE is a Python library for coordinate- and image-based meta-analysis. Chris Gorgolewski wrote a tutorial on how to use it.

For coordinate based meta-analysis:

For image based meta-analysis:

  • IBMA is the Image-Based Meta-Analysis toolbox for SPM.

REPORTING METHODS AND RESULTS (also useful for reviewing papers)

A checklist: COBIDAS report

The Organization for Human Brain Mapping (OHBM) created the Committee on Best Practices In Data Analysis and Sharing (COBIDAS), which published a report with a set of guidelines and an appended checklist on how to conduct and report fMRI studies. It is a very useful resource to make sure you are not forgetting anything when writing up your article. See also Jeanette Mumford's video about it.

Percent signal change ( ??? )

  • a FAQ article on the GLM by Cyril Pernet, with matlab code to go through, has some mentions of reporting PSC.
  • See also this FSL guide by Jeanette Mumford ( ??? ) for reporting results in PSC.
  • This post by Tom Nichols ( ??? ) can be helpful to understand what units SPM parameter estimates are reported in.
  • The MarsBAR SPM toolbox can also help you deal with PSC (a rough sketch of the basic idea follows this list).
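
A very rough sketch of the basic idea, with made-up numbers (the exact scaling depends on how your design matrix and regressor heights are set up, which is precisely what the links above deal with):

```python
# Hypothetical parameter estimates from a first-level GLM.
beta_condition = 1.2   # estimate for the condition of interest
beta_constant = 800.0  # estimate for the constant (baseline) regressor

# Percent signal change relative to baseline. NB: this assumes the
# condition regressor peaks at 1; otherwise multiply by its actual
# peak height first (the subtlety Mumford's guide deals with).
psc = 100.0 * beta_condition / beta_constant
print(psc)  # 0.15 % signal change
```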

Making figures ( ??? )

I keep hearing that the books by Edward R. Tufte are great: https://www.amazon.com/dp/0961392118/?tag=codihorr-20 https://www.amazon.com/dp/0961392126/?tag=codihorr-20

http://mkweb.bcgsc.ca/essentials.of.data.visualization/ https://www.jisc.ac.uk/full-guide/data-visualisation https://jimgrange.wordpress.com/2016/06/15/solution-to-barbarplots-in-r/ https://f1000research.com/articles/4-466/v1

  • Color-blind friendly color maps

  • Dual-coded statistical maps: code to display beta values and t values on the same map, from the "Data visualization in the neurosciences: overcoming the curse of dimensionality" paper.

Tools to check results/statistics ( ??? )

Those recent tools cannot be applied to statistical maps but they can be useful for any behavioural results. Many of them can be used on a paper you are about to publish to check for errors or on a paper you are reviewing / reading.

  • Statcheck, developed by ( ??? ), automatically checks for errors in statistical reporting, making sure that your p values match your t/F values and degrees of freedom.
  • The GRIM test checks for Granularity-Related Inconsistency of Means. Developed by Nick Brown and James Heathers, it makes sure that reported means are plausible given a measurement scale (like a Likert scale or a visual analog scale) and a sample size (see the sketch after this list). There are GRIMMER (http://www.prepubmed.org/grimmer/) and GRIMMEST extensions to standard deviations and F values.
  • SPRITE stands for Sample Parameter Reconstruction via Iterative TEchniques and allows you to generate the possible data distributions given a scale, a mean and a variability measure: web app, shiny app, code.
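
The GRIM idea itself fits in a few lines; a sketch assuming one integer-scored item per participant (the 0-10 scoring range is an arbitrary assumption):

```python
def grim_consistent(reported_mean, n, decimals=2):
    """Check whether a reported mean is possible for n integer responses.

    With n integer values, the mean must be an integer total divided by
    n; we test every such total against the reported (rounded) mean.
    """
    return any(round(total / n, decimals) == reported_mean
               for total in range(0, n * 10 + 1))  # items scored 0-10

print(grim_consistent(3.48, 25))  # True: 87 / 25 = 3.48
print(grim_consistent(3.47, 25))  # False: no integer total gives 3.47
```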

https://shinyapps.org/apps/p-checker/

  • p-curves
  • test of insufficient variance
  • z-curves

Peer review ( ??? )

YOU ARE NOT DONE YET: sharing your code, data and your results

There should be at least 4 boxes on your to do list once your study is completed.

  • sharing the code
  • sharing the data
  • sharing the statistical map
  • updating meta-analysis databases

If the first 3 points are done before an article submission, it can be useful for reviewers to check what you have done. But all of those points are also important for future researchers who would like to base new research on your results or to run a meta-analysis of similar studies.

Sharing code

You might be tempted not to share your code. If your code and/or your jupyter notebooks are in a github repository, you can make a snapshot of it to publish on zenodo, as explained here.

NeuroImaging Data Model (NIDM)

If you want to share your results, I suggest you export your final results using the NIDM format, which is supported natively by SPM12. There are also tools for exporting FSL results, and things are under development for AFNI. The NIDM format makes your results easily viewable by other software packages (check the INCF-NIDASH repo for more information). There are extensions in development for NIDM to cover not only non-parametric statistical maps, but also to export in a very compact way many of the details about your experiment and analysis.

Another good reason to use the NIDM format is that it facilitates uploading your results to a site like neurovault, where you can store them and share them with others.

Sharing your data

Some of the main databases where you can put your data are:

But there are many other possibilities to share your raw and/or pre-processed data. Maybe your university or your institute has ways to help you share your data (e.g the Donders institute).

FAIR data

https://www.pathlms.com/ohbm/courses/8246/sections/12542/video_presentations/115883

Meta-analysis databases

Another thing you can do to share your published results is to add them to meta-analytical databases like ANIMA, brainmap or neurosynth: for this you could use brainspell and Scribe.