The dataset contains the NEURON simulations and the Python code for ANN fitting and evaluation.
The first folder contains the ANN benchmarking code and the necessary ANNs in .h5 file format.
All NEURON models were created with the same logic; the most detailed documentation is in the single compartmental model folder. Fitting and evaluation also follow the same logical principles, and the most detailed version is likewise in the single compartmental folder (fit_CNN_LSTM_latest.py).
Briefly, the NEURON models are used to create training and evaluation datasets. These files can be generated by running the calibrate.hoc files in the multicompartmental simulation folder. The resulting datasets are plain txt files containing timesteps, somatic membrane potential values, and input timings. In the multicompartmental case, this txt file is fed to fit_multicompartmental.py at the line: "with open('/content/drive/My Drive/V1_DNN/Proof_of_principal/L2_3/vmi3.txt') as f:".
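Assuming a whitespace-separated column layout of timestep, somatic membrane potential, and input timing (the exact order in the calibrate.hoc output may differ), the txt dataset could be parsed along these lines; the function name and column indices here are illustrative, not part of the distributed scripts:

```python
# Hypothetical parser for the NEURON-generated txt dataset.
# Column layout (timestep, somatic Vm, input timing) is an assumption;
# adjust the indices to match the actual calibrate.hoc output.
def load_neuron_dataset(lines):
    """Split whitespace-separated rows into time, voltage, and input columns."""
    times, vm, inputs = [], [], []
    for line in lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip empty or malformed rows
        times.append(float(parts[0]))
        vm.append(float(parts[1]))
        inputs.append(float(parts[2]))
    return times, vm, inputs

# Usage with the file referenced in fit_multicompartmental.py:
# with open('/content/drive/My Drive/V1_DNN/Proof_of_principal/L2_3/vmi3.txt') as f:
#     t, v, s = load_neuron_dataset(f)
```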
All multicompartmental models were constrained by the same script, which can be found in the multicompartmental folder (fit_multicompartmental.py).
As running these codes is computationally resource intensive, a few options are included to help. First, the NEURON dataset creation can be parallelized, following the example in L5 PC\init.hoc. Second, as ANN fitting is memory intensive, fit_multicompartmental.py includes an option to load the dataset through a generator instead of reading it into memory all at once.
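As a minimal sketch of the generator idea (the batch size and row parsing below are illustrative assumptions, not the defaults used in fit_multicompartmental.py), a Python generator can yield the dataset batch by batch so that only one batch is held in memory at a time:

```python
# Illustrative memory-saving loader: yields fixed-size batches of parsed rows
# instead of materializing the whole dataset. Batch size and float parsing
# are assumptions; adapt them to the actual fitting script.
def batch_generator(lines, batch_size=64):
    """Yield lists of parsed rows, each list at most batch_size long."""
    batch = []
    for line in lines:
        parts = line.split()
        if not parts:
            continue  # skip blank lines
        batch.append([float(x) for x in parts])
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch

# Usage: stream the txt file rather than reading it in one call
# with open('vmi3.txt') as f:
#     for batch in batch_generator(f, batch_size=128):
#         ...  # feed the batch to the ANN fitting step
```

Keras-style training loops can consume such a generator directly, which is what makes this option useful when the full dataset does not fit in memory.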