# Required parameters for DANNCE
DANNCE requires that a core set of parameters be specified explicitly in the DANNCE base config (see `./configs/dannce_mouse_config.yaml` for an example).
### io_config
string. This is the path, relative to where you call `dannce-train` or `dannce-predict`, to the project-specific `io.yaml` file that defines paths to required files, such as output directories and labeled data.
💡 In most cases you will be launching `dannce-train` or `dannce-predict` from within a project directory that contains an `io.yaml` file, so simply leaving this as `io_config: io.yaml` will point DANNCE to the correct file every time.
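In the base config this is a single line (shown here assuming the default project layout described above):

```yaml
# Path to the project-specific I/O config, relative to the directory
# from which dannce-train / dannce-predict is launched
io_config: io.yaml
```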
### new_n_channels_out
int. This is the number of landmarks you are tracking in your dataset. For instance, if your skeleton contains 22 points, set `new_n_channels_out: 22`.
### batch_size
int. This is the number of timepoints included in a batch of samples during training and prediction. This value is constrained by the amount of GPU memory you can access.
💡 For prediction, increasing `batch_size` may increase prediction speed. For training, `batch_size` can be modified as a hyperparameter. The effect of batch size on training is an open area of research in deep learning, although there is agreement that the optimal batch size is problem- and data-specific. We typically use `batch_size: 4`.
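For example, the typical setting mentioned above would appear in the base config as:

```yaml
# Timepoints per batch; limited by available GPU memory.
# Larger values can speed up dannce-predict.
batch_size: 4
```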
### epochs
int. This is the number of times your full dataset will be looped over during training.
💡 You should need fewer epochs as you increase the number of images in your training dataset. With ~100 timepoints in the training set, 500-1000 epochs are probably sufficient. Inspect your training and validation loss in your TensorBoard logs. If using a validation set, continue training if the validation loss has not yet plateaued.
### train_mode
string. Possible values: `finetune`, `new`, or `continued`. This sets the training mode.
🔹 `finetune` is used to train with your labels starting from pretrained weights.
🔹 `new` initializes a new network from scratch and likely requires thousands of labeled frames.
🔹 `continued` is used if you want to train a model for more epochs after an initial training session. It restores the weights and optimizer state of the network so training picks up right where it left off.
💡 Typically you will just use `finetune`. Only use `new` if you have more than 10k labeled frames. Use `continued` to restart training if it is cut short, or if you would like to continue training for additional epochs.
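The typical choice described above would look like this in the base config (the key name `train_mode` is assumed here, since the text refers to it only as "the training mode"):

```yaml
# Training mode: finetune | new | continued
# finetune: start from pretrained weights (the usual choice)
train_mode: finetune
```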
### net_type
string. Possible values: `AVG` or `MAX`. This parameter toggles between two different versions of DANNCE.
💡 The `AVG` version can achieve better accuracy, but is sometimes harder to train. `net_type: AVG` must be paired with weights from a pretrained `AVG` model when finetuning, and `net_type: MAX` must be paired with weights from a pretrained `MAX` model when finetuning. DANNCE will use the weights in the folder specified in `dannce_finetune_weights`. We recommend starting with `AVG`.
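A sketch of the pairing rule above (the weights path is hypothetical; point it at wherever your pretrained weights actually live):

```yaml
# AVG and MAX weights are not interchangeable: the pretrained weights
# in dannce_finetune_weights must match net_type.
net_type: AVG
dannce_finetune_weights: ./weights/pretrained_AVG/  # hypothetical path
```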
### num_validation_per_exp
int. Number of samples (i.e. timepoints, per animal) used for assessing validation metrics during training.
💡 The more validation samples you use, the more accurately you can assess your models during training, allowing you to better determine at which epoch to stop training, or which of several alternative models will track better. However, the more validation samples you use, the fewer samples remain for the actual training. We recommend `num_validation_per_exp: 0` unless you are using over 100 samples per animal.
### vol_size
int. The length (in mm) of one side of the 3D cube whose center is the animal's 3D position.
💡 This should be big enough to fit the entire animal, with a little wiggle room to accommodate noise in the 3D COM.
### max_num_samples
int. During prediction, this sets the total number of timepoints evaluated.
### dannce_finetune_weights
string. If finetuning, this must be the path to the directory containing the pretrained weights file.
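Putting it all together, a minimal base config covering the parameters above might look like the following sketch. The key names for the parameters this page does not name explicitly (`epochs`, `train_mode`, `vol_size`, `max_num_samples`) are assumptions, and the numeric values and weights path are illustrative placeholders, not recommendations beyond those stated above:

```yaml
io_config: io.yaml          # project-specific I/O paths
new_n_channels_out: 22      # number of landmarks in your skeleton
batch_size: 4               # timepoints per batch (GPU-memory bound)
epochs: 750                 # ~500-1000 for ~100 labeled timepoints
train_mode: finetune        # finetune | new | continued
net_type: AVG               # AVG | MAX; must match the pretrained weights
num_validation_per_exp: 0   # validation samples per animal
vol_size: 240               # side length (mm) of the 3D volume; placeholder value
max_num_samples: 1000       # timepoints evaluated at prediction; placeholder value
dannce_finetune_weights: ./weights/pretrained_AVG/  # hypothetical path
```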