DOCSTRINGS

DOCSTRINGS of major functions.

create_new_project:

Signature: deeplabcut.create_new_project(project, experimenter, videos, working_directory=None, copy_videos=False, videotype='.avi')
Docstring:
Creates a new project directory, sub-directories, and a basic configuration file. The configuration file is loaded with default values; change its parameters to your project's needs.

Parameters
----------
project : string
    String containing the name of the project.

experimenter : string
    String containing the name of the experimenter.

videos : list
    A list of strings containing the full paths of the videos to include in the project.
    Attention: This can also be a directory, in which case all videos of the given videotype will be imported.

working_directory : string, optional
    The directory where the project will be created. The default is the ``current working directory``; if provided, it must be a string.

copy_videos : bool, optional
    If this is set to True, the videos are copied to the ``videos`` directory. If it is False, symlinks of the videos are created in the project/videos directory. The default is ``False``; if provided it must be either
    ``True`` or ``False``.

Example
--------
Linux/MacOS:
>>> deeplabcut.create_new_project('reaching-task','Linus',['/data/videos/mouse1.avi','/data/videos/mouse2.avi','/data/videos/mouse3.avi'],'/analysis/project/')
>>> deeplabcut.create_new_project('reaching-task','Linus',['/data/videos'],videotype='.mp4')

Windows:
>>> deeplabcut.create_new_project('reaching-task','Bill',[r'C:\yourusername\rig-95\Videos\reachingvideo1.avi'], copy_videos=True)
Note: Windows paths must be formatted either as raw strings (r'C:\...') or with double backslashes ('C:\\...').
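
A minimal sketch of reusing the returned configuration path in later calls (this assumes that create_new_project returns the full path of the new config.yaml):
>>> config_path = deeplabcut.create_new_project('reaching-task','Linus',['/data/videos/mouse1.avi'],working_directory='/analysis/project/')
>>> deeplabcut.extract_frames(config_path,'automatic','kmeans')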

extract_frames:

Signature: deeplabcut.extract_frames(config, mode='automatic', algo='kmeans', crop=False, userfeedback=True, cluster_step=1, cluster_resizewidth=30, cluster_color=False, opencv=True, slider_width=25)
Docstring:
Extracts frames from the videos in the config.yaml file. Only the videos in the config.yaml will be used to select the frames.

Use the function ``add_new_video`` at any stage of the project to add new videos to the config file and extract their frames.

The provided function either selects frames from the videos in a randomly and temporally uniformly distributed way (uniform), by clustering based on visual appearance (k-means), or by manual selection.

Three important parameters for automatic extraction (numframes2pick, start, and stop) are set in the config file.

Please refer to the user guide for more details on methods and parameters https://www.biorxiv.org/content/biorxiv/early/2018/11/24/476531.full.pdf

Parameters
----------
config : string
    Full path of the config.yaml file as a string.
    
mode : string
    String containing the mode of extraction. It must be either ``automatic`` or ``manual``.
    
algo : string 
    String specifying the algorithm to use for selecting the frames. Currently, deeplabcut supports either ``kmeans`` or ``uniform`` based selection. This flag is
    only required for ``automatic`` mode and the default is ``kmeans``. For uniform, frames are picked in a temporally uniform way; kmeans performs clustering on downsampled frames (see user guide for details).
    Note: color information is discarded for kmeans, thus e.g. for clustering frames of a camouflaged octopus one might want to change this.
    
crop : bool, optional
    If this is set to True, a user interface pops up with a frame to select the cropping parameters. Use the left click to draw a cropping area and hit the button set cropping parameters to save the cropping parameters for a video.
    The default is ``False``; if provided it must be either ``True`` or ``False``.
        
userfeedback: bool, optional
    If this is set to false during automatic mode then frames for all videos are extracted. The user can set this to true, which will result in a dialog,
    where the user is asked for each video if (additional/any) frames from this video should be extracted. Use this, e.g. if you have already labeled
    some folders and want to extract data for new videos. 

cluster_resizewidth: number, default: 30
    For k-means one can change the width to which the images are downsampled (aspect ratio is fixed).

cluster_step: number, default: 1
    By default each frame is used for clustering, but for long videos one could only use every nth frame (set by: cluster_step). This saves memory before clustering can start, however, 
    reading the individual frames takes longer due to the skipping.

cluster_color: bool, default: False
    If false then each downsampled image is treated as a grayscale vector (discarding color information). If true, then the color channels are considered. This increases 
    the computational complexity. 

opencv: bool, default: True
    Uses OpenCV for loading & extracting (otherwise moviepy (legacy))
    
slider_width: number, default: 25
    Width of the video frames slider, in percent of window
    
Examples
--------
for selecting frames automatically with 'kmeans' and cropping the frames.
>>> deeplabcut.extract_frames('/analysis/project/reaching-task/config.yaml','automatic','kmeans',True)
--------
for selecting frames automatically with 'kmeans' and considering the color information.
>>> deeplabcut.extract_frames('/analysis/project/reaching-task/config.yaml','automatic','kmeans',cluster_color=True)
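--------
for selecting frames automatically with 'kmeans' on a long video, using only every 5th frame for clustering (an illustrative example with the ``cluster_step`` parameter described above).
>>> deeplabcut.extract_frames('/analysis/project/reaching-task/config.yaml','automatic','kmeans',cluster_step=5)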
--------
for selecting frames automatically with 'uniform' and cropping the frames.
>>> deeplabcut.extract_frames('/analysis/project/reaching-task/config.yaml','automatic',crop=True)
--------
for selecting frames manually,
>>> deeplabcut.extract_frames('/analysis/project/reaching-task/config.yaml','manual')
--------
for selecting frames manually, with a 60% wide frames slider
>>> deeplabcut.extract_frames('/analysis/project/reaching-task/config.yaml','manual', slider_width=60)

While selecting the frames manually, you do not need to specify the ``crop`` parameter in the command. Rather, you will get a prompt in the graphical user interface to choose
if you need to crop or not.
--------

label_frames:

Signature: deeplabcut.label_frames(config, multiple=False)
Docstring:
Manually label/annotate the extracted frames. Update the list of body parts you want to localize in the config.yaml file first.

Parameter
----------
config : string
    String containing the full path of the config file in the project.

multiple: bool, optional
    If this is set to True, a user can label multiple individuals.
    The default is ``False``; if provided it must be either ``True`` or ``False``.

Example
--------
To label multiple individuals
>>> deeplabcut.label_frames('/analysis/project/reaching-task/config.yaml',multiple=True)
--------

check_labels:

Signature: deeplabcut.check_labels(config, Labels=['+', '.', 'x'], scale=1)
Docstring:
Double-check whether the labels are at the correct locations and stored in the proper file format.

This creates a new subdirectory for each video under 'labeled-data' and plots all the frames with the labels.

Make sure that these labels are fine.

Parameter
----------
config : string
    Full path of the config.yaml file as a string.

Labels: List of at least 3 matplotlib markers. The first one will be used to indicate the human ground truth location (Default: +)

scale : float, default =1
    Change the relative size of the output images.

Example
--------
for checking the labels
>>> deeplabcut.check_labels('/analysis/project/reaching-task/config.yaml')
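--------
for checking the labels with smaller output images (an illustrative example using the documented ``scale`` parameter and default ``Labels`` markers).
>>> deeplabcut.check_labels('/analysis/project/reaching-task/config.yaml',Labels=['+','.','x'],scale=0.5)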
--------

create_training_dataset:

Signature: deeplabcut.create_training_dataset(config, num_shuffles=1, Shuffles=None, windows2linux=False, userfeedback=False, trainIndexes=None, testIndexes=None, net_type=None, augmenter_type=None)
Docstring:
Creates a training dataset. Labels from all the extracted frames are merged into a single .h5 file.

Only the videos included in the config file are used to create this dataset.


[OPTIONAL] Use the function 'add_new_video' at any stage of the project to add more videos to the project.

Parameter
----------
config : string
    Full path of the config.yaml file as a string.

num_shuffles : int, optional
    Number of shuffles of training dataset to create, i.e. [1,2,3] for num_shuffles=3. Default is set to 1.

Shuffles: list of shuffles.
    Alternatively the user can also give a list of shuffles (integers!).

windows2linux: bool.
    The annotation files contain paths formatted according to your operating system. If you label on Windows
    but train & evaluate on a unix system (e.g. Ubuntu, Colab, macOS) set this variable to True to convert the paths.

userfeedback: bool, optional
    If this is set to false, then all requested train/test splits are created (no matter if they already exist). If you
    want to assure that previous splits etc. are not overwritten, then set this to True and you will be asked for each split.

net_type: string
    Type of network. Currently resnet_50, resnet_101, resnet_152, mobilenet_v2_1.0, mobilenet_v2_0.75, mobilenet_v2_0.5, and mobilenet_v2_0.35 are supported.

augmenter_type: string
    Type of augmenter. Currently default, imgaug, tensorpack, and deterministic are supported.

Example
--------
>>> deeplabcut.create_training_dataset('/analysis/project/reaching-task/config.yaml',num_shuffles=1)
Windows:
>>> deeplabcut.create_training_dataset(r'C:\Users\Ulf\looming-task\config.yaml',Shuffles=[3,17,5])
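To create a shuffle with a specific network and augmentation type (an illustrative example using options listed in this docstring):
>>> deeplabcut.create_training_dataset('/analysis/project/reaching-task/config.yaml',net_type='resnet_101',augmenter_type='imgaug')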
--------

or use create_training_model_comparison:

Signature: deeplabcut.create_training_model_comparison(config, trainindex=0, num_shuffles=1, net_types=['resnet_50'], augmenter_types=['default'], userfeedback=False, windows2linux=False)
Docstring:
Creates a training dataset with different networks and augmentation types (dataset_loader) so that the shuffles
have the same training and testing indices.

Therefore, this function is useful for benchmarking the performance of different network and augmentation types on the same training/test data.


Parameter
----------
config : string
    Full path of the config.yaml file as a string.

trainindex: int, optional
    Either (in case uniform = True) indexes which element of TrainingFraction in the config file should be used (note it is a list!).
    Alternatively (uniform = False) indexes which folder is dropped, i.e. the first if trainindex=0, the second if trainindex =1, etc.

num_shuffles : int, optional
    Number of shuffles of training dataset to create, i.e. [1,2,3] for num_shuffles=3. Default is set to 1.

net_types: list
    Types of networks. Currently resnet_50, resnet_101, resnet_152, mobilenet_v2_1.0, mobilenet_v2_0.75, mobilenet_v2_0.5, and mobilenet_v2_0.35 are supported.

augmenter_types: list
    Type of augmenters. Currently "default", "imgaug", "tensorpack", and "deterministic" are supported.

userfeedback: bool, optional
    If this is set to false, then all requested train/test splits are created (no matter if they already exist). If you
    want to assure that previous splits etc. are not overwritten, then set this to True and you will be asked for each split.

windows2linux: bool.
    The annotation files contain paths formatted according to your operating system. If you label on Windows
    but train & evaluate on a unix system (e.g. Ubuntu, Colab, macOS) set this variable to True to convert the paths.

Example
--------
>>> deeplabcut.create_training_model_comparison('/analysis/project/reaching-task/config.yaml',num_shuffles=1,net_types=['resnet_50','resnet_152'],augmenter_types=['tensorpack','deterministic'])

Windows:
>>> deeplabcut.create_training_model_comparison(r'C:\Users\Ulf\looming-task\config.yaml',num_shuffles=1,net_types=['resnet_50','resnet_152'],augmenter_types=['tensorpack','deterministic'])

--------

train_network:

Signature: deeplabcut.train_network(config, shuffle=1, trainingsetindex=0, max_snapshots_to_keep=5, displayiters=None, saveiters=None, maxiters=None, allow_growth=False, gputouse=None, autotune=False, keepdeconvweights=True)
Docstring:
Trains the network with the labels in the training dataset.

Parameter
----------
config : string
    Full path of the config.yaml file as a string.

shuffle: int, optional
    Integer value specifying the shuffle index to select for training. Default is set to 1

trainingsetindex: int, optional
    Integer specifying which TrainingsetFraction to use. By default the first (note that TrainingFraction is a list in config.yaml).

Additional parameters:

max_snapshots_to_keep: int, or None. Sets how many snapshots are kept, i.e. states of the trained network. A snapshot is stored at every saving iteration, but
    only the last max_snapshots_to_keep many are kept! If you change this to None, then all are kept.
    See: https://github.com/AlexEMG/DeepLabCut/issues/8#issuecomment-387404835

displayiters: this variable is actually set in pose_config.yaml. However, you can overwrite it with this hack. Don't use this regularly, just if you are too lazy to dig out
    the pose_config.yaml file for the corresponding project. If None, the value from there is used, otherwise it is overwritten! Default: None

saveiters: this variable is actually set in pose_config.yaml. However, you can overwrite it with this hack. Don't use this regularly, just if you are too lazy to dig out
    the pose_config.yaml file for the corresponding project. If None, the value from there is used, otherwise it is overwritten! Default: None

maxiters: this variable is actually set in pose_config.yaml. However, you can overwrite it with this hack. Don't use this regularly, just if you are too lazy to dig out
    the pose_config.yaml file for the corresponding project. If None, the value from there is used, otherwise it is overwritten! Default: None

allow_growth: bool, default false.
    For some smaller GPUs memory issues can occur. If true, the memory allocator does not pre-allocate the entire specified
    GPU memory region, instead starting small and growing as needed. See issue: https://forum.image.sc/t/how-to-stop-running-out-of-vram/30551/2

gputouse: int, optional. Natural number indicating the number of your GPU (see number in nvidia-smi). If you do not have a GPU put None.
    See: https://nvidia.custhelp.com/app/answers/detail/a_id/3751/~/useful-nvidia-smi-queries

autotune: property of TensorFlow, somehow faster if 'false' (as Eldar found out, see https://github.com/tensorflow/tensorflow/issues/13317). Default: False

keepdeconvweights: bool, default: true
    Also restores the weights of the deconvolution layers (and the backbone) when training from a snapshot. Note that if you change the number of bodyparts, you need to
    set this to false for re-training.

Example
--------
for training the network for the first shuffle of the training dataset.
>>> deeplabcut.train_network('/analysis/project/reaching-task/config.yaml')
--------

for training the network for the second shuffle of the training dataset.
>>> deeplabcut.train_network('/analysis/project/reaching-task/config.yaml',shuffle=2,keepdeconvweights=True)
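--------

for overriding displayiters, saveiters, and maxiters from pose_config.yaml (an illustrative example using the override parameters documented above).
>>> deeplabcut.train_network('/analysis/project/reaching-task/config.yaml',displayiters=100,saveiters=5000,maxiters=30000)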
--------

evaluate_network:

Signature: deeplabcut.evaluate_network(config, Shuffles=[1], trainingsetindex=0, plotting=None, show_errors=True, comparisonbodyparts='all', gputouse=None, rescale=False)
Docstring:
Evaluates the network based on the saved models at different stages of the training network.

The evaluation results are stored in the .h5 and .csv file under the subdirectory 'evaluation_results'.
Change the snapshotindex parameter in the config file to 'all' in order to evaluate all the saved models.
Parameters
----------
config : string
    Full path of the config.yaml file as a string.

Shuffles: list, optional
    List of integers specifying the shuffle indices of the training dataset. The default is [1]

trainingsetindex: int, optional
    Integer specifying which TrainingsetFraction to use. By default the first (note that TrainingFraction is a list in config.yaml). This
    variable can also be set to "all".

plotting: bool, optional
    Plots the predictions on the train and test images. The default is ``False``; if provided it must be either ``True`` or ``False``

show_errors: bool, optional
    Display train and test errors. The default is ``True``

comparisonbodyparts: list of bodyparts, Default is "all".
    The average error will be computed for those body parts only (Has to be a subset of the body parts).

gputouse: int, optional. Natural number indicating the number of your GPU (see number in nvidia-smi). If you do not have a GPU put None.
    See: https://nvidia.custhelp.com/app/answers/detail/a_id/3751/~/useful-nvidia-smi-queries

rescale: bool, default False
    Evaluate the model at the 'global_scale' variable (as set in the test/pose_config.yaml file for a particular project). I.e. every
    image will be resized according to that scale and the prediction will be compared to the resized ground truth. The error will be reported
    in pixels rescaled to the *original* size. I.e. for a [200,200] pixel image evaluated at global_scale=.5, the predictions are calculated
    on [100,100] pixel images, compared to 1/2*ground truth, and this error is then multiplied by 2. The evaluation images are also shown at the
    original size!

Examples
--------
If you do not want to plot
>>> deeplabcut.evaluate_network('/analysis/project/reaching-task/config.yaml', Shuffles=[1])
--------
If you want to plot
>>> deeplabcut.evaluate_network('/analysis/project/reaching-task/config.yaml',Shuffles=[1],plotting=True)
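--------
If you want to compute errors only for a subset of body parts (an illustrative example; 'hand' and 'Joystick' are the body parts of the demo project mentioned elsewhere on this page)
>>> deeplabcut.evaluate_network('/analysis/project/reaching-task/config.yaml',Shuffles=[1],comparisonbodyparts=['hand','Joystick'])
--------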

analyze_videos:

Signature: deeplabcut.analyze_videos(config, videos, videotype='avi', shuffle=1, trainingsetindex=0, gputouse=None, save_as_csv=False, destfolder=None, batchsize=None, cropping=None, get_nframesfrommetadata=True, TFGPUinference=True, dynamic=(False, 0.5, 10))
Docstring:
   Makes prediction based on a trained network. The index of the trained network is specified by parameters in the config file (in particular the variable 'snapshotindex').

   You can crop the video (before analysis) by setting 'cropping'=True and 'x1','x2','y1','y2' in the config file. The same cropping parameters will then be used for creating the video.

   Output: The labels are stored as a MultiIndex Pandas Array, which contains the name of the network, body part name, (x, y) label position
           in pixels, and the likelihood for each frame per body part. These arrays are stored in an efficient Hierarchical Data Format (HDF)
           in the same directory where the video is stored. However, if the flag save_as_csv is set to True, the data can also be exported in
           comma-separated values format (.csv), which in turn can be imported in many programs, such as MATLAB, R, Prism, etc.
   Parameters
   ----------
   config : string
       Full path of the config.yaml file as a string.

   videos : list
       A list of strings containing the full paths to videos for analysis or a path to the directory, where all the videos with same extension are stored.

   videotype: string, optional
       Checks for the extension of the video in case the input to the video is a directory. Only videos with this extension are analyzed. The default is ``.avi``

   shuffle: int, optional
       An integer specifying the shuffle index of the training dataset used for training the network. The default is 1.

   trainingsetindex: int, optional
       Integer specifying which TrainingsetFraction to use. By default the first (note that TrainingFraction is a list in config.yaml).

   gputouse: int, optional. Natural number indicating the number of your GPU (see number in nvidia-smi). If you do not have a GPU put None.
   See: https://nvidia.custhelp.com/app/answers/detail/a_id/3751/~/useful-nvidia-smi-queries

   save_as_csv: bool, optional
       Saves the predictions in a .csv file. The default is ``False``; if provided it must be either ``True`` or ``False``

   destfolder: string, optional
       Specifies the destination folder for analysis data (default is the path of the video). Note that for subsequent analysis this
       folder also needs to be passed.

   batchsize: int, default from pose_cfg.yaml
       Change batch size for inference; if given overwrites value in pose_cfg.yaml

   TFGPUinference: bool, default: True
       Perform inference on GPU with Tensorflow code. Introduced in "Pretraining boosts out-of-domain robustness for pose estimation" by
       Alexander Mathis, Mert Yüksekgönül, Byron Rogers, Matthias Bethge, Mackenzie W. Mathis Source: https://arxiv.org/abs/1909.11229

   dynamic: triple containing (state,detectiontreshold,margin)
       If the state is true, then dynamic cropping will be performed. That means that if an object is detected (i.e. any body part > detectiontreshold),
       then object boundaries are computed according to the smallest/largest x position and smallest/largest y position of all body parts. This  window is
       expanded by the margin and from then on only the posture within this crop is analyzed (until the object is lost, i.e. <detectiontreshold). The
       current position is utilized for updating the crop window for the next frame (this is why the margin is important and should be set large
       enough given the movement of the animal).

   Examples
   --------

   Windows example for analyzing 1 video
   >>> deeplabcut.analyze_videos('C:\\myproject\\reaching-task\\config.yaml',['C:\\yourusername\\rig-95\\Videos\\reachingvideo1.avi'])
   --------

   If you want to analyze only 1 video
   >>> deeplabcut.analyze_videos('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi'])
   --------

   If you want to analyze all videos of type avi in a folder:
   >>> deeplabcut.analyze_videos('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos'],videotype='.avi')
   --------

   If you want to analyze multiple videos
   >>> deeplabcut.analyze_videos('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi','/analysis/project/videos/reachingvideo2.avi'])
   --------

   If you want to analyze multiple videos with shuffle = 2
   >>> deeplabcut.analyze_videos('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi','/analysis/project/videos/reachingvideo2.avi'], shuffle=2)

   --------
   If you want to analyze multiple videos with shuffle = 2 and save results as an additional csv file too
   >>> deeplabcut.analyze_videos('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi','/analysis/project/videos/reachingvideo2.avi'], shuffle=2,save_as_csv=True)
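   --------
   If you want to analyze one video with dynamic cropping enabled (an illustrative example using the documented ``dynamic`` triple of (state, detection threshold, margin))
   >>> deeplabcut.analyze_videos('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi'],dynamic=(True,0.5,10))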
   --------

filterpredictions:

Signature: deeplabcut.filterpredictions(config, video, videotype='avi', shuffle=1, trainingsetindex=0, filtertype='median', windowlength=5, p_bound=0.001, ARdegree=3, MAdegree=1, alpha=0.01, save_as_csv=True, destfolder=None)
Docstring:
Fits frame-by-frame pose predictions with ARIMA model (filtertype='arima') or median filter (default).

Parameter
----------
config : string
    Full path of the config.yaml file as a string.

video : string
    Full path of the video whose predictions are to be filtered. Make sure that this video is already analyzed.

shuffle : int, optional
    The shuffle index of the training dataset used to train the network whose predictions are to be filtered. Default is set to 1

trainingsetindex: int, optional
    Integer specifying which TrainingsetFraction to use. By default the first (note that TrainingFraction is a list in config.yaml).

filtertype: string
    Select which filter to use: 'arima' or 'median'.

windowlength: int
    For filtertype='median', filters the input array using a local window of size given by windowlength. The array will automatically be zero-padded.
    https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.medfilt.html The windowlength should be an odd number.

p_bound: float between 0 and 1, optional
    For filtertype 'arima' this parameter defines the likelihood threshold
    below which a body part will be considered as missing data for filtering purposes.

ARdegree: int, optional
    For filtertype 'arima': autoregressive degree of the SARIMAX model.
    see https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html

MAdegree: int
    For filtertype 'arima': moving average degree of the SARIMAX model.
    See https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html

alpha: float
    Significance level for detecting outliers based on confidence interval of fitted SARIMAX model.

save_as_csv: bool, optional
    Saves the predictions in a .csv file. The default is ``True``; if provided it must be either ``True`` or ``False``

destfolder: string, optional
    Specifies the destination folder for analysis data (default is the path of the video). Note that for subsequent analysis this folder also needs to be passed.

Example
--------
Arima model:
>>> deeplabcut.filterpredictions(r'C:\myproject\reaching-task\config.yaml',[r'C:\myproject\trailtracking-task\test.mp4'],shuffle=3,filtertype='arima',ARdegree=5,MAdegree=2)

Use median filter over 10 bins:
>>> deeplabcut.filterpredictions(r'C:\myproject\reaching-task\config.yaml',[r'C:\myproject\trailtracking-task\test.mp4'],shuffle=3,windowlength=10)

One can then use the filtered rather than the frame-by-frame predictions by calling:

>>> deeplabcut.plot_trajectories(r'C:\myproject\reaching-task\config.yaml',[r'C:\myproject\trailtracking-task\test.mp4'],shuffle=3,filtered=True)

>>> deeplabcut.create_labeled_video(r'C:\myproject\reaching-task\config.yaml',[r'C:\myproject\trailtracking-task\test.mp4'],shuffle=3,filtered=True)
--------

plot_trajectories:

Signature: deeplabcut.plot_trajectories(config, videos, videotype='.avi', shuffle=1, trainingsetindex=0, filtered=False, showfigures=False, destfolder=None)
Docstring:
   Plots the trajectories of various bodyparts across the video.

   Parameters
   ----------
   config : string
       Full path of the config.yaml file as a string.

   videos : list
       A list of strings containing the full paths to videos for analysis or a path to the directory, where all the videos with same extension are stored.

   videotype: string, optional
       Checks for the extension of the video in case the input to the video is a directory. Only videos with this extension are analyzed. The default is ``.avi``

   shuffle: list, optional
       List of integers specifying the shuffle indices of the training dataset. The default is [1]

   trainingsetindex: int, optional
       Integer specifying which TrainingsetFraction to use. By default the first (note that TrainingFraction is a list in config.yaml).
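
   Example
   --------
   An illustrative usage example, mirroring the other functions on this page (plotting the filtered trajectories of one analyzed video):
   >>> deeplabcut.plot_trajectories('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi'],filtered=True)
   --------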

create_labeled_video:

Signature: deeplabcut.create_labeled_video(config, videos, videotype='avi', shuffle=1, trainingsetindex=0, filtered=False, save_frames=False, Frames2plot=None, delete=False, displayedbodyparts='all', codec='mp4v', outputframerate=None, destfolder=None, draw_skeleton=False, trailpoints=0, displaycropped=False)
Docstring:
    Labels the bodyparts in a video. Make sure the video is already analyzed by the function 'analyze_videos'.

Parameters
    ----------
    config : string
        Full path of the config.yaml file as a string.

    videos : list
        A list of strings containing the full paths to videos for analysis or a path to the directory, where all the videos with same extension are stored.

    videotype: string, optional
        Checks for the extension of the video in case the input to the video is a directory. Only videos with this extension are analyzed. The default is ``.avi``

    shuffle : int, optional
        Number of shuffles of training dataset. Default is set to 1.

    trainingsetindex: int, optional
        Integer specifying which TrainingsetFraction to use. By default the first (note that TrainingFraction is a list in config.yaml).

    filtered: bool, default false
        Boolean variable indicating if filtered output should be plotted rather than frame-by-frame predictions. Filtered version can be calculated with deeplabcut.filterpredictions

    save_frames: bool
        If true, creates each frame individually and then combines them into a video. This variant is relatively slow as
        it stores all individual frames. However, it uses matplotlib to create the frames and is therefore much more flexible (one can set transparency of markers, crop, and easily customize).

    Frames2plot: List of indices
        If not None & save_frames=True then the frames corresponding to the index will be plotted. For example, Frames2plot=[0,11] will plot the first and the 12th frame.

    delete: bool
        If true then the individual frames created during the video generation will be deleted.

    displayedbodyparts: list of strings, optional
        This selects the body parts that are plotted in the video. Either ``all``, then all body parts
        from config.yaml are used, or a list of strings that are a subset of the full list.
        E.g. ['hand','Joystick'] for the demo Reaching-Mackenzie-2018-08-30/config.yaml to select only these two body parts.

    codec: codec for labeled video. Options see http://www.fourcc.org/codecs.php [depends on your ffmpeg installation.]

    outputframerate: positive number, output frame rate for labeled video (only available for the mode with saving frames.) By default: None, which results in the original video rate.

    destfolder: string, optional
        Specifies the destination folder that was used for storing analysis data (default is the path of the video).

    draw_skeleton: bool
        If ``True`` adds a line connecting the body parts, making a skeleton on each frame. The body parts to be connected and the color of these connecting lines are specified in the config file. By default: ``False``

    trailpoints: int
        Number of previous frames whose body parts are plotted in a frame (for displaying history). Default is set to 0.

    displaycropped: bool, optional
        Specifies whether only cropped frame is displayed (with labels analyzed therein), or the original frame with the labels analyzed in the cropped subset.

    Examples
    --------
    If you want to create the labeled video for only 1 video
    >>> deeplabcut.create_labeled_video('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi'])
    --------

    If you want to create the labeled video for only 1 video and store the individual frames
    >>> deeplabcut.create_labeled_video('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi'],save_frames=True)
    --------

    If you want to create the labeled video for multiple videos
    >>> deeplabcut.create_labeled_video('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi','/analysis/project/videos/reachingvideo2.avi'])
    --------

    If you want to create the labeled video for all the videos (as .avi extension) in a directory.
    >>> deeplabcut.create_labeled_video('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/'])

    --------
    If you want to create the labeled video for all the videos (as .mp4 extension) in a directory.
    >>> deeplabcut.create_labeled_video('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/'],videotype='mp4')
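
    --------
    If you want to draw the skeleton and a trail of previous positions (an illustrative example using the documented ``draw_skeleton`` and ``trailpoints`` parameters).
    >>> deeplabcut.create_labeled_video('/analysis/project/reaching-task/config.yaml',['/analysis/project/videos/reachingvideo1.avi'],draw_skeleton=True,trailpoints=5)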

    --------

extract_outlier_frames:

Signature: deeplabcut.extract_outlier_frames(config, videos, videotype='avi', shuffle=1, trainingsetindex=0, outlieralgorithm='jump', comparisonbodyparts='all', epsilon=20, p_bound=0.01, ARdegree=3, MAdegree=1, alpha=0.01, extractionalgorithm='kmeans', automatic=False, cluster_resizewidth=30, cluster_color=False, opencv=True, savelabeled=True, destfolder=None)
Docstring:
   Extracts the outlier frames in case the predictions are not correct for a certain video, from the cropped video running from
   start to stop as defined in config.yaml.

   Another crucial parameter in config.yaml is how many frames to extract 'numframes2extract'.

   Parameter
   ----------
   config : string
       Full path of the config.yaml file as a string.

   videos : list
       A list of strings containing the full paths to videos for analysis or a path to the directory, where all the videos with same extension are stored.

   videotype: string, optional
       Checks for the extension of the video in case the input to the video is a directory. Only videos with this extension are analyzed. The default is ``.avi``

   shuffle : int, optional
       The shuffle index of training dataset. The extracted frames will be stored in the labeled-dataset for
       the corresponding shuffle of training dataset. Default is set to 1

   trainingsetindex: int, optional
       Integer specifying which TrainingsetFraction to use. By default the first (note that TrainingFraction is a list in config.yaml).

   outlieralgorithm: 'fitting', 'jump', 'uncertain', or 'manual'
       String specifying the algorithm used to detect the outliers. Currently, deeplabcut supports three methods + a manual GUI option. 'Fitting'
       fits an Auto Regressive Integrated Moving Average model to the data and computes the distance to the estimated data. Frames with distances larger than
       epsilon are then potentially identified as outliers. The method 'jump' identifies jumps larger than 'epsilon' in any body part; and 'uncertain'
       looks for frames with confidence below p_bound. The default is set to ``jump``.

   comparisonbodyparts: list of strings, optional
       This selects the body parts for which the comparisons with the outliers are carried out. Either ``all``, then all body parts
       from config.yaml are used, or a list of strings that are a subset of the full list.
       E.g. ['hand','Joystick'] for the demo Reaching-Mackenzie-2018-08-30/config.yaml to select only these two body parts.

   p_bound: float between 0 and 1, optional
       For outlieralgorithm 'uncertain' this parameter defines the likelihood threshold below which a body part will be flagged as a putative outlier.

   epsilon: float, optional
       Meaning depends on outlieralgorithm. The default is set to 20 pixels.
       For outlieralgorithm 'fitting': Float bound according to which frames are picked when the (average) body part estimate deviates from the model fit.
       For outlieralgorithm 'jump': Float bound specifying the distance by which body points jump from one frame to the next (Euclidean distance).

   ARdegree: int, optional
       For outlieralgorithm 'fitting': Autoregressive degree of the ARIMA model. (Note we use SARIMAX without exogeneous and seasonal part)
       see https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html

   MAdegree: int
       For outlieralgorithm 'fitting': Moving average degree of the ARIMA model. (Note we use SARIMAX without exogeneous and seasonal part)
       See https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html

   alpha: float
       Significance level for detecting outliers based on confidence interval of fitted ARIMA model. Only the distance is used however.

   extractionalgorithm : string, optional
       String specifying the algorithm to use for selecting the frames from the identified putative outlier frames. Currently, deeplabcut
       supports either ``kmeans`` or ``uniform`` based selection (same logic as for extract_frames).
       The default is set to ``kmeans``; if provided it must be either ``uniform`` or ``kmeans``.

   automatic : bool, optional
       Set it to True, if you want to extract outliers without being asked for user feedback.

   cluster_resizewidth: number, default: 30
       For k-means one can change the width to which the images are downsampled (aspect ratio is fixed).

   cluster_color: bool, default: False
       If false then each downsampled image is treated as a grayscale vector (discarding color information). If true, then the color channels are considered. This increases
       the computational complexity.

   opencv: bool, default: True
       Uses OpenCV for loading & extracting (otherwise moviepy (legacy))

   savelabeled: bool, default: True
       If true also saves frame with predicted labels in each folder.

   destfolder: string, optional
       Specifies the destination folder that was used for storing analysis data (default is the path of the video).

   Examples
   --------

   Windows example for extracting the frames with default settings
   >>> deeplabcut.extract_outlier_frames(r'C:\myproject\reaching-task\config.yaml',[r'C:\yourusername\rig-95\Videos\reachingvideo1.avi'])
   --------
   for extracting the frames with default settings
   >>> deeplabcut.extract_outlier_frames('/analysis/project/reaching-task/config.yaml',['/analysis/project/video/reachinvideo1.avi'])
   --------
   for extracting the frames with kmeans
   >>> deeplabcut.extract_outlier_frames('/analysis/project/reaching-task/config.yaml',['/analysis/project/video/reachinvideo1.avi'],extractionalgorithm='kmeans')
   --------
   for extracting the frames with kmeans and epsilon = 5 pixels.
   >>> deeplabcut.extract_outlier_frames('/analysis/project/reaching-task/config.yaml',['/analysis/project/video/reachinvideo1.avi'],epsilon = 5,extractionalgorithm='kmeans')
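   --------
   for extracting frames where the network is uncertain (an illustrative example using the documented 'uncertain' outlier algorithm and ``p_bound``).
   >>> deeplabcut.extract_outlier_frames('/analysis/project/reaching-task/config.yaml',['/analysis/project/video/reachinvideo1.avi'],outlieralgorithm='uncertain',p_bound=0.01)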
   --------