
Welcome!

Overview

The pipeline consists of cell segmentation, graph vertex construction via Gaussian mixture model means, edge construction via divergence functions, and eigen-decomposition of the matrix representation. The steps are broken down into the functions below.
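
As a hypothetical end-to-end sketch, following the order of the sections below (every argument value is a placeholder, and the handoff of intermediate files between steps is simplified):

constrain_vid("timelapse.avi", "constrained.avi", 200)                        # optional
cell_segmentation("timelapse", "constrained.avi", "initial_mask.png", "masks/")
median_normalize("timelapse", "grayscale/timelapse.npy", "normalized/")
downsample_vid("timelapse", "normalized/", "masks/", "downsampled/", 2)       # optional
generate_single_vids("downsampled/timelapse.avi", "masks/", "single_cells/")
convert_to_grayscale("single_cells/cell_0.avi", "grayscale_cells/cell_0/")
compute_gmm_intermediates("grayscale_cells/cell_0/", "intermediates/cell_0/")
compute_distances("intermediates/cell_0/", "distances/cell_0/")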

Pipeline


Constraining the video (Optional)

Constrains the input video to a specified number of frames and writes the result to an output video (.avi). If the video contains fewer frames than constrain_count, then all frames of the video are returned. This step is optional but useful, because in some time-lapse videos cells stop moving after a period of time.

constrain_vid(vid_path, out_path, constrain_count)

Parameters

  • vid_path: String
    Path to the input video.
  • out_path: String
    Path to the output video.
  • constrain_count : int
    First N number of frames to extract from the video.
    If value is -1, then the entire video is used.

Returns

  • NoneType object
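
For example, a hypothetical call that keeps only the first 100 frames (paths are illustrative):

constrain_vid("videos/timelapse.avi", "videos/timelapse_constrained.avi", 100)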

Tracking cell movements

cell_segmentation(vid_name, vid_path, masks_path, out_path)

Generates segmentation masks for every frame in the video and saves the output at the specified output path, using the initial mask as a starting point. This is a necessary step since cells move over time and the segmentation masks need to be updated accordingly.

Parameters

  • vid_name: String
    Name of the input video.
  • vid_path: String
    Path to input video.
  • masks_path: String
    Path to initial segmentation mask.
  • out_path: String
    Path to output directory.

Returns

  • NoneType object
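
For example (all arguments are placeholders; vid_name is assumed to be the bare video name used for naming outputs):

cell_segmentation("timelapse", "videos/timelapse_constrained.avi", "masks/initial_mask.png", "masks/tracked/")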

Median Normalization

Normalizes every frame in a video to minimize the effects that lighting conditions may have had when the videos were recorded.

median_normalize(vid_name, input_path, out_path)

Parameters

  • vid_name: String
    Name of the input video.
  • input_path: String
    Path to the grayscale video (.npy file).
  • out_path: String
    Directory to save the normalized video.

Returns

  • NoneType object
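
For example (paths are illustrative):

median_normalize("timelapse", "grayscale/timelapse.npy", "normalized/")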

Downsample the video and masks (Optional)

Skips a given number of frames in both the video and the masks to generate a smaller video. This is useful for videos where cells move slowly over time, so little significant change is detected between consecutive frames.

downsample_vid(vid_name, vid_path, masks_path, downsampled_path, frame_skip)

The downsampled video is saved in .avi format.

Parameters

  • vid_name: String
    Name of the input video.
  • vid_path: String
    Path to the input video.
  • masks_path: String
    Path to the input masks.
  • downsampled_path: String
    Path to the directory where the downsampled video will be saved.
  • frame_skip: int
    The number of frames to skip when downsampling.

Returns

  • NoneType object
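
For example, a hypothetical call with a frame skip of 2 (paths are illustrative):

downsample_vid("timelapse", "videos/timelapse.avi", "masks/tracked/", "downsampled/", 2)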

Extract individual cells

Separates each cell found in the input video into its own video, using the segmentation masks generated from tracking the cells' movement. This is an important step in the pipeline because the framework constructs a graph to model a specific organellar structure in a single cell. Problems may arise if multiple cells are present in a frame, because graph edges may be constructed between organelles in different cells; to prevent this, each cell is extracted.

generate_single_vids(vid_path, masks_path, output_path)

Extracts individual cells using the segmentation masks.

Parameters

  • vid_path: String
    Path to input video.
  • masks_path: String
    Path to the segmentation mask for the input video.
  • output_path: String
    Directory to save the individual videos.

Returns

  • NoneType object
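
For example (paths are illustrative):

generate_single_vids("downsampled/timelapse.avi", "masks/tracked/", "single_cells/")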

Computing GMM intermediates

Regions of interest, or intensity peaks, are found within the first frame of the video, and those locations are used as the initial component means for the Gaussian mixture model (GMM). The pixel intensity variances around those regions become the initial covariances, while the normalized pixel intensities found at the location of each mean are used as the initial weights. Subsequently, the GMM is fit to each frame, and the final means, covariances, weights, and precisions are saved. The final means are considered the vertices in the graph.
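
The actual implementation lives inside compute_gmm_intermediates; the following is only a minimal sketch of the idea described above for a single frame. The function name, parameters, and the thresholding are assumptions, and the covariance initialization around each peak is omitted for brevity.

import numpy as np
from skimage.feature import peak_local_max
from sklearn.mixture import GaussianMixture

def fit_frame_gmm(frame, min_distance=5, threshold=0.2):
    # Normalize pixel intensities to [0, 1].
    frame = frame.astype(float) / frame.max()
    # Intensity peaks in the frame become the initial component means.
    peaks = peak_local_max(frame, min_distance=min_distance, threshold_abs=threshold)
    # Normalized intensities at the peak locations become the initial weights.
    weights = frame[peaks[:, 0], peaks[:, 1]]
    weights = weights / weights.sum()
    # Fit the GMM to the coordinates of the bright pixels.
    coords = np.column_stack(np.nonzero(frame > threshold))
    gmm = GaussianMixture(n_components=len(peaks), covariance_type="full",
                          means_init=peaks, weights_init=weights)
    gmm.fit(coords)
    # The fitted means are the graph vertices; the remaining parameters are
    # kept as intermediates for the distance computation.
    return gmm.means_, gmm.covariances_, gmm.weights_, gmm.precisions_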

convert_to_grayscale(vid_path, output_path)

Converts an input video into an array of grayscale frames.

Parameters

  • vid_path: String
    Path to a single video.
  • output_path: String
    Directory to save the grayscale frames.

Returns

  • NoneType object
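
For example (paths are illustrative):

convert_to_grayscale("single_cells/cell_0.avi", "grayscale_cells/cell_0/")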

compute_gmm_intermediates(vid_dir, intermediates_path)

Generates the GMM intermediate files (means, covariances, weights, and precisions) for each single-cell video.

Parameters

  • vid_dir: String
    Path to the directory that contains the grayscale single-cell videos.
  • intermediates_path: String
    Path to save the intermediate files.

Returns

  • NoneType object
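
For example (paths are illustrative):

compute_gmm_intermediates("grayscale_cells/cell_0/", "intermediates/cell_0/")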

Computing distance metrics

A distance metric is applied to every pair of distributions from the GMM. The distances serve as edge weights between vertices in the graph.

compute_distances(intermediates_path, output_path)

Generates distances between the GMM component distributions using the Hellinger distance.

Parameters

  • intermediates_path: String
    Path to the GMM intermediates.
  • output_path: String
    Directory to save the distance outputs.

Returns

  • NoneType object
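
For example (paths are illustrative):

compute_distances("intermediates/cell_0/", "distances/cell_0/")

The closed-form Hellinger distance between two multivariate Gaussians is assumed to be what is applied to each pair of components; a minimal reference sketch (not the project's code):

import numpy as np

def hellinger(mu1, cov1, mu2, cov2):
    # Hellinger distance between N(mu1, cov1) and N(mu2, cov2).
    avg_cov = (cov1 + cov2) / 2.0
    det_term = (np.linalg.det(cov1) ** 0.25 * np.linalg.det(cov2) ** 0.25
                / np.sqrt(np.linalg.det(avg_cov)))
    diff = mu1 - mu2
    exp_term = np.exp(-0.125 * diff @ np.linalg.solve(avg_cov, diff))
    return np.sqrt(max(1.0 - det_term * exp_term, 0.0))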