Edafa

Edafa is a simple wrapper that applies Test Time Augmentation (TTA) to images for computer vision problems such as segmentation, classification, super-resolution, and pansharpening. TTA typically improves results on most of these tasks.

Test Time Augmentation (TTA)

Apply different transformations to the test images, run the model on each transformed copy, and average the predictions for more robust results.

[pipeline diagram]
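
For intuition, here is a minimal conceptual sketch of TTA for a segmentation-style model (not edafa's internal implementation; model.predict stands in for any model's inference call):

import numpy as np

def tta_predict(model, image):
    # Predict on the original image (adding a batch dimension).
    pred = model.predict(image[np.newaxis])[0]
    # Predict on a left-right flipped copy, then undo the flip on the output.
    flipped = np.fliplr(image)
    pred_flip = np.fliplr(model.predict(flipped[np.newaxis])[0])
    # Combine the two predictions with an arithmetic mean.
    return (pred + pred_flip) / 2.0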

Installation

pip install edafa

Getting started

The easiest way to get up and running is to follow the example notebooks for segmentation and classification, which show the effect of TTA on performance.

How to use Edafa

The whole process can be done in 4 steps (a complete sketch combining them follows the list):

  1. Import the Predictor class for your task category: Segmentation (SegPredictor) or Classification (ClassPredictor)
from edafa import SegPredictor
  2. Inherit the Predictor class and implement its main function
    • predict_patches(self, patches) : takes image patches (numpy.ndarray) and returns predictions (numpy.ndarray)
class myPredictor(SegPredictor):
    def __init__(self,model,*args,**kwargs):
        super().__init__(*args,**kwargs)
        self.model = model

    def predict_patches(self,patches):
        return self.model.predict(patches)
  3. Create an instance of your class
p = myPredictor(model,patch_size,model_output_channels,conf_file_path)
  4. Call predict_images() to run the prediction process
p.predict_images(images,overlap=0)
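
Putting the four steps together, a minimal end-to-end sketch (model is your trained model exposing a .predict() method, images is your list or array of test images, and the patch size of 256, 3 output channels, and the "conf.json" file name are placeholder values):

from edafa import SegPredictor

class myPredictor(SegPredictor):
    def __init__(self, model, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model = model

    def predict_patches(self, patches):
        # patches: numpy.ndarray of image patches -> numpy.ndarray of predictions
        return self.model.predict(patches)

p = myPredictor(model, 256, 3, "conf.json")        # patch_size, output channels, conf path
predictions = p.predict_images(images, overlap=0)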

Configuration file

The configuration file is a JSON (or YAML) file containing the following information:

  1. Augmentations to apply (augs). Supported augmentations:
    • NO : No augmentation
    • ROT90 : Rotate 90 degrees
    • ROT180 : Rotate 180 degrees
    • ROT270 : Rotate 270 degrees
    • FLIP_UD : Flip upside-down
    • FLIP_LR : Flip left-right
    • BRIGHT : Change image brightness randomly
    • CONTRAST : Change image contrast randomly
    • GAUSSIAN : Add random gaussian noise
    • GAMMA : Perform gamma correction with random gamma
  2. How to combine the results (mean). Supported mean types (illustrated in the sketch after this list):
    • ARITH : Arithmetic mean
    • GEO : Geometric mean
  3. Image bit depth (bits); default is 8 bits.
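
For intuition, the two mean types combine per-augmentation predictions as follows (a conceptual sketch, not edafa's internal code; the array shape is a placeholder):

import numpy as np

# Predictions from 3 augmentations, each of shape (H, W, C)
preds = np.random.rand(3, 64, 64, 2)

arith = preds.mean(axis=0)                         # ARITH: arithmetic mean
geo = np.exp(np.log(preds + 1e-8).mean(axis=0))    # GEO: geometric mean (epsilon avoids log(0))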

Example of a conf file in JSON format

{
    "augs": ["NO", "FLIP_UD", "FLIP_LR"],
    "mean": "ARITH",
    "bits": 8
}

Example of a conf file in YAML format

augs: [NO, FLIP_UD, FLIP_LR]
mean: ARITH
bits: 8

You can pass either a file path (JSON or YAML) or the JSON text itself to the conf parameter.
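
Both options in one short sketch (assuming the myPredictor class from the steps above; the patch size, channel count, and file name are placeholders):

import json

# Option 1: pass the configuration as JSON text
conf_text = json.dumps({"augs": ["NO", "FLIP_UD", "FLIP_LR"], "mean": "ARITH", "bits": 8})
p = myPredictor(model, 256, 3, conf_text)

# Option 2: pass a path to a .json or .yaml file
p = myPredictor(model, 256, 3, "conf.yaml")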

Contribution

All contributions are welcome. Please make sure all tests pass before opening a pull request. To run the tests:

nosetests
