
Data augmentation API, color conversion improvements, GPU tests and more

@edgarriba released this

Kornia 0.2.0 Release Notes

  • Highlights
  • New Features
    • kornia.color
    • kornia.feature
    • kornia.geometry
    • kornia.losses
  • Improvements
  • Bug Fixes

The Kornia v0.2.0 release is now available.

The release contains over 50 commits and updates support to PyTorch 1.4. It is the result of a huge effort in the design of the new data augmentation module, improvements to the set of color space conversion algorithms, and a refactor of the testing framework that allows the library to be tested using the CUDA backend.


Data Augmentation API

From this point forward, we will support the new data augmentation API. The kornia.augmentation module mimics the best of existing data augmentation frameworks such as torchvision or albumentations, all re-implemented on top of torch.Tensor data structures, which allows running the standard transformations (geometric and color) in batch mode on the GPU and backpropagating through them.

In addition, a feature we are very proud to include is the ability to return the transformation matrix for each transform, which makes it easier to concatenate and optimize the transform pipeline.

A quick overview of its usage:

import torch
import kornia

input: torch.Tensor = load_tensor_data(....)  # BxCxHxW

transforms = torch.nn.Sequential(
    kornia.augmentation.RandomAffine(degrees=(-15, 15)),
)

out: torch.Tensor = transforms(input)         # CPU
out: torch.Tensor = transforms(input.cuda())  # GPU

# same returning the transformation matrix

transforms = torch.nn.Sequential(
    kornia.augmentation.RandomAffine(degrees=(-15, 15), return_transformation=True),
)

out, transform = transforms(input) # BxCxHxW , Bx3x3
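Because each transform can report its Bx3x3 matrix, a chain of transforms can be fused by composing the matrices and warping only once. The snippet below is a minimal, hypothetical sketch of this idea using plain torch (the identity matrices stand in for real matrices returned by the pipeline; it is not part of the kornia API):

```python
import torch

# Hypothetical sketch: compose per-sample 3x3 transformation matrices
# (as returned with return_transformation=True) into a single matrix,
# so the image only needs to be warped once.
B = 2
t1 = torch.eye(3).expand(B, 3, 3)  # e.g. matrix from a RandomAffine
t2 = torch.eye(3).expand(B, 3, 3)  # e.g. matrix from a RandomRotation
combined = torch.matmul(t2, t1)    # t1 is applied first, then t2
print(combined.shape)              # torch.Size([2, 3, 3])
```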

These are the features we introduce in this module:

  • BaseAugmentation (#407)
  • ColorJitter (#329)
  • RandomHorizontalFlip (#309)
  • MotionBlur (#328)
  • RandomVerticalFlip (#375)
  • RandomErasing (#344)
  • RandomGrayscale (#384)
  • Resize (#394)
  • CenterCrop (#409)
  • RandomAffine (#403)
  • RandomPerspective (#403)
  • RandomRotation (#397, #418)
  • RandomCrop (#408)
  • RandomResizedCrop (#408)
  • Grayscale

GPU Tests

We have refactored our testing framework and can now easily integrate GPU tests within the library. At the moment, this feature is only available to run locally, but very soon we will integrate with the CircleCI and AWS infrastructure so that we can automate the process.

From the repository root, one just has to run: make test-gpu

Tests look like this:

import torch
import kornia

from test.common import device

def test_rgb_to_grayscale(device):
    channels, height, width = 3, 4, 5
    img = torch.ones(channels, height, width).to(device)
    assert kornia.rgb_to_grayscale(img).shape == (1, height, width)
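The device object imported from test.common could be implemented roughly as below. This is a sketch under assumptions (the helper name and structure are hypothetical, not the actual kornia test code): it picks CUDA when available so the same test suite exercises the GPU path under make test-gpu.

```python
import torch

def get_test_device() -> torch.device:
    # Hypothetical helper: prefer CUDA when available so that the same
    # tests run on the GPU backend, and fall back to CPU otherwise.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = get_test_device()
img = torch.ones(3, 4, 5, device=device)
print(img.device.type)  # "cuda" on a GPU machine, "cpu" otherwise
```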


New Features


kornia.color

We have added a few more algorithms for color space conversion:


kornia.geometry

  • Implement kornia.hflip, kornia.vflip and kornia.rot180 (#268)
  • Implement kornia.transform_boxes (#368)


kornia.losses

  • Implement total_variation loss (#250)
  • Implement PSNR loss (#272)


kornia.feature

  • Added convenience functions for working with LAF: get keypoint, orientation (#340)


Improvements

  • Fixed conv_argmax2d/3d behaviour for even-size kernel and added test (#227)
  • Normalize accepts floats and allows broadcast over channel dimension (#236)
  • Single value support for normalize function (#301)
  • Added boundary check function to local features detector (#254)
  • Correct crop_and_resize on aspect ratio changes. (#305)
  • Correct adjust brightness and contrast (#304)
  • Add tensor support to Hue, Saturation and Gamma (#324)
  • Double image option for scale pyramid (#351)
  • Filter2d speedup for older GPUs (#356)
  • Fix meshgrid3d function (#357)
  • Added support for even-sized filters in filter2d (#374)
  • Use latest version of CircleCI (#373)
  • Infer border and padding mode to homography warper (#379)
  • Apply normalization trick to conv_softmax (#383)
  • Better nms (#371)
    • added spatial gradient 3d
    • added hardnms3d and tests for hardnms 2d
    • quadratic nms interp
    • update the tests because of changed gaussian blur kernel size in scale pyramid calculation
    • no grad for spatial grad
  • Focal loss flat (#393)
  • Add optional mask parameter in scale space (#389)
  • Update to PyTorch 1.4 (#402)

Bug Fixes

  • Add from homogeneous zero grad test and fix it (#369)
  • Filter2d failed with noncontiguous input (view --> reshape) (#377)
  • Add ceil_mode to maxblur pool to be able to be used in resnets (#395)

Breaking Changes

  • crop_and_resize
    before: "The tensor must have the shape of Bx4x2, where each box is defined in the following order: top-left, top-right, bottom-left and bottom-right. The coordinates order must be in y, x respectively"
    after: "The tensor must have the shape of Bx4x2, where each box is defined in the following (clockwise) order: top-left, top-right, bottom-right and bottom-left. The coordinates must be in the x, y order."
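To illustrate the new convention, here is a minimal sketch (plain torch, values chosen for illustration) of a box spanning x in [0, 3] and y in [0, 2], with corners listed clockwise and each pair given as (x, y):

```python
import torch

# Boxes for crop_and_resize under the new convention: clockwise order
# (top-left, top-right, bottom-right, bottom-left), coordinates as (x, y).
boxes = torch.tensor([[
    [0., 0.],  # top-left
    [3., 0.],  # top-right
    [3., 2.],  # bottom-right
    [0., 2.],  # bottom-left
]])
print(boxes.shape)  # torch.Size([1, 4, 2])
```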

As usual, thanks to the community for keeping this project growing.
Happy coding ! 🌄