(linear) range normalization? std normalization? #78

Closed
ncullen93 opened this issue Feb 27, 2017 · 0 comments

Comments


ncullen93 commented Feb 27, 2017

I don't see transforms for linear normalization (e.g. to 0-1 or an arbitrary range) or std normalization. Do these exist somewhere else, or do people just implement them in the Dataset class? Any plans for this? It's useful when sampling from folders. It could also let you potentially remove the automatic division by 255. in the ToTensor() transform and therefore support loading arbitrary numpy arrays from file (an area with high user demand). Idk.. just a thought.

Anyways, here's some code to do these things:

import torch


class RangeNormalize(object):
    """Given min_val: (R, G, B) and max_val: (R, G, B),
    normalize each channel of a torch.*Tensor to
    the provided min and max values.

    Works by efficiently calculating a linear transform:
        a = (max' - min') / (max - min)
        b = max' - a * max
        new_value = a * value + b
    where min' & max' are the given target values,
    and min & max are the observed min/max of each channel.

    Example:
        >>> x = torch.rand(3, 50, 50)
        >>> rn = RangeNormalize((0, 0, 10), (1, 1, 11))  # last channel -> [10, 11]
        >>> x_norm = rn(x)

    Also works with a single min/max value applied to all channels:
        >>> x = torch.rand(3, 50, 50)
        >>> rn = RangeNormalize(-1, 1)
        >>> x_norm = rn(x)
    """
    def __init__(self, min_, max_):
        # broadcast scalar arguments to all three channels
        if not isinstance(min_, (list, tuple)):
            min_ = [min_] * 3
        if not isinstance(max_, (list, tuple)):
            max_ = [max_] * 3

        self.min_ = min_
        self.max_ = max_

    def __call__(self, tensor):
        for t, min_, max_ in zip(tensor, self.min_, self.max_):
            max_val = torch.max(t)
            min_val = torch.min(t)
            if max_val == min_val:
                continue  # constant channel: skip to avoid division by zero
            a = (max_ - min_) / float(max_val - min_val)
            b = max_ - a * float(max_val)
            t.mul_(a).add_(b)  # in-place linear rescale of this channel
        return tensor
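As a quick sanity check of the linear transform above, here's a self-contained sketch that applies the same a/b formulas to a toy tensor by hand:

```python
import torch

# a = (max' - min') / (max - min), b = max' - a * max
x = torch.tensor([2.0, 4.0, 6.0])   # observed min = 2, max = 6
new_min, new_max = 0.0, 1.0         # target range
a = (new_max - new_min) / float(x.max() - x.min())
b = new_max - a * float(x.max())
y = a * x + b
print(y)  # tensor([0.0000, 0.5000, 1.0000])
```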

and

class StdNormalize(object):
    """Normalize each channel of a torch.*Tensor to
    zero mean and unit standard deviation, in place."""

    def __call__(self, tensor):
        for t in tensor:
            mean = torch.mean(t)
            std = torch.std(t)
            t.sub_(mean).div_(std)  # (t - mean) / std, in place
        return tensor
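And a self-contained sketch of what the std normalization does per channel (the loop body below is the same in-place operation StdNormalize performs):

```python
import torch

x = torch.rand(3, 50, 50) * 100 + 7  # arbitrary-range input
for t in x:
    t.sub_(t.mean()).div_(t.std())   # per-channel standardization, in place
print(x[0].mean().item())  # ~0
print(x[0].std().item())   # ~1
```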
rajveerb pushed a commit to rajveerb/vision that referenced this issue Nov 30, 2023