
add option for mean and std to be tuples #987

Merged
merged 35 commits into kornia:master on Jun 27, 2021

Conversation

@JoanFM (Contributor) commented Apr 27, 2021

Description

I propose allowing the user to instantiate the Normalize object with mean and std given as tuples.
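
For illustration, the intended usage would look roughly like this (a sketch of the intent; the exact signature in the merged code may differ):

import torch
import kornia

# hypothetical usage sketch: pass plain tuples instead of pre-built tensors
norm = kornia.enhance.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
out = norm(torch.rand(1, 3, 32, 32))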

Status

Ready

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (fix or new feature that would cause existing functionality to change)
  • New tests added to cover the changes
  • Docstrings/Documentation updated

Review threads (outdated, resolved):
  • kornia/augmentation/augmentation.py
  • kornia/enhance/normalize.py
  • test/augmentation/test_augmentation.py
JoanFM and others added 9 commits April 27, 2021 20:19
Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
@edgarriba (Member) left a comment

add this extra test

Review thread (outdated, resolved): test/augmentation/test_augmentation.py
@shijianjian (Member) left a comment

Please remove those prints first.

In principle, I think it is better to keep all functional calls tensor-only, with fewer tensor initializations. Tuples could be provided as syntactic sugar for the nn.Modules, where the tensors can be initialised once and stored properly in the module.

@JoanFM (Contributor, Author) commented Apr 28, 2021

Please remove those prints first.

In principle, I think it is better to keep all functional calls tensor-only, with fewer tensor initializations. Tuples could be provided as syntactic sugar for the nn.Modules, where the tensors can be initialised once and stored properly in the module.

Sure, the prints are leftover: I am not sure why, but my test was not reaching the function I expected, and I still don't understand it.

I agree the functions can work with torch.Tensor, but initialization should support regular Python tuples. It feels more natural to provide mean or std as regular Python objects than as torch.Tensor; why put such a burden on the user?

But yes, adding it as syntactic sugar in the nn module discussed in issue #520 would work for me.

@@ -99,6 +99,14 @@ def normalize(
    if isinstance(std, float):
        std = torch.tensor([std] * shape[1], device=data.device, dtype=data.dtype)

    if isinstance(mean, tuple):
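
The hunk is truncated here. Mirroring the float branch above, a plausible body for the tuple branch would be the following (an assumption for illustration, not the actual diff):

        mean = torch.tensor(mean, device=data.device, dtype=data.dtype)  # hypothetical: actual hunk not shown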
@JoanFM (Contributor, Author) commented:

Hey @shijianjian,

What I think would make sense is to review the whole codebase and, at this stage, allow mean and std to be provided only as torch.Tensor.

That way, the conversion from whatever accepted input to torch.Tensor happens in each instance constructor.

This lets you skip all these checks and conversions on every functional call, and keeps the code more maintainable.

@JoanFM (Contributor, Author) commented:

Actually, I am not sure: are these functions meant to be usable from user code (i.e. public), or only by the internal kornia objects (and therefore private)?

If they are internal, I think this strategy should be followed in general, and the code might end up better unit-tested as a result.

@shijianjian (Member) commented:

Hey @shijianjian,

What I think would make sense is to review the whole codebase and, at this stage, allow mean and std to be provided only as torch.Tensor.

That way, the conversion from whatever accepted input to torch.Tensor happens in each instance constructor.

This lets you skip all these checks and conversions on every functional call, and keeps the code more maintainable.

Correct. That is exactly what I meant. You only think about tensors for the functional calls, and provide an easier API with nn.Module.

@shijianjian (Member) commented

I agree the functions can work with torch.Tensor, but initialization should support regular Python tuples. It feels more natural to provide mean or std as regular Python objects than as torch.Tensor; why put such a burden on the user?

I don't think it is a good design to convert a tuple, a list, or a number to a torch tensor inside a functional call. First, it reduces efficiency by initializing and destroying small tensors. I think I discussed this with @edgarriba at some point: the Kornia core functions should accept tensors only.

But yes, adding it as syntactic sugar in the nn module discussed in issue #520 would work for me.

The syntactic sugar I meant is supporting different data types like tuples and lists in the modularized (nn.Module) implementations. That way, the mean and std tensors are generated only at the initialisation stage and reused afterwards.

In short, the functional API supports tensors only, while the nn.Module implementation is the more user-friendly choice.
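
As a rough illustration of the overhead in question (a hypothetical micro-benchmark, not part of the PR): rebuilding a small tensor from a tuple on every call pays for allocations that a tensor prepared once in a module avoids.

import time
import torch

data = torch.rand(16, 3, 224, 224)
mean_tuple = (0.485, 0.456, 0.406)
mean_tensor = torch.tensor(mean_tuple).view(1, 3, 1, 1)  # prepared once, as in an nn.Module __init__

def sub_with_conversion(x, mean):
    # converts the tuple into a fresh tensor on every call
    m = torch.tensor(mean, device=x.device, dtype=x.dtype).view(1, -1, 1, 1)
    return x - m

def sub_tensor_only(x, mean):
    # tensor-only functional: no per-call conversion
    return x - mean

t0 = time.perf_counter()
for _ in range(1000):
    sub_with_conversion(data, mean_tuple)
t1 = time.perf_counter()
for _ in range(1000):
    sub_tensor_only(data, mean_tensor)
t2 = time.perf_counter()
print(f"per-call conversion: {t1 - t0:.3f}s, tensor-only: {t2 - t1:.3f}s")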

@JoanFM (Contributor, Author) commented Apr 28, 2021

I agree the functions can work with torch.Tensor, but initialization should support regular Python tuples. It feels more natural to provide mean or std as regular Python objects than as torch.Tensor; why put such a burden on the user?

I don't think it is a good design to convert a tuple, a list, or a number to a torch tensor inside a functional call. First, it reduces efficiency by initializing and destroying small tensors. I think I discussed this with @edgarriba at some point: the Kornia core functions should accept tensors only.

But yes, adding it as syntactic sugar in the nn module discussed in issue #520 would work for me.

The syntactic sugar I meant is supporting different data types like tuples and lists in the modularized (nn.Module) implementations. That way, the mean and std tensors are generated only at the initialisation stage and reused afterwards.

In short, the functional API supports tensors only, while the nn.Module implementation is the more user-friendly choice.

Sorry, I misunderstood.

I totally agree. Would it be good if I do the conversion at the init level, even for float std and mean? That would involve adding an optional num_channels parameter for the case where std or mean are floats.

Would that make sense? I think the effort makes sense, to avoid anything that is not a tensor in the functional calls.

@shijianjian (Member) commented

I totally agree. Would it be good if I do the conversion at the init level, even for float std and mean? That would involve adding an optional num_channels parameter for the case where std or mean are floats.

Yes. All tensors shall be prepared in init. I would discard the float option.

Would that make sense? I think the effort makes sense, to avoid anything that is not a tensor in the functional calls.

Yes. Much better.
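
Putting the agreed design together, a minimal sketch (names chosen for illustration; not the merged kornia code) keeps the functional tensor-only and does all conversion in the module constructor:

import torch
import torch.nn as nn

def normalize(data: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    # tensor-only functional: no isinstance checks or conversions per call
    shape = [1, -1] + [1] * (data.dim() - 2)  # broadcast over the channel dimension
    return (data - mean.view(shape)) / std.view(shape)

class Normalize(nn.Module):
    def __init__(self, mean, std) -> None:
        super().__init__()
        # user-friendly inputs (tuples/lists) are converted exactly once, here
        if isinstance(mean, (tuple, list)):
            mean = torch.tensor(mean)
        if isinstance(std, (tuple, list)):
            std = torch.tensor(std)
        self.register_buffer("mean", mean)  # buffers follow .to(device)/.cuda()
        self.register_buffer("std", std)

    def forward(self, data: torch.Tensor) -> torch.Tensor:
        return normalize(data, self.mean, self.std)

out = Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))(torch.rand(2, 3, 4, 4))

Storing mean and std as buffers means device moves and state_dict serialization come for free, which is the "stored properly in the module" point above.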

@edgarriba (Member) left a comment

@JoanFM check mypy - something went wrong

@edgarriba (Member) commented

@JoanFM you just need to rebase from master to fix the latest pytorch 1.9 formatting issues

@edgarriba added the labels enhancement 🚀 (Improvement over an existing functionality) and module: enhance on Jun 27, 2021
Review threads (outdated, resolved): test/augmentation/test_augmentation.py, test/enhance/test_normalize.py
@edgarriba merged commit 5bd2bf3 into kornia:master on Jun 27, 2021
@shijianjian mentioned this pull request on Jul 6, 2021
edgarriba added a commit to edgarriba/kornia that referenced this pull request Jul 6, 2021
* add option for mean and std to be tuples

* Update kornia/augmentation/augmentation.py

Co-authored-by: Edgar Riba <edgar.riba@gmail.com>

* Update kornia/augmentation/augmentation.py

Co-authored-by: Edgar Riba <edgar.riba@gmail.com>

* Update kornia/enhance/normalize.py

Co-authored-by: Edgar Riba <edgar.riba@gmail.com>

* Update kornia/enhance/normalize.py

Co-authored-by: Edgar Riba <edgar.riba@gmail.com>

* Update test/augmentation/test_augmentation.py

Co-authored-by: Edgar Riba <edgar.riba@gmail.com>

* Update test/augmentation/test_augmentation.py

Co-authored-by: Edgar Riba <edgar.riba@gmail.com>

* Update test/augmentation/test_augmentation.py

Co-authored-by: Edgar Riba <edgar.riba@gmail.com>

* Update kornia/enhance/normalize.py

Co-authored-by: Edgar Riba <edgar.riba@gmail.com>

* Update kornia/augmentation/augmentation.py

Co-authored-by: Edgar Riba <edgar.riba@gmail.com>

* keep old imports

* Apply suggestions from code review

Co-authored-by: Edgar Riba <edgar.riba@gmail.com>

* fix: accept only tensors in function

* change shape check

* fix formatting

* check validity of num_channels

* add test for enhance normalize

* remove num_channels option

* try to fix linting

* fix linting

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* change other Normalize class and test

* fix tests and mean shape assertions

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix augmentation tests

* fix smoke test

* merge isinstance

* fix doctests and python checks

* fix doc tests

* Apply suggestions from code review

Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Labels: 1 Priority 1 🚨 High priority, enhancement 🚀 Improvement over an existing functionality, module: enhance
Projects: none yet
Linked issues: none yet
4 participants