add option for mean and std to be tuples #987
Conversation
Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
add this extra test …to normalize_tuple
Please remove those prints first.
In principle, I think it is better to keep all functional functions in a tensor-only setting, with fewer tensor initialisations. The tuple might be provided as syntax sugar for the nn.Modules, in which tensors can be initialised once and stored properly in the module.
Sure, the prints are leftovers: I am not sure why, but my test was not reaching the function I expected, and I still do not understand it. But yes, I agree with adding it as …
kornia/enhance/normalize.py (outdated)

```diff
@@ -99,6 +99,14 @@ def normalize(
     if isinstance(std, float):
         std = torch.tensor([std] * shape[1], device=data.device, dtype=data.dtype)

+    if isinstance(mean, tuple):
```
Hey @shijianjian.
What I think would make sense is to review everywhere in the code and allow `mean` and `std` to be provided only as `torch.Tensor` at this stage. So the conversion from whatever accepted input to `torch.Tensor` happens in every instance constructor. Like this you can skip all these checks and conversions at every functional invocation, and keep the code more maintainable.
Actually, I am not sure: are these functions meant to be used by users from their own code (i.e. public), or only by the internal kornia objects (therefore private)? If they are internal, I think this strategy should be followed in general, and the code will perhaps be better unit-tested as well?
> Hey @shijianjian. What I think would make sense is to review everywhere in the code and allow `mean` and `std` to be provided only as `torch.Tensor` at this stage. So the conversion from whatever accepted input to `torch.Tensor` happens in every instance constructor. Like this you can skip all these checks and conversions at every functional invocation, and keep the code more maintainable.
Correct. That is exactly what I meant. You only think about tensors for functional calls, and provide an easy API with nn.Module.
I don't think it is a good design to convert a tuple, a list, or a number to a torch tensor inside a functional call. First, it reduces efficiency by initialising and destroying small tensors. I think I discussed this with @edgarriba at some point: the kornia core functions should accept tensors only.
The syntax sugar I meant is to support different data types like tuples and lists in the modularized implementations, so that the mean and std are generated only at the initialisation stage and reused afterwards. In short, the functional functions support tensors only, while the nn.Module implementation is the more user-friendly choice.
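The split described in this thread might look like the following sketch. This is hypothetical code, not the actual kornia implementation: it assumes a tensor-only functional `normalize` and an `nn.Module` wrapper that performs the tuple conversion once in `__init__`:

```python
import torch

def normalize(data: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    # Tensor-only functional core: no isinstance checks and no tensor
    # creation inside the hot path.
    return (data - mean.view(1, -1, 1, 1)) / std.view(1, -1, 1, 1)

class Normalize(torch.nn.Module):
    # User-facing module ("syntax sugar"): accepts floats, tuples, or
    # tensors, converts once at initialisation, and reuses the stored
    # tensors on every forward call.
    def __init__(self, mean, std):
        super().__init__()
        self.register_buffer("mean", torch.as_tensor(mean, dtype=torch.float32).reshape(-1))
        self.register_buffer("std", torch.as_tensor(std, dtype=torch.float32).reshape(-1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return normalize(x, self.mean, self.std)

x = torch.rand(2, 3, 4, 4)
# module path: tuples accepted, converted exactly once in __init__
out = Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))(x)
```

Using `register_buffer` here is one possible design choice: it makes the stored mean/std move along with the module on `.to(device)` calls, which avoids the per-call `device=` handling visible in the functional diff above.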
Sorry, I misunderstood. I totally agree. Would it be good if I do the … Would that make sense? I think it is an effort that makes sense, to avoid anything not …
Yes. All tensors shall be prepared in `__init__`. I would discard the float option.
Yes. Much better.
@JoanFM check the mypy - something went wrong
for more information, see https://pre-commit.ci
…to normalize_tuple
@JoanFM you just need to rebase from master to fix the latest pytorch 1.9 formatting issues
* add option for mean and std to be tuples
* Update kornia/augmentation/augmentation.py
  Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
* Update kornia/augmentation/augmentation.py
  Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
* Update kornia/enhance/normalize.py
  Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
* Update kornia/enhance/normalize.py
  Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
* Update test/augmentation/test_augmentation.py
  Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
* Update test/augmentation/test_augmentation.py
  Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
* Update test/augmentation/test_augmentation.py
  Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
* Update kornia/enhance/normalize.py
  Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
* Update kornia/augmentation/augmentation.py
  Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
* keep old imports
* Apply suggestions from code review
  Co-authored-by: Edgar Riba <edgar.riba@gmail.com>
* fix: accept only tensors in function
* change shape check
* fix formatting
* check validity of num_channels
* add test for enhance normalize
* remove num_channels option
* try to fix linting
* fix linting
* [pre-commit.ci] auto fixes from pre-commit.com hooks
  for more information, see https://pre-commit.ci
* change other Normalize class and test
* fix tests and mean shape assertions
* [pre-commit.ci] auto fixes from pre-commit.com hooks
  for more information, see https://pre-commit.ci
* fix augmentation tests
* fix smoke test
* merge isinstance
* fix doctests and python checks
* fix doc tests
* Apply suggestions from code review
  Co-authored-by: Edgar Riba <edgar.riba@gmail.com>

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Description

I propose to allow the user to instantiate a `Normalize` object with `mean` and `std` as tuples.

Status

Ready

Types of changes