Unifying Pre- and Postprocessing #61
Conversation
The problem is that the `reference_tensor` in `scale_range` and `scale_mean_variance` only makes sense in post-processing. So it is a bit awkward to unify pre- and postprocessing and then have special cases for it in each function.
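To make the awkwardness concrete, here is a minimal numpy sketch of how a `scale_range` operation with an optional `reference_tensor` kwarg might behave. The signature, percentile defaults, and `eps` handling are assumptions for illustration, not the spec's actual definition; the point is that the reference only exists in post-processing, where an earlier tensor is available to draw statistics from.

```python
import numpy as np

def scale_range(tensor, min_percentile=0.0, max_percentile=100.0,
                reference_tensor=None, eps=1e-6):
    # In post-processing, percentile statistics may come from a reference
    # (e.g. input) tensor; in pre-processing there is no earlier tensor to
    # reference, so the kwarg would be meaningless there.
    ref = tensor if reference_tensor is None else reference_tensor
    lo = np.percentile(ref, min_percentile)
    hi = np.percentile(ref, max_percentile)
    return (tensor - lo) / (hi - lo + eps)
```

With no `reference_tensor` the tensor is scaled by its own percentiles; with one, the reference's percentiles are used instead.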
@FynnBe let me know when you have made all the changes and I should have a look again.

I think it's all up to date. I left …
I think you forgot `scale_mean_variance` in preprocessing.
- `reference_implementation`
- `scale_min_max` scale the tensor s.t. its min and max match a reference tensor
- `scale_mean_variance` scale the tensor s.t. its mean and variance match a reference tensor
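The two reference-based operations from the list above could be sketched in numpy as follows. This is a hedged illustration of the described semantics, not the spec's reference implementation; the function signatures and the `eps` guard against division by zero are assumptions.

```python
import numpy as np

def scale_min_max(tensor: np.ndarray, reference_tensor: np.ndarray,
                  eps: float = 1e-6) -> np.ndarray:
    """Scale `tensor` so its min and max match those of `reference_tensor`."""
    t_min, t_max = tensor.min(), tensor.max()
    r_min, r_max = reference_tensor.min(), reference_tensor.max()
    normalized = (tensor - t_min) / (t_max - t_min + eps)
    return normalized * (r_max - r_min) + r_min

def scale_mean_variance(tensor: np.ndarray, reference_tensor: np.ndarray,
                        eps: float = 1e-6) -> np.ndarray:
    """Scale `tensor` so its mean and variance match those of `reference_tensor`."""
    standardized = (tensor - tensor.mean()) / (tensor.std() + eps)
    return standardized * reference_tensor.std() + reference_tensor.mean()
```

In both cases the only extra piece of information needed beyond the tensor itself is the reference tensor.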
Also refer to the preprocessing here.
Should `scale_mean_variance` exist for preprocessing? It is different from `zero_mean_unit_variance`.
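The difference the comment points at can be shown in a few lines of numpy. This is an illustrative sketch, not spec code: `zero_mean_unit_variance` uses only the tensor's own statistics, while `scale_mean_variance` additionally rescales to a second tensor's statistics.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
reference = np.array([10.0, 20.0, 30.0])

# zero_mean_unit_variance needs no reference: it standardizes x
# using x's own mean and standard deviation.
zmuv = (x - x.mean()) / x.std()

# scale_mean_variance then matches the reference tensor's statistics,
# which is why it needs the reference_tensor kwarg at all.
smv = zmuv * reference.std() + reference.mean()
```

So the operations are related but not interchangeable: only the second one depends on a reference tensor.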
I think for `scale_mean_variance` we only need to know the reference tensor, correct? Then it is all correct as it is right now, because we actually don't need to know any other kwargs here.
But then …

same as preprocessing is now only specified for …

actually for …
Ok, I think I understand it now and it should all be correct: for `scale_mean_variance` we only need to know the `reference_tensor` as kwarg, so we actually don't need to refer to any other kwargs.
The …

True, but we have that already. This is good to go now and I will merge.