
Pieapp Loss Score Range #315

Open
beyzacevik opened this issue May 23, 2022 · 4 comments
@beyzacevik
Hello,

Thank you for your contribution. I am using PieAPP as a loss function together with L1 loss. I think the optimum we are trying to reach with the PieAPP loss is 0, since a score of 0 means there is no difference from the reference image. But the score can fall in both the score < 0 and score > 0 ranges for distorted images, so we should use it as abs(PieAPP()). What do you think?

Thanks.

@zakajd
Collaborator

zakajd commented Jun 7, 2022

Hi @beyzacevik
Thanks for noticing. I'll open a PR to fix this.

@shshojaei

shshojaei commented Jul 3, 2023


Hi, I have a question about PieAPP. I use it for image quality assessment, but for one of my (ref. image, pred. image) pairs it gives a negative score (-0.0728).
Q1: What are the max and min values of PieAPP for two given images? When I try (ref. image, ref. image) it gives about -3.
Q2: Is lower better?
Q3: Should I use abs() too?

Thanks

@zakajd
Collaborator

zakajd commented Jul 3, 2023

@shshojaei
Q3: Yes, use the absolute value to avoid negative scores.
Q2: Generally yes, but for values close to zero the metric isn't very stable/monotonic, so you can't confidently say that 1.5 is strictly better than 2.0.
Q1: The max and min values are not defined; it is a neural-net-based metric, so with some unusual inputs the range of the final activations can be quite high.
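The abs() advice from Q3 can be sketched as a small wrapper that turns a signed metric into a non-negative loss, combined with L1 as the original poster described. This is a minimal self-contained sketch: `pieapp_score` below is a hypothetical stand-in for the real PieAPP network (it just returns a signed mean difference), not the actual metric implementation.

```python
def pieapp_score(pred, ref):
    # Hypothetical stand-in for PieAPP: a signed score that can go
    # below zero, like the real metric does for some image pairs.
    return sum(p - r for p, r in zip(pred, ref)) / len(ref)

def pieapp_loss(pred, ref):
    # Wrap the signed score in abs() so the loss is non-negative and
    # its optimum (identical images, score 0) is the minimum.
    return abs(pieapp_score(pred, ref))

def l1_loss(pred, ref):
    # Plain L1 (mean absolute error) between prediction and reference.
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def total_loss(pred, ref, w=0.1):
    # Combined objective: L1 plus a weighted abs-wrapped PieAPP term,
    # as proposed in the first comment. The weight w is an assumption.
    return l1_loss(pred, ref) + w * pieapp_loss(pred, ref)
```

With the real metric you would swap `pieapp_score` for the library's PieAPP call; the abs() wrapping and the weighting pattern stay the same.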

@shshojaei

Thank you for your response.
