SSIM result is different from pytorch-msssim.ssim #5538
I think the default settings in scikit-image likely do not match those in pytorch-msssim.
This type of thing has come up in the past, and we have done a decent amount of validation against third-party implementations (MATLAB and a C-based implementation published as part of an IPOL paper). Thus the implementation here could also be compared to PyTorch Ignite's SSIM.
Never mind, I have reopened this as it looks like pytorch-msssim can return both MS-SSIM and SSIM values, and you seem to be correctly comparing SSIM. Still, I would try comparing against other implementations, such as the PyTorch Ignite one, to see if it also differs from those. If your image is very small, one other source of potential difference is boundary pixels. Our implementation discards a small boundary at the image edge when computing the mean; for images of, say, 256x256 this will not make a large difference, but for tiny images such as 16x16 it might substantially change the computed value.
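The boundary effect can be seen directly by asking skimage for the full per-pixel SSIM map (`full=True`) and comparing its overall mean with the returned score. A minimal sketch, using random data and the Gaussian-window settings discussed in this thread (with `sigma=1.5`, skimage picks an 11x11 window, so a 5-pixel border is discarded):

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
img = rng.random((16, 16))
noisy = np.clip(img + 0.1 * rng.standard_normal(img.shape), 0.0, 1.0)

# full=True also returns the per-pixel SSIM map
score, ssim_map = structural_similarity(
    img, noisy, data_range=1.0,
    gaussian_weights=True, sigma=1.5, use_sample_covariance=False,
    full=True)

pad = (11 - 1) // 2  # half the 11x11 window: the border skimage discards
print(score)            # mean over the cropped interior only
print(ssim_map.mean())  # mean over the whole map, border included
```

On a 16x16 image the cropped interior is only 6x6, so the two means can differ noticeably; on 256x256 the border is a tiny fraction of the map.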
As another test, you could try running this test case, but using pytorch-msssim: scikit-image/skimage/metrics/tests/test_structural_similarity.py, lines 193 to 214 at 43d79b8. The original author's MATLAB script also gave the same value to a few digits, but I think we dropped that test case as redundant when we updated our test suite.
Hi, I uploaded the complete code below.
Results from pytorch_msssim.ssim: tensor([0.9836, 0.9678, 0.9323, 0.8911, 0.8668, 0.8392, 0.8136, 0.7872, 0.7673, ...])
My two cents: isn't it a dtype issue? NumPy defaults to float64, while PyTorch defaults to float32:

```python
>>> np.zeros(5).dtype
dtype('float64')
>>> torch.zeros(5).dtype
torch.float32
```
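If dtypes are the suspect, one way to bound their effect is to measure how much a float32 round-trip perturbs float64 data before it ever reaches either SSIM routine. A NumPy-only sketch (the array shape and values are arbitrary placeholders):

```python
import numpy as np

x = np.random.default_rng(0).random((64, 64))  # float64, NumPy's default
x32 = x.astype(np.float32)                     # PyTorch's default precision

# Maximum perturbation introduced by the downcast: on the order of
# float32 machine epsilon (~1.2e-7), far smaller than the SSIM gaps
# reported in this issue.
err = np.abs(x32.astype(np.float64) - x).max()
print(err)
```

If the downcast error is orders of magnitude below the observed SSIM difference, the dtype alone cannot explain the discrepancy.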
Can I be assigned to it?
Comments moved over from the Discussion (we are planning to disable Discussions again and just keep the issues page). rfezzani:
Pixie412:
rfezzani:
```python
import torch
import numpy as np
import os
from pytorch_msssim import ssim as ssim_func
from skimage.metrics import peak_signal_noise_ratio as psnr_metric
from skimage.metrics import structural_similarity as ssim_metric
from PIL import Image


def eval_seq_skimage(gt, pred):
    T = len(gt)
    ssim = np.zeros(T)
    for t in range(T):
        ssim[t] = ssim_metric(gt[t], pred[t], win_size=11, sigma=1.5,
                              use_sample_covariance=False, gaussian_weights=True,
                              multichannel=True, data_range=255)  # (h, w, 3)
    return ssim


def eval_ssim_pytorch(gt, pred):
    # (n, 3, h, w)
    T = len(gt)
    ssim = torch.zeros(T)
    for t in range(T):
        ssim[t] = ssim_func(torch.tensor(gt[t]).permute(2, 0, 1).unsqueeze(0),
                            torch.tensor(pred[t]).permute(2, 0, 1).unsqueeze(0),
                            data_range=255, win_size=11)  # (1, 3, h, w)
    return ssim


pred_path = 'attention_compare/NewUnet_Perceptual_AttentionLoss/3432_pred'
gt_path = 'attention_compare/NewUnet_Perceptual_AttentionLoss/3432_gt'
gt_list = []
pred_list = []
for i in range(len(os.listdir(pred_path))):
    pred_img = Image.open(os.path.join(pred_path, str(i) + '.jpg'))  # 64*64
    pred_img = np.array(pred_img)   # (64, 64, 3)
    pred_img = pred_img / 255.0     # values in (0, 1)
    pred_list.append(pred_img)
    gt_img = Image.open(os.path.join(gt_path, str(i) + '.jpg'))
    gt_img = np.array(gt_img)
    gt_img = gt_img / 255.0         # (h, w, 3)
    gt_list.append(gt_img)

ssim_skimage = eval_seq_skimage(gt_list, pred_list)
ssim_pytorch = eval_ssim_pytorch(gt_list, pred_list)
print(ssim_skimage[-30:])
print(ssim_pytorch[-30:])
```
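One thing worth noting in the reproduction above: the images are scaled to [0, 1] before the comparison, yet both calls pass `data_range=255`. SSIM's stabilizing constants scale with `data_range` squared, so an overstated range pushes scores toward 1 and can mask real differences between implementations. A NumPy/skimage-only sketch of the effect, with random data standing in for the JPEG frames:

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # values in [0, 1], like the normalized frames
noisy = np.clip(img + 0.05 * rng.standard_normal(img.shape), 0.0, 1.0)

matched = structural_similarity(img, noisy, data_range=1.0)   # correct range
inflated = structural_similarity(img, noisy, data_range=255)  # mismatched range
print(matched, inflated)  # the mismatched call sits much closer to 1
```

Matching `data_range` to the actual value range of the inputs (here, 1.0 after dividing by 255) is a prerequisite before comparing the two libraries' outputs.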
I moved the discussion back to the issues page. If there is further feedback, we can reopen the issue.
Hi,
I used both pytorch_msssim.ssim and skimage.measure.compare_ssim to compute SSIM, but the results are different. For example, SSIM evaluation on an image sequence:
pytorch_msssim.ssim: [0.9655, 0.9500, 0.9324, 0.9229, 0.9191, 0.9154]
skimage.measure.compare_ssim: [0.97794482, 0.96226299, 0.948432, 0.9386946, 0.93113704, 0.92531453]
Why does this happen?