
Commit

Switch to Gradient Sensitive Loss
igv committed Sep 21, 2018
1 parent f29b854 commit d532b0c
Showing 3 changed files with 11 additions and 3 deletions.
6 changes: 5 additions & 1 deletion ESPCN.py
@@ -52,4 +52,8 @@ def prelu(self, _x, i):
         return tf.nn.relu(_x) - alphas * tf.nn.relu(-_x)
 
     def loss(self, Y, X):
-        return tf.reduce_mean(tf.sqrt(tf.square(X - Y) + 1e-6)) + (1.0 - tf_ssim(Y, X)) * 0.5
+        dY = tf.image.sobel_edges(Y)
+        dX = tf.image.sobel_edges(X)
+        M = tf.sqrt(tf.square(dY[:,:,:,:,0]) + tf.square(dY[:,:,:,:,1]))
+        return tf.losses.absolute_difference(dY, dX) \
+            + tf.losses.absolute_difference((1.0 - M) * Y, (1.0 - M) * X, weights=2.0)
6 changes: 5 additions & 1 deletion FSRCNN.py
@@ -85,4 +85,8 @@ def prelu(self, _x, i):
         return tf.nn.relu(_x) - alphas * tf.nn.relu(-_x)
 
     def loss(self, Y, X):
-        return tf.reduce_mean(tf.sqrt(tf.square(X - Y) + 1e-6)) + (1.0 - tf_ssim(Y, X)) * 0.5
+        dY = tf.image.sobel_edges(Y)
+        dX = tf.image.sobel_edges(X)
+        M = tf.sqrt(tf.square(dY[:,:,:,:,0]) + tf.square(dY[:,:,:,:,1]))
+        return tf.losses.absolute_difference(dY, dX) \
+            + tf.losses.absolute_difference((1.0 - M) * Y, (1.0 - M) * X, weights=2.0)
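Both ESPCN.py and FSRCNN.py replace the previous Charbonnier + SSIM loss with the same gradient-sensitive loss: an L1 term on the Sobel gradients of the two images, plus an L1 term on the images themselves that is attenuated by the target's edge magnitude (so flat regions carry more weight). A minimal standalone sketch of the same formula, assuming TensorFlow 1.8+ and hypothetical placeholder shapes (the free-function name gradient_sensitive_loss and the 32x32 single-channel patches are chosen here for illustration, not taken from the repo):

import tensorflow as tf

def gradient_sensitive_loss(Y, X):
    # Sobel gradients, shape [batch, height, width, channels, 2] (dy, dx)
    dY = tf.image.sobel_edges(Y)
    dX = tf.image.sobel_edges(X)
    # Edge-magnitude map of the target Y: large near edges, small in flat regions
    M = tf.sqrt(tf.square(dY[:, :, :, :, 0]) + tf.square(dY[:, :, :, :, 1]))
    # L1 on the gradients, plus a double-weighted L1 on the images scaled by
    # (1 - M), which emphasizes flat regions over already edge-dominated ones
    return tf.losses.absolute_difference(dY, dX) \
        + tf.losses.absolute_difference((1.0 - M) * Y, (1.0 - M) * X, weights=2.0)

# Hypothetical usage with single-channel patches in [0, 1]
Y = tf.placeholder(tf.float32, [None, 32, 32, 1])  # ground-truth patch
X = tf.placeholder(tf.float32, [None, 32, 32, 1])  # network output
loss = gradient_sensitive_loss(Y, X)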
2 changes: 1 addition & 1 deletion README.md
@@ -3,7 +3,7 @@ TensorFlow implementation of the Fast Super-Resolution Convolutional Neural Netw
 
 ## Prerequisites
 * Python 3
-* TensorFlow-gpu >= 1.3
+* TensorFlow-gpu >= 1.8
 * CUDA & cuDNN >= 6.0
 * Pillow
 * ImageMagick (optional)
