
Hi @fsalv #8

Closed · bibuzz opened this issue Jun 23, 2021 · 6 comments
Labels: help wanted (Extra attention is needed)
bibuzz commented Jun 23, 2021

Hi @bibuzz!

Any help would be appreciated

Please clone the repository again; there was an issue in how our code was managing an even number of temporal frames. Now it should work with your 4 channels. Anyway, note that our Temporal Reduction Block works better with an odd number of temporal frames (minimum of 3), so you should consider adding a temporal step to your dataset.
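For example, one minimal way to add such a temporal step is to repeat one of the existing LR frames. This is only a sketch; the array name and its shape are assumptions, not the repository's actual loading code:

```python
import numpy as np

# Hypothetical sketch: go from an even number of temporal frames (4) to an odd
# one (5) by repeating the last low-resolution frame along the temporal axis.
# X_train and its (scenes, height, width, temporal_steps) shape are assumptions.
X_train = np.random.rand(10, 128, 128, 4).astype(np.float32)   # stand-in data
X_train_odd = np.concatenate([X_train, X_train[..., -1:]], axis=-1)
print(X_train_odd.shape)   # (10, 128, 128, 5)
```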

#-------------
MEAN = 7433.6436 # mean of the proba-v dataset
STD = 2353.0723  # std of the proba-v dataset

Since I am using Sentinel images and not the Proba-V dataset, the MEAN and STD will be different, right? How do I calculate them?

Those two values are used in the normalization and denormalization layers, and they were simply computed on all the training images of the dataset. Just compute the mean and standard deviation of your images' pixel values.

Originally posted by @fsalv in #7 (comment)
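As an illustration of that last point, here is a minimal sketch of how the two constants could be computed over a custom dataset. The folder layout in the glob pattern and the use of scikit-image are assumptions:

```python
import glob
import numpy as np
from skimage import io

# Hypothetical sketch: pool every training tile (HR and LR) and take the mean
# and standard deviation over all pixel values. The glob pattern is an
# assumption about the folder layout.
paths = glob.glob('dataset/train/NIR/*/*.png')
pixels = np.concatenate([io.imread(p).ravel().astype(np.float64) for p in paths])

MEAN = pixels.mean()   # replaces the Proba-V value 7433.6436
STD = pixels.std()     # replaces the Proba-V value 2353.0723
print(MEAN, STD)
```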

bibuzz (Author) commented Jun 23, 2021

Hello Sir,

As you suggested, I have added an odd number of temporal LRs (now 5 LRs) for training and calculated the MEAN and STD (averaged over all the HRs and LRs used for training).

But then I get an error in the "build RAMS network" step.

# build rams network
rams_network = RAMS(scale=SCALE, filters=FILTERS,
                    kernel_size=KERNEL_SIZE, channels=CHANNELS, r=R, N=N)

Error

InvalidArgumentError Traceback (most recent call last)
~\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\framework\ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
1879 try:
-> 1880 c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
1881 except errors.InvalidArgumentError as e:

InvalidArgumentError: Dimension size must be evenly divisible by 9 but is 4 for '{{node lambda_729/DepthToSpace}} = DepthToSpace[T=DT_FLOAT, block_size=3, data_format="NHWC"]' with input shapes: [?,?,?,4].

During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last)
in
1 # build rams network
----> 2 rams_network = RAMS(scale=SCALE, filters=FILTERS,
3 kernel_size=KERNEL_SIZE, channels=CHANNELS, r=R, N=N)

D:\SuperResolution\RAMS\utils\network.py in RAMS(scale, filters, kernel_size, channels, r, N)
146 x = conv3d_weightnorm(scale ** 2, (3,3,3), padding='valid')(x)
147 x = layers.Lambda(lambda x: x[...,0,:])(x)
--> 148 x = layers.Lambda(lambda x: tf.nn.depth_to_space(x, 3))(x)
149
150

These are the values for the network settings:

#-------------
# Network Settings
#-------------
FILTERS = 48       # feature maps in the network
KERNEL_SIZE = 3    # convolutional kernel size (for both 3D and 2D convolutions)
CHANNELS = 5       # number of temporal steps
R = 8              # attention compression
N = 12             # number of residual feature attention blocks
lr = 1e-4          # learning rate (Nadam optimizer)
BATCH_SIZE = 48    # batch size
EPOCHS_N = 120     # number of epochs

Is there anything that I am missing?

fsalv (Collaborator) commented Jun 23, 2021

The depth_to_space operation had a hardcoded scale of 3, but in your case it is 2. I have updated network.py; now it should allow any scale.
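For context, a small self-contained sketch of the constraint behind that error (using TensorFlow directly, not the repository's code):

```python
import tensorflow as tf

# Minimal sketch of why the error appeared: depth_to_space with block_size b
# needs the channel dimension to be divisible by b**2. With scale 2 the network
# produces scale**2 = 4 channels, so the block size must be 2, not the
# previously hardcoded 3.
x = tf.random.normal([1, 32, 32, 4])   # stand-in feature map with 4 channels
y = tf.nn.depth_to_space(x, 2)         # works: 4 is divisible by 2**2
print(y.shape)                          # (1, 64, 64, 1)
# tf.nn.depth_to_space(x, 3) would raise the "divisible by 9" error shown above.
```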

@EscVM added the help wanted label on Jun 23, 2021

bibuzz (Author) commented Jun 24, 2021

Hello Sir,

Thank you for the quick response. I had tried changing it to my scale of 2 before, but there was another error; that got solved by changing the network values (R=4 and N=8). I am now trying to get it to train by removing the mask code in training.py and loss.py.

Just a question regarding MEAN and STD for clarity, sir.

It is not the MEAN and STD of the original PNG image before tiling. Instead, the MEAN and STD of each tiled PNG image placed in the train/NIR/ folder (calculated using QGIS gdalinfo -stats) are added and then averaged:

[ imgset0000 (HR_img1Mean + LR_img2Mean + LR_img3Mean + LR_img4Mean + LR_img5Mean) +
  imgset0001 (HR_img1Mean + LR_img2Mean + LR_img3Mean + LR_img4Mean + LR_img5Mean) ] / 10

with 10 being the total number of images.

This is what I took as the MEAN value in network.py. Is this correct? I calculated the STD similarly.

bibuzz (Author) commented Jun 24, 2021

I ran the training and tested too, but the result was distorted. I am not sure how to exclude the masking in loss.py (as I am not including the SM.png and QM.png).
I removed the SSIM altogether, as I was not sure how to alter it.

Original code

l1_loss = (1.0/total_pixels_masked) * tf.reduce_sum(
    tf.abs(
        tf.subtract(cropped_labels_masked,
                    corrected_cropped_predictions)
    ), axis=[1, 2]
)

Altered code

l1_loss = tf.reduce_sum(
    tf.abs(
        cropped_predictions
    ), axis=[1, 2]
)

Output validation image: validationresult (attached)

loss.py: loss.txt (attached)

training.py: training.txt (attached)

The loss displayed is too large and the PSNR is negative:

Epoch 1/100
96/64 [=============================================] - 4s 45ms/step - Loss: 17301928.0000 - PSNR: -9.7797 - Val Loss: 0.0000e+00 - Val PSNR: 0.0000e+00

Any clue would be of help.

fsalv (Collaborator) commented Jun 24, 2021

No, your altered loss is not right: you have completely removed the subtraction with the ground truth, so it can't work. You can hardcode the masks as a matrix of all ones, or you should change the code to avoid the multiplications by cropped_y_mask. (Note also that the QM are only used in the preprocessing stage, while the masks used for loss and metric computations are the SM.)
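For illustration, a minimal sketch of what an unmasked version of that L1 loss could look like, following the suggestion above. This is a sketch under the assumptions noted in the comments, not the repository's actual code:

```python
import tensorflow as tf

# Minimal sketch of an unmasked L1 loss: keep the subtraction with the ground
# truth and treat every pixel as valid (equivalent to an all-ones mask).
# Variable names are taken from the snippets in this thread; the brightness
# correction step of the original loss is omitted here.
def unmasked_l1_loss(cropped_labels, cropped_predictions):
    total_pixels = tf.cast(
        tf.reduce_prod(tf.shape(cropped_labels)[1:3]), tf.float32)   # H * W
    l1_loss = (1.0 / total_pixels) * tf.reduce_sum(
        tf.abs(tf.subtract(cropped_labels, cropped_predictions)),
        axis=[1, 2]
    )
    return l1_loss
```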
