Hi @fsalv #8
Hello Sir, as you suggested I added an odd number of temporal LRs (now 5 LRs) for training and calculated the MEAN and STD (averaged over all the HRs and LRs used for training). But then I get an issue in the "Build RAMS network" step:

```
rams_network = RAMS(scale=SCALE, filters=FILTERS,

InvalidArgumentError Traceback (most recent call last)
InvalidArgumentError: Dimension size must be evenly divisible by 9 but is 4 for
'{{node lambda_729/DepthToSpace}} = DepthToSpace[T=DT_FLOAT, block_size=3,
data_format="NHWC"]' with input shapes: [?,?,?,4].

During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
D:\SuperResolution\RAMS\utils\network.py in RAMS(scale, filters, kernel_size, channels, r, N)
```

These are the values under `#------------- Network Settings -------------` for the network. Is there anything that I am missing?
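For reference, the divisibility constraint in that error comes from how `DepthToSpace` works: with `block_size = 3` it folds `3**2 = 9` channels into one 3×3 spatial block, so a 4-channel input cannot be rearranged. Below is a minimal NumPy sketch of the NHWC behaviour (an illustration, not the repository's code):

```python
import numpy as np

def depth_to_space(x, block_size):
    """NumPy sketch of NHWC DepthToSpace: rearrange channels into
    block_size x block_size spatial blocks. The channel count must be
    divisible by block_size**2."""
    n, h, w, c = x.shape
    b = block_size
    if c % (b * b) != 0:
        raise ValueError(f"channels ({c}) must be divisible by block_size**2 ({b * b})")
    x = x.reshape(n, h, w, b, b, c // (b * b))
    x = x.transpose(0, 1, 3, 2, 4, 5)       # interleave blocks with rows/cols
    return x.reshape(n, h * b, w * b, c // (b * b))

out = depth_to_space(np.zeros((1, 8, 8, 9)), 3)
print(out.shape)   # (1, 24, 24, 1) -- 9 channels fold into a 3x3 block

# A 4-channel tensor with block_size=3 reproduces the error above:
# depth_to_space(np.zeros((1, 8, 8, 4)), 3)  ->  ValueError
```

This is why the network's final layer must output `scale**2` channels per output band before the `DepthToSpace` lambda.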
Hello Sir, thank you for the quick response. I had tried changing it to my scale of 2 before; there was another error, but it got solved. Just a question regarding MEAN and STD for clarity, sir: it's not the MEAN and STD of the original PNG image before tiling. I took the MEAN and STD of each tiled PNG image placed in the train/NIR/ folder only (calculated using QGIS `gdalinfo -stats`), summed them, and then averaged, e.g. for imgset0000: (HR_img1Mean + LR_img2Mean + LR_img3Mean + LR_img4Mean + LR_img5Mean). This I took as the MEAN value in network.py. Is this correct? Similarly calculated for STD.
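To make the two ways of computing the statistics concrete, here is a hypothetical NumPy sketch (`tiles` and its values are placeholders, not real dataset statistics). Pooling every pixel before reducing gives the dataset-wide MEAN/STD; averaging per-image means agrees only when all tiles have the same size, and averaging per-image STDs is not the global STD at all:

```python
import numpy as np

# Placeholder stand-ins for all training images (HR and LR alike).
tiles = [np.full((4, 4), 0.0), np.full((4, 4), 2.0)]

# Dataset-wide statistics: pool every pixel of every image, then reduce.
pixels = np.concatenate([t.ravel() for t in tiles])
MEAN = pixels.mean()   # 1.0
STD = pixels.std()     # 1.0

# Averaging per-image statistics instead:
mean_of_means = np.mean([t.mean() for t in tiles])   # 1.0 (matches, equal-size tiles)
mean_of_stds = np.mean([t.std() for t in tiles])     # 0.0 (global STD is 1.0)
```

So averaging the per-tile `gdalinfo -stats` means is a reasonable approximation for MEAN, but the STD should be computed over the pooled pixel values.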
I ran the training and tested too, but the result was distorted. I am not sure how to exclude the masking (as I am not including SM.png and QM.png) in loss.py. Original code:

```
l1_loss = (1.0/total_pixels_masked)*tf.reduce_sum(
```
No, your altered loss is not right. You have completely removed the subtraction with the ground truth, so it can't work. You can hardcode the masks as a matrix of all ones, or you can change the code to avoid the multiplications by the masks.
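A minimal sketch of the "masks of all ones" option, written as a NumPy analogue of the repository's masked L1 loss (the names `y_true`/`y_pred` are illustrative; the actual loss.py signature may differ):

```python
import numpy as np

def l1_loss_unmasked(y_true, y_pred):
    """Masked L1 loss with the mask hardcoded to all ones, so every pixel
    contributes. Note the loss still subtracts the ground truth --
    removing that subtraction breaks training entirely."""
    mask = np.ones_like(y_true)                  # "all pixels valid"
    total_pixels_masked = mask.sum()
    return (1.0 / total_pixels_masked) * np.sum(mask * np.abs(y_true - y_pred))

print(l1_loss_unmasked(np.ones((2, 4, 4)), np.zeros((2, 4, 4))))  # 1.0
```

With the mask fixed to ones, the multiplications become no-ops and the expression reduces to a plain mean absolute error, which is the behaviour you want without SM.png/QM.png.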
Hi @bibuzz!
Please clone the repository again; there was an issue in how our code was managing an even number of temporal frames. Now it should work with your 4 channels. Anyway, note that our Temporal Reduction Block works better with an odd number of frames (minimum of 3), so you should consider adding a temporal step to your dataset.
Those two values are used in the normalization and denormalization layers; they were simply computed over all the training images of the dataset. Just compute the mean and standard deviation of your images' pixel values.
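A small sketch of how such normalization/denormalization layers behave, assuming illustrative MEAN and STD values (not the real dataset statistics): inputs are standardized before the network and the inverse transform restores the original pixel range at the output.

```python
import numpy as np

MEAN, STD = 7433.6, 729.2   # illustrative placeholders, not the dataset's values

def normalize(x):
    """Standardize pixel values at the network input."""
    return (x - MEAN) / STD

def denormalize(x):
    """Invert the standardization at the network output."""
    return x * STD + MEAN

x = np.array([7000.0, 7433.6, 8200.0])
print(normalize(MEAN))                    # 0.0 -- the mean maps to zero
print(np.allclose(denormalize(normalize(x)), x))  # True -- round trip is exact
```

In the repository these would be wrapped in Keras `Lambda` layers, but the arithmetic is exactly this pair of inverse affine maps.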
Originally posted by @fsalv in #7 (comment)