How should I make the input data set for DINCAE? #17

Open
Wt197012 opened this issue Nov 22, 2023 · 5 comments

@Wt197012

Hello, I am a beginner. How should I prepare the input data set for DINCAE? I currently have several global SST NetCDF data sets. Should I merge them into one NetCDF file and then build the data set that DINCAE requires from it? I am confused about this because of create_input_file.py in the examples.

@Alexander-Barth
Member

Yes, you need to create a single merged NetCDF file.
The structure of the file is described here:
https://gher-uliege.github.io/DINCAE/
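
For reference, a minimal sketch of the merging step using xarray; the file pattern, dimension names, and output file name below are assumptions (the official example is create_input_file.py), so adjust them to your data:

```python
import xarray as xr

# Minimal sketch (assumptions: daily files named sst_*.nc sharing the
# dimensions time/lat/lon). Open all files as one dataset, concatenated
# along the time dimension.
ds = xr.open_mfdataset("sst_*.nc", combine="by_coords")

# DINCAE expects the dimension order (time, lat, lon); reorder if needed.
ds = ds.transpose("time", "lat", "lon")

# Write everything into a single merged NetCDF file.
ds.to_netcdf("merged_sst.nc")
```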

@Wt197012
Author

Thank you for your guidance, I will experiment further. Thank you!

@Wt197012
Author

Hello, after I created the NetCDF file, the run_DINCAE program ran normally, but the following situation occurred:
varname CHL (240, 240)
data shape: (36, 240, 240)
data range: nan nan
sz (36, 240, 240)
sz (36, 240, 240)
2023-11-28 22:25:05.965220: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Number of input variables: 10
regularization_L2_beta 0
enc_nfilter_internal [16, 24, 36, 54]
nvar 10
nepoch_keep_missing 0

encoder: output size of convolutional layer: 1 (?, 240, 240, 16)
encoder: output size of pooling layer: 1 (?, 120, 120, 16)
encoder: output size of convolutional layer: 2 (?, 120, 120, 24)
encoder: output size of pooling layer: 2 (?, 60, 60, 24)
encoder: output size of convolutional layer: 3 (?, 60, 60, 36)
encoder: output size of pooling layer: 3 (?, 30, 30, 36)
encoder: output size of convolutional layer: 4 (?, 30, 30, 54)
encoder: output size of pooling layer: 4 (?, 15, 15, 54)
ndensein 12150
dense layer: output units: 0 (?, 2430)
dense layer: output units: 1 (?, 12150)
decoder: output size of upsample layer: 1 (?, 30, 30, 54)
skip connection at 1
decoder: output size of concatenation: 1 (?, 30, 30, 90)
decoder: output size of convolutional layer: 1 (?, 30, 30, 36)
decoder: output size of upsample layer: 2 (?, 60, 60, 36)
skip connection at 2
decoder: output size of concatenation: 2 (?, 60, 60, 60)
decoder: output size of convolutional layer: 2 (?, 60, 60, 24)
decoder: output size of upsample layer: 3 (?, 120, 120, 24)
skip connection at 3
decoder: output size of concatenation: 3 (?, 120, 120, 40)
decoder: output size of convolutional layer: 3 (?, 120, 120, 16)
decoder: output size of upsample layer: 4 (?, 240, 240, 16)
skip connection at 4
decoder: output size of concatenation: 4 (?, 240, 240, 26)
decoder: output size of convolutional layer: 4 (?, 240, 240, 2)
Epoch: 1/100... Training loss: nan RMS: nan 0.001
Save output 0
Epoch: 2/100... Training loss: nan RMS: nan 0.001
Epoch: 3/100... Training loss: nan RMS: nan 0.001
Epoch: 4/100... Training loss: nan RMS: nan 0.001
Epoch: 5/100... Training loss: nan RMS: nan 0.001
Epoch: 6/100... Training loss: nan RMS: nan 0.001
Epoch: 7/100... Training loss: nan RMS: nan 0.001
Epoch: 8/100... Training loss: nan RMS: nan 0.001
Epoch: 9/100... Training loss: nan RMS: nan 0.001
Epoch: 10/100... Training loss: nan RMS: nan 0.001
Epoch: 11/100... Training loss: nan RMS: nan 0.001
Save output 10
Epoch: 12/100... Training loss: nan RMS: nan 0.001
Epoch: 13/100... Training loss: nan RMS: nan 0.001
Epoch: 14/100... Training loss: nan RMS: nan 0.001
Epoch: 15/100... Training loss: nan RMS: nan 0.001
Epoch: 16/100... Training loss: nan RMS: nan 0.001
Epoch: 17/100... Training loss: nan RMS: nan 0.001
Epoch: 18/100... Training loss: nan RMS: nan 0.001
Epoch: 19/100... Training loss: nan RMS: nan 0.001
Epoch: 20/100... Training loss: nan RMS: nan 0.001
Epoch: 21/100... Training loss: nan RMS: nan 0.001
Save output 20

As a result, my output NetCDF files contain only NaN values. Is there something wrong with the NetCDF file I created? Thank you!
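
For what it's worth, the line `data range: nan nan` in the log above suggests that every value of CHL is read as missing, which by itself would make the training loss NaN. A quick check along these lines (the file name is a placeholder) can confirm it:

```python
import numpy as np
import xarray as xr

# If nanmin/nanmax are both NaN, there is not a single finite value in the
# variable, and the training loss will be NaN too. File name is a placeholder.
ds = xr.open_dataset("merged_chl.nc")
chl = ds["CHL"].values

print("finite fraction:", np.isfinite(chl).mean())  # should be > 0
print("range:", np.nanmin(chl), np.nanmax(chl))     # should be finite numbers
```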

@Wt197012
Author

I'm sorry, I had not converted the NaN values to the fill value. But is it normal for my loss to be negative during training?
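
A minimal sketch of that fix, assuming the file is rewritten with xarray (the file and variable names are placeholders): the values stay NaN in memory, but the encoding tells NetCDF to store them under a declared fill value.

```python
import xarray as xr

# Sketch of the fill-value fix (file/variable names are placeholders).
# xarray represents missing data as NaN in memory; the encoding below makes
# to_netcdf store those points as -9999.0 and write the matching _FillValue
# attribute, so readers flag them as missing rather than as literal NaN.
ds = xr.open_dataset("merged_chl.nc")
ds.to_netcdf(
    "merged_chl_filled.nc",
    encoding={"CHL": {"_FillValue": -9999.0, "dtype": "float32"}},
)
```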

@Alexander-Barth
Member

Yes, this is normal. The loss is not the mean square error but the negative log likelihood; see equation 3:
https://gmd.copernicus.org/articles/15/2183/2022/
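
Paraphrasing the structure of that cost function (see the paper for the exact notation), the per-pixel negative log likelihood of a Gaussian with predicted mean ŷ and standard deviation σ̂ is:

```latex
-\log p\left(y \mid \hat{y}, \hat{\sigma}\right)
  = \frac{1}{2}\left(\frac{y - \hat{y}}{\hat{\sigma}}\right)^2
  + \log \hat{\sigma} + \frac{1}{2}\log(2\pi)
```

The log σ̂ term is unbounded below, so once the network becomes confident (σ̂ < 1) the loss can drop below zero; a negative and decreasing training loss is therefore expected.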
