
Gb/lr loss #185

Merged 6 commits into main from gb/lr_loss on Feb 14, 2024

Conversation

grantbuster
Member

@bnb32 your success with the ERA wind model had me thinking about allowing the model more freedom to create its own realistic high-res fields, but with the climate change applications I still want to adhere pretty strictly to the bias-corrected climatology. This low-res loss function was something Ryan or Malik had used in prior work (I think the diversity GAN?) and I think it makes a lot of sense given the reasoning above. I also added an extremes feature to the lr loss, but honestly initial tests are showing the lr loss alone having very good dynamic range.

@bnb32
Collaborator

bnb32 commented Feb 14, 2024

This is an interesting idea. Do you have a link to the paper? I'm curious what stops the generator from just repeating pixels in the high res so that the coarse version agrees exactly with the low res.

EDIT: oh, nvm. I see that you're still using the high res and synthetic. I thought you were coarsening the synthetic and computing loss with the lr input.

@@ -361,6 +362,35 @@ def test_s_enhance_3D_no_obs(s_enhance):
f].mean())


def test_t_coarsen():
Collaborator


good addition :)

Member Author


Haha yeah, I can't believe we didn't have an explicit test for this previously! I was so nervous that this test would fail!

Collaborator


lmao that would have been nuts

@grantbuster
Member Author

I don't have a paper, I just remember a conversation from a year or two ago. Well, we're using high-res and synthetic, but yes, we ARE coarsening both the synthetic and the high-res and computing loss on the low-res fields (basically the same as comparing the coarsened synthetic against the lr input). The discriminator is what stops the generator from making a high-res blocky mess that agrees with the low res.
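The scheme described here (coarsen both the synthetic and the true high-res fields, then compare on the low-res grid) can be sketched with a minimal numpy helper. The function names and mean-based block coarsening below are illustrative assumptions, not sup3r's actual implementation:

```python
import numpy as np

def spatial_coarsen(arr, s_enhance):
    """Block-average an (obs, lat, lon, features) array over
    s_enhance x s_enhance spatial windows."""
    o, h, w, f = arr.shape
    assert h % s_enhance == 0 and w % s_enhance == 0
    return arr.reshape(o, h // s_enhance, s_enhance,
                       w // s_enhance, s_enhance, f).mean(axis=(2, 4))

def lr_content_loss(synthetic, high_res, s_enhance):
    """MAE between the coarsened synthetic and the coarsened true
    high-res field. When the lr input was produced by the same mean
    coarsening, this is equivalent to comparing the coarsened
    synthetic against the lr input directly."""
    diff = (spatial_coarsen(synthetic, s_enhance)
            - spatial_coarsen(high_res, s_enhance))
    return np.abs(diff).mean()
```

Note that a synthetic field can disagree with the truth pixel-by-pixel and still score zero on this loss as long as its block means match; that is exactly the high-res freedom the discriminator is left to police.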

@bnb32
Collaborator

bnb32 commented Feb 14, 2024

Right, the disc still sees high res! I was thinking that we could get away with just storing coarsened hr with this loss function, but I forgot about good old disc.

"""Test temporal coarsening of 5D array"""
t_enhance = 4
hr_shape = (3, 10, 10, 48, 2)
arr = np.random.uniform(-1, 1, )
Collaborator


looks like you have a leftover here @grantbuster

Member Author


dang, and just as the tests were almost done! haha no worries, I'll fix
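For reference, the temporal coarsening that test_t_coarsen exercises can be sketched as a reshape-and-mean over non-overlapping time windows. This is a hypothetical numpy helper matching the test's shapes, not necessarily sup3r's implementation:

```python
import numpy as np

def temporal_coarsen(arr, t_enhance):
    """Average non-overlapping windows of t_enhance steps along the
    time axis of a 5D (obs, lat, lon, time, features) array."""
    o, h, w, t, f = arr.shape
    assert t % t_enhance == 0, 'time axis must divide evenly by t_enhance'
    return arr.reshape(o, h, w, t // t_enhance, t_enhance, f).mean(axis=4)

# same shapes as in the test: 48 time steps coarsened by a factor of 4
arr = np.random.uniform(-1, 1, (3, 10, 10, 48, 2))
coarse = temporal_coarsen(arr, 4)
# coarse.shape == (3, 10, 10, 12, 2)
```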

@grantbuster grantbuster merged commit e5b3ab6 into main Feb 14, 2024
8 checks passed
@grantbuster grantbuster deleted the gb/lr_loss branch February 14, 2024 23:42
github-actions bot pushed a commit that referenced this pull request Feb 14, 2024
@malihass
Collaborator

malihass commented Feb 15, 2024

I'm being a fly on the wall here!

@grantbuster @bnb32 You are talking about this ref? https://arxiv.org/pdf/2111.05962.pdf (maybe Eq 14?)
Doing the content loss on the low-res field (instead of high res) forces the generator to adhere to the low-res observation and gives it freedom at the high-res level. The idea was that the gen sticks to the little observation available and is free to create features that the discriminator likes at the high-res level, so this strategy promotes diversity of the high-res features given the same low-res data. The downside is that, as you immediately noted, you need to balance the adversarial and content losses well, otherwise you end up with the high-res blocky problem.

If you do the content loss on the high-res data, then there is no high-res blocky issue, but you might kill the diversity if you try to generate many different high res fields. The upshot is stability and probably higher quality generated fields.

I think it is nice to have that content loss on low res data available now if we want to try some diverse field generation stuff!
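The balancing act described above can be made concrete with a toy combined objective. The function and the weight value here are purely illustrative, not sup3r's actual training loss: too small an adversarial weight leaves the generator free to emit a blocky field that merely satisfies the low-res constraint, while too large a weight lets it drift from the bias-corrected climatology.

```python
def generator_loss(lr_content, adversarial, weight_adv=1e-3):
    """Toy combined generator objective: the low-res content term
    anchors the coarse climatology, the adversarial term rewards
    realistic high-res structure, and weight_adv is the knob that
    must be tuned to avoid blockiness or climatology drift."""
    return lr_content + weight_adv * adversarial
```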

@grantbuster
Member Author

Yes, exactly that! I think for the climate change applications this will be a good option to have, because the GCM data may deviate from the coarsened WTK/NSRDB, so giving the generator more freedom to create the high-res fields may result in higher output quality overall.
