
Add save and sampling #23

Closed · wants to merge 2 commits
Conversation

@CerebralSeed (Contributor)
This imports the same save and sampling method as used in your denoising diffusion repo.


G_kwargs = dict(batch_size=batches)
all_images_list = list(map(lambda n: self.G(batch_size=n), batches))
print(all_images_list)
@lucidrains (Owner) commented on the diff:

stray print

@lucidrains (Owner)
@CerebralSeed nice! do you want to try doing the sampling for the upsampler too? perhaps the sampled images can have 2 columns, with original -> upsampled. original can be taken from the training dataloader for now, and extra credit would be to support a different validation dataloader specifically for eval
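For reference, a rough sketch of what that two-column grid could look like, assuming torchvision is available and using dataloader and upsampler as hypothetical stand-ins for whatever the trainer actually exposes:

import torch
import torch.nn.functional as F
from torchvision.utils import save_image

num_rows = 4
# assumes the dataloader yields raw image tensors of shape (B, C, H, W)
originals = next(iter(dataloader))[:num_rows]

with torch.no_grad():
    upsampled = upsampler(originals)  # (B, C, H*s, W*s)

# resize the originals so both columns share the same spatial size
originals_up = F.interpolate(originals, size=upsampled.shape[-2:], mode='nearest')

# interleave the pairs so each saved row reads: original -> upsampled
grid = torch.stack([originals_up, upsampled], dim=1).flatten(0, 1)
save_image(grid, 'sample.png', nrow=2)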

@lucidrains (Owner)

have you tested it? i think you should be able to see something with a toy dataset

@CerebralSeed (Contributor, Author)

> have you tested it? i think you should be able to see something with a toy dataset

Did, but I see I forgot to delete a debugging print statement.

@CerebralSeed (Contributor, Author)

> @CerebralSeed nice! do you want to try doing the sampling for the upsampler too? perhaps the sampled images can have 2 columns, with original -> upsampled. original can be taken from the training dataloader for now, and extra credit would be to support a different validation dataloader specifically for eval

I'll give it a try later this week.

@CerebralSeed (Contributor, Author)

@lucidrains Was running some tests, and, like most GANs, there are stability issues: the Generator loss keeps increasing. I noticed in their paper that they were using AdamW (see Table A2 on page 19, where they specify their settings). Or maybe Wasserstein loss would be worth trying?
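For reference, a minimal sketch of both suggestions, swapping in torch.optim.AdamW and a plain Wasserstein-style objective; the hyperparameters are illustrative rather than the paper's Table A2 values, and G / D stand in for the repo's actual modules:

import torch

# illustrative settings only, not the paper's Table A2 values
opt_G = torch.optim.AdamW(G.parameters(), lr=2e-4, betas=(0.0, 0.99), weight_decay=1e-2)
opt_D = torch.optim.AdamW(D.parameters(), lr=2e-4, betas=(0.0, 0.99), weight_decay=1e-2)

def d_loss(real_imgs, fake_imgs):
    # Wasserstein critic loss: score reals high, fakes low
    # (a real WGAN setup also needs a Lipschitz constraint, e.g. a gradient penalty)
    return D(fake_imgs).mean() - D(real_imgs).mean()

def g_loss(fake_imgs):
    # generator tries to raise the critic's score on fakes
    return -D(fake_imgs).mean()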

Also, I was checking the raw model sizes, since the Discriminator seems to be doing a little too well, relatively speaking. Sure enough, the Discriminator is almost twice the size of the Generator, while in their paper it's about half the size or less.

print("Generator has", sum(p.numel() for p in gan.G.parameters()), "parameters.")
print("Discriminator has",sum(p.numel() for p in gan.D.parameters()), "parameters.")

Not sure where that's coming from, yet.

It looks like you might have gotten the stability issues sorted in #17, so I will update and try again to see if that improves.

@lucidrains (Owner)

@CerebralSeed nice to hear you are running tests! yup, it should be a lot more stable in the latest version

I moved the logic for building kwargs for the generator into its own method, so it can also be called at the save_and_sample_every step.

This is tested as working, but the original image colors seem a bit off. Maybe you can figure out what's causing the color issue.
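Roughly, the refactor described above could look like the sketch below; the method and attribute names here are illustrative, not necessarily what the commit actually uses:

import torch
from torchvision.utils import save_image

class Trainer:
    def generate_kwargs(self, batch_size):
        # single place that builds the kwargs passed to the generator,
        # so the training step and the periodic sampling stay in sync
        return dict(batch_size=batch_size)

    def save_and_sample(self, milestone):
        # called every `save_and_sample_every` steps
        kwargs = self.generate_kwargs(self.batch_size)
        with torch.no_grad():
            images = self.G(**kwargs)
        save_image(images, f'sample-{milestone}.png')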
@CerebralSeed (Contributor, Author)

@lucidrains Disregard that latest commit. I copied the whole file and seem to have overwritten some updates you were making. I will push a new commit.

@CerebralSeed (Contributor, Author)

Closing this pull request and opening a new one.
