
Update Shallow Water PDE example to use GIF writer #684

Merged
merged 3 commits into tensorflow:master on Oct 26, 2020

Conversation

vojtamolda
Contributor

Hello S4TF team,

Here's a small update to PR #666. At the time, the GIF writer hadn't been merged yet, so the example relied on ImageMagick's convert utility to create animations from a sequence of JPEG images. Now the GIFs are written directly, without the conversion step.

Because I only need to store grayscale animations, I added support to AnimatedImage for saving a single-channel tensor of shape [width, height, 1] as a GIF.
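
For context, here's a rough sketch of what the new write path looks like. The AnimatedImage initializer and write(to:) call below are assumptions rather than the actual ModelSupport API, and `solution` is a placeholder for the simulated frames:

```swift
import Foundation
import TensorFlow
import ModelSupport

// Sketch only: the exact AnimatedImage API is assumed, not copied from the PR.
// Each frame is a grayscale tensor of shape [width, height, 1] with values in [0, 1].
let frames: [Tensor<Float>] = solution.map { $0.waterLevel }

// Hypothetical call: build an animation from the frame tensors and write it out
// as a GIF, replacing the old JPEG sequence + ImageMagick `convert` step.
let animation = AnimatedImage(frames: frames)
try animation.write(to: URL(fileURLWithPath: "solution.gif"))
```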

Cheers,

Vojta

- Use Tensor as protocol requirement for water level output
- Remove single-purpose visualization code
- Updated readme file
- Greyscale image is represented as [width, height, 1] shaped tensor
- New palette with 64 shades of grey
@BradLarson
Contributor

Are you getting high enough quality from the grayscale quantization you've implemented? I'm asking because better quantization was on my to-do list, in part to improve the results from your model. If your grayscale-oriented quantization works well enough, I can de-prioritize that.

Also, now that the intermediate images are no longer required, would it be possible to rework the image input and output paths so this can be run from the root of the swift-models directory? All of our other models work that way, and I've been standardizing on the output/ directory for output products. That would remove a step from using this model and bring it in line with the others.

Thanks for the follow-on, this will save me some work.

@vojtamolda
Contributor Author

Yeah, I think the quality is okay. For my purposes, I just map the fixed -1 to +1 range of solution values onto a palette of 64 grays spanning from white to black. I tried both larger and smaller palettes, and 64 seems like the best compromise between quality and file size. For the time being, I don't have a problem with de-prioritizing improvements in this area. I tried using the existing default color palette, but the grays there were just too sparse.
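
To make the mapping concrete, here's an illustrative sketch of that quantization (function and variable names are mine, and the white-to-black direction is an assumption):

```swift
import TensorFlow

// Illustrative sketch of the quantization described above, not code from the PR.
func paletteIndices(for waterLevel: Tensor<Float>) -> Tensor<Int32> {
    let shades: Float = 64
    // Clip solution values to the fixed [-1, +1] range and rescale to [0, 1].
    let normalized = (waterLevel.clipped(min: -1, max: +1) + 1) / 2
    // Map onto the 64-entry palette (assumed here: 0 = white ... 63 = black).
    return Tensor<Int32>(normalized * (shades - 1))
}
```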

I agree that running from the root is a much better solution. I can add a CLI argument for the input image and save the GIFs in the output/ folder by default. I'll look around the other examples to see how things are done there and update the PR.

- Options for setting resolution, duration, iterations, learning rate and target image
- Write animations to 'output' directory
- Can be run from repository root
- Updated readme
@vojtamolda
Contributor Author

Here's what the updated CLI looks like:

Solve shallow water PDE on a unit square.

Animations of the solution are saved in the 'output' directory.

USAGE: Shallow-Water-PDE [--splash] [--optimization] [--benchmark] [--resolution <N>]
[--duration <T>] [--target <image>] [--iterations <I>] [--learning-rate <α>]

OPTIONS:
  --splash/--optimization/--benchmark
                          Task to run. 
  --resolution <N>        Number of simulated values along X/Y directions. (default: 256)
  --duration <T>          Number of simulated time-steps. (default: 512)
  --target <image>        Image to use as an optimization target. (default:
                          Examples/Shallow-Water-PDE/Images/Target.jpg)
  --iterations <I>        Number of optimization iterations. (default: 200)
  --learning-rate <α>     Optimization learning rate. (default: 500.0)
  -h, --help              Show help information.
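
For reference, the options above map fairly directly onto swift-argument-parser. A minimal sketch of how the command might be declared (the property names and defaults are taken from the help text; everything else, including the run() body, is assumed rather than copied from the example):

```swift
import ArgumentParser

// Sketch only: approximates the CLI shown above; not the actual example source.
enum Task: String, EnumerableFlag {
    case splash, optimization, benchmark
}

struct ShallowWaterPDE: ParsableCommand {
    static var configuration = CommandConfiguration(
        commandName: "Shallow-Water-PDE",
        abstract: "Solve shallow water PDE on a unit square.",
        discussion: "Animations of the solution are saved in the 'output' directory."
    )

    @Flag(help: "Task to run.")
    var task: Task

    @Option(help: "Number of simulated values along X/Y directions.")
    var resolution: Int = 256

    @Option(help: "Number of simulated time-steps.")
    var duration: Int = 512

    @Option(help: "Image to use as an optimization target.")
    var target: String = "Examples/Shallow-Water-PDE/Images/Target.jpg"

    @Option(help: "Number of optimization iterations.")
    var iterations: Int = 200

    @Option(help: "Optimization learning rate.")
    var learningRate: Float = 500.0

    mutating func run() throws {
        // Dispatch on `task` and write the resulting animations to output/ (omitted).
    }
}
```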


@BradLarson (Contributor) left a comment


I ran through everything, and it looks great. It's a ton easier to just run this and see the results immediately.

Also, I think I have a good use for the grayscale quantization in another application, so that will save me some time in prototyping. Thanks for the nice update!

@BradLarson BradLarson merged commit e920838 into tensorflow:master Oct 26, 2020
@vojtamolda
Copy link
Contributor Author

Perfect! Thanks for the review and comments, Brad.
