
Update README #22

Merged
merged 12 commits into main from update-readme on Jul 28, 2023
Conversation

@ziw-liu (Collaborator) commented Jul 18, 2023

No description provided.

@ziw-liu ziw-liu changed the title Update readme Update README Jul 18, 2023
@ziw-liu ziw-liu added this to the 0.1.0: 2.5D UNet for 3D virtual staining using pytorch lightning milestone Jul 19, 2023
@mattersoflight mattersoflight self-assigned this Jul 21, 2023
@mattersoflight mattersoflight marked this pull request as ready for review July 21, 2023 20:09
Review comment on the README text:

> It supports 2D, 2.5D (3D encoder, 2D decoder) and 3D U-Nets, as well as 3D networks with anisotropic filters.

@mattersoflight (Member):

hiddenlayer (https://github.com/waleedka/hiddenlayer/blob/master/demos/pytorch_graph.ipynb) looks like the tool to use to plot network graphs.

@ziw-liu (Collaborator, Author):

I rendered this with torchview. However, as with any other automatic graph renderer, the output is quite verbose (albeit slightly better than competitors such as Netron and TensorBoard) compared to a human illustration, because these tools don't use edges for operators like concatenation and downsampling/upsampling. Hiddenlayer seems to support custom filtering of nodes, but it does not support annotating the edges, so they will have to be folded into blocks like this.
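A minimal sketch of such an automatic render, assuming torchview is installed (`draw_graph` is torchview's entry point; the tiny stand-in model and input size here are hypothetical, not the actual VisCy U-Net):

```python
# Hedged sketch: render a torch.nn.Module into a graphviz diagram with torchview.
try:
    import torch
    from torchview import draw_graph

    # Hypothetical stand-in model; the PR renders the 2.5D U-Net instead.
    model = torch.nn.Sequential(torch.nn.Conv2d(1, 4, 3), torch.nn.ReLU())
    graph = draw_graph(model, input_size=(1, 1, 28, 28))
    # The underlying graphviz.Digraph holds the DOT source of the network graph.
    dot_source = graph.visual_graph.source
except ImportError:
    dot_source = None  # torch/torchview not available in this environment
```

The resulting DOT source can be saved (e.g. `graph.visual_graph.render("unet")`) to produce the verbose node-per-operator diagram discussed above.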

@mattersoflight (Member):

Thanks! I agree that torchview or hiddenlayer diagrams are good documentation, but they are too detailed for the README. Please save this drawing in docs and update the model summary figure in the README. Please make the background transparent; I am attaching the original Illustrator file I pulled from our Google Drive: supp_modelarch_RGB.pdf

@ziw-liu (Collaborator, Author):

@mattersoflight thanks for looking up the PDF. I don't know if we should make it transparent, though. Large bright-background figures on a dark-mode page do look less polished, but they preserve the contrast of dark fonts/contours better. See mehta-lab/waveorder#106. If this is necessary, a workaround would be to make two versions.
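For the two-version workaround, GitHub's markdown renderer supports the HTML `<picture>` element with a `prefers-color-scheme` media query, so the README can serve a theme-matched image (the file paths below are hypothetical):

```html
<picture>
  <!-- served to viewers using a dark GitHub theme -->
  <source media="(prefers-color-scheme: dark)" srcset="docs/figures/unet_dark.svg">
  <!-- default image for light themes -->
  <img alt="Model architecture" src="docs/figures/unet_light.svg">
</picture>
```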

@mattersoflight (Member):

Ah, thanks for reminding me that transparent images don't work well, and for finding the workaround for showing a different image in the README. That is the way! All of these images are small and can be versioned with the repo. I will find the image for waveorder too.

@ziw-liu (Collaborator, Author):

See d315049

@ziw-liu (Collaborator, Author):

Oops, just realized that the SVGs (~100 kB each) inflate the diff line counts enormously. @mattersoflight should I replace these with rasterized PNGs (about the same file size; they don't scale as well, but they don't count as lines)?

@mattersoflight (Member) commented Jul 25, 2023:

The images look good in my browser! To avoid counting the SVG files in code stats, I found a possible solution (https://github.com/github-linguist/linguist/blob/master/docs/overrides.md, markovmodel/PyEMMA#411). It looks like we can place a config file in our repo and have GitHub ignore some files in the stats it reports.

If the above is not workable, I think we should export the images in PNG format.
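Per the linked linguist overrides documentation, the config file is a `.gitattributes` entry; marking the SVGs as `linguist-generated` excludes them from language statistics and collapses their diffs by default (the path pattern below is an assumption):

```
# .gitattributes at the repository root
docs/figures/*.svg linguist-generated=true
```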

@ziw-liu (Collaborator, Author):

The linguist config doesn't really apply: it can fix unintended language stats (e.g. for generated code), but SVG won't be counted as a language anyway. In the PyEMMA issue you linked, they noted that the line count is just the git diff, so the language doesn't matter; the only way around it seems to be using a binary format.

@ziw-liu (Collaborator, Author):

Even after f600e61, this PR still reports 10k lines in the diff stat. Also, SVG is intentionally diff-able on GitHub: https://github.blog/2014-10-06-svg-viewing-diffing/

A small glitch is that the line paths don't render well for me (yet) when converted to PNG (see below). Still trying to figure this out.

[Attached image: 2_5d_unet_light]

@ziw-liu (Collaborator, Author) commented Jul 21, 2023:

@mattersoflight I don't think the conda environment section is necessary, because `pip install viscy` should just work (as it does on CI). If it does not, that is a bug that should be fixed.

@mattersoflight merged commit 6daf322 into main on Jul 28, 2023 (1 of 3 checks passed).
@ziw-liu deleted the update-readme branch on July 29, 2023.