Update README #22
Conversation
It supports 2D, 2.5D (3D encoder, 2D decoder) and 3D U-Nets,
as well as 3D networks with anisotropic filters.
hiddenlayer (https://github.com/waleedka/hiddenlayer/blob/master/demos/pytorch_graph.ipynb) looks like the tool to use to plot network graphs.
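For reference, a rough sketch of how the linked demo uses it (the model, input shape, and folding pattern below are placeholders, not this repo's network, and hiddenlayer may need an older PyTorch version):

```python
# Untested sketch based on the linked hiddenlayer demo notebook; the model,
# input shape, and folding pattern are placeholders.
import torch
import torchvision
import hiddenlayer as hl

model = torchvision.models.resnet18()  # placeholder model

# Fold common op sequences into single blocks to reduce clutter.
transforms = [
    hl.transforms.Fold("Conv > BatchNorm > Relu", "ConvBnRelu"),
]

graph = hl.build_graph(model, torch.zeros(1, 3, 224, 224), transforms=transforms)
graph.save("model_graph", format="png")  # also renders inline in notebooks
```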
I rendered this with torchview. However, as with any other automatic graph renderer, the output is quite verbose compared to a human illustration (albeit slightly less so than competitors such as netron and tensorboard), because these tools don't use edges for operators like concatenation and downsampling/upsampling. hiddenlayer seems to support custom filtering of nodes, but it does not support annotating the edges, so they would have to be folded into blocks like this.
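For reference, the rendering call is roughly of this shape (a sketch; the model and input size below are placeholders, not the actual network):

```python
# Sketch of rendering a model graph with torchview; the model and input
# size are placeholders, not the actual network in this repo.
import torch
from torchview import draw_graph

model = torch.nn.Sequential(  # placeholder stand-in for a U-Net
    torch.nn.Conv2d(1, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1),
)

graph = draw_graph(
    model,
    input_size=(1, 1, 256, 256),  # placeholder (batch, channels, H, W)
    depth=3,                      # limit nesting to keep the graph readable
    expand_nested=True,
)
# graph.visual_graph is a graphviz.Digraph; render it to an SVG file.
graph.visual_graph.render("model_graph", format="svg", cleanup=True)
```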
Thanks - I agree that torchview or hiddenlayer diagrams make good documentation, but they are too detailed for the README. Please save this drawing in docs and update the model summary figure in the README. Please make the background transparent; I am attaching the original Illustrator file I pulled from our Google Drive: supp_modelarch_RGB.pdf.
@mattersoflight thanks for looking up the PDF. I don't know if we should make it transparent, though. Large bright-background figures on a dark-mode page do look less polished, but they preserve the contrast of dark fonts/contours better. See mehta-lab/waveorder#106. If transparency is necessary, a workaround would be to make two versions.
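If we do make two versions, GitHub-flavored markdown can switch between them based on the viewer's theme; a sketch (the file names here are hypothetical):

```html
<!-- Sketch of the two-version approach; file names are hypothetical. -->
<picture>
  <!-- served on dark-mode GitHub pages -->
  <source media="(prefers-color-scheme: dark)" srcset="docs/figures/model_dark.svg">
  <!-- default/light version -->
  <img alt="Model architecture" src="docs/figures/model_light.svg">
</picture>
```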
Ah, thanks for reminding me that transparent images don't work well, and for finding the workaround for showing a different image in the README. That is the way! All of these images are small and can be versioned with the repo. I will find the image for waveorder too.
See d315049
Oops, just realized that the SVGs (~100 kB each) inflate the diff line count enormously. @mattersoflight should I replace these with rasterized PNGs (about the same file size; they don't scale as well, but they don't count as lines)?
The images look good in my browser! To avoid counting SVG files in the code stats, I found a possible solution (https://github.com/github-linguist/linguist/blob/master/docs/overrides.md, markovmodel/PyEMMA#411). It looks like we can place a config file in our repo and have GitHub ignore some files in the stats it reports.
If the above is not workable, I think we should export the images to PNG format.
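(For example, something like the cairosvg call below could do the export; cairosvg is only one option, and the paths and resolution are placeholders.)

```python
# Sketch of one way to rasterize an SVG figure to PNG; cairosvg is just one
# option, and the file names and resolution below are placeholders.
import cairosvg

cairosvg.svg2png(
    url="docs/figures/model_architecture.svg",       # hypothetical input path
    write_to="docs/figures/model_architecture.png",  # hypothetical output path
    output_width=1600,           # wide enough to stay sharp in the README
    background_color="white",    # fill the transparent background if desired
)
```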
The linguist config doesn't really apply here: it can fix unintended language stats (e.g. for generated code), but SVG won't be counted as a language anyway. In the PyEMMA issue you linked, they noted that the line count comes straight from the git diff, so the language doesn't matter; the only way around it seems to be using a binary format.
With f600e61 this PR still reports ~10k lines in the diff stat. Also, SVG is intentionally diff-able on GitHub: https://github.blog/2014-10-06-svg-viewing-diffing/
A small glitch is that the line paths don't render well for me (yet) when the figures are converted to PNG (see below). Still trying to figure this out.
@mattersoflight I don't think the conda environment section is necessary because …