
Expose a uniform API at the highest level for models #190

Merged: 9 commits merged into FluxML:master on Aug 3, 2022

Conversation

theabhirath (Member) commented on Aug 1, 2022

This PR does a bunch of stuff:

  1. Refines the highest-level model API so that it exposes only the options that make sense when loading pretrained weights (this means removing configuration knobs such as drop rates).
  2. Completes the migration towards `inchannels` and `nclasses` as uniform options for all models, making progress on #176 (Add inchannels, imsize, nclasses as kwargs for all constructors); only `imsize` is left now. A usage sketch follows this list.
  3. Adds a large number of type annotations that were previously missing, purely for code cleanliness. Together with the documentation (once refactored), these should help users decipher the more cryptic errors that can occur when working with the lower-level model APIs.
  4. Does away with default model configurations at the highest level. I can restore this, but the behaviour is rather ambiguous: what makes a ResNet-50 any better suited to be the default configuration than a ResNet-18 or a ResNet-101?
  5. Throws in a small refactor of `invertedresidual` as preparation for an EfficientNetv2 PR that will land shortly.
  6. Uses `create_classifier` wherever it can be used, for brevity.
  7. Adds compat entries for packages introduced in #174 (Overhaul of ResNet API). This closes the CompatHelper issues #191 (CUDA at version 3), #192 (ChainRulesCore at version 1), #193 (PartialFunctions at version 1) and #194 (NNlibCUDA at version 0.2). Also closes #180 (bump Functors compat to 0.3), because why not.
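
To make items 1 and 2 concrete, here is a minimal sketch of the kind of top-level call this uniform API enables. The `inchannels`/`nclasses` keyword names come from the PR description; the specific constructors, defaults, and shapes below are illustrative assumptions rather than exact signatures.

```julia
using Metalhead

# Highest-level constructors expose only what matters for loading pretrained
# weights: the architecture configuration plus the uniform `inchannels` and
# `nclasses` keywords (the defaults shown here are assumptions).
model = ResNet(18; pretrain = false, inchannels = 3, nclasses = 1000)

# The same keywords apply uniformly across model sizes and families,
# e.g. a single-channel, 10-class variant:
gray = ResNet(34; inchannels = 1, nclasses = 10)

# Lower-level knobs such as drop rates are no longer exposed at this level;
# they remain available through the lower-level model APIs.
x = rand(Float32, 224, 224, 3, 1)   # WHCN image batch
y = model(x)                        # logits of size (1000, 1)
```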

@theabhirath force-pushed the refine branch 2 times, most recently from 00355aa to 57a3ce7 on August 1, 2022 at 15:14
Also
a. more type annotations
b. Expose only configurations vital to the model API in terms of pretraining at the highest level
@theabhirath changed the base branch from master to cl/fix on August 2, 2022 at 13:54
@theabhirath changed the base branch from cl/fix back to master on August 2, 2022 at 13:54
theabhirath (Member, Author) commented:

Pending CI, this should be good to go!

@theabhirath force-pushed the refine branch 3 times, most recently from bb0bd62 to 5040439 on August 3, 2022 at 05:47
@darsnack merged commit 7449985 into FluxML:master on Aug 3, 2022
@theabhirath deleted the refine branch on August 4, 2022 at 11:25