enable modifier groups in SparseML recipes #66
Conversation
Given this will get more interesting, should we have some unit tests for it?
I like the simplicity. We need a bit more visibility into the parsing for the user, and I agree with Tuan on getting some unit tests in.
src/sparseml/optim/modifier.py
Outdated
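# Proposed parsing loop: any top-level key containing "modifiers" that
# maps to a list is treated as a group of modifiers to collect.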
else:  # Dict
    modifiers = []
    for key, modifier_list in container.items():
        if "modifiers" in key and isinstance(modifier_list, List):
If a user accidentally misspells a group, we'll have a silent ignore here. We should at least warn if a key doesn't have "modifiers" in it and won't be used, and could potentially throw an exception here to be explicit, since we aren't supporting anything else currently.
Agreed, this was definitely a concern. The issue is that not every top-level dict element will necessarily be a modifier list; we could, for instance, have a list of parameter names stored as a variable to reference elsewhere. If that weren't the case, I would not have required "modifiers" to be in the key name at all.
One alternative: we could search all lists for BaseModifier objects and flatten that way, letting group names be purely organizational.
Actually, we will raise an error when a Modifier object is found outside of a named "modifiers" list. That way we can keep the YAML flexibility while enforcing some simple structure in how modifiers are defined.
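As a rough sketch of that behavior (the function name, stand-in class, and error message below are illustrative, not the code this PR landed):

```python
class BaseModifier:
    """Stand-in for SparseML's modifier base class (illustrative only)."""

def flatten_modifier_groups(container: dict) -> list:
    """Flatten the named modifier groups of a parsed recipe into one list."""
    modifiers = []
    for key, value in container.items():
        if "modifiers" in key and isinstance(value, list):
            # A named modifier group: collect everything in it.
            modifiers.extend(value)
        elif isinstance(value, list) and any(
            isinstance(item, BaseModifier) for item in value
        ):
            # Modifiers outside a named "modifiers" list almost always
            # mean a misspelled group name, so fail loudly rather than
            # silently ignoring them.
            raise ValueError(
                f"modifiers found under key '{key}'; modifier group "
                "names must contain 'modifiers'"
            )
        # Anything else (e.g. a list of parameter names stored as a
        # variable for reuse elsewhere) is left alone.
    return modifiers
```

Under this sketch, a variable such as `params: [conv1.weight]` still passes through untouched, while modifiers placed under a misspelled key like `pruning_mods` raise immediately.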
This change will enable SparseML to parse multiple lists of modifiers (groups) from a recipe. The resulting structure will look like the sketch below.
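A minimal sketch of such a recipe (modifier names and parameters are illustrative; real SparseML recipes use `!ModifierName` YAML tags, simplified here to plain mappings so the snippet runs with PyYAML alone):

```python
import yaml  # PyYAML

# Hypothetical recipe: two named modifier groups plus a plain variable.
# Only top-level keys containing "modifiers" are treated as groups.
recipe = """
training_modifiers:
  - modifier: EpochRangeModifier
    start_epoch: 0.0
    end_epoch: 25.0

pruning_modifiers:
  - modifier: GMPruningModifier
    start_epoch: 2.0
    end_epoch: 20.0

num_epochs: 25
"""

container = yaml.safe_load(recipe)
print([key for key in container if "modifiers" in key])
# ['training_modifiers', 'pruning_modifiers']
```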
All modifiers will be parsed into a single `Modifier` list at run time. No filtering is applied within the lists to check whether entries are actually modifiers, so the user will be able to see any errors if the recipe is incorrect.