This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Conversation

@markurtz
Member

Implementation allowing phased pruning, where the imposed sparsity alternates between the modifier's normal sparsity schedule and no sparsity. Pruning starts in the active phase and toggles on each update_frequency interval. All weight pruning modifiers support this. Note: if the sparsity would otherwise not be imposed at the final_epoch, it is imposed at the very end. Example:

!GMPruningModifier
    init_sparsity: 0.9
    final_sparsity: 0.9
    start_epoch: 0.0
    end_epoch: 10.0
    update_frequency: 2.0
    params: ["re:.*weight"]
    phased: True
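The alternation described above can be sketched as follows. This is an illustrative standalone function, not the actual SparseML implementation; the linear interpolation stands in for whatever schedule the underlying modifier uses, and the function name and signature are hypothetical.

```python
def phased_sparsity(epoch, init_sparsity, final_sparsity,
                    start_epoch, end_epoch, update_frequency):
    """Illustrative phased schedule: sparsity alternates between the
    normal scheduled value and 0.0 every update_frequency epochs,
    starting in the pruning-on phase at start_epoch."""
    if epoch < start_epoch:
        return 0.0
    if epoch >= end_epoch:
        # guarantee the target sparsity holds at the very end
        return final_sparsity
    # linear interpolation stands in for the modifier's normal schedule
    progress = (epoch - start_epoch) / (end_epoch - start_epoch)
    scheduled = init_sparsity + (final_sparsity - init_sparsity) * progress
    # even-numbered intervals prune, odd-numbered intervals relax
    interval = int((epoch - start_epoch) // update_frequency)
    return scheduled if interval % 2 == 0 else 0.0
```

With the recipe values above (init and final sparsity both 0.9, update_frequency 2.0), epochs 0-2 would prune at 0.9, epochs 2-4 would impose no sparsity, and so on, with 0.9 restored at epoch 10.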

@markurtz markurtz requested a review from a team June 27, 2021 18:58
@markurtz markurtz self-assigned this Jun 27, 2021
@markurtz markurtz requested review from bfineran, mgoin and rahul-tuli and removed request for a team June 27, 2021 18:58
Contributor

@bfineran bfineran left a comment


really slick implementation. LGTM. Will we want to enable variable lengths for pruning/non-pruning phases in the future?

Member

@rahul-tuli rahul-tuli left a comment


Looks Good to me! :)

@markurtz
Member Author

> really slick implementation. LGTM. Will we want to enable variable lengths for pruning/non-pruning phases in the future?

Good question, I could potentially see that happening. For now, though, I've only heard about equal-length phases from the research. It shouldn't be too much work in the future to overload the phased param to support this if we need it.

@markurtz markurtz merged commit 5713665 into main Jun 28, 2021
markurtz added a commit that referenced this pull request Jun 29, 2021
* Update sparsifying_bert_using_recipes.md (#299)

Fix wrong link to tutorial images

* BERT pruning tutorial clean up (#300)

* Disable save ckpt for BERT tutorial command (#301)

* Add output for eval in tutorial (#302)

* Rewrite readme for hugging face transformers integration (#303)

* Rewrite readme for hugging face transformers integration

* Update integrations/huggingface-transformers/README.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update integrations/huggingface-transformers/README.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update integrations/huggingface-transformers/README.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update integrations/huggingface-transformers/README.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update integrations/huggingface-transformers/README.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update integrations/huggingface-transformers/README.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update integrations/huggingface-transformers/README.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* update from review

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Passage retrieval compression (#297)

* adding IR elastic stuff

* adding data download and modified es dense ranking

* adding Doc2query

* adding DPR code

* updating doc2query code

* adding msmarco eval script

* making dataset HF compatible

* making dataset HF compatible

* running doc2query t5

* model running

* working on integrating

* done with yaml recipe for all prunable layers

* fixing config spacing for pruning yaml

* work on dataset making

* updated the download data script and model training

* running doc2query but missing the work for pruning

* fixing issues in pruning

* moving around DPR

* added optimal lobotomizing project

* adding to readme for baseline

* new structures

* cleaning up structure and pushing baseline numbers

* moving sparse_ml_utils.py to src

Co-authored-by: Mark Kurtz <mark@neuralmagic.com>

* Update example commands for hugging face integration (#306)

* fix: correct minor typo (#307)

* Phased pruning (#311)

* Update example commands for hugging face integration

* Phased pruning implementation

* Update for quality

* Upgrade version to 0.5.1 for bug fix release

Co-authored-by: Tuan Nguyen <tuan@neuralmagic.com>
Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>
Co-authored-by: spacemanidol <dcampos3@illinois.edu>
Co-authored-by: Rahul Tuli <rahul@neuralmagic.com>
@markurtz markurtz deleted the phased-pruning branch September 1, 2021 11:40