This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Conversation

@rahul-tuli (Member) commented:

The goal of this PR is to make the SparseML-PyTorch lr_sensitivity analysis integration (for image classification) more maintainable by removing the dependency on the LRArguments class. The idea is to have a single source of truth for all command-line arguments. This change also integrates click with the lr_analysis script, in line with our standards.
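The single-source-of-truth pattern described above can be sketched with click roughly as follows. The option names mirror the tested command below, but the decorated function body is a hypothetical stub for illustration, not SparseML's actual implementation:

```python
# Sketch: declaring all CLI arguments once on the click command itself,
# so no separate LRArguments-style class is needed. The body is a stub.
import click


@click.command()
@click.option("--arch-key", required=True, help="Model architecture key, e.g. resnet50")
@click.option("--dataset", required=True, help="Dataset name, e.g. imagenette")
@click.option("--dataset-path", required=True, type=click.Path(), help="Path to the dataset")
@click.option("--batch-size", default=32, show_default=True, type=int, help="Batch size")
def lr_analysis(arch_key, dataset, dataset_path, batch_size):
    """Run an LR sensitivity analysis (illustrative stub only)."""
    click.echo(f"Running LR analysis for {arch_key} on {dataset} (batch={batch_size})")


if __name__ == "__main__":
    lr_analysis()
```

Because the options live on the command, `--help` output, defaults, and validation all come from one place, which is the maintainability win the PR description points to.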

The following commands have been tested locally to ensure these changes do not break existing flows:

sparseml.image_classification.lr_analysis \
--arch-key resnet50 --dataset imagenette --dataset-path data --batch-size 32

Output from the above command:

LR Sensitivity
	LR		Loss
	1.0000E-09		2.3267
	1.1000E-09		2.3192
	1.2100E-09		2.3315
	1.3310E-09		2.3203
	1.4641E-09		2.3139
	1.6105E-09		2.3020
	1.7716E-09		2.3223
	1.9487E-09		2.3168
	2.1436E-09		2.3170
	2.3579E-09		2.3249
	2.5937E-09		2.3175
	2.8531E-09		2.3141
	3.1384E-09		2.3252
	3.4523E-09		2.3231
	3.7975E-09		2.3173
	4.1772E-09		2.3144
	4.5950E-09		2.3377
	5.0545E-09		2.3139
	5.5599E-09		2.3341
	6.1159E-09		2.3167
	6.7275E-09		2.3162
	7.4002E-09		2.3280
	8.1403E-09		2.3246
	8.9543E-09		2.3232
	9.8497E-09		2.3200
	1.0835E-08		2.2998
	1.1918E-08		2.3249
	1.3110E-08		2.3126
	1.4421E-08		2.3156
	1.5863E-08		2.3261
	1.7449E-08		2.3151
	1.9194E-08		2.3141
	2.1114E-08		2.3271
	2.3225E-08		2.3137
	2.5548E-08		2.3312
	2.8102E-08		2.3104
	3.0913E-08		2.3280
	3.4004E-08		2.3163
	3.7404E-08		2.3183
	4.1145E-08		2.3175
	4.5259E-08		2.3176
	4.9785E-08		2.3258
	5.4764E-08		2.3185
	6.0240E-08		2.3159
	6.6264E-08		2.3233
	7.2890E-08		2.3140
	8.0180E-08		2.3261
	8.8197E-08		2.3107
	9.7017E-08		2.3160
	1.0672E-07		2.3313
	1.1739E-07		2.3295
	1.2913E-07		2.3212
	1.4204E-07		2.3217
	1.5625E-07		2.3236
	1.7187E-07		2.3328
	1.8906E-07		2.3066
	2.0797E-07		2.3258
	2.2876E-07		2.3208
	2.5164E-07		2.3073
	2.7680E-07		2.3266
	3.0448E-07		2.3221
	3.3493E-07		2.3298
	3.6842E-07		2.3189
	4.0527E-07		2.3136
	4.4579E-07		2.3235
	4.9037E-07		2.3119
	5.3941E-07		2.3184
	5.9335E-07		2.3106
	6.5268E-07		2.3189
	7.1795E-07		2.3247
	7.8975E-07		2.3146
	8.6872E-07		2.3174
	9.5559E-07		2.3217
	1.0512E-06		2.3108
	1.1563E-06		2.3097
	1.2719E-06		2.3126
	1.3991E-06		2.3291
	1.5390E-06		2.3195
	1.6929E-06		2.3074
	1.8622E-06		2.3196
	2.0484E-06		2.3107
	2.2532E-06		2.3317
	2.4786E-06		2.3135
	2.7264E-06		2.3160
	2.9991E-06		2.3216
	3.2990E-06		2.2957
	3.6289E-06		2.3190
	3.9918E-06		2.3077
	4.3909E-06		2.3059
	4.8300E-06		2.3059
	5.3130E-06		2.3257
	5.8443E-06		2.3120
	6.4288E-06		2.2968
	7.0716E-06		2.3105
	7.7788E-06		2.3005
	8.5567E-06		2.3091
	9.4123E-06		2.2980
	1.0354E-05		2.3104
	1.1389E-05		2.2795
	1.2528E-05		2.3035
	1.3781E-05		2.2928
	1.5159E-05		2.3046
	1.6675E-05		2.2854
	1.8342E-05		2.2927
	2.0176E-05		2.2893
	2.2194E-05		2.2630
	2.4413E-05		2.2763
	2.6855E-05		2.2735
	2.9540E-05		2.2656
	3.2494E-05		2.2632
	3.5743E-05		2.2647
	3.9318E-05		2.2563
	4.3249E-05		2.2366
	4.7574E-05		2.2355
	5.2332E-05		2.2308
	5.7565E-05		2.2114
	6.3322E-05		2.1992
	6.9654E-05		2.2115
	7.6619E-05		2.1884
	8.4281E-05		2.1898
	9.2709E-05		2.1587
	1.0198E-04		2.1511
	1.1218E-04		2.1293
	1.2340E-04		2.1124
	1.3574E-04		2.0839
	1.4931E-04		2.0716
	1.6424E-04		2.0447
	1.8066E-04		2.0392
	1.9873E-04		2.0072
	2.1860E-04		1.9756
	2.4046E-04		1.9430
	2.6451E-04		1.9075
	2.9096E-04		1.8719
	3.2006E-04		1.8451
	3.5206E-04		1.7884
	3.8727E-04		1.7052
	4.2600E-04		1.6823
	4.6860E-04		1.6260
	5.1545E-04		1.5605
	5.6700E-04		1.5451
	6.2370E-04		1.4511
	6.8607E-04		1.3715
	7.5468E-04		1.2695
	8.3015E-04		1.2084
	9.1316E-04		1.1067
	1.0045E-03		1.0478
	1.1049E-03		0.9899
	1.2154E-03		0.8962
	1.3370E-03		0.8361
	1.4707E-03		0.7765
	1.6177E-03		0.6824
	1.7795E-03		0.6779
	1.9574E-03		0.6162
	2.1532E-03		0.5670
	2.3685E-03		0.5068
	2.6054E-03		0.4457
	2.8659E-03		0.4584
	3.1525E-03		0.4404
	3.4677E-03		0.3773
	3.8145E-03		0.4193
	4.1959E-03		0.3812
	4.6155E-03		0.3192
	5.0771E-03		0.3021
	5.5848E-03		0.3061
	6.1433E-03		0.3117
	6.7576E-03		0.3224
	7.4334E-03		0.2481
	8.1767E-03		0.2963
	8.9944E-03		0.2594
	9.8938E-03		0.2613
	1.0883E-02		0.2435
	1.1972E-02		0.2863
	1.3169E-02		0.2441
	1.4486E-02		0.2748
	1.5934E-02		0.2268
	1.7527E-02		0.2283
	1.9280E-02		0.2270
	2.1208E-02		0.2168
	2.3329E-02		0.1914
	2.5662E-02		0.2886
	2.8228E-02		0.3063
	3.1051E-02		0.3455
	3.4156E-02		0.2471
	3.7572E-02		0.3638
	4.1329E-02		0.3623
	4.5462E-02		0.3944
	5.0008E-02		0.2954
	5.5009E-02		0.5254
	6.0510E-02		0.5369
	6.6561E-02		0.5339
	7.3217E-02		0.7896
	8.0538E-02		0.8733
	8.8592E-02		0.8136
	9.7451E-02		1.2267
	1.0720E-01		1.9338
	1.1792E-01		3.2729
	1.2971E-01		2.8117
	1.4268E-01		1.7811
	1.5695E-01		1.5145
	1.7264E-01		1.3625
	1.8991E-01		1.4288
	2.0890E-01		1.4308
	2.2979E-01		1.1258
	2.5276E-01		1.2567
	2.7804E-01		1.1533
	3.0584E-01		1.1720
	3.3643E-01		1.3938
	3.7007E-01		1.6300
	4.0708E-01		1.3284
	4.4779E-01		1.3612
	4.9257E-01		1.3653
	5.4182E-01		1.4402
	5.0000E-01		1.1877
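The sweep above increases the learning rate geometrically: each value in the LR column is roughly 1.1x the previous one, from 1e-9 up to about 0.5, with the loss recorded at each step. A minimal sketch of such an exponential LR sensitivity sweep is below; `evaluate_loss` is a hypothetical stand-in for measuring loss at a given LR (e.g. one training batch), not SparseML's actual API:

```python
import math


def lr_sweep(evaluate_loss, init_lr=1e-9, factor=1.1, final_lr=0.5):
    """Yield (lr, loss) pairs for a geometric learning-rate schedule.

    Starting from init_lr, the LR is multiplied by `factor` each step
    until it exceeds final_lr, recording the measured loss at each LR.
    """
    results = []
    lr = init_lr
    while lr <= final_lr:
        results.append((lr, evaluate_loss(lr)))
        lr *= factor
    return results


# Toy loss curve, only to show the shape of the resulting table.
rows = lr_sweep(lambda lr: 2.3 - 0.5 * math.exp(-((math.log10(lr) + 2) ** 2)))
print("\tLR\t\tLoss")
for lr, loss in rows[:3]:
    print(f"\t{lr:.4E}\t\t{loss:.4f}")
```

With a 1.1x factor, covering 1e-9 to 0.5 takes a little over 200 steps, which matches the length of the table above.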

@rahul-tuli changed the title to Click refactor for SparseML-PyTorch lr-analysis integration Apr 18, 2022
@rahul-tuli added the 0.13 release label Apr 18, 2022
@KSGulin (Contributor) left a comment:


Again, except for one minor nit, looks great!

@rahul-tuli merged commit 20516c8 into click-it-base Apr 19, 2022
@rahul-tuli deleted the click-lr-analysis branch Apr 19, 2022 14:57
rahul-tuli added a commit that referenced this pull request Apr 19, 2022
* Click refactor for SparseML-PyTorch lr-analysis integration

* Review comments from @KSGulin
rahul-tuli added a commit that referenced this pull request Apr 25, 2022
Click refactor for SparseML-PyTorch integration with Image Classification models (#711)

* Click refactor for SparseML-PyTorch integration

* Click refactor for `Pruning Sensitivity` analysis (#714)

* Click refactor for SparseML-PyTorch pr_sensitivity analysis integration

* Review comments from @KSGulin

* Click refactor for SparseML-PyTorch `lr-analysis` integration (#713)

* Click refactor for SparseML-PyTorch lr-analysis integration

* Review comments from @KSGulin

* Click refactor for SparseML PyTorch `export` integration (#712)

* Click refactor for SparseML-PyTorch export integration

* Review comments from @KSGulin

* Addressed all review comments from @bfineran, @dbogunowicz and @KSGulin

* Regenerate and Update the train-cli docstring due to changes in a few cli-args

* `nm_argparser.py` not needed anymore

* removed `nm_argparser.py` from init

* Remove all CLI args aliases and updated docstrings accordingly
markurtz added a commit that referenced this pull request May 2, 2022
* Avoid numerically unstable log (#694)

* fix QAT->Quant conversion of repeated Gemm layers with no activation QDQ (#698)

* Revert rn residual quant (#691)

* Revert ResNet definition to not quantize input to add op in residual branches.

* Correct typo.

Co-authored-by: Mark Kurtz <mark@neuralmagic.com>

* Fix: Add linebreak before 'Supplied' for better readability (#701)

* Bump notebook in /research/information_retrieval/doc2query (#679)

Bumps [notebook](http://jupyter.org) from 6.4.1 to 6.4.10.

---
updated-dependencies:
- dependency-name: notebook
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mark Kurtz <mark@neuralmagic.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>

* Added integration to masked_language_modeling training command (#707)

* Switch off fp16 on QAT start (#703)

* Switch off fp16 on QAT start

* address: review comments

* Disable fp16 when torch version is less than `1.9`

* Fix transformer prediction step (#716)

* Fix for prediction step when teacher model has more inputs than student.

* Updated signature of prediction_step method.

* Style and quality fixes.

* bump main to 0.13 (#696)

Co-authored-by: dhuang <dhuang@dhuangs-MacBook-Pro.local>

* Fix: default python log calls to debug level (#719)

* Feature/integrations (#688)

* added tutorials to root readme split by domain

* readme update

* edited text/structure

* grammar edits

* fix QATWrapper not properly overwriting qconfig properties for symmetric activations (#724)

* re-add fix symmetric zero points for uint8 quantization (#604) (#725)

* Fix 'self' and 'disable' not working for transformers distillation (#731)

* Click refactor for SparseML-PyTorch integration with Image Classification models (#711)

* Click refactor for SparseML-PyTorch integration

* Click refactor for `Pruning Sensitivity` analysis (#714)

* Click refactor for SparseML-PyTorch pr_sensitivity analysis integration

* Review comments from @KSGulin

* Click refactor for SparseML-PyTorch `lr-analysis` integration (#713)

* Click refactor for SparseML-PyTorch lr-analysis integration

* Review comments from @KSGulin

* Click refactor for SparseML PyTorch `export` integration (#712)

* Click refactor for SparseML-PyTorch export integration

* Review comments from @KSGulin

* Addressed all review comments from @bfineran, @dbogunowicz and @KSGulin

* Regenerate and Update the train-cli docstring due to changes in a few cli-args

* `nm_argparser.py` not needed anymore

* removed `nm_argparser.py` from init

* Remove all CLI args aliases and updated docstrings accordingly

* [Fix] Follow-up fix for #731 (Fix 'self' and 'disable' not working for transformers distillation) (#737)

* initial commit

* added more files and fixed quality

* Update trainer.py

* Added flag to exclude quantization of embedding activations. (#738)

* Added flag to exclude quantization of embedding activations.

* Updated testing to contemplate quantize_embedding_activations flag.

* Updated testing to contemplate quantize_embedding_activations flag.

* Updated debugging

* Revert "Updated debugging"

This reverts commit 449703d.

* Corrected order of arguments to pass assertion.

* Update src/sparseml/version.py

Co-authored-by: Eldar Kurtic <eldar.ciki@gmail.com>
Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>
Co-authored-by: Alexandre Marques <alexandre@neuralmagic.com>
Co-authored-by: Konstantin Gulin <66528950+KSGulin@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
Co-authored-by: Rahul Tuli <rahul@neuralmagic.com>
Co-authored-by: dhuangnm <74931910+dhuangnm@users.noreply.github.com>
Co-authored-by: dhuang <dhuang@dhuangs-MacBook-Pro.local>
Co-authored-by: Ricky Costa <79061523+InquestGeronimo@users.noreply.github.com>
Co-authored-by: dbogunowicz <97082108+dbogunowicz@users.noreply.github.com>