This repository has been archived by the owner on May 1, 2023. It is now read-only.
Automatic RL compression + Greedy compression #151
Merged
Conversation
Adds two symbolic links, 'latest_log_file' and 'latest_log_dir', which make it easier to access the log file and directory of the last executed experiment.
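The symlink idea can be sketched as follows. This is a minimal illustration, not the PR's code; the function name and the `root` parameter are my own, and the run-directory layout is assumed.

```python
import os

def update_latest_symlinks(log_dir, log_file, root="."):
    """Point 'latest_log_dir' and 'latest_log_file' at the most recent run.

    The link names mirror those described in the commit; everything else
    here (signature, layout) is illustrative.
    """
    for link_name, target in (("latest_log_dir", log_dir),
                              ("latest_log_file", log_file)):
        link_path = os.path.join(root, link_name)
        if os.path.lexists(link_path):   # replace a stale link from an earlier run
            os.remove(link_path)
        os.symlink(target, link_path)
```

After each experiment starts, calling this once keeps `latest_log_file` always pointing at the newest run's log.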
This commit includes the AMC implementation. It currently does not perform well, and everyone is invited to help get it to work! ☺ A summary of the current state of the algorithm is available in the accompanying notebook: examples/automated_deep_compression/amc-results.ipynb. The main AMC code is in examples/automated_deep_compression/ADC.py, which also contains instructions for installing the RL library that provides the DDPG agent implementation. I've integrated two RL libraries: Coach from Intel AI Lab, and Spinup from OpenAI; I mainly use Coach for my experiments. The sample application compress_classifier.py calls ADC.do_adc, which instantiates a DDPG agent and a DistillerWrapperEnvironment. DistillerWrapperEnvironment implements a Gym environment interface and acts as a mediator between the agent and Distiller. Distiller is used for pruning (thinning), evaluation, and fine-tuning.
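The mediator pattern described above can be sketched without depending on gym itself. This is a toy outline under my own assumptions, not the PR's DistillerWrapperEnvironment: `prune_and_evaluate` is a hypothetical stand-in for Distiller's pruning, thinning, and evaluation steps, and the one-value observation is a deliberate simplification.

```python
class DistillerWrapperEnvSketch:
    """Gym-style mediator between a DDPG agent and a pruning backend.

    State = index of the layer being pruned; action = pruning ratio in [0, 1].
    The episode walks the layers once; the reward arrives at the end.
    """
    def __init__(self, num_layers, prune_and_evaluate):
        self.num_layers = num_layers
        self.prune_and_evaluate = prune_and_evaluate  # hypothetical callback
        self.layer_idx = 0

    def reset(self):
        self.layer_idx = 0
        return self._observation()

    def step(self, action):
        ratio = min(max(float(action), 0.0), 1.0)   # clip to a legal ratio
        accuracy = self.prune_and_evaluate(self.layer_idx, ratio)
        self.layer_idx += 1
        done = self.layer_idx == self.num_layers
        reward = accuracy if done else 0.0          # reward only at episode end
        return self._observation(), reward, done, {}

    def _observation(self):
        return self.layer_idx
```

The DDPG agent only ever sees `reset`/`step`, which is what lets either Coach or Spinup drive the same environment.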
Added a couple of experimental rewards. Added an option to produce a reward at every step, which slows down the learning but produces more stable behavior. Changed the names of some of the AMC arguments. A couple of small fixes in the AMC notebook.
A parameter was missing from one of the function calls.
Use the application argument --test-size to limit the size of the Test dataset.
This is still patchy and needs to be generalized beyond this specific use-case.
Previously we didn't shuffle the Test dataset, but now we always want to shuffle it. The motivation is that we sometimes want to use only part of the dataset, but we want to use different images in each epoch.
--training-epoch-duration and --test-epoch-duration: these arguments allow us to use only part of the datasets.
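The combination of the commits above (use only part of the test set, shuffle it so each epoch sees different images, but keep it unshuffled in deterministic mode) can be sketched as index selection. This is an illustrative sketch, not the PR's DataLoader code; the function name is my own.

```python
import random

def sample_test_indices(dataset_size, test_size=None, deterministic=False):
    """Pick the indices used for one evaluation pass.

    Shuffles unless in deterministic mode, then optionally truncates to
    `test_size` so each epoch evaluates a different random subset.
    """
    indices = list(range(dataset_size))
    if not deterministic:
        random.shuffle(indices)          # different subset each epoch
    if test_size is not None:
        indices = indices[:test_size]
    return indices
```

In a PyTorch setting these indices would typically feed a `SubsetRandomSampler` passed to the test `DataLoader`.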
Should contain a single fix for the --cpu flag feature.
Shuffle the Test DS unless in Deterministic mode
And also shuffle the Test dataset
Move the AMC application arguments to a separate file, because AMC requires extra installations (gym, rl-coach) which are not required for people who don't want to run AMC.
This is still a hack; I want to spend my time running the experiments and leave the clean-up/refactoring for later.
Expand the command line arguments to recreate the original command line invocation.
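One common way to recreate an invocation is to re-quote `sys.argv`. This is a hedged sketch of the idea, not necessarily how the commit does it:

```python
import shlex
import sys

def recreate_command_line(argv=None):
    """Rebuild the original shell invocation from the argument vector,
    quoting arguments that contain spaces or shell metacharacters."""
    argv = sys.argv if argv is None else argv
    return " ".join(shlex.quote(arg) for arg in argv)
```

Logging this string at startup makes every experiment reproducible from its log file alone.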
The use of DataParallel is causing various small problems when used in conjunction with SummaryGraph. The best solution is to force SummaryGraph to use a non-data-parallel version of the model and to always normalize node names when accessing SummaryGraph operations.
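The name-normalization part boils down to removing the "module." prefix that torch.nn.DataParallel prepends to every submodule name. A minimal sketch (the function name matches the idea described, but the exact implementation in the PR may differ):

```python
def normalize_module_name(name):
    """Strip the 'module.' prefixes that torch.nn.DataParallel prepends,
    so graph lookups resolve against the plain (non-parallel) model."""
    return name.replace("module.", "")
```

With this applied on every access, the same SummaryGraph queries work whether or not the model was wrapped in DataParallel.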
Note: this did not improve performance
Eventually this hack needs proper handling.
ResNets handling is still a hack, but this should prevent some common errors.
The code needs to be more modular, so that we can (1) share functionality with other components; (2) make it easily configurable. This refactoring encapsulates the logic related to handling a model (fine-tuning, compression, evaluation) in a single class. This is not the final refactoring we plan. Also added functionality to sample networks from the posterior distribution of the discovered networks. This functionality is currently "hidden" from the command-line, but I'll "expose" it.
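One plausible reading of "sample networks from the posterior distribution of the discovered networks" is reward-weighted sampling. The softmax form below is my assumption, not necessarily what the commit implements:

```python
import math
import random

def sample_from_posterior(networks, rewards, temperature=1.0):
    """Sample one discovered network, weighting by reward.

    Uses a softmax over rewards as an assumed posterior; higher-reward
    networks are exponentially more likely to be drawn.
    """
    mx = max(rewards)  # subtract the max for numerical stability
    weights = [math.exp((r - mx) / temperature) for r in rewards]
    return random.choices(networks, weights=weights, k=1)[0]
```

Raising `temperature` flattens the distribution toward uniform sampling; lowering it approaches greedy selection of the best network.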
This doesn't seem to make a lot of difference in the results, but I'm committing because several important tests used this LR.
Use the random policy to weed out false-positive on algorithm results.
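The random-policy baseline idea: run episodes with uniformly random pruning actions, and only trust the learned agent if it beats that reward distribution. A sketch under my own assumptions (the env is anything with Gym-style `reset`/`step`; the function name is mine):

```python
import random

def random_policy_episode(env, low=0.0, high=1.0, seed=None):
    """Run one episode choosing uniform-random actions in [low, high].

    Serves as a sanity baseline: a learned agent that does not clearly
    beat this return may be a false positive.
    """
    rng = random.Random(seed)
    env.reset()
    total_reward, done = 0.0, False
    while not done:
        _, reward, done, _ = env.step(rng.uniform(low, high))
        total_reward += reward
    return total_reward
```

Collecting many such returns gives the null distribution against which the agent's results are compared.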
The assert is correct, but the asserted expression's syntax is not always valid. Fixing this is tedious, and the assert is not really required.
michaelbeale-IL pushed a commit that referenced this pull request on Apr 24, 2023.
Merging the 'amc' branch with 'master'. This updates the automated compression code in 'master', and adds a greedy filter-pruning algorithm.