This repository has been archived by the owner on May 1, 2023. It is now read-only.

Automatic RL compression + Greedy compression #151

Merged
merged 61 commits into master from amc on Feb 13, 2019

Conversation

nzmora (Contributor) commented Feb 11, 2019

No description provided.

Adds two symbolic links 'latest_log_file' and 'latest_log_dir'
which make it easier to access the log file and directory of the
last executed experiment.
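
A minimal sketch of the mechanism (the helper name and argument layout are my assumptions, not Distiller's actual code):

```python
import os

def update_latest_symlinks(log_dir, log_file):
    """Re-point 'latest_log_dir' and 'latest_log_file' at the newest run."""
    for link_name, target in (("latest_log_dir", log_dir),
                              ("latest_log_file", log_file)):
        if os.path.lexists(link_name):  # drop the stale link from the previous run
            os.remove(link_name)
        os.symlink(target, link_name)
```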
This commit includes the AMC implementation.
It currently does not perform well, and everyone is invited to help
get it to work! ☺

A summary of the current state of the algorithm is available
in the accompanying notebook:
examples/automated_deep_compression/amc-results.ipynb

The main AMC code is contained in:
examples/automated_deep_compression/ADC.py.

This file also contains instructions for installing the RL library
which contains the DDPG agent implementation.
I’ve integrated two RL libraries: Coach from Intel AI Lab, and
Spinup from OpenAI.  I mainly use Coach for my experiments.

The sample application compress_classifier.py calls ADC.do_adc,
which instantiates a DDPG agent and a DistillerWrapperEnvironment.
DistillerWrapperEnvironment implements a Gym environment
interface and acts as a mediator between the Agent and Distiller.
Distiller is used for pruning (thinning), evaluation and fine-tuning.
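
A rough sketch of that mediation, assuming the standard gym.Env interface; the observation layout and the pruning/evaluation helpers below are illustrative stand-ins, not Distiller's real API:

```python
import gym
import numpy as np

# Stand-ins for the real Distiller calls described above.
def prune_layer(model, layer_idx, ratio): pass
def fine_tune(model): pass
def evaluate(model): return 0.0

class DistillerWrapperEnvironment(gym.Env):
    def __init__(self, model, num_layers, obs_len=5):
        self.model, self.num_layers = model, num_layers
        self.current_layer = 0
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(obs_len,))
        self.action_space = gym.spaces.Box(0.0, 1.0, shape=(1,))  # pruning ratio

    def _observe(self):
        # Real code would encode per-layer statistics; zeros keep the sketch short.
        return np.zeros(self.observation_space.shape, dtype=np.float32)

    def reset(self):
        self.current_layer = 0
        return self._observe()

    def step(self, action):
        # One action = one layer's pruning ratio; an episode walks all layers.
        prune_layer(self.model, self.current_layer, ratio=float(action[0]))
        self.current_layer += 1
        done = self.current_layer == self.num_layers
        if done:
            fine_tune(self.model)
        reward = evaluate(self.model) if done else 0.0  # reward at episode end
        return self._observe(), reward, done, {}
```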
Added a couple of experimental rewards
Added an option to produce a reward at every step; this slows down learning but produces more stable behavior.
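
In effect the option toggles between a sparse and a dense reward signal; a sketch, reusing the hypothetical evaluate() helper from the environment sketch above (the flag name is an assumption):

```python
def compute_reward(model, done, reward_every_step):
    if reward_every_step:
        return evaluate(model)               # dense: evaluate after every action
    return evaluate(model) if done else 0.0  # sparse: one reward per episode
```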
Changed the name of some of the AMC arguments.
A couple of small fixes in the AMC notebook.
A parameter was missing from one of the function calls.
Use the application argument --test-size to limit the size
of the Test dataset.
This is still patchy and needs generalizing to the broader use-case.
Previously we didn't shuffle the Test dataset, but now we want to
always shuffle it. The motivation is that we sometimes want to use only
part of the dataset, but we want to use different images in each epoch.
--training-epoch-duration
--test-epoch-duration

These arguments allow us to use only part of the datasets.
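
In PyTorch terms the effect is roughly the following; a sketch with an assumed helper name, where rebuilding the loader each epoch yields a different random slice of the data:

```python
import numpy as np
from torch.utils.data import DataLoader, Subset, SubsetRandomSampler

def partial_loader(dataset, fraction, batch_size, deterministic=False):
    n = int(len(dataset) * fraction)
    if deterministic:
        # Fixed first-n subset, so runs are reproducible.
        return DataLoader(Subset(dataset, list(range(n))), batch_size=batch_size)
    # Freshly permuted n-sample subset; recreate per epoch for different images.
    indices = np.random.permutation(len(dataset))[:n].tolist()
    return DataLoader(dataset, batch_size=batch_size,
                      sampler=SubsetRandomSampler(indices))
```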
Should contain a single fix for the --cpu flag feature.
Shuffle the Test DS unless in Deterministic mode
And also shuffle the Test dataset
Move the AMC application arguments to a separate file because
AMC requires extra installations (gym, rl-coach) which are not required
for people who don't want to run AMC.
This is still a hack; I'd rather spend my time running the
experiments and leave the clean-up/refactoring for later.
Expand the command line arguments to recreate the original
command line invocation.
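
Presumably something along these lines (a sketch, not the actual code):

```python
import sys

def command_line():
    # Rebuild the invocation string so it can be logged and replayed.
    return " ".join(sys.argv)
```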
The use of DataParallel is causing various small problems when used
in conjunction with SummaryGraph.
The best solution is to force SummaryGraph to use a non-data-parallel
version of the model and to always normalize node names when accessing
SummaryGraph operations.
Note: this did not improve performance
Eventually this hack needs proper handling.
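
The two pieces of the SummaryGraph fix would look roughly like this (a sketch; Distiller's actual helpers may differ):

```python
import torch.nn as nn

def deparallelize(model):
    """Return the plain model underneath an nn.DataParallel wrapper."""
    return model.module if isinstance(model, nn.DataParallel) else model

def normalize_module_name(name):
    """Strip the 'module.' components DataParallel injects, so node names
    match between parallel and non-parallel versions of the model."""
    return ".".join(part for part in name.split(".") if part != "module")
```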
ResNet handling is still a hack, but this should prevent some
common errors.
The code needs to be more modular, so that we can (1) share functionality
with other components; (2) make it easily configurable.
This refactoring encapsulates the logic related to handling a model
(fine-tuning, compression, evaluation) in a single class.
This is not the final refactoring we plan.
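
In outline, the encapsulation might look like this (class and method names are my assumptions and, per the note above, not the final refactoring):

```python
# Hypothetical stand-ins for Distiller's pruning/training/eval routines.
def prune_layer(model, layer_idx, ratio): pass
def train_one_epoch(model, loader): pass
def validate(model, loader): return 0.0

class NetworkWrapper:
    """Bundles fine-tuning, compression and evaluation of one model."""
    def __init__(self, model, train_loader, test_loader):
        self.model = model
        self.train_loader = train_loader
        self.test_loader = test_loader

    def compress(self, layer_idx, ratio):
        prune_layer(self.model, layer_idx, ratio)

    def fine_tune(self, epochs=1):
        for _ in range(epochs):
            train_one_epoch(self.model, self.train_loader)

    def evaluate(self):
        return validate(self.model, self.test_loader)
```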

Also added functionality to sample networks from the posterior distribution
of the discovered networks.  This functionality is currently "hidden" from
the command-line, but I'll "expose" it.
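
One way such sampling could work, assuming each discovered network is recorded with its episode reward; the softmax weighting is my guess at the "posterior", not the actual code:

```python
import numpy as np

def sample_discovered_network(discovered, temperature=1.0):
    """Draw one network from a reward-weighted distribution over all
    networks found during the search."""
    rewards = np.array([net["reward"] for net in discovered], dtype=float)
    probs = np.exp((rewards - rewards.max()) / temperature)  # stable softmax
    probs /= probs.sum()
    return discovered[np.random.choice(len(discovered), p=probs)]
```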
This doesn't seem to make a lot of difference in the results, but
I'm committing because several important tests used this LR.
Use a random policy to weed out false positives in the algorithm's
results.
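
Concretely, the baseline runs episodes with uniformly random actions; a sketch against the Gym-style environment above:

```python
def random_policy_episode(env):
    """One episode with random pruning ratios; if the learned agent doesn't
    clearly beat a batch of these, its gains may be noise rather than learning."""
    obs, done, total_reward = env.reset(), False, 0.0
    while not done:
        action = env.action_space.sample()       # uniform random action
        obs, reward, done, _ = env.step(action)
        total_reward += reward
    return total_reward
```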
The assert is correct, but the syntax is not always correct.
Fixing this is tedious and the assert is not really required.
nzmora merged commit 7fcf111 into master on Feb 13, 2019
michaelbeale-IL pushed a commit that referenced this pull request Apr 24, 2023
Merging the 'amc' branch with 'master'.
This updates the automated compression code in 'master', and adds a greedy filter-pruning algorithm.