- The trainer files are divided into three categories and can be found in the `./trainers` directory:

  a. Backpropagation-based Trainers: `backprop.py` (BP-Vanilla, BP-Checkpointing, BP-Accumulate) can be executed through `bash run_backprop_job.sh`.

  b. Zero-order-based Trainers: `zero_finite_differences.py` (ZO-Vanilla, ZO-Accumulate, ZO-Multiple, ZO-Adaptive), `svrg_zero_finite_differences.py` (ZO-SVRG), and `sparse_zero_finite_differences.py` (ZO-Sparse) can all be executed through `bash run_zo_job.sh`.

  c. Forward-mode AD-based Trainers: `forward_mode_ad_beta.py` (FmAD-Vanilla, FmAD-Multiple, FmAD-Adaptive), `more_forward_mode_ad_beta.py` (FmAD-Accumulate), `svrg_forward_mode_ad_beta.py` (FmAD-SVRG), and `sparse_forward_mode_ad_beta.py` (FmAD-Sparse) can all be executed through `bash run_zo_job.sh`.
- All the hyperparameters are set through the `.sh` file for the corresponding gradient computation method.
- The results will be saved in an automatically created `./results` directory.
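To illustrate what the zero-order trainers estimate, here is a minimal sketch of finite-difference gradient estimation along random Rademacher directions, in the spirit of ZO-Vanilla (averaging several directions corresponds to ZO-Multiple). This is not the repo's code; the function name `zo_gradient` and all parameters are illustrative.

```python
import random

def zo_gradient(loss_fn, params, eps=1e-4, num_directions=1):
    """Estimate the gradient of loss_fn at params without backprop.

    Sketch only: samples a random Rademacher direction u, takes a
    central finite difference of the loss along u, and uses that
    directional derivative to scale u into a gradient estimate.
    """
    n = len(params)
    grad = [0.0] * n
    for _ in range(num_directions):
        # Random direction with entries in {-1, +1}.
        u = [random.choice([-1.0, 1.0]) for _ in range(n)]
        plus = [p + eps * ui for p, ui in zip(params, u)]
        minus = [p - eps * ui for p, ui in zip(params, u)]
        # Central difference approximates the directional derivative u . grad.
        d = (loss_fn(plus) - loss_fn(minus)) / (2 * eps)
        # Scale the direction by the directional derivative; average over directions.
        for i in range(n):
            grad[i] += d * u[i] / num_directions
    return grad
```

Each estimate needs only two loss evaluations per direction, which is why these trainers avoid storing the backward graph entirely.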
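The forward-mode AD trainers likewise compute a directional derivative in a single forward pass, but exactly rather than by finite differences. A minimal dual-number sketch of the idea (not the repo's implementation; `Dual` and `jvp` are illustrative names):

```python
class Dual:
    """Minimal dual number: carries a value and its directional derivative."""

    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (a * b)' = a' * b + a * b'.
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

    __rmul__ = __mul__

def jvp(f, x, v):
    """Return (f(x), directional derivative of f at x along v) in one pass."""
    duals = [Dual(xi, vi) for xi, vi in zip(x, v)]
    out = f(duals)
    return out.val, out.dot
```

In practice the trainers would use a framework's forward-mode machinery (e.g. PyTorch's `torch.func.jvp`) instead of hand-rolled duals, but the cost profile is the same: one forward pass per direction and no stored backward graph.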