TensorFlow 2.0 implementation of *Efficient Neural Architecture Search via Parameter Sharing* (ENAS).
The official TF1.x implementation can be found at the link.
The whole search phase (300 epochs) takes about 5 hours with batch size 1024 on a Titan RTX (24 GB); with batch size 256, it takes about 10 hours.
- macro search
- `search_whole_channels` is not implemented
- low GPU utilization during search (workaround: increase the batch size)
- micro search
- BN inference case
- `fixed_arc` training
- `aux_heads`
- `lr_cosine`
- test data
- multi-GPU support
- saving model weights
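To make the "macro search" item above concrete, here is a minimal, framework-free sketch of what the controller samples during macro search and how parameter sharing works: a child network is an op choice per layer plus skip-connection flags to earlier layers, and every child that picks the same (layer, op) pair reuses the same stored parameters. All names here (`OPS`, `get_shared`, `sample_macro_arch`) are illustrative assumptions, not this repo's API.

```python
import random

# Candidate operations per layer (illustrative; the repo's op set may differ).
OPS = ["conv3x3", "conv5x5", "sep_conv3x3", "sep_conv5x5", "max_pool", "avg_pool"]

# Shared parameter store: the core ENAS idea is that every sampled child
# reuses the same entry for a given (layer, op) choice.
shared_weights = {}

def get_shared(layer, op_idx):
    """Fetch (or lazily create) the shared parameters for (layer, op)."""
    key = (layer, op_idx)
    if key not in shared_weights:
        # Stand-in string for what would be real weight tensors.
        shared_weights[key] = f"params[{layer}][{OPS[op_idx]}]"
    return shared_weights[key]

def sample_macro_arch(num_layers, rng):
    """Sample one child: an op index per layer plus 0/1 skip flags
    to every earlier layer."""
    arch = []
    for layer in range(num_layers):
        op_idx = rng.randrange(len(OPS))
        skips = [rng.randint(0, 1) for _ in range(layer)]
        get_shared(layer, op_idx)  # child reuses the shared params for its choice
        arch.append((op_idx, skips))
    return arch

rng = random.Random(0)
child_a = sample_macro_arch(4, rng)
child_b = sample_macro_arch(4, rng)
# Overlapping (layer, op) choices between child_a and child_b map to the
# same shared_weights entries, so training one child updates the other.
```

In the real implementation the controller is an LSTM trained with REINFORCE rather than a uniform random sampler, but the sampled search space has this shape.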
- Python 3.5+
- TensorFlow 2.0
- matplotlib
- Run `main_macro.py` directly