cannot resume model for training #59
Hi @gleefeng, I got the following error when I ran the command `python3 compress_classifier.py --arch simplenet_cifar ../../../data.cifar10 -p 30 -j=1 --lr=0.01`:

```
2018-10-22 17:03:03,745 - Log file for this run: /home/project/compress/distiller-master/examples/classifier_compression/logs/2018.10.22-170303/2018.10.22-170303.log
2018-10-22 17:03:03,852 -
```

Do you have this issue?
You can solve this problem by using `git clone` to get the project (rather than downloading it): some git metadata is checked by execution_env.py.
Hi @gleefeng, unfortunately resuming from a quantization session is not currently supported. See #21 (comment). Cheers,
Yeah, it's ok, thank you!
I encountered the same problem :dog: I just want to evaluate the accuracy of a quantized model. As a workaround, I commented out the optimizer check:

```python
# if train_with_fp_copy and optimizer is None:
#     raise ValueError('optimizer cannot be None when train_with_fp_copy is True')

class WRPNQuantizer(Quantizer):
    def __init__(self, model, optimizer=None, bits_activations=32, bits_weights=32,
                 bits_overrides=OrderedDict(), quantize_bias=False):
```
@hustzxd the workaround you detail indeed works, thanks for posting it here.
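For context, here is a minimal sketch of why evaluation-only runs hit this check. All names and signatures here are hypothetical simplifications, not Distiller's actual code: the idea is that a quantizer which trains with a full-precision weight copy needs an optimizer to re-register its parameter groups, so its base class refuses to construct without one, even when you only want to evaluate.

```python
from collections import OrderedDict


class Quantizer:
    """Hypothetical minimal base class, for illustration only."""

    def __init__(self, model, optimizer=None, train_with_fp_copy=False):
        # This is the guard the workaround comments out: training with a
        # full-precision weight copy requires an optimizer, so construction
        # fails when none is supplied.
        if train_with_fp_copy and optimizer is None:
            raise ValueError('optimizer cannot be None when train_with_fp_copy is True')
        self.model = model
        self.optimizer = optimizer
        self.train_with_fp_copy = train_with_fp_copy


class WRPNQuantizer(Quantizer):
    def __init__(self, model, optimizer=None, bits_activations=32, bits_weights=32,
                 bits_overrides=OrderedDict(), quantize_bias=False):
        # A WRPN-style quantizer trains with an FP32 weight copy, so the
        # check above fires for any run that passes no optimizer, including
        # pure evaluation of an already-quantized checkpoint.
        super().__init__(model, optimizer, train_with_fp_copy=True)
        self.bits_activations = bits_activations
        self.bits_weights = bits_weights
        self.quantize_bias = quantize_bias
```

Under these assumptions, constructing the plain `Quantizer` without an optimizer works, while constructing `WRPNQuantizer` without one raises the `ValueError` seen in the thread, which is why commenting out the check unblocks evaluation-only use.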
Hmmm... I also have a problem here.
We'll track this on #185, closing this one. |
I used this test:

```
python compress_classifier.py -a preact_resnet20_cifar --lr 0.1 -p 50 -b 128 ../../../data.cifar10/ -j 1 --resume ../../../data.cifar10/models/best.pth.tar --epochs 200 --compress=../quantization/preact_resnet20_cifar_pact.yaml --out-dir="logs/" --wd=0.0002 --vs=0
```

and got an error. How can I fix it?