
How to save the model of each stage #21

Open
cc19970821 opened this issue Aug 12, 2021 · 3 comments

Comments

@cc19970821

Hello, I want to save the model to do t-SNE. How can I save the model of each stage?
I found that you set the checkpoint name (ckpt), but torch.save() is never called to actually save the model.

@yaoyao-liu
Owner

Hi,

I removed the checkpoint-saving commands to save disk space.
If you need to save the checkpoints, you may add the following lines:

torch.save(b1_model, ckp_name)
torch.save(b2_model, ckp_name_b2)  

after these lines:

else:
    raise ValueError('Please set the correct baseline.')
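
For context, a minimal sketch of what that could look like with explicit per-phase filenames; the `iter_{}_b1.pth` / `iter_{}_b2.pth` names follow the checkpoint names mentioned later in this thread, while `ckpt_dir`, the `iteration` phase index, and saving `fusion_vars` are illustrative assumptions rather than part of the original code:

import os
import torch

# Hypothetical output directory for the per-phase checkpoints.
ckpt_dir = './checkpoint'
os.makedirs(ckpt_dir, exist_ok=True)

# Save both branch models once per incremental phase (iteration = phase index).
ckp_name = os.path.join(ckpt_dir, 'iter_{}_b1.pth'.format(iteration))
ckp_name_b2 = os.path.join(ckpt_dir, 'iter_{}_b2.pth'.format(iteration))
torch.save(b1_model, ckp_name)
torch.save(b2_model, ckp_name_b2)

# Optionally save the fusion variables as well; they are learned during
# training, so they are needed to reproduce test accuracy later.
torch.save(fusion_vars, os.path.join(ckpt_dir, 'iter_{}_fusion.pth'.format(iteration)))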

@ZHUANGHP

ZHUANGHP commented Mar 5, 2022

@yaoyao-liu After saving the models at each stage, how can we measure the accuracy of a given stage by loading the models from the checkpoints? For instance, when training on CIFAR-100 for 5 phases, could we load "iter_9_b1.pth" and "iter_9_b2.pth" (e.g., using --resume) without training, to measure the accuracy of the last phase? I have tested the code and it does not seem to be trivial. If I directly load "iter_9_b1.pth" and "iter_9_b2.pth" and test them using

b1_model.eval()
b2_model.eval()
test_loss = 0
correct = 0
total = 0
with torch.no_grad():
    for batch_idx, (inputs, targets) in enumerate(testloader):
        inputs, targets = inputs.to(device), targets.to(device)
        # Fuse the two branch models' outputs via the fusion variables.
        outputs, _ = process_inputs_fp(the_args, fusion_vars, b1_model, b2_model, inputs)
        loss = nn.CrossEntropyLoss(weight_per_class)(outputs, targets)
        test_loss += loss.item()
        _, predicted = outputs.max(1)
        total += targets.size(0)
        correct += predicted.eq(targets).sum().item()
print('Test set: {} test loss: {:.4f} accuracy: {:.4f}'.format(
    len(testloader), test_loss / (batch_idx + 1), 100. * correct / total))

then the accuracy would be around 2%. Is this because fusion_vars should somehow be updated during training?

@yaoyao-liu
Owner

Hi @ZHUANGHP,

Thanks for your interest in our work.

If you need to run the test without training, you can create the test code following this example:
https://github.com/yaoyao-liu/class-incremental-learning/tree/main/mnemonics-training/2_eval

If you have further questions, please feel free to email me. Have a nice day!

Best,
Yaoyao
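
As a rough illustration of that approach, here is a minimal sketch of a standalone evaluation step. It assumes the two branch models and the fusion variables were each saved as whole objects with torch.save; the checkpoint filenames (including "iter_9_fusion.pth") are hypothetical, and process_inputs_fp, the_args, and a testloader built exactly as during training are assumed to come from the training code:

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Hypothetical checkpoint names; adjust to the files written during training.
b1_model = torch.load('iter_9_b1.pth', map_location=device)
b2_model = torch.load('iter_9_b2.pth', map_location=device)
fusion_vars = torch.load('iter_9_fusion.pth', map_location=device)

b1_model.eval()
b2_model.eval()

correct, total = 0, 0
with torch.no_grad():
    for inputs, targets in testloader:  # testloader built as in training
        inputs, targets = inputs.to(device), targets.to(device)
        # The fusion variables combine the two branches, so they must be the
        # ones learned at the same phase as the loaded models.
        outputs, _ = process_inputs_fp(the_args, fusion_vars, b1_model, b2_model, inputs)
        _, predicted = outputs.max(1)
        total += targets.size(0)
        correct += predicted.eq(targets).sum().item()

print('Phase accuracy: {:.2f}%'.format(100.0 * correct / total))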
