
Request for Neural Network Model Checkpoint #2

Open
bibiiscool opened this issue Apr 26, 2024 · 3 comments

Comments

@bibiiscool

I'm a student and I'm very interested in your neural network model. I would like to validate its performance, but training the model from scratch takes considerable time. Would it be possible for you to share a checkpoint of your model? It would greatly accelerate my experiments. Thank you for considering my request.

@pinakinathc
Owner

Hi @bibiiscool I checked my old servers and found the two latest checkpoints -- https://drive.google.com/file/d/1UDOmf3QzeKdZHrqkk1qP7hJY7tY-BTYF/view?usp=share_link

Check with both to see which gives robust performance. A few pointers since you are working on this --

The big challenge is making sure your model is robust to varying sketch styles and geometry. I trained these models on NPR sketches rendered with Blender from the 3D garments. Off-the-shelf generalisation to badly drawn freehand sketches is challenging (it depends on how good your NPR rendering is).

The SIGGRAPH Asia 2015 dataset is probably not available online -- but I have a copy and have obtained permission from the authors to host it again. I will do so shortly to help reproduce the results in the paper. My point: have a good dataset to train on.

@bibiiscool
Author

Thank you very much for your response. I truly appreciate your help in clarifying the approach used and the challenges involved.

I will start running the checkpoints as soon as possible and keep you updated on my progress once I have any findings or observations.

@bibiiscool
Author

bibiiscool commented May 9, 2024

Sorry to bother you again. When I place the checkpoint into the project and run predict.py, I always get the error below, no matter what I change parser.add_argument('--model_name', type=str, default='model_A') in predict.py to -- whether it's model_AA, model_B, or any other model:

/root/miniconda3/envs/garment/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  warnings.warn(
/root/miniconda3/envs/garment/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Traceback (most recent call last):
  File "predict.py", line 63, in <module>
    model = GarmentModel.load_from_checkpoint(opt.ckpt)
  File "/root/miniconda3/envs/garment/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 137, in load_from_checkpoint
    return _load_from_checkpoint(
  File "/root/miniconda3/envs/garment/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 205, in _load_from_checkpoint
    return _load_state(cls, checkpoint, strict=strict, **kwargs)
  File "/root/miniconda3/envs/garment/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 259, in _load_state
    keys = obj.load_state_dict(checkpoint["state_dict"], strict=strict)
  File "/root/miniconda3/envs/garment/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for GarmentModel:
        Missing key(s) in state_dict: "encoder.model.1.weight", "encoder.model.1.bias", "encoder.model.1.running_mean", "encoder.model.1.running_var", "encoder.model.4.0.conv1.weight", "encoder.model.4.0.bn1.weight", "encoder.model.4.0.bn1.bias", "encoder.model.4.0.bn1.running_mean", "encoder.model.4.0.bn1.running_var", "encoder.model.4.0.conv2.weight", "encoder.model.4.0.bn2.weight", "encoder.model.4.0.bn2.bias", "encoder.model.4.0.bn2.running_mean", "encoder.model.4.0.bn2.running_var", "encoder.model.4.1.conv1.weight", "encoder.model.4.1.bn1.weight", "encoder.model.4.1.bn1.bias", "encoder.model.4.1.bn1.running_mean", "encoder.model.4.1.bn1.running_var", "encoder.model.4.1.conv2.weight", "encoder.model.4.1.bn2.weight", "encoder.model.4.1.bn2.bias", "encoder.model.4.1.bn2.running_mean", "encoder.model.4.1.bn2.running_var", "encoder.model.5.0.conv1.weight", "encoder.model.5.0.bn1.weight", "encoder.model.5.0.bn1.bias", "encoder.model.5.0.bn1.running_mean", "encoder.model.5.0.bn1.running_var", "encoder.model.5.0.conv2.weight", "encoder.model.5.0.bn2.weight", "encoder.model.5.0.bn2.bias", "encoder.model.5.0.bn2.running_mean", "encoder.model.5.0.bn2.running_var", "encoder.model.5.0.downsample.0.weight", "encoder.model.5.0.downsample.1.weight", "encoder.model.5.0.downsample.1.bias", "encoder.model.5.0.downsample.1.running_mean", "encoder.model.5.0.downsample.1.running_var", "encoder.model.5.1.conv1.weight", "encoder.model.5.1.bn1.weight", "encoder.model.5.1.bn1.bias", "encoder.model.5.1.bn1.running_mean", "encoder.model.5.1.bn1.running_var", "encoder.model.5.1.conv2.weight", "encoder.model.5.1.bn2.weight", "encoder.model.5.1.bn2.bias", "encoder.model.5.1.bn2.running_mean", "encoder.model.5.1.bn2.running_var", "encoder.model.6.0.conv1.weight", "encoder.model.6.0.bn1.weight", "encoder.model.6.0.bn1.bias", "encoder.model.6.0.bn1.running_mean", "encoder.model.6.0.bn1.running_var", "encoder.model.6.0.conv2.weight", "encoder.model.6.0.bn2.weight", "encoder.model.6.0.bn2.bias", 
"encoder.model.6.0.bn2.running_mean", "encoder.model.6.0.bn2.running_var", "encoder.model.6.0.downsample.0.weight", "encoder.model.6.0.downsample.1.weight", "encoder.model.6.0.downsample.1.bias", "encoder.model.6.0.downsample.1.running_mean", "encoder.model.6.0.downsample.1.running_var", "encoder.model.6.1.conv1.weight", "encoder.model.6.1.bn1.weight", "encoder.model.6.1.bn1.bias", "encoder.model.6.1.bn1.running_mean", "encoder.model.6.1.bn1.running_var", "encoder.model.6.1.conv2.weight", "encoder.model.6.1.bn2.weight", "encoder.model.6.1.bn2.bias", "encoder.model.6.1.bn2.running_mean", "encoder.model.6.1.bn2.running_var", "encoder.model.7.0.conv1.weight", "encoder.model.7.0.bn1.weight", "encoder.model.7.0.bn1.bias", "encoder.model.7.0.bn1.running_mean", "encoder.model.7.0.bn1.running_var", "encoder.model.7.0.conv2.weight", "encoder.model.7.0.bn2.weight", "encoder.model.7.0.bn2.bias", "encoder.model.7.0.bn2.running_mean", "encoder.model.7.0.bn2.running_var", "encoder.model.7.0.downsample.0.weight", "encoder.model.7.0.downsample.1.weight", "encoder.model.7.0.downsample.1.bias", "encoder.model.7.0.downsample.1.running_mean", "encoder.model.7.0.downsample.1.running_var", "encoder.model.7.1.conv1.weight", "encoder.model.7.1.bn1.weight", "encoder.model.7.1.bn1.bias", "encoder.model.7.1.bn1.running_mean", "encoder.model.7.1.bn1.running_var", "encoder.model.7.1.conv2.weight", "encoder.model.7.1.bn2.weight", "encoder.model.7.1.bn2.bias", "encoder.model.7.1.bn2.running_mean", "encoder.model.7.1.bn2.running_var", "alignNet.layer.0.weight", "alignNet.layer.0.bias", "updater.layer.0.weight", "updater.layer.0.bias", "decoder.lin0.weight", "decoder.lin1.weight", "decoder.lin2.weight", "decoder.lin3.weight", "decoder.lin4.weight", "decoder.lin5.weight", "decoder.lin6.weight", "decoder.lin7.weight", "decoder.lin8.weight". 
        Unexpected key(s) in state_dict: "alignUpdater.layer.0.weight", "alignUpdater.layer.0.bias", "alignUpdater.layer.1.weight", "alignUpdater.layer.1.bias", "alignUpdater.layer.1.running_mean", "alignUpdater.layer.1.running_var", "alignUpdater.layer.1.num_batches_tracked", "alignUpdater.layer.3.weight", "alignUpdater.layer.3.bias", "alignUpdater.layer.4.weight", "alignUpdater.layer.4.bias", "alignUpdater.layer.4.running_mean", "alignUpdater.layer.4.running_var", "alignUpdater.layer.4.num_batches_tracked", "alignUpdater.feat_emb.0.weight", "alignUpdater.feat_emb.0.bias", "alignUpdater.feat_emb.1.weight", "alignUpdater.feat_emb.1.bias", "alignUpdater.feat_emb.1.running_mean", "alignUpdater.feat_emb.1.running_var", "alignUpdater.feat_emb.1.num_batches_tracked", "alignUpdater.feat_emb.3.weight", "alignUpdater.feat_emb.3.bias", "alignUpdater.alpha_emb.0.weight", "alignUpdater.alpha_emb.0.bias", "alignUpdater.alpha_emb.1.weight", "alignUpdater.alpha_emb.1.bias", "alignUpdater.alpha_emb.1.running_mean", "alignUpdater.alpha_emb.1.running_var", "alignUpdater.alpha_emb.1.num_batches_tracked", "alignUpdater.alpha_emb.3.weight", "alignUpdater.alpha_emb.3.bias", "alphaClassifier.classifier.0.weight", "alphaClassifier.classifier.0.bias", "alphaClassifier.classifier.1.weight", "alphaClassifier.classifier.1.bias", "alphaClassifier.classifier.1.running_mean", "alphaClassifier.classifier.1.running_var", "alphaClassifier.classifier.1.num_batches_tracked", "alphaClassifier.classifier.3.weight", "alphaClassifier.classifier.3.bias", "encoder.model.10.weight", "encoder.model.10.bias", "encoder.model.12.weight", "encoder.model.12.bias", "encoder.model.14.weight", "encoder.model.14.bias", "encoder.model.17.weight", "encoder.model.17.bias", "encoder.model.19.weight", "encoder.model.19.bias", "encoder.model.21.weight", "encoder.model.21.bias", "encoder.model.24.weight", "encoder.model.24.bias", "encoder.model.26.weight", "encoder.model.26.bias", "encoder.model.28.weight", 
"encoder.model.28.bias", "encoder.model.0.bias", "encoder.model.2.weight", "encoder.model.2.bias", "encoder.model.5.weight", "encoder.model.5.bias", "encoder.model.7.weight", "encoder.model.7.bias", "decoder.lin0.weight_g", "decoder.lin0.weight_v", "decoder.lin1.weight_g", "decoder.lin1.weight_v", "decoder.lin2.weight_g", "decoder.lin2.weight_v", "decoder.lin3.weight_g", "decoder.lin3.weight_v", "decoder.lin4.weight_g", "decoder.lin4.weight_v", "decoder.lin5.weight_g", "decoder.lin5.weight_v", "decoder.lin6.weight_g", "decoder.lin6.weight_v", "decoder.lin7.weight_g", "decoder.lin7.weight_v", "decoder.lin8.weight_g", "decoder.lin8.weight_v". 
        size mismatch for encoder.model.0.weight: copying a param with shape torch.Size([64, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 3, 7, 7]).

How can I fix this error? Is there anything I missed?
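From the trace, the checkpoint seems to have been saved from a different architecture than the one predict.py builds: the checkpoint stores weight-normalised decoder parameters (weight_g/weight_v) and a 3x3 first conv, while the current model expects plain weight tensors and a 7x7 conv. A minimal sketch of the key comparison that load_state_dict performs, which can help diagnose which side is wrong -- the key names here are illustrative stand-ins, not the full checkpoint:

```python
def diff_state_dicts(ckpt_keys, model_keys):
    """Return (missing, unexpected) keys, as load_state_dict would report them."""
    ckpt = set(ckpt_keys)
    model = set(model_keys)
    missing = sorted(model - ckpt)     # expected by the model, absent from the checkpoint
    unexpected = sorted(ckpt - model)  # stored in the checkpoint, unknown to the model
    return missing, unexpected

# Illustrative stand-in key names, mirroring the error above
missing, unexpected = diff_state_dicts(
    ckpt_keys=["encoder.model.0.weight", "decoder.lin0.weight_g"],
    model_keys=["encoder.model.0.weight", "decoder.lin0.weight"],
)
print(missing)      # ['decoder.lin0.weight']
print(unexpected)   # ['decoder.lin0.weight_g']
```

When both key sets differ systematically like this, the usual fix is to instantiate the model class/configuration that matches the checkpoint, rather than renaming keys.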
