Running with --cpu does not work for animation_demo.py #16

Closed
tikitong opened this issue Sep 28, 2022 · 5 comments

@tikitong

Running `python animation_demo.py --config config/end2end.yaml --checkpoint ./ckpt/final_3DV.tar --source_image_pth ./assets/EM.jpeg --driving_video_pth ./assets/02.mp4 --relative --adapt_scale --find_best_frame --cpu` gives me:

/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/skimage/transform/_warps.py:105: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
  warn("The default mode, 'constant', will be changed to 'reflect' in "
/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/skimage/transform/_warps.py:110: UserWarning: Anti-aliasing will be enabled by default in skimage 0.15 to avoid aliasing artifacts when down-sampling images.
  warn("Anti-aliasing will be enabled by default in skimage 0.15 to "
animation_demo.py:32: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
blend_scale:  1
/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp:2895.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/torchvision/models/_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=MobileNet_V2_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V2_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
creating the FLAME Decoder
/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/pytorch3d/io/obj_io.py:533: UserWarning: Mtl file does not exist: ./modules/data/template.mtl
  warnings.warn(f"Mtl file does not exist: {f}")
[W NNPACK.cpp:51] Could not initialize NNPACK! Reason: Unsupported hardware.
128it [03:03,  1.48s/it]
Best frame: 120
/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/torch/nn/functional.py:4216: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
  "Default grid_sample and affine_grid behavior has changed "
Traceback (most recent call last):
  File "animation_demo.py", line 216, in <module>
    relative=opt.relative, adapt_movement_scale=opt.adapt_scale, cpu=opt.cpu)
  File "animation_demo.py", line 83, in make_animation
    driving_initial = driving[:, :, 0].cuda()
  File "/Users/user/miniconda3/envs/safa3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 211, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

The error seems to come from animation_demo.py: line 83 calls `.cuda()` unconditionally, without an `if not cpu` check.

How can I modify the file to solve this properly?
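For reference, a minimal sketch of the kind of guard that avoids this crash, assuming the same `cpu` flag that the rest of the script already passes around (the actual change merged in PR #17 may differ):

```python
# animation_demo.py, make_animation(), around line 83 (sketch only):
# move the first driving frame to the GPU only when --cpu was NOT requested.
driving_initial = driving[:, :, 0]
if not cpu:
    driving_initial = driving_initial.cuda()
```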

@rlaboiss
Contributor

See PR #17

@Qiulin-W
Owner

> See PR #17

Thanks so much for the nice PRs

@rlaboiss
Contributor

> Thanks so much for the nice PRs

You are welcome. I am considering using SAFA to generate stimulus videos for one of my scientific studies, hence my interest in your software.

It was not easy to get it working with modern versions of the dependencies, in particular torch and PyTorch3D. By the way, I could not make SAFA work with the newest versions of torch (1.12.1) and torchvision (0.13.1). The latest PyTorch3D (0.7.1, recently released) works fine with torch 1.11.0 and torchvision 0.12.0.

At any rate, I have created a fork of SAFA. I submitted most of my changes as PRs to the upstream SAFA repository, but not everything. Please take a look at my forked repository and tell me what you think.
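For anyone trying to reproduce this setup, the versions reported working above can be pinned roughly like this (a hypothetical requirements snippet; note that PyTorch3D usually has to be installed from its own wheels or conda channel rather than plain pip):

```
# versions reported working in this thread
torch==1.11.0
torchvision==0.12.0
pytorch3d==0.7.1  # often installed from PyTorch3D's own wheel index or conda
```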

@Qiulin-W
Owner

> > Thanks so much for the nice PRs
>
> You are welcome. I am considering using SAFA to generate stimulus videos for one of my scientific studies, hence my interest in your software.
>
> It was not easy to get it working with modern versions of the dependencies, in particular torch and PyTorch3D. By the way, I could not make SAFA work with the newest versions of torch (1.12.1) and torchvision (0.13.1). The latest PyTorch3D (0.7.1, recently released) works fine with torch 1.11.0 and torchvision 0.12.0.
>
> At any rate, I have created a fork of SAFA. I submitted most of my changes as PRs to the upstream SAFA repository, but not everything. Please take a look at my forked repository and tell me what you think.

Feel free to make any adaptations for the newest version of PyTorch3D. But please do not use SAFA for any commercial purpose.

@rlaboiss
Contributor

> Feel free to make any adaptations for the newest version of PyTorch3D. But please do not use SAFA for any commercial purpose.

Sure. As I wrote before, I am looking for face animation software that can produce controlled audiovisual stimuli for a perception study. No commercial purpose involved, only Science. And your 3DV 2021 paper will be properly cited if I end up using SAFA in my study.
