Sparse Tensor Error! #5

Open
edavattathu-ronnie opened this issue Sep 28, 2023 · 3 comments

@edavattathu-ronnie

@TimoBolkart
I am getting this error when trying to run the coarse-stage inference:

loaded pretrained
resume_checkpoint(): found 1 models
Resuming progress from 600001 iteration
from model path ./runs/coarse\coarse__TEMPEH_final\checkpoints\model_00600000.pth
Main process:

Traceback (most recent call last):
  File "tester/test_global.py", line 78, in <module>
    main()
  File "tester/test_global.py", line 75, in main
    calibration_directory=calibration_directory, image_file_ext=image_file_ext, out_dir=out_dir)
  File "tester/test_global.py", line 53, in run
    execute_locally(test_config_fname)
  File "tester/test_global.py", line 28, in execute_locally
    run(config_fname=test_config_fname)
  File "C:\GANs\TEMPEH\tester\global_tester.py", line 292, in run
    tester.run()
  File "C:\GANs\TEMPEH\tester\global_tester.py", line 120, in run
    for data in self.dataloader:
  File "C:\Users\RonniAdavattathuInno\anaconda3\envs\TEMPEH\lib\site-packages\torch\utils\data\dataloader.py", line 444, in __iter__
    return self._get_iterator()
  File "C:\Users\RonniAdavattathuInno\anaconda3\envs\TEMPEH\lib\site-packages\torch\utils\data\dataloader.py", line 390, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\RonniAdavattathuInno\anaconda3\envs\TEMPEH\lib\site-packages\torch\utils\data\dataloader.py", line 1077, in __init__
    w.start()
  File "C:\Users\RonniAdavattathuInno\anaconda3\envs\TEMPEH\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\RonniAdavattathuInno\anaconda3\envs\TEMPEH\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\RonniAdavattathuInno\anaconda3\envs\TEMPEH\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\RonniAdavattathuInno\anaconda3\envs\TEMPEH\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\RonniAdavattathuInno\anaconda3\envs\TEMPEH\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "C:\Users\RonniAdavattathuInno\anaconda3\envs\TEMPEH\lib\site-packages\torch\multiprocessing\reductions.py", line 142, in reduce_tensor
    storage = tensor.storage()
  File "C:\Users\RonniAdavattathuInno\anaconda3\envs\TEMPEH\lib\site-packages\torch\_tensor.py", line 205, in storage
    return torch._TypedStorage(wrap_storage=self._storage(), dtype=self.dtype)
NotImplementedError: Cannot access storage of SparseTensorImpl

Spawned worker process:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\RonniAdavattathuInno\anaconda3\envs\TEMPEH\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\RonniAdavattathuInno\anaconda3\envs\TEMPEH\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
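
For context, the NotImplementedError is raised while the DataLoader pickles objects for a spawned worker process (Windows uses spawn, not fork), and the torch builds I tried apparently cannot pickle a sparse tensor that way. A minimal sketch of the same failure mode and the usual workaround, assuming the dataset object holds a sparse tensor; the names below are illustrative, not TEMPEH code:

# Minimal sketch (illustrative names, not TEMPEH code): on Windows, starting a
# DataLoader worker pickles the dataset, and pickling a sparse tensor attribute
# triggers "Cannot access storage of SparseTensorImpl" on torch 1.12/1.13.
import torch
from torch.utils.data import Dataset, DataLoader

class SparseHoldingDataset(Dataset):
    def __init__(self):
        indices = torch.tensor([[0, 1], [1, 0]])
        values = torch.tensor([1.0, 2.0])
        # Sparse tensor stored as an attribute; this is what a spawned worker
        # would have to receive via pickle.
        self.weights = torch.sparse_coo_tensor(indices, values, (2, 2))

    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return self.weights.to_dense()[idx % 2]

if __name__ == "__main__":
    dataset = SparseHoldingDataset()
    # Workaround sketch: num_workers=0 keeps loading in the main process, so the
    # dataset is never pickled and the sparse tensor is never serialized.
    loader = DataLoader(dataset, batch_size=2, num_workers=0)
    for batch in loader:
        print(batch.shape)

Converting the sparse tensor to dense (or constructing it inside __getitem__) before building the DataLoader should also avoid the pickling path.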

@TimoBolkart
Owner

Unfortunately, I have no experience with running the inference on Windows. The code and model were only tested on different Linux distributions. What PyTorch version are you using?

@edavattathu-ronnie
Author

Hey Timo, thanks for getting back, and thanks for the great work!
I tried both torch==1.12.1+cu116 and torch==1.13.1+cu117 and get the same error; maybe I will jump to Linux and try my luck there.
It seems like some issue with loading the pretrained weights? I am not entirely sure.

@TimoBolkart
Owner

This seems correct. Maybe try re-downloading the model, just to make sure the downloaded file was not corrupted. Otherwise, I don't have any good idea why you can't load the model on your end.
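
A quick, pipeline-independent way to check the download is to deserialize the checkpoint directly. A small sketch, using the checkpoint path from the log above (adjust it to your own run directory):

import torch

# Path taken from the log in this issue; adjust to your own run directory.
ckpt_path = r"./runs/coarse/coarse__TEMPEH_final/checkpoints/model_00600000.pth"

# map_location="cpu" avoids needing a GPU just to inspect the file.
checkpoint = torch.load(ckpt_path, map_location="cpu")
print(type(checkpoint))
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys())[:10])

If torch.load raises (for example an UnpicklingError or EOFError), the file is likely truncated or corrupted; if it loads cleanly, the checkpoint itself is probably fine and the sparse-tensor error is coming from the DataLoader workers instead.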
