Issue using PyTorch quantized models #396
Comments
The problem was with the installed PyTorch version. I upgraded it to 1.11.0 and everything worked as expected.
We should update the requirements.txt then, right? @nik1806
Yes, we can update it to the latest release of PyTorch.
Great! Can you open a new PR, please?
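A minimal sketch of the requirements.txt change discussed above, assuming the fix is simply to require at least the PyTorch version that deserialized the model correctly (the exact pin the maintainers choose may differ):

```text
# requirements.txt (sketch): require a PyTorch recent enough to
# deserialize the quantized TorchScript archive
torch>=1.11.0
```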
Traceback (most recent call last):
  File "driver.py", line 221, in <module>
    main()
  File "driver.py", line 197, in main
    pilot = Pilot(app_configuration, controller, app_configuration.brain_path)
  File "/root/BehaviorMetrics/behavior_metrics/pilot.py", line 69, in __init__
    self.initialize_robot()
  File "/root/BehaviorMetrics/behavior_metrics/pilot.py", line 115, in initialize_robot
    self.configuration.experiment_model, self.configuration.brain_kwargs)
  File "/root/BehaviorMetrics/behavior_metrics/brains/brains_handler.py", line 28, in __init__
    self.load_brain(brain_path)
  File "/root/BehaviorMetrics/behavior_metrics/brains/brains_handler.py", line 55, in load_brain
    self.active_brain = Brain(self.sensors, self.actuators, model=self.model, handler=self, config=self.config)
  File "/root/BehaviorMetrics/behavior_metrics/brains/f1/brain_f1_torch.py", line 64, in __init__
    self.net = torch.jit.load(PRETRAINED_MODELS + model).to(self.device)
  File "/usr/local/lib/python3.7/dist-packages/torch/jit/_serialization.py", line 161, in load
    cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)
RuntimeError:
Unknown type name 'NoneType':
  Serialized File "code/torch/torch/nn/quantized/modules/linear.py", line 14
      return (qweight, bias, training, dtype)
    def __setstate__(self: torch.torch.nn.quantized.modules.linear.LinearPackedParams,
        state: Tuple[Tensor, Optional[Tensor], bool, int]) -> NoneType:
                                                              ~~~~~~~~ <--- HERE
      self.dtype = (state)[3]
      _0 = (self).set_weight_bias((state)[0], (state)[1], )
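The failure above comes from deserializing a quantized TorchScript archive with an older PyTorch runtime than the one that saved it. A small pre-flight check along these lines can surface the mismatch before `torch.jit.load` raises a cryptic `RuntimeError`; the helper name and the minimum version are illustrative, not part of BehaviorMetrics:

```python
def meets_min_version(version: str, minimum: tuple = (1, 11, 0)) -> bool:
    """Return True if an 'x.y.z' version string is >= the minimum tuple.

    Handles local build suffixes such as '1.13.0+cu117' by stripping
    everything after '+' before comparing numerically.
    """
    parts = tuple(int(p) for p in version.split("+")[0].split(".")[:3])
    return parts >= minimum

# Example: the failing environment ran a PyTorch older than 1.11.0.
print(meets_min_version("1.8.1"))        # False: too old for the archive
print(meets_min_version("1.11.0"))       # True
print(meets_min_version("1.13.0+cu117")) # True
```

In a real brain loader one would pass `torch.__version__` to the check and raise a clear error (or log an upgrade hint) before attempting the load.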