
pytorch 1.0 error #3

Closed
yr18 opened this issue Apr 9, 2019 · 2 comments
yr18 commented Apr 9, 2019

Hello,

I have updated my PyTorch to the latest 1.0 release, but I am still using Python 3.6.
When I run the code I get the traceback below. Can the code be run with PyTorch 1.0? Thank you so much for your time and help!

    Traceback (most recent call last):
      File "C:\Users\Administrator\Desktop\pytorch-drl4vrp-master\trainer.py", line 390, in <module>
        train_vrp(args)
      File "C:\Users\Administrator\Desktop\pytorch-drl4vrp-master\trainer.py", line 348, in train_vrp
        train(actor, critic, **kwargs)
      File "C:\Users\Administrator\Desktop\pytorch-drl4vrp-master\trainer.py", line 164, in train
        tour_indices, tour_logp = actor(static, dynamic, x0)
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "C:\Users\Administrator\Desktop\pytorch-drl4vrp-master\model.py", line 192, in forward
        dynamic_hidden = self.dynamic_encoder(dynamic)
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "C:\Users\Administrator\Desktop\pytorch-drl4vrp-master\model.py", line 17, in forward
        output = self.conv(input)
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\conv.py", line 187, in forward
        self.padding, self.dilation, self.groups)
    RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'weight'
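For context (this repro is my own sketch, not code from the repo): the error means the tensor reaching `nn.Conv1d` is float64 (Double) while the layer's weights are PyTorch's default float32 (Float). A minimal example that triggers the same mismatch and the usual one-line fix:

```python
import torch
import torch.nn as nn

conv = nn.Conv1d(2, 8, kernel_size=1)           # weights default to float32
x = torch.rand(1, 2, 5, dtype=torch.float64)    # double input, e.g. from a numpy array

try:
    conv(x)                                     # raises: input/weight dtypes differ
except RuntimeError as e:
    print(e)

out = conv(x.float())                           # casting the input to float32 fixes it
print(out.dtype)                                # torch.float32
```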

yr18 closed this as completed Apr 10, 2019

nabilbenmerad commented Apr 25, 2019

So were you able to resolve this? I'm getting a similar dtype error:

    Traceback (most recent call last):
      File "trainer.py", line 390, in <module>
        train_vrp(args)
      File "trainer.py", line 348, in train_vrp
        train(actor, critic, **kwargs)
      File "trainer.py", line 164, in train
        tour_indices, tour_logp = actor(static, dynamic, x0)
      File "/home/nabimaru/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/nabimaru/Workspace/DCbrain/GraphRL/VRP/pytorch-drl4vrp-master/model.py", line 192, in forward
        dynamic_hidden = self.dynamic_encoder(dynamic)
      File "/home/nabimaru/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/nabimaru/Workspace/DCbrain/GraphRL/VRP/pytorch-drl4vrp-master/model.py", line 17, in forward
        output = self.conv(input)
      File "/home/nabimaru/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/nabimaru/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 187, in forward
        self.padding, self.dilation, self.groups)
    RuntimeError: Input type (torch.cuda.DoubleTensor) and weight type (torch.cuda.FloatTensor) should be the same

Thank you in advance.

mveres01 (Owner) commented

It should be enough to modify a small snippet in tasks/vrp.py:

        # All states will have their own intrinsic demand in [1, max_demand]
        # (torch.randint's upper bound is exclusive, hence max_demand + 1),
        # then scaled by the maximum load. E.g. if load=10 and max_demand=30,
        # demands will be scaled to the range (0, 3]
        demands = torch.randint(1, max_demand + 1, dynamic_shape)
        demands = demands.type(torch.FloatTensor) / max_load

        demands[:, 0, 0] = 0  # depot starts with a demand of 0
        self.dynamic = torch.cat((loads, demands), dim=1)
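An equivalent workaround (my own suggestion, not part of the repo) is to cast the tensors once at the model boundary instead of editing the dataset, since the default weight dtype of `nn.Conv1d`/`nn.Linear` is float32. The tensor names and shapes below are hypothetical stand-ins for the VRP state:

```python
import torch

# Hypothetical stand-ins for the static/dynamic VRP tensors; real shapes
# come from the repository's dataset class.
static = torch.rand(4, 2, 10, dtype=torch.float64)
dynamic = torch.rand(4, 2, 10, dtype=torch.float64)

# Cast once at the boundary so every downstream layer sees float32,
# matching the default dtype of the model's weights.
static, dynamic = static.float(), dynamic.float()
assert static.dtype == dynamic.dtype == torch.float32
```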
