Hi,
I may have found a major bug in the data loading code sg2im/data/vg.py, which hides all the relationships so that the model only sees the type and number of objects.
In sg2im/data/vg.py, line 66, the __getitem__ function uses a Python set to store the indices of objects involved in some relationship, keeps a certain number of objects, and then keeps the relationships whose subjects and objects are both kept. But the following simple example shows that this doesn't work:
import torch

s = set()
x = torch.LongTensor([1, 2, 3])
s.add(x[0])  # x[0] is a 0-dim tensor; sets hash tensors by identity, not value
x[0] in s    # False, because this x[0] is a different tensor object
When adding
assert len(triples) == 0
at line 134 of vg.py, training still runs to completion, which shows that the model does not see any relationship other than in_image.
When generating images with the pre-trained model vg124.pt, the following two scene graphs generate almost the same images.
I recommend converting the PyTorch scalar tensor to a Python int object before putting it into the Python set, and the pre-trained model may need to be updated.
Thanks
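For concreteness, here is a minimal sketch of the suggested conversion; the variable names (triples, obj_idxs_with_rels) are only illustrative and are not the actual names used in vg.py:

import torch

# Illustrative (R, 3) tensor of (subject, predicate, object) triples.
triples = torch.LongTensor([[0, 2, 1], [1, 3, 4]])

obj_idxs_with_rels = set()
for i in range(triples.size(0)):
    s, p, o = triples[i]
    # int(...) turns the 0-dim tensors into Python ints, so the set
    # hashes and compares them by value instead of by tensor identity.
    obj_idxs_with_rels.add(int(s))
    obj_idxs_with_rels.add(int(o))

assert int(triples[0, 2]) in obj_idxs_with_rels  # membership now works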
You are 100% correct, this is a bug, and should now be fixed in 57e1c08.
Prior to PyTorch 0.4, there were no PyTorch scalars so any indexing operation that would return a scalar just returned a Python scalar. All of the pretrained models were trained with PyTorch 0.3.0 so they were trained with proper data loading and should not be affected by this.
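To illustrate the version difference described above (my own example, not from the repository): on PyTorch >= 0.4, indexing a 1-D tensor yields a 0-dim tensor, and .item() recovers the Python scalar that older versions returned directly.

import torch

x = torch.LongTensor([1, 2, 3])
print(type(x[0]))         # <class 'torch.Tensor'>: a 0-dim tensor on PyTorch >= 0.4
print(type(x[0].item()))  # <class 'int'>: the Python scalar that 0.3.x indexing returned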