TypeError: can't assign a numpy.int64 to a torch.FloatTensor #452

goodproj13 opened this issue Feb 28, 2019 · 15 comments
Guys,
Does anyone know what's causing the issue below? Many thanks in advance.
Torch: 0.4.0
Python: 3.6

We use the VOC format of the KITTI dataset. The only thing I changed in the original code from this repo is the image format, from "jpg" to "png", which is the format KITTI uses. When I run "python trainval_net.py", I get the error below:

Traceback (most recent call last):
File "trainval_net.py", line 209, in
imdb.num_classes, training=True)
File "/home/NewPartion/pycharm/faster-rcnn.pytorch/lib/roi_data_layer/roibatchLoader.py", line 54, in init
self.ratio_list_batch[left_idx:(right_idx+1)] = target_ratio
TypeError: can't assign a numpy.int64 to a torch.FloatTensor

EMCP (Contributor) commented Mar 14, 2019

Please paste all the output from your training run, so we can see whether you enabled CUDA, etc.

ma-xu commented Apr 12, 2019

Did you solve this problem? I have the same issue ...

goodproj13 (Author) commented Apr 12, 2019 via email

ma-xu commented Apr 12, 2019

Maybe I should try mmdetection or Detectron ...

@Weizhongjin

Change the original code to:
temp = torch.ones(batch_size)*target_ratio
self.ratio_list_batch[left_idx:(right_idx+1)] = temp

benedictflorance (Contributor) commented Jun 4, 2019

@Weizhongjin

Change the original code to:
temp = torch.ones(batch_size)*target_ratio
self.ratio_list_batch[left_idx:(right_idx+1)] = temp

Applying the above fix throws this error:

TypeError: mul() received an invalid combination of arguments - got (numpy.int64), but expected one of:
 * (Tensor other)
      didn't match because some of the arguments have invalid types: (numpy.int64)
 * (float other)
      didn't match because some of the arguments have invalid types: (numpy.int64)

Has anyone found a fix for this?

benedictflorance (Contributor) commented Jun 4, 2019

@EMCP

Called with args:
Namespace(batch_size=1, checkepoch=1, checkpoint=0, checkpoint_interval=10000, checksession=1, class_agnostic=False, cuda=True, dataset='hollywoodheads_scuta', disp_interval=100, lamda=0.1, large_scale=False, lr=0.002, lr_decay_gamma=0.1, lr_decay_step=6, mGPUs=True, max_epochs=5, net='vgg16', num_workers=0, optimizer='sgd', resume=False, save_dir='data/adaptation/experiments', session=1, start_epoch=1, use_tfboard=False)
loading our dataset...........
/export/livia/home/vision/bflorance/da-faster-rcnn-PyTorch/lib/model/utils/config.py:376: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  yaml_cfg = edict(yaml.load(f))
Using config:
{'ANCHOR_RATIOS': [0.5, 1, 2],
 'ANCHOR_SCALES': [4, 8, 16, 32],
 'CROP_RESIZE_WITH_MAX_POOL': False,
 'CUDA': False,
 'DATA_DIR': '/export/livia/home/vision/bflorance/da-faster-rcnn-PyTorch/data',
 'DEDUP_BOXES': 0.0625,
 'DSN_DIFF_WEIGHT': 100000,
 'EPS': 1e-14,
 'EXP_DIR': 'vgg16',
 'FEAT_STRIDE': [16],
 'GPU_ID': 0,
 'MATLAB': 'matlab',
 'MAX_NUM_GT_BOXES': 50,
 'MOBILENET': {'DEPTH_MULTIPLIER': 1.0,
               'FIXED_LAYERS': 5,
               'REGU_DEPTH': False,
               'WEIGHT_DECAY': 4e-05},
 'PIXEL_MEANS': array([[[102.9801, 115.9465, 122.7717]]]),
 'POOLING_MODE': 'align',
 'POOLING_SIZE': 7,
 'RESNET': {'FIXED_BLOCKS': 1, 'MAX_POOL': False},
 'RNG_SEED': 3,
 'ROOT_DIR': '/export/livia/home/vision/bflorance/da-faster-rcnn-PyTorch',
 'TEST': {'BBOX_REG': True,
          'HAS_RPN': True,
          'MAX_SIZE': 1000,
          'MODE': 'nms',
          'NMS': 0.3,
          'PROPOSAL_METHOD': 'gt',
          'RPN_MIN_SIZE': 16,
          'RPN_NMS_THRESH': 0.7,
          'RPN_POST_NMS_TOP_N': 300,
          'RPN_PRE_NMS_TOP_N': 6000,
          'RPN_TOP_N': 5000,
          'SCALES': [600],
          'SVM': False},
 'TRAIN': {'ASPECT_GROUPING': False,
           'BATCH_SIZE': 256,
           'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
           'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],
           'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],
           'BBOX_NORMALIZE_TARGETS': True,
           'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': True,
           'BBOX_REG': True,
           'BBOX_THRESH': 0.5,
           'BG_THRESH_HI': 0.5,
           'BG_THRESH_LO': 0.0,
           'BIAS_DECAY': False,
           'BN_TRAIN': False,
           'DISPLAY': 10,
           'DOUBLE_BIAS': True,
           'FG_FRACTION': 0.25,
           'FG_THRESH': 0.5,
           'GAMMA': 0.1,
           'HAS_RPN': True,
           'IMS_PER_BATCH': 1,
           'LEARNING_RATE': 0.01,
           'MAX_SIZE': 1000,
           'MOMENTUM': 0.9,
           'PROPOSAL_METHOD': 'gt',
           'RPN_BATCHSIZE': 256,
           'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
           'RPN_CLOBBER_POSITIVES': False,
           'RPN_FG_FRACTION': 0.5,
           'RPN_MIN_SIZE': 8,
           'RPN_NEGATIVE_OVERLAP': 0.3,
           'RPN_NMS_THRESH': 0.7,
           'RPN_POSITIVE_OVERLAP': 0.7,
           'RPN_POSITIVE_WEIGHT': -1.0,
           'RPN_POST_NMS_TOP_N': 2000,
           'RPN_PRE_NMS_TOP_N': 12000,
           'SCALES': [600],
           'SNAPSHOT_ITERS': 5000,
           'SNAPSHOT_KEPT': 3,
           'SNAPSHOT_PREFIX': 'res101_faster_rcnn',
           'STEPSIZE': [30000],
           'SUMMARY_INTERVAL': 180,
           'TRIM_HEIGHT': 600,
           'TRIM_WIDTH': 600,
           'TRUNCATED': False,
           'USE_ALL_GT': True,
           'USE_FLIPPED': True,
           'USE_GT': False,
           'WEIGHT_DECAY': 0.0005},
 'USE_GPU_NMS': True}
Loaded dataset `hollywoodheads_scuta_2007_train_s` for training
Set proposal method: gt
Appending horizontally-flipped training examples...
hollywoodheads_scuta_2007_train_s gt roidb loaded from /export/livia/home/vision/bflorance/da-faster-rcnn-PyTorch/data/cache/hollywoodheads_scuta_2007_train_s_gt_roidb.pkl
done
Preparing training data...
done
before filtering, there are 200 images...
after filtering, there are 200 images...
Source Train Size =  200
Loaded dataset `hollywoodheads_scuta_2007_train_t` for training
Set proposal method: gt
Appending horizontally-flipped training examples...
hollywoodheads_scuta_2007_train_t gt roidb loaded from /export/livia/home/vision/bflorance/da-faster-rcnn-PyTorch/data/cache/hollywoodheads_scuta_2007_train_t_gt_roidb.pkl
done
Preparing training data...
done
before filtering, there are 200 images...
after filtering, there are 200 images...
Target Train Size =  200
source 200 target 200 roidb entries
Traceback (most recent call last):
  File "da_trainval_net.py", line 257, in <module>
    s_imdb.num_classes, training=True)
  File "/export/livia/home/vision/bflorance/da-faster-rcnn-PyTorch/lib/roi_da_data_layer/roibatchLoader.py", line 55, in __init__
    self.ratio_list_batch[left_idx:(right_idx+1)] = target_ratio    # trainset ratio list ,each batch is same number
TypeError: can't assign a numpy.int64 to a torch.FloatTensor

EMCP (Contributor) commented Jun 4, 2019

Can you possibly upgrade to PyTorch 1.0? I've used 1.x exclusively and have had zero issues for months now.

@benedictflorance (Contributor)

@EMCP I'm working on domain adaptation (https://github.com/tiancity-NJU/da-faster-rcnn-PyTorch) and it uses the PyTorch 0.4 version of this repo. :(

EMCP (Contributor) commented Jun 4, 2019

Okay, I'm abroad and away from my deep learning rig, so I can't test PyTorch 0.4 until mid-June, but I would diff the two versions and make sure you have all of the bug fixes that were pushed to the 1.x version. I never ran the master branch, so I can't tell whether it has been properly patched.

IMO, the repo owner should just cut a release branch for older PyTorch versions and keep master on the bleeding edge, instead of this 1.x-branch-off-to-the-side strategy.

@benedictflorance (Contributor)

self.ratio_list_batch[left_idx:(right_idx+1)] = torch.tensor(target_ratio.astype(np.float64)) # trainset ratio list ,each batch is same number
This fixed the issue for PyTorch 0.4.0.
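
For anyone hitting this later, a minimal sketch of the failure and of this cast (assuming PyTorch 0.4.x; the variable names mirror roibatchLoader.py, but the sizes are made up):

import numpy as np
import torch

batch_size = 5                                        # illustrative size
ratio_list_batch = torch.Tensor(batch_size).zero_()   # FloatTensor, as in roibatchLoader.py
target_ratio = np.int64(2)                            # the numpy scalar that triggers the error

# ratio_list_batch[0:3] = target_ratio                # raises the TypeError on 0.4.0
ratio_list_batch[0:3] = torch.tensor(target_ratio.astype(np.float64))  # the cast above works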

benedictflorance added a commit to benedictflorance/faster-rcnn.pytorch that referenced this issue Jun 6, 2019
EMCP (Contributor) commented Jun 8, 2019

Feel free to submit a PR and close this, @benedictflorance.

@benedictflorance (Contributor)

Yeah, I've submitted one.
#573

jwyang added a commit that referenced this issue Jun 10, 2019
@ljk1072911239

In /lib/roi_data_layer/roibatchLoader.py, line 52, change
target_ratio = 1
to:
target_ratio = np.array(1)

zrh0712 pushed a commit to zrh0712/faster-rcnn.pytorch that referenced this issue Sep 4, 2019
* commit '624608fd2f1fb332ef585062cfe51fabf718430d':
  Update bbox_transform.py
  Fix typo in faster_rcnn.py
  Fix typo in faster_rcnn.py
  Remove "for training" in get_roidb
  Fix issue jwyang#452 for PyTorch 0.4.0
  two pdb imports is redundant
@viniciusarruda

To solve this definitively, in this line, change:

ratio_large = 2 # largest ratio to preserve.

to:

ratio_large = 2.0 # largest ratio to preserve.

PyTorch creates a tensor from these values, and its type is inferred from the data. A value of 2 is inferred as an int, so changing it to a floating-point value fixes it.
Note that this error only occurs when execution falls into this if statement. The if statement below it uses ratio = ratio_small, where ratio_small = 0.5 is already a floating-point value, as defined at the beginning of the function.
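
For reference, a tiny sketch of the type inference being described (behaviour checked on a recent PyTorch build, not on this repo's exact code):

import torch

print(torch.tensor(2).dtype)    # torch.int64  -- an int literal is inferred as an integer tensor
print(torch.tensor(2.0).dtype)  # torch.float32 -- a float literal matches the FloatTensor ratio list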
