ValueError: operands could not be broadcast together with shapes (88,23) (88,24) #10

Closed
ZhangYK124 opened this issue Aug 8, 2018 · 3 comments

@ZhangYK124

s1@s1:~/Downloads/WSHP/data_generation$ bash demo.sh
length of json_dict 2
length of pascal_img 644
bbox [308, 272, 331, 406]
0 picked pascal 2010_003117_0
Traceback (most recent call last):
  File "crop_pose_and_generate_testing_prior.py", line 97, in <module>
    prior = generate_prior_single_person(bbox, raw_pose, opt.PASCALMaskImgDir, pascal_poses, pascal_img_names, pascal_pose_dict, opt.n, opt.k)
  File "/home/s1/Downloads/WSHP/data_generation/generate_prior_util.py", line 444, in generate_prior_single_person
    morphingImg = morphing(pascal_mask_img, pascal_pose, pose, origin_size)
  File "/home/s1/Downloads/WSHP/data_generation/generate_prior_util.py", line 351, in morphing
    origin_body_part[origin_row_low:origin_row_high, origin_col_low:origin_col_high])
ValueError: operands could not be broadcast together with shapes (88,23) (88,24)

@luguansong
Collaborator

Hi, this is probably caused by an incompatibility between Python 2 and Python 3. On my system, python points to Python 2.7 and python3 to Python 3.5. Please check which Python version your system uses and retry.
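
For reference, the behavioural difference in question is the division operator (a minimal illustration, not code from the repository):

# The same expression evaluates differently under the two interpreters:
print(89 / 2)    # Python 2: 44 (integer division)   Python 3: 44.5 (true division)
print(89 // 2)   # both: 44 (explicit floor division)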

@ZhangYK124
Author

@luguansong Hi, I ran into this problem again tonight, but I don't think it is caused by a Python 2 / Python 3 incompatibility; my python also points to 2.7. The error showed up while I was batch-processing images: it was triggered by one particular image, so I tested that image on its own and confirmed that it is indeed the image causing the problem.
I searched on Baidu, and the explanation I found is that "ValueError: operands could not be broadcast together with shapes (88,23) (88,24)" means NumPy's ufunc broadcasting rules were violated:
When a ufunc operates on two arrays, it computes element-wise over corresponding elements, so it requires the two arrays to have the same shape. If the shapes differ, broadcasting proceeds as follows:
1. All input arrays are aligned to the array with the longest shape; missing leading dimensions are padded with 1.
2. The output array's shape is the maximum of the input shapes along each axis.
3. An input array can take part in the computation only if, along each axis, its length equals the corresponding output length or is 1; otherwise an error is raised.
4. Where an input array has length 1 along an axis, the single set of values on that axis is reused for every position along it.

Source: https://blog.csdn.net/qq_18433441/article/details/56834207

I don't know how to fix this and hope you can help. I'm not sure how else to contact you, but I can send you the problematic image; my email is 2197253439@qq.com.
Thank you very much.
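
For illustration, a minimal snippet (not from the repository) that reproduces the reported error with the shapes from the traceback:

import numpy as np

a = np.zeros((88, 23))
b = np.zeros((88, 24))
# np.maximum is a ufunc; the trailing axes (23 vs 24) are neither equal nor 1,
# so broadcasting fails with exactly this message:
np.maximum(a, b)
# ValueError: operands could not be broadcast together with shapes (88,23) (88,24)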

@sayhi12345

This could be an indexing issue with target_mask_img and origin_body_part. I fixed it by changing the division to floor division in morphing() (data_generation/generate_prior_util.py):

def morphing(origin_mask_img, origin_pose, target_pose, target_size):  # target_size [width, height]
    '''
    According to origin pose and target pose, morph the origin mask image so as to get the same pose as the target pose.

    :param origin_mask_img:
        Origin mask image, 1-channel, of labels 0-10 (0 for background).
    :param origin_pose:
        1-dimension pose array, of shape (32, ).
    :param target_pose:
        1-dimension pose array, of shape (32, ).
    :param target_size:
        Target image size: [width, height].
    :return:
        Color image of morphed mask image, of size target_size.
    '''
    assert (len(origin_mask_img.shape) == 2)
    assert (len(origin_pose.shape) == 1)
    assert (len(target_pose.shape) == 1)

    target_mask_img = np.zeros((target_size[1], target_size[0]), dtype=np.uint8)
    # morphing for each part
    for label in range(1, 11):
        origin_size = np.array([origin_mask_img.shape[1], origin_mask_img.shape[0]], dtype=int)
        origin_body_part = origin_mask_img * (origin_mask_img == label)
        a = main_skeleton_lines[label][0]
        b = main_skeleton_lines[label][1]
        origin_pose_part_a = np.array([origin_pose[a * 2], origin_pose[a * 2 + 1]], dtype=float)
        origin_pose_part_b = np.array([origin_pose[b * 2], origin_pose[b * 2 + 1]], dtype=float)
        origin_pose_part_tensor = origin_pose_part_b - origin_pose_part_a
        target_pose_part_a = np.array([target_pose[a * 2], target_pose[a * 2 + 1]], dtype=float)
        target_pose_part_b = np.array([target_pose[b * 2], target_pose[b * 2 + 1]], dtype=float)
        target_pose_part_tensor = target_pose_part_b - target_pose_part_a
        origin_pose_part_length = np.sqrt(np.sum(np.square(origin_pose_part_tensor)))
        target_pose_part_length = np.sqrt(np.sum(np.square(target_pose_part_tensor)))
        # scaling ratio
        scale_factor = target_pose_part_length / origin_pose_part_length
        if scale_factor == 0:
            continue
        # rotating angle
        theta = - (np.arctan2(target_pose_part_tensor[1], target_pose_part_tensor[0]) - np.arctan2(
            origin_pose_part_tensor[1], origin_pose_part_tensor[0])) * 180 / np.pi

        ''' scale '''
        origin_size[0] *= scale_factor
        origin_size[1] *= scale_factor
        origin_pose_part_a *= scale_factor
        origin_pose_part_b *= scale_factor
        origin_body_part = cv2.resize(origin_body_part, (origin_size[0], origin_size[1]),
                                      interpolation=cv2.INTER_NEAREST)
        # print("finish scale", label)

        ''' translate to the center in case rotation out of the image '''
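        # floor division keeps the part centres at whole-pixel values; with true
        # division (Python 3's /) the slice bounds computed below can differ by
        # one, which is what triggers the broadcast error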
        origin_pose_part_center = (origin_pose_part_a + origin_pose_part_b) // 2
        origin_center = origin_size // 2
        tx = origin_center[0] - int(origin_pose_part_center[0])
        ty = origin_center[1] - int(origin_pose_part_center[1])
        tm = np.float32([[1, 0, tx], [0, 1, ty]])
        origin_body_part = cv2.warpAffine(origin_body_part, tm, (origin_size[0], origin_size[1]))
        # print("finish translate", label)

        ''' rotate '''
        rm = cv2.getRotationMatrix2D((origin_center[0], origin_center[1]), theta, 1)
        origin_body_part = cv2.warpAffine(origin_body_part, rm, (origin_size[0], origin_size[1]))
        origin_body_part = (origin_body_part != 0) * label
        # print("finish rotate", label)

        ''' crop and paste '''
        target_pose_part_center = (target_pose_part_a + target_pose_part_b) // 2
        target_pose_part_center[0] = int(target_pose_part_center[0])
        target_pose_part_center[1] = int(target_pose_part_center[1])
        if target_pose_part_center[1] >= origin_center[1]:
            origin_row_low = 0
            target_row_low = target_pose_part_center[1] - origin_center[1]
        else:
            origin_row_low = origin_center[1] - target_pose_part_center[1]
            target_row_low = 0
        if (target_size[1] - target_pose_part_center[1]) >= (origin_size[1] - origin_center[1]):
            origin_row_high = origin_size[1]
            target_row_high = target_pose_part_center[1] + origin_size[1] - origin_center[1]
        else:
            origin_row_high = origin_center[1] + target_size[1] - target_pose_part_center[1]
            target_row_high = target_size[1]
        if target_pose_part_center[0] >= origin_center[0]:
            origin_col_low = 0
            target_col_low = target_pose_part_center[0] - origin_center[0]
        else:
            origin_col_low = origin_center[0] - target_pose_part_center[0]
            target_col_low = 0
        if (target_size[0] - target_pose_part_center[0]) >= (origin_size[0] - origin_center[0]):
            origin_col_high = origin_size[0]
            target_col_high = target_pose_part_center[0] + origin_size[0] - origin_center[0]
        else:
            origin_col_high = origin_center[0] + target_size[0] - target_pose_part_center[0]
            target_col_high = target_size[0]
        origin_row_low = int(origin_row_low)
        target_row_low = int(target_row_low)
        origin_row_high = int(origin_row_high)
        target_row_high = int(target_row_high)
        origin_col_low = int(origin_col_low)
        target_col_low = int(target_col_low)
        origin_col_high = int(origin_col_high)
        target_col_high = int(target_col_high)
        target_mask_img[target_row_low:target_row_high, target_col_low:target_col_high] = np.maximum(
            target_mask_img[target_row_low:target_row_high, target_col_low:target_col_high],
            origin_body_part[origin_row_low:origin_row_high, origin_col_low:origin_col_high])
        # print("finish crop and paste", label)

    return paint(target_mask_img, merge=True)
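
To see why the floor division matters, here is a hypothetical set of numbers (not taken from the failing image) showing how true division, which the original code presumably used, can leave the two slices passed to np.maximum one row apart in length, the same kind of off-by-one as the (88,23) vs (88,24) mismatch in the traceback:

# Row bounds for the branch where target_pose_part_center[1] >= origin_center[1]
# and target_size[1] - target_pose_part_center[1] < origin_size[1] - origin_center[1]
ts, tc, os_ = 150, 110, 89       # hypothetical target height, target part centre, scaled origin height

oc = os_ / 2                     # Python 3 true division -> 44.5
origin_len = int(oc + ts - tc)   # int(84.5)       -> 84
target_len = ts - int(tc - oc)   # 150 - int(65.5) -> 85  (one row too many -> broadcast error)

oc = os_ // 2                    # floor division -> 44
origin_len = oc + ts - tc        # 84
target_len = ts - (tc - oc)      # 84              (lengths agree)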
