
ValueError: Input 0 of layer conv1 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 64, 3] #19

Open
rakesh160 opened this issue Aug 12, 2020 · 8 comments


@rakesh160

While running the command `python nets/test.py -g -v -se -m ./model/c3ae_model_v2_151_4.301724-0.962`, I am getting a ValueError:

ValueError: Input 0 of layer conv1 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 64, 3]
[screenshot of the traceback]

Any help is much appreciated!

@StevenBanama
Owner

Can you provide your environment (TensorFlow version)? I have tested it locally (TensorFlow 2.1), and it works well.

@StevenBanama
Owner

StevenBanama commented Aug 12, 2020

> While running the command `python nets/test.py -g -v -se -m ./model/c3ae_model_v2_151_4.301724-0.962`, I am getting a value error.
>
> ValueError: Input 0 of layer conv1 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 64, 3]
>
> Any help is much appreciated!

You can see that your input is invalid: the model expects shape (1, 64, 64, 3), but your input has shape (32, 64, 3).
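For context, Keras `Conv2D` layers consume a 4-D batch tensor `(batch, height, width, channels)`, which is why the error reports `expected min_ndim=4, found ndim=3`. A minimal sketch of promoting a single 3-D crop with `np.expand_dims` (variable names are illustrative, not from the repo):

```python
import numpy as np

# A single 64x64 RGB crop, as cv2.resize would return it: ndim 3.
img = np.zeros((64, 64, 3), dtype=np.uint8)
assert img.ndim == 3  # this is the shape conv1 rejects

# Add a leading batch axis: (64, 64, 3) -> (1, 64, 64, 3).
batched = np.expand_dims(img, axis=0)
print(batched.shape)  # (1, 64, 64, 3)
```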

@rakesh160
Author

Thanks for the quick reply @StevenBanama .

I have tensorflow 2.3.

I am new to these things and just trying to test it on one of the test images.

> inputs may invalid which needs (1, 64, 64, 3) and your inputs size shows it as (32, 64, 3)

Can you please elaborate a little on how to resolve the issue?

@StevenBanama
Owner

You can print the shape of img before line 102, like this, to check the inputs:

```python
print(img.shape)
```

@StevenBanama
Owner

> Thanks for the quick reply @StevenBanama.
>
> I have tensorflow 2.3.
>
> I am new to these things and just trying to test it on one of the test images.
>
> inputs may invalid which needs (1, 64, 64, 3) and your inputs size shows it as (32, 64, 3). Can you please elaborate a little on how to resolve the issue?

First, update your local repo. Then run it as below:

```
python nets/test.py -g -se -i assets/timg.jpg -m ./model/c3ae_model_v2_151_4.301724-0.962
```

@StevenBanama
Owner


Have you resolved it? Feel free to follow up on the issue~

@KhizarAziz

I have found the solution: you just need to np.expand_dims() each image in the tri_imgs array, then pass the tri_imgs array to model.predict(). It will work fine.
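A sketch of that fix, assuming `tri_imgs` holds the three 64×64 crops built in `predict()` (the `models.predict` call is left commented out because it needs the loaded Keras model):

```python
import numpy as np

# Three 64x64 RGB crops as built in the predict() loop -- each has ndim 3.
tri_imgs = [np.zeros((64, 64, 3), dtype=np.float32) for _ in range(3)]

# Give each crop a leading batch axis so every input is (1, 64, 64, 3).
tri_imgs = [np.expand_dims(im, axis=0) for im in tri_imgs]

print([im.shape for im in tri_imgs])
# result = models.predict(tri_imgs)  # now satisfies conv1's min_ndim=4
```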

@xiangdeyizhang

```python
def predict(models, img, save_image=False):
    try:
        bounds, lmarks = gen_face(MTCNN_DETECT, img, only_one=False)
        ret = MTCNN_DETECT.extract_image_chips(img, lmarks, padding=0.4)
    except Exception as ee:
        ret = None
        print(img.shape, ee)
    if not ret:
        print("no face")
        return img, None

    # Pad the frame so boxes near the border stay in range.
    padding = 200
    new_bd_img = cv2.copyMakeBorder(img, padding, padding, padding, padding, cv2.BORDER_CONSTANT)

    colors = [(0, 0, 255), (0, 0, 0), (255, 0, 0)]
    for pidx, (box, landmarks) in enumerate(zip(bounds, lmarks)):
        trible_box = gen_boundbox(box, landmarks)
        tri_imgs = []
        for bbox in trible_box:
            bbox = bbox + padding
            h_min, w_min = bbox[0]
            h_max, w_max = bbox[1]
            resized = cv2.resize(new_bd_img[w_min:w_max, h_min:h_max, :], (64, 64))
            cv2.imwrite("test2222.jpg", resized)
            tri_imgs.append(resized)

        for idx, pbox in enumerate(trible_box):
            pbox = pbox + padding
            h_min, w_min = pbox[0]
            h_max, w_max = pbox[1]
            new_bd_img = cv2.rectangle(new_bd_img, (h_min, w_min), (h_max, w_max), colors[idx], 2)

        # Build the model inputs: give the crop a batch axis,
        # (64, 64, 3) -> (1, 64, 64, 3), then feed it to the three input branches.
        k = []
        img_tensor = np.expand_dims(resized, axis=0)
        print("shape", img_tensor.shape)
        k.append(img_tensor)
        k.append(img_tensor)
        k.append(img_tensor)

        result = models.predict(k)
        age, gender = None, None
        if result and len(result) == 3:
            age, _, gender = result
            age_label, gender_label = age[-1][-1], "F" if gender[-1][0] > gender[-1][1] else "M"
        elif result and len(result) == 2:
            age, _ = result
            age_label, gender_label = age[-1][-1], "unknown"
        else:
            raise Exception("fatal result: %s" % result)
        cv2.putText(new_bd_img, '%s %s' % (int(age_label), gender_label),
                    (padding + int(bounds[pidx][0]), padding + int(bounds[pidx][1])),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (25, 2, 175), 2)

    if save_image:
        print(result)
        cv2.imwrite("igg.jpg", new_bd_img)
    return new_bd_img, (age_label, gender_label)
```

Replacing the function in the source file with this code solves the problem.
