
Comparing a database of faces to the faces in a photo #468

Closed
TomGrozev opened this issue Sep 26, 2017 · 7 comments

Comments

@TomGrozev

Hi,

So what I have is a folder with thousands of ID photos as the reference faces, and hundreds of photos with multiple people in them. I need to find who (from the ID photos) is in each photo, and the face locations in each image. I am having some trouble getting my head around facenet and the documentation doesn't cover this. I was able to do a similar thing using https://github.com/ageitgey/face_recognition but that doesn't recognise faces very well.

Help is greatly appreciated.

Thanks in advance

@MaartenBloemen
Contributor

Hi Tom!

I take it the ID photos are one per person? If that is the case you'll need to use face verification (do two pictures belong to the same person?). Take a look at compare.py: you feed all your pictures to it and it returns a distance matrix with values between 0 and 2, where 0 = 100% the same and 2 = 0% the same.
You will need to modify the "load_and_align_data" function in the script a bit, though, for it to work with pictures containing multiple people.

Hope this helps a bit, if you have some more questions regarding this problem just let me know.
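To make the distance-matrix idea concrete, here is a minimal sketch of the verification step, assuming you already have the embeddings the network produces as rows of a NumPy array (the `pairwise_distances` name and the toy vectors are illustrative, not part of compare.py):

```python
import numpy as np

def pairwise_distances(embeddings):
    """Euclidean distance between every pair of row embeddings.

    For L2-normalized FaceNet embeddings each distance lies in [0, 2]:
    0 means identical embeddings, 2 means diametrically opposed ones.
    """
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, computed for all pairs at once
    sq = np.sum(embeddings ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * embeddings @ embeddings.T
    return np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negatives from rounding

# Toy unit vectors standing in for real embeddings from the network.
emb = np.array([[1.0, 0.0], [1.0, 0.0], [-1.0, 0.0]])
dist = pairwise_distances(emb)
# dist[0, 1] -> 0.0 (same "person"), dist[0, 2] -> 2.0 (maximally different)
```

In practice you would then threshold the distances (values around 1.0–1.1 are commonly used as a same-person cutoff for this model family) and take the closest ID photo below the threshold as the match.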

@TomGrozev
Author

Ah, thanks for this. I managed to modify the load_and_align_data function so that it takes input from a folder with the ID photos and from a folder with the unknown photos. The function is below:

def load_and_align_data(known_image_paths, unknown_image_paths, image_size, margin, gpu_memory_fraction):

    minsize = 20                  # minimum size of face
    threshold = [0.6, 0.7, 0.7]   # per-step MTCNN thresholds
    factor = 0.709                # scale factor

    print('Creating networks and loading parameters')
    with tf.Graph().as_default():
        gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_memory_fraction)
        sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False))
        with sess.as_default():
            pnet, rnet, onet = align.detect_face.create_mtcnn(sess, None)

    # Detect, crop and prewhiten every face in the known (ID) photos.
    known_nrof_samples = len(known_image_paths)
    known_img_list = [None] * known_nrof_samples
    for i in tqdm(range(known_nrof_samples)):
        print(os.path.expanduser(known_image_paths[i]))
        img = misc.imread(os.path.expanduser(known_image_paths[i]))
        img_size = np.asarray(img.shape)[0:2]
        bounding_boxes, _ = align.detect_face.detect_face(img, minsize, pnet, rnet, onet, threshold, factor)
        for det in bounding_boxes:
            bb = np.zeros(4, dtype=np.int32)
            bb[0] = np.maximum(det[0] - margin / 2, 0)
            bb[1] = np.maximum(det[1] - margin / 2, 0)
            bb[2] = np.minimum(det[2] + margin / 2, img_size[1])
            bb[3] = np.minimum(det[3] + margin / 2, img_size[0])
            cropped = img[bb[1]:bb[3], bb[0]:bb[2], :]
            aligned = misc.imresize(cropped, (image_size, image_size), interp='bilinear')
            prewhitened = facenet.prewhiten(aligned)
            known_img_list[i] = prewhitened
    known_images = np.stack(known_img_list)

    # Same processing for the unknown photos (may contain multiple faces).
    unknown_nrof_samples = len(unknown_image_paths)
    unknown_img_list = [None] * unknown_nrof_samples
    for i in range(unknown_nrof_samples):
        img = misc.imread(os.path.expanduser(unknown_image_paths[i]))
        img_size = np.asarray(img.shape)[0:2]
        bounding_boxes, _ = align.detect_face.detect_face(img, minsize, pnet, rnet, onet, threshold, factor)
        for det in tqdm(bounding_boxes):
            bb = np.zeros(4, dtype=np.int32)
            bb[0] = np.maximum(det[0] - margin / 2, 0)
            bb[1] = np.maximum(det[1] - margin / 2, 0)
            bb[2] = np.minimum(det[2] + margin / 2, img_size[1])
            bb[3] = np.minimum(det[3] + margin / 2, img_size[0])
            cropped = img[bb[1]:bb[3], bb[0]:bb[2], :]
            aligned = misc.imresize(cropped, (image_size, image_size), interp='bilinear')
            prewhitened = facenet.prewhiten(aligned)
            unknown_img_list[i] = prewhitened
    unknown_images = np.stack(unknown_img_list)
    return [known_images, unknown_images]

I am getting this error:

ValueError: Cannot feed value of shape (2, 2, 160, 160, 3) for Tensor u'input:0', which has shape '(?, 160, 160, 3)'

Thanks for the help

@MaartenBloemen
Contributor

See the answer in #362 for that ValueError

@TomGrozev
Author

Thanks, but I am already using .jpg. I think the issue is that load_and_align_data returns two NumPy stacks in a list, like [stack1, stack2]. When I do something like feed_dict = { images_placeholder: images[1], phase_train_placeholder: False } and reference only one of the two, it works, but what I want to do is compare all of the faces from one stack with the other.

@TomGrozev
Author

@MaartenBloemen Hey I'm still stuck on this, got any ideas?

@davidsandberg
Owner

Hi,
You need to check the dimensions of the NumPy array in your feed_dict. It looks like you are stacking arrays along the wrong dimension: the result should be an array of shape 4x160x160x3 instead of 2x2x160x160x3.
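A small sketch of the dimension fix davidsandberg describes, using dummy arrays with the shapes from the error message (the variable names are illustrative; the real arrays come from load_and_align_data):

```python
import numpy as np

# Two stacks of two 160x160x3 face crops, as returned in [stack1, stack2].
known_images = np.zeros((2, 160, 160, 3))
unknown_images = np.zeros((2, 160, 160, 3))

# Wrong: np.stack adds a NEW leading axis -> shape (2, 2, 160, 160, 3),
# which the input:0 placeholder of shape (?, 160, 160, 3) rejects.
wrong = np.stack([known_images, unknown_images])

# Right: concatenate along the existing batch axis -> (4, 160, 160, 3).
all_images = np.concatenate([known_images, unknown_images], axis=0)

# Run the network once on all_images; afterwards the embeddings can be
# split back into known and unknown rows at the recorded boundary:
n_known = known_images.shape[0]
# known_emb, unknown_emb = embeddings[:n_known], embeddings[n_known:]
```

Feeding the single concatenated batch keeps the placeholder happy, and comparing "all faces from one with the other" then reduces to taking distances between the first `n_known` embedding rows and the rest.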

@Leedonggeon

Leedonggeon commented Nov 19, 2017

@TomGrozev
Hey, I'm interested in this subject at the moment and I'm also stuck on it. Did you find the answer?
How do you run compare.py, i.e. python compare.py x x? What arguments should I pass?
