
compare 2 images #16

Closed
kaishijeng opened this issue May 17, 2016 · 13 comments

Comments

@kaishijeng

Do you have a similar utility to compare two jpeg face images and determine whether both are the same person or not, like compare.py in openface?

Thanks,

@davidsandberg
Owner

Currently there is no utility like that. It wouldn't be too much work to modify, for example, "validate_on_lfw" to take two jpeg images instead, but that would assume the images have already been face-aligned. A nicer solution would be to integrate face alignment, for example from openface, but that would be some more work. Definitely worth doing, though!
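
For anyone who wants to try this before a proper utility exists, here is a minimal sketch, assuming two images that are already aligned and prewhitened, and the same images_placeholder, phase_train_placeholder and embeddings tensors that validate_on_lfw.py uses (compare_faces and the threshold value are hypothetical, to be tuned):

import numpy as np

def compare_faces(sess, images_placeholder, phase_train_placeholder, embeddings,
                  img1, img2, threshold=1.1):
    # Stack the two aligned images into a batch of size 2.
    images = np.stack([img1, img2])
    feed_dict = {images_placeholder: images, phase_train_placeholder: False}
    # emb has shape (2, 128): one 128-dimensional embedding per image.
    emb = sess.run(embeddings, feed_dict=feed_dict)
    # Same person if the L2 distance between the embeddings is small.
    dist = np.linalg.norm(emb[0] - emb[1])
    return dist, dist < threshold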

@kaishijeng
Author

Thanks


@kaishijeng
Author

I modified validate_on_lfw.py to compare 2 aligned face images and have
one question about the dimension of the features for each person. Based on the code
below, if images contains one face image:
feed_dict = { images_placeholder: images, phase_train_placeholder: False }
emb1 = sess.run([embeddings], feed_dict=feed_dict)

I expect emb1 to be a one-dimensional array of size 128, but emb1.shape shows
9x128.
Does facenet need 9x128 values per face for recognition?
Openface uses 128 values per face.

Thanks,
FC


@davidsandberg
Owner

I guess you have changed the batch_size to 2? If that is the case you should be able to get a tensor with the two embeddings of size 2x128 (if I remember correctly). And then you can just compute the L2 distance between the embeddings.
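
Concretely, the distance computation would look something like this (a sketch, assuming feed_dict holds the two stacked images and emb is the 2x128 array returned by sess.run):

import numpy as np

emb = sess.run(embeddings, feed_dict=feed_dict)      # shape (2, 128) with batch_size=2
dist = np.sqrt(np.sum(np.square(emb[0] - emb[1])))   # L2 distance between the two faces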

@kaishijeng
Author

I changed batch_size to 1 so that I can compute the embedding for each image.
With this change, I expect the return from sess.run to have shape 1x128,
but it is 9x128, which is weird.

FC


@kaishijeng
Author

If I set batch_size = 2, then the return from sess.run has shape 18x128.
Why is it not 2x128?

FC


@davidsandberg
Owner

Hi again,
I tried to set batch_size = 1 and look at the dimensions of some tensors, and they are as expected:
images_placeholder: Tensor: Tensor("input:0", shape=(1, 96, 96, 3), dtype=float32)
embeddings: Tensor: Tensor("embeddings:0", shape=(1, 128), dtype=float32)
Have you checked the same tensors?

@kaishijeng
Author

My aligned face image size is 144. If I redo face alignment to size 96, the
embeddings shape is (1, 128) for one image.
Not sure why a face image of size 144 makes the embeddings shape (9, 128).


@davidsandberg
Owner

Ok, that makes more sense. The function that generates the inference graph only works for input images of size 96. The key is the line
resh1 = tf.reshape(pool6, [-1, 896])
With 96-pixel images the pool6 shape is (1, 1, 1, 896), but with 144-pixel images it is (1, 3, 3, 896). The -1 in the reshape then silently folds the 3x3 spatial grid into 9 rows, so each image produces 9 embeddings instead of 1 (and a batch of 2 produces 18x128).
It would probably be better to throw an exception when this happens.
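
A standalone snippet that reproduces the shape issue, plus a possible guard (the guard is hypothetical, not in the repo):

import tensorflow as tf

# 144-pixel input: pool6 is (1, 3, 3, 896); the -1 folds the 3x3 grid into 9 rows.
pool6_144 = tf.zeros([1, 3, 3, 896])
print(tf.reshape(pool6_144, [-1, 896]).get_shape())  # (9, 896) -> 9x128 embeddings

# 96-pixel input: pool6 is (1, 1, 1, 896); the reshape gives one row per image.
pool6_96 = tf.zeros([1, 1, 1, 896])
print(tf.reshape(pool6_96, [-1, 896]).get_shape())   # (1, 896) -> 1x128 embeddings

# A check like this would fail fast instead of returning extra embeddings:
if pool6_144.get_shape().as_list()[1:3] != [1, 1]:
    raise ValueError('Inference graph expects 96x96 input images, got pool6 shape %s'
                     % pool6_144.get_shape())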

@kaishijeng
Author

Thanks for the compare.py utility.
Is your pretrained model, model-20160306.ckpt-500000, not compatible with the latest code? I got the following error when running compare.py:

Traceback (most recent call last):
  File "./compare.py", line 80, in <module>
    main()
  File "./compare.py", line 58, in main
    saver.restore(sess, ckpt.model_checkpoint_path)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1104, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 332, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 572, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 652, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 672, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.NotFoundError: Tensor name "incept5b/in4_conv1x1_55/weights/ExponentialMovingAverage" not found in checkpoint files ./models/facenet/20160514-234418/model.ckpt-1000
  [[Node: save/restore_slice_331 = RestoreSlice[dt=DT_FLOAT, preferred_shard=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/restore_slice_331/tensor_name, save/restore_slice_331/shape_and_slice)]]
Caused by op u'save/restore_slice_331', defined at:
  File "./compare.py", line 80, in <module>
    main()
  File "./compare.py", line 52, in main
    saver = tf.train.Saver(ema.variables_to_restore())
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 845, in __init__
    restore_sequentially=restore_sequentially)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 515, in build
    filename_tensor, vars_to_save, restore_sequentially, reshape)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 271, in _AddRestoreOps
    values = self.restore_op(filename_tensor, vs, preferred_shard)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 186, in restore_op
    preferred_shard=preferred_shard)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/io_ops.py", line 201, in _restore_slice
    preferred_shard, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 325, in _restore_slice
    preferred_shard=preferred_shard, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 693, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2186, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1170, in __init__
    self._traceback = _extract_stack()

@davidsandberg
Owner

No, there's a new model that is equivalent, which you can download from
https://drive.google.com/file/d/0B5MzpY9kBtDVVFRyU2JCVmZXUEk/view?usp=sharing

@kaishijeng
Author

It works OK with the new model.

Thanks


@ManyuChang

@kaishijeng Have you done it successfully?
