Making a new pickle file for every image that gets added. #1042
Comments
We had a similar problem.
Where do you save the array of embeddings for future use? Currently I store them in a pickle file; however, whenever a new face is added I re-run encoding for all the images and then save the new face's embedding together with the rest in a pickle file. I couldn't find a way to append to a pickle file.
Would it be possible, using the face_recognition API, to query a database for a facial embedding and then compare it to identify someone? I am using MongoDB to store facial embeddings against names.
Has it been possible to store the encodings in a database and then use them to compare faces, and to append a new encoding when a new face is added?
What you can do is run the encodings once and store them as a pickle. Then, when you get a new image, run the encoding for that new image only. Read the pickle file, append the new data to it, and save the pickle again. Let me know if that is clear now.
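The read-append-save workflow described above can be sketched as follows. This is a minimal sketch, not the library's own API: the file name `encodings.pickle` and the list-of-dicts layout are assumptions, and the short lists stand in for the 128-dimensional vectors that `face_recognition.face_encodings` would return for each new image.

```python
import os
import pickle

def append_encoding(path, name, encoding):
    """Load existing encodings (if any), append one entry, save again."""
    data = []
    if os.path.exists(path):
        with open(path, "rb") as f:
            data = pickle.load(f)
    data.append({"name": name, "encoding": encoding})
    with open(path, "wb") as f:
        pickle.dump(data, f)
    return data

# Example: add two faces one at a time. Only the new face is encoded
# on each run; the earlier embeddings are just reloaded from disk.
db = append_encoding("encodings.pickle", "alice", [0.1, 0.2])
db = append_encoding("encodings.pickle", "bob", [0.3, 0.4])
```

This avoids re-encoding the existing images: each run pays only for the new face, plus the (cheap) cost of rewriting the pickle file.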
I think this topic is similar to my problem. I just did what @jsjaskaran said: when I get a new face encoding from a new image, it is appended to the existing pickle file. But my problem is, how do I delete a specific face encoding? For example, I want to delete one person's face from my database. Any solution?
Okay, my first question would be: why would you want to delete a face encoding? Let's start from the beginning:
Let me know if this could work.
I have a project for an attendance system using face recognition, so if someone resigns from the company, all of that employee's data must be deleted. I will try your advice; it does make sense. Thank you.
Yes, it is possible. I had the same issue mentioned in this thread; I implemented it using PostgreSQL, but you can use other databases as well. I have shared the details here for starters.
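Once the embeddings live in a database, identifying an unknown face reduces to a nearest-neighbour search over the fetched rows. A minimal sketch, with plain `(name, encoding)` pairs standing in for rows returned by a PostgreSQL or MongoDB query (the names and vectors are made up; the 0.6 tolerance mirrors face_recognition's default for `compare_faces`):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(rows, unknown, tolerance=0.6):
    """rows: iterable of (name, encoding) pairs, e.g. a DB query result.
    Returns the name of the closest match within tolerance, else None."""
    best_name, best_dist = None, tolerance
    for name, encoding in rows:
        d = euclidean(encoding, unknown)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

# Toy "database rows"; real encodings would be 128-dimensional.
rows = [("alice", [0.1, 0.2, 0.3]), ("bob", [0.9, 0.8, 0.7])]
match = identify(rows, [0.12, 0.21, 0.29])  # close to alice's vector
```

For a handful of people a linear scan like this is fine; at larger scale you would push the distance computation into the database (e.g. PostgreSQL with the pgvector extension) instead of pulling every row into Python.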
What I Did
In order to save a facial embedding for each face, I save them in a .pickle file. However, suppose I have 1000 images whose embeddings are saved in a .pickle file.
When a new image is introduced, I have to run the encoding for all 1001 images. This increases training time a lot.
I tried appending to .yaml files too, but then .match, which I use to recognize faces, doesn't work as desired.
Any solution?