Face Training - Timeout #92

Open

tommyjlong opened this issue Feb 18, 2021 · 4 comments

@tommyjlong

Today, I built the CPU version:

 $sudo docker run -e VISION-DETECTION=True -e VISION-FACE=True -e MODE=High -v localstorage:/datastore -p 83:5000 --name ds_object_face deepquestai/deepstack
DeepStack: Version 2021.02.1
/v1/vision/face
---------------------------------------
/v1/vision/face/recognize
---------------------------------------
/v1/vision/face/register
---------------------------------------
/v1/vision/face/match
---------------------------------------
/v1/vision/face/list
---------------------------------------
/v1/vision/face/delete
---------------------------------------
/v1/vision/detection
---------------------------------------
---------------------------------------
v1/backup
---------------------------------------
v1/restore

I then ran this curl command:

curl -X POST -F 'image=@image1.JPG' -F 'userid="Fred"' 'http://DSIPADDR:83/v1/vision/face/register'

After 60 seconds it returns: {"success":false,"error":"failed to process request before timeout","duration":0}

DeepStack Log: [GIN] 2021/02/18 - 22:36:12 | 500 | 1m0s | DSIPADDR | POST /v1/vision/face/register
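
As a quick diagnostic (just a sketch reusing the DSIPADDR, port 83, and image1.JPG placeholders from above), a plain detection request against the same container can show whether only the face endpoints hang:

# Same placeholders as the register command above; checks that non-face routes still respond
curl -X POST -F 'image=@image1.JPG' 'http://DSIPADDR:83/v1/vision/detection'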

@tommyjlong (Author)

I wanted to report that I installed the Windows 10 2021.02.1 CPU version and face register works there. So the error I reported above appears to be specific to the Docker build of 2021.02.1 CPU.

pnewnam commented Mar 9, 2021

I ran into the same problem running on Docker. It looked like I had plenty of memory, but it turns out face training causes a memory spike from 0.5 GB to 4 GB, so there was probably not enough free memory to absorb that spike. I stopped a couple of other containers and restarted DeepStack with more than 4 GB of memory headroom. Face training then worked perfectly, and memory usage climbed from 0.5 GB to 4 GB. When I restarted the DeepStack container it settled back at 0.5 GB. So it looks like for face training to work in the Docker container you need at least 4 GB of free memory before starting face training. As a side note, the logs did not indicate any issue.
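
In case it helps, this is roughly how I checked it; free and docker stats are standard tools, and ds_object_face is just the container name used earlier in this thread:

# Check how much memory the host has free before starting face training
free -h

# Watch the DeepStack container's live memory use while a face/register request runs;
# during training it climbs from roughly 0.5 GB to about 4 GB
docker stats ds_object_face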

tommyjlong (Author) commented Apr 15, 2021

Other users on the Home Assistant Forum have reported that this issue does NOT exist in this particular Docker image: deepquestai/deepstack:cpu-x5-beta
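
If anyone wants to try that, it should just be a matter of swapping the image tag in the run command from my first post (sketch only; the flags and container name are carried over unchanged):

# Remove the old container first, since the name is reused
docker rm -f ds_object_face

docker pull deepquestai/deepstack:cpu-x5-beta

docker run -e VISION-DETECTION=True -e VISION-FACE=True -e MODE=High \
  -v localstorage:/datastore -p 83:5000 --name ds_object_face \
  deepquestai/deepstack:cpu-x5-beta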

@JurajNyiri

I'm having the same issue on a Mac M1 Mini, with both the latest and the first available arm image.
