
LeftPupil and Right pupil None #52

Open
ctippur opened this issue Oct 12, 2020 · 1 comment

ctippur commented Oct 12, 2020

Hello,

Thanks for initiating this effort.

I have a stationary video taken on my phone in which I am not moving. I changed the code to read from a file; the rest of the code remains the same as example.py.

I see the video being played, but the left and right pupil coordinates are None. Could this be related to ambient light?

I do see that the video is clear. Please see a sample image.
[Screenshot: sample frame from the video, 2020-10-12]

What am I doing wrong?

S

import cv2
from gaze_tracking import GazeTracking

# Create the gaze tracker and open the video file instead of the webcam
gaze = GazeTracking()
cap = cv2.VideoCapture('/Users/shekartippur/playground/tflite/myvideo.mp4')

while True:
    ret, frame = cap.read()
    if not ret:
        # No more frames (end of the video file)
        break

    # We send this frame to GazeTracking to analyze it
    gaze.refresh(frame)

    frame = gaze.annotated_frame()
    text = ""

    if gaze.is_blinking():
        text = "Blinking"
    elif gaze.is_right():
        text = "Looking right"
    elif gaze.is_left():
        text = "Looking left"
    elif gaze.is_center():
        text = "Looking center"

    cv2.putText(frame, text, (90, 60), cv2.FONT_HERSHEY_DUPLEX, 1.6, (147, 58, 31), 2)

    left_pupil = gaze.pupil_left_coords()
    right_pupil = gaze.pupil_right_coords()
    cv2.putText(frame, "Left pupil:  " + str(left_pupil), (90, 130), cv2.FONT_HERSHEY_DUPLEX, 0.9, (147, 58, 31), 1)
    cv2.putText(frame, "Right pupil: " + str(right_pupil), (90, 165), cv2.FONT_HERSHEY_DUPLEX, 0.9, (147, 58, 31), 1)

    cv2.imshow("Demo", frame)

    if cv2.waitKey(1) == 27:  # Esc key stops playback
        break

cap.release()
cv2.destroyAllWindows()

keshariS (Contributor) commented Jul 2, 2021

The whole face should be visible in the video for the dlib library to successfully detect the face and, eventually, the landmarks.
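
A minimal sketch of that check, assuming dlib and opencv-python are installed: run dlib's frontal face detector directly on frames of the same video and count the detections. If the count stays at zero, the face (and therefore the pupils) is never being found, regardless of ambient light. This is a diagnostic sketch, not part of GazeTracking; the file path is the one from the original post.

# Diagnostic sketch (assumption: dlib and opencv-python are installed) -- not part of
# GazeTracking itself. It runs dlib's frontal face detector on each frame and reports
# how many faces are found; if this stays at 0, GazeTracking's pupil coordinates will
# remain None.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
cap = cv2.VideoCapture('/Users/shekartippur/playground/tflite/myvideo.mp4')

frame_index = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)  # rectangles for every face dlib finds in this frame
    print("frame", frame_index, "-> faces detected:", len(faces))
    frame_index += 1

cap.release()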
