Facial Expressivity NaN #134

Closed
joshwongg opened this issue Aug 8, 2024 · 6 comments

Comments

@joshwongg

Hi,

I've been trying to run the facial expressivity function, but all values returned are NaN. I'm running the code through Miniconda.
This is the code that I've been using:

import openwillis as ow
import tensorflow as tf
import pandas as pd

physical_devices = tf.config.list_physical_devices('GPU')
if physical_devices:
    tf.config.experimental.set_memory_growth(physical_devices[0], True)

config = tf.compat.v1.ConfigProto(
    gpu_options=tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.8)
    # device_count={'GPU': 1}
)
config.gpu_options.allow_growth = True
session = tf.compat.v1.Session(config=config)
tf.compat.v1.keras.backend.set_session(session)

filepath = r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\expression test.mp4"
baseline_filepath = r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\baseline test.mp4"

framewise_loc, framewise_disp, summary = ow.facial_expressivity(
    filepath=filepath,
    baseline_filepath=baseline_filepath
)

print(type(framewise_loc), type(framewise_disp), type(summary))
print(framewise_loc, framewise_disp, summary)

if isinstance(framewise_loc, list) and isinstance(framewise_disp, list) and isinstance(summary, list):
    if len(framewise_loc) == len(framewise_disp) == len(summary):
        data = {
            'Framewise Location': framewise_loc,
            'Framewise Displacement': framewise_disp,
            'Summary': summary
        }
        df = pd.DataFrame(data)
    else:
        print("Error: Lists are of different lengths.")
else:
    data = {
        'Framewise Location': [framewise_loc],
        'Framewise Displacement': [framewise_disp],
        'Summary': [summary]
    }
    df = pd.DataFrame(data)
    
    df.to_excel(r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\Admin - CTP\Excel Openwillis Trial.xlsx", index=False, engine='openpyxl')


print("Framewise Location:", framewise_loc)
print("Framewise Displacement:", framewise_disp)
print("Summary:", summary)

Any help is very much appreciated! Thanks

@GeorgeEfstathiadis
Contributor

Hi Josh, are you getting any error messages, or is the function not logging anything at all?

Additionally, I would make sure the file paths are correct and that the files can be opened fine using cv2 in Python. You could try something like this:

import cv2

# Check if the files can be opened
expression_cap = cv2.VideoCapture(filepath)
baseline_cap = cv2.VideoCapture(baseline_filepath)

if not expression_cap.isOpened():
    print(f"Error: Cannot open video file {filepath}")
if not baseline_cap.isOpened():
    print(f"Error: Cannot open video file {baseline_filepath}")

expression_cap.release()
baseline_cap.release()

@joshwongg
Author

Hi George, I've used the following code to test the file paths, as well as whether the video quality allows for face detection:

import cv2
import mediapipe as mp

mp_face_detection = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils

video_path = r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\joshtrialthink.mp4"
cap = cv2.VideoCapture(video_path)

if not cap.isOpened():
    raise ValueError(f"Error opening video file: {video_path}")

with mp_face_detection.FaceDetection(min_detection_confidence=0.2) as face_detection:
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = face_detection.process(rgb_frame)

        if results.detections:
            for detection in results.detections:
                mp_drawing.draw_detection(frame, detection)

        cv2.imshow('Face Detection', frame)
        
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()

Everything seems to work well. However, I now have a new error when using the facial expressivity function, which did not occur the last time we spoke:

TensorFlow version: 2.15.0
ERROR:root:Face not detected by mediapipe file: ('C:\\Users\\jjsw972\\OneDrive - The University of Newcastle\\Desktop\\joshtrialthink.mp4',) & Error: OpenCV(4.10.0) :-1: error: (-5:Bad argument) in function 'VideoCapture'
> Overload resolution failed:
>  - Expected 'filename' to be a str or path-like object
>  - VideoCapture() missing required argument 'apiPreference' (pos 2)
>  - Argument 'index' is required to be an integer
>  - VideoCapture() missing required argument 'apiPreference' (pos 2)

INFO:root:Face not detected by mediapipe in file ('C:\\Users\\jjsw972\\OneDrive - The University of Newcastle\\Desktop\\joshtrialthink.mp4',)

I'm not sure if this is a problem with TensorFlow, as the same videos work fine when I use the face detection / file path check.

@GeorgeEfstathiadis
Contributor

Does this error occur when running the code you shared in your first message, or are you using a different script?

From the error message, one possible issue is that it looks like you set filepath to ('C:\\Users\\jjsw972\\OneDrive - The University of Newcastle\\Desktop\\joshtrialthink.mp4',), which is a tuple. Instead, you would want to set it to 'C:\\Users\\jjsw972\\OneDrive - The University of Newcastle\\Desktop\\joshtrialthink.mp4'. Is that the case?
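For example, the trailing comma after the string is what turns the path into a one-element tuple; per the error message, VideoCapture expects a str or path-like object (or an integer camera index). Using the path from your error output, something like this should work:

# This creates a one-element tuple, which VideoCapture rejects:
filepath = (r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\joshtrialthink.mp4",)

# This is a plain string, which is what the function expects:
filepath = r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\joshtrialthink.mp4"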

@joshwongg
Author

Amazing, thanks for picking that up! It works now and no longer produces NaN values.

From running the code, the output comes out as follows, both in PowerShell and in the Excel spreadsheet:

Summary:    overall_mean  lower_face_mean  upper_face_mean  ...  lips_std  eyebrows_std  mouth_openness_std
0      0.000867         0.000809         0.000899  ...  0.003784      0.004089            0.245542

I'm just wondering now, how can I see the rest of the values (such as the lips/eyebrows means or the overall/lower/upper standard deviations)?

Thanks!

@GeorgeEfstathiadis
Contributor

Hm, I believe that's an issue with how you are printing/saving the resulting dataframes. Instead, what you could do is save the files in CSV format from Python. Something like this should do the trick:

if isinstance(framewise_loc, pd.DataFrame):
    framewise_loc.to_csv(r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\Framewise_Location.csv", index=False)
if isinstance(framewise_disp, pd.DataFrame):
    framewise_disp.to_csv(r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\Framewise_Displacement.csv", index=False)
if isinstance(summary, pd.DataFrame):
    summary.to_csv(r"C:\Users\jjsw972\OneDrive - The University of Newcastle\Desktop\Summary.csv", index=False)

@joshwongg
Author

Amazing, that works great! Thanks for all your help!
