Making sure everything runs well, adding a general test suite script for easier testing.
unknown authored and unknown committed Feb 5, 2016
1 parent 17a76f5 commit 059a97e
Showing 24 changed files with 110 additions and 169 deletions.
17 changes: 10 additions & 7 deletions Readme.txt
@@ -69,21 +69,24 @@ After landmark detection is done clm_model stores the landmark locations and loc

Head Pose:

// Head pose is stored in the following format (X, Y, Z, rot_x, rot_y, rot_z), translation is in millimeters with respect to camera and rotation is in radians around X,Y,Z axes with the convention R = Rx * Ry * Rz, left-handed positive sign, the rotation can be either with respect to camera or the camera plane (for visualisation we want rotation with respect to camera plane)
// Head pose is stored in the following format (X, Y, Z, rot_x, rot_y, rot_z)
// translation is in millimeters with respect to camera centre
// Rotation is in radians around X,Y,Z axes with the convention R = Rx * Ry * Rz, left-handed positive sign
// The rotation can be either with respect to camera or world coordinates (for visualisation we want rotation with respect to world coordinates)
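The Euler-angle convention above (R = Rx * Ry * Rz) can be sketched as a stand-alone helper. This is an illustrative reimplementation, not the library's own code, and it uses the standard right-handed rotation matrices; the "left-handed positive sign" convention mentioned above may flip the sign of individual terms:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Multiply two 3x3 matrices.
static Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Build a rotation matrix from Euler angles (radians) using the
// readme's composition order R = Rx * Ry * Rz.
Mat3 euler_to_rot(double rx, double ry, double rz) {
    const double sx = std::sin(rx), cx = std::cos(rx);
    const double sy = std::sin(ry), cy = std::cos(ry);
    const double sz = std::sin(rz), cz = std::cos(rz);
    Mat3 Rx = {{{1, 0, 0}, {0, cx, -sx}, {0, sx, cx}}};
    Mat3 Ry = {{{cy, 0, sy}, {0, 1, 0}, {-sy, 0, cy}}};
    Mat3 Rz = {{{cz, -sz, 0}, {sz, cz, 0}, {0, 0, 1}}};
    return mul(mul(Rx, Ry), Rz);
}
```

With zero angles this yields the identity, which is a quick sanity check for any convention change.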

There are four methods in total that can return the head pose

//Getting the head pose w.r.t. camera assuming orthographic projection
Vec6d GetPoseCamera(CLM& clm_model, double fx, double fy, double cx, double cy, CLMParameters& params);

//Getting the head pose w.r.t. camera plane assuming orthographic projection
Vec6d GetPoseCameraPlane(CLM& clm_model, double fx, double fy, double cx, double cy, CLMParameters& params);
//Getting the head pose w.r.t. world coordinates assuming orthographic projection
Vec6d GetPoseWorld(CLM& clm_model, double fx, double fy, double cx, double cy, CLMParameters& params);

//Getting the head pose w.r.t. camera with a perspective camera correction
Vec6d GetCorrectedPoseCamera(CLM& clm_model, double fx, double fy, double cx, double cy, CLMParameters& params);

//Getting the head pose w.r.t. camera plane with a perspective camera correction
Vec6d GetCorrectedPoseCameraPlane(CLM& clm_model, double fx, double fy, double cx, double cy, CLMParameters& params);
//Getting the head pose w.r.t. world coordinates with a perspective camera correction
Vec6d GetCorrectedPoseWorld(CLM& clm_model, double fx, double fy, double cx, double cy, CLMParameters& params);

// fx,fy,cx,cy are camera calibration parameters needed to infer the 3D position of the head with respect to the camera; a good assumption for webcams providing 640x480 images is 500, 500, img_width/2, img_height/2
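The rule of thumb in the comment above can be written as a small helper. The readme only states the 640x480 case; scaling the focal length linearly with resolution is my assumption, and the struct name is illustrative, not part of the library's API:

```cpp
struct CameraParams { double fx, fy, cx, cy; };

// Rough webcam intrinsics following the readme's rule of thumb:
// fx = fy = 500 for a 640x480 image, principal point at the image centre.
// Scaling the focal length with resolution is an assumption on my part.
CameraParams default_camera_params(int img_width, int img_height) {
    CameraParams p;
    p.fx = 500.0 * (img_width / 640.0 + img_height / 480.0) / 2.0;
    p.fy = p.fx;
    p.cx = img_width / 2.0;
    p.cy = img_height / 2.0;
    return p;
}
```

The resulting fx, fy, cx, cy would then be passed to the pose functions listed above.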

@@ -114,7 +117,7 @@ Parameters for output
-of3D <location of output 3D landmark points file>, the file format is as follows: frame_number, timestamp(seconds), confidence, detection_success, X_1, X_2 ... X_n, Y_1, Y_2, ... Y_n, Z_1, Z_2, ... Z_n
-ov <location of tracked video>

-cp <1/0, should rotation be measured with respect to the camera plane or camera, see Head pose section for more details>
-world_coord <1/0, should rotation be measured with respect to the world coordinates or camera, see Head pose section for more details>
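For reference, one line of the -of3D output described above can be parsed as follows. This is a sketch assuming a plain comma-separated line with no header; the struct and function names are illustrative, not part of the library:

```cpp
#include <sstream>
#include <string>
#include <vector>

struct Landmark3DRecord {
    int frame_number = 0;
    double timestamp = 0.0, confidence = 0.0;
    bool detection_success = false;
    std::vector<double> xs, ys, zs;  // X_1..X_n, Y_1..Y_n, Z_1..Z_n
};

// Parse one comma-separated line of the -of3D format:
// frame_number, timestamp, confidence, detection_success, X..., Y..., Z...
Landmark3DRecord parse_of3d_line(const std::string& line) {
    std::vector<double> vals;
    std::stringstream ss(line);
    std::string tok;
    while (std::getline(ss, tok, ',')) vals.push_back(std::stod(tok));

    Landmark3DRecord r;
    r.frame_number = static_cast<int>(vals[0]);
    r.timestamp = vals[1];
    r.confidence = vals[2];
    r.detection_success = vals[3] != 0.0;
    const std::size_t n = (vals.size() - 4) / 3;  // landmarks per axis
    r.xs.assign(vals.begin() + 4, vals.begin() + 4 + n);
    r.ys.assign(vals.begin() + 4 + n, vals.begin() + 4 + 2 * n);
    r.zs.assign(vals.begin() + 4 + 2 * n, vals.end());
    return r;
}
```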

Model parameters (apply to images and videos)
-mloc <the location of CLM model>
@@ -169,7 +172,7 @@ Parameters for output
-simalignvid <output video file of aligned faces>, outputs similarity aligned faces to a video (need HFYU video codec to read it)
-simaligndir <output directory for aligned face image>, same as above but instead of video the aligned faces are put in a directory

-cp <1/0, should rotation be measured with respect to the camera plane or camera, see Head pose section for more details>
-world_coord <1/0, should rotation be measured with respect to the camera or world coordinates, see Head pose section for more details>

Additional parameters for output

12 changes: 6 additions & 6 deletions exe/FeatureExtraction/FeatureExtraction.cpp
@@ -414,7 +414,7 @@ void visualise_tracking(Mat& captured_image, const CLMTracker::CLM& clm_model, c
// A rough heuristic for box around the face width
int thickness = (int)std::ceil(2.0* ((double)captured_image.cols) / 640.0);

Vec6d pose_estimate_to_draw = CLMTracker::GetCorrectedPoseCameraPlane(clm_model, fx, fy, cx, cy);
Vec6d pose_estimate_to_draw = CLMTracker::GetCorrectedPoseWorld(clm_model, fx, fy, cx, cy);

// Draw it in reddish if uncertain, blueish if certain
CLMTracker::DrawBox(captured_image, pose_estimate_to_draw, Scalar((1 - vis_certainty)*255.0, 0, vis_certainty * 255), thickness, fx, fy, cx, cy);
@@ -465,9 +465,9 @@ int main (int argc, char **argv)

// Get the input output file parameters

// Indicates that rotation should be with respect to camera plane or with respect to camera
bool use_camera_plane_pose;
CLMTracker::get_video_input_output_params(files, depth_directories, pose_output_files, tracked_videos_output, landmark_output_files, landmark_3D_output_files, use_camera_plane_pose, arguments);
// Indicates that rotation should be with respect to camera or world coordinates
bool use_world_coordinates;
CLMTracker::get_video_input_output_params(files, depth_directories, pose_output_files, tracked_videos_output, landmark_output_files, landmark_3D_output_files, use_world_coordinates, arguments);

bool video_input = true;
bool verbose = true;
@@ -915,9 +915,9 @@ int main (int argc, char **argv)

// Work out the pose of the head from the tracked model
Vec6d pose_estimate_CLM;
if(use_camera_plane_pose)
if(use_world_coordinates)
{
pose_estimate_CLM = CLMTracker::GetCorrectedPoseCameraPlane(clm_model, fx, fy, cx, cy);
pose_estimate_CLM = CLMTracker::GetCorrectedPoseWorld(clm_model, fx, fy, cx, cy);
}
else
{
6 changes: 3 additions & 3 deletions exe/MultiTrackCLM/MultiTrackCLM.cpp
@@ -142,8 +142,8 @@ int main (int argc, char **argv)
clm_parameters.push_back(clm_params);

// Get the input output file parameters
bool use_camera_plane_pose;
CLMTracker::get_video_input_output_params(files, depth_directories, pose_output_files, tracked_videos_output, landmark_output_files, landmark_3D_output_files, use_camera_plane_pose, arguments);
bool use_world_coords;
CLMTracker::get_video_input_output_params(files, depth_directories, pose_output_files, tracked_videos_output, landmark_output_files, landmark_3D_output_files, use_world_coords, arguments);
// Get camera parameters
CLMTracker::get_camera_params(device, fx, fy, cx, cy, arguments);

@@ -397,7 +397,7 @@ int main (int argc, char **argv)
int thickness = (int)std::ceil(2.0* ((double)captured_image.cols) / 640.0);

// Work out the pose of the head from the tracked model
Vec6d pose_estimate_CLM = CLMTracker::GetCorrectedPoseCameraPlane(clm_models[model], fx, fy, cx, cy);
Vec6d pose_estimate_CLM = CLMTracker::GetCorrectedPoseWorld(clm_models[model], fx, fy, cx, cy);

// Draw it in reddish if uncertain, blueish if certain
CLMTracker::DrawBox(disp_image, pose_estimate_CLM, Scalar((1-detection_certainty)*255.0,0, detection_certainty*255), thickness, fx, fy, cx, cy);
12 changes: 6 additions & 6 deletions exe/SimpleCLM/SimpleCLM.cpp
@@ -120,7 +120,7 @@ void visualise_tracking(Mat& captured_image, Mat_<float>& depth_image, const CLM
// A rough heuristic for box around the face width
int thickness = (int)std::ceil(2.0* ((double)captured_image.cols) / 640.0);

Vec6d pose_estimate_to_draw = CLMTracker::GetCorrectedPoseCameraPlane(clm_model, fx, fy, cx, cy);
Vec6d pose_estimate_to_draw = CLMTracker::GetCorrectedPoseWorld(clm_model, fx, fy, cx, cy);

// Draw it in reddish if uncertain, blueish if certain
CLMTracker::DrawBox(captured_image, pose_estimate_to_draw, Scalar((1 - vis_certainty)*255.0, 0, vis_certainty * 255), thickness, fx, fy, cx, cy);
@@ -173,9 +173,9 @@ int main (int argc, char **argv)

// Get the input output file parameters

// Indicates that rotation should be with respect to camera plane or with respect to camera
bool use_camera_plane_pose;
CLMTracker::get_video_input_output_params(files, depth_directories, pose_output_files, tracked_videos_output, landmark_output_files, landmark_3D_output_files, use_camera_plane_pose, arguments);
// Indicates that rotation should be with respect to world or camera coordinates
bool use_world_coordinates;
CLMTracker::get_video_input_output_params(files, depth_directories, pose_output_files, tracked_videos_output, landmark_output_files, landmark_3D_output_files, use_world_coordinates, arguments);

// The modules that are being used for tracking
CLMTracker::CLM clm_model(clm_parameters.model_location);
@@ -381,9 +381,9 @@ int main (int argc, char **argv)

// Work out the pose of the head from the tracked model
Vec6d pose_estimate_CLM;
if(use_camera_plane_pose)
if(use_world_coordinates)
{
pose_estimate_CLM = CLMTracker::GetCorrectedPoseCameraPlane(clm_model, fx, fy, cx, cy);
pose_estimate_CLM = CLMTracker::GetCorrectedPoseWorld(clm_model, fx, fy, cx, cy);
}
else
{
11 changes: 6 additions & 5 deletions exe/SimpleCLMImg/SimpleCLMImg.cpp
@@ -310,7 +310,7 @@ int main (int argc, char **argv)

// Loading image
Mat read_image = imread(file, -1);

// Loading depth file if exists (optional)
Mat_<float> depth_image;

@@ -342,6 +342,7 @@ int main (int argc, char **argv)
fy = fx;
}


// if no pose defined we just use a face detector
if(bounding_boxes.empty())
{
@@ -368,7 +369,7 @@
bool success = CLMTracker::DetectLandmarksInImage(grayscale_image, depth_image, face_detections[face], clm_model, clm_parameters);

// Estimate head pose and eye gaze
Vec6d headPose = CLMTracker::GetPoseCamera(clm_model, fx, fy, cx, cy);
Vec6d headPose = CLMTracker::GetCorrectedPoseWorld(clm_model, fx, fy, cx, cy);

// Gaze tracking, absolute gaze direction
Point3f gazeDirection0(0, 0, -1);
@@ -425,7 +426,7 @@ int main (int argc, char **argv)

if (clm_parameters.track_gaze)
{
Vec6d pose_estimate_to_draw = CLMTracker::GetCorrectedPoseCameraPlane(clm_model, fx, fy, cx, cy);
Vec6d pose_estimate_to_draw = CLMTracker::GetCorrectedPoseWorld(clm_model, fx, fy, cx, cy);

// Draw it in reddish if uncertain, blueish if certain
CLMTracker::DrawBox(read_image, pose_estimate_to_draw, Scalar(255.0, 0, 0), 3, fx, fy, cx, cy);
@@ -480,7 +481,7 @@ int main (int argc, char **argv)
CLMTracker::DetectLandmarksInImage(grayscale_image, bounding_boxes[i], clm_model, clm_parameters);

// Estimate head pose and eye gaze
Vec6d headPose = CLMTracker::GetPoseCamera(clm_model, fx, fy, cx, cy);
Vec6d headPose = CLMTracker::GetCorrectedPoseWorld(clm_model, fx, fy, cx, cy);

// Gaze tracking, absolute gaze direction
Point3f gazeDirection0(0, 0, -1);
@@ -515,7 +516,7 @@ int main (int argc, char **argv)

if (clm_parameters.track_gaze)
{
Vec6d pose_estimate_to_draw = CLMTracker::GetCorrectedPoseCameraPlane(clm_model, fx, fy, cx, cy);
Vec6d pose_estimate_to_draw = CLMTracker::GetCorrectedPoseWorld(clm_model, fx, fy, cx, cy);

// Draw it in reddish if uncertain, blueish if certain
CLMTracker::DrawBox(read_image, pose_estimate_to_draw, Scalar(255.0, 0, 0), 3, fx, fy, cx, cy);
2 changes: 1 addition & 1 deletion exe/SimpleCLMImg/SimpleCLMImg_vs2013.vcxproj
@@ -81,7 +81,7 @@
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>$(SolutionDir)\lib\local\CLM\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<AdditionalIncludeDirectories>$(SolutionDir)\lib\local\CLM\include;$(SolutionDir)\lib\local\FaceAnalyser\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<OpenMPSupport>false</OpenMPSupport>
<EnableEnhancedInstructionSet>StreamingSIMDExtensions2</EnableEnhancedInstructionSet>
</ClCompile>
11 changes: 6 additions & 5 deletions lib/local/CLM/include/CLMTracker.h
@@ -91,17 +91,18 @@ namespace CLMTracker
//================================================================
// Helper function for getting head pose from CLM parameters

// The head pose returned is in camera space, however, the orientation can be either with respect to camera itself or the camera plane
// Return the current estimate of the head pose, this can be either in camera or world coordinate space
// The format returned is [Tx, Ty, Tz, Eul_x, Eul_y, Eul_z]
Vec6d GetPoseCamera(const CLM& clm_model, double fx, double fy, double cx, double cy);
Vec6d GetPoseCameraPlane(const CLM& clm_model, double fx, double fy, double cx, double cy);
Vec6d GetPoseWorld(const CLM& clm_model, double fx, double fy, double cx, double cy);

// Getting a head pose estimate from the currently detected landmarks, with appropriate correction due to orthographic camera issue
// Getting a head pose estimate from the currently detected landmarks, with appropriate correction for perspective
// This is because rotation estimate under orthographic assumption is only correct close to the centre of the image
// These methods attempt to correct for that (Experimental)
// These methods attempt to correct for that
// The pose returned can be either in camera or world coordinates
// The format returned is [Tx, Ty, Tz, Eul_x, Eul_y, Eul_z]
Vec6d GetCorrectedPoseCamera(const CLM& clm_model, double fx, double fy, double cx, double cy);
Vec6d GetCorrectedPoseCameraPlane(const CLM& clm_model, double fx, double fy, double cx, double cy);
Vec6d GetCorrectedPoseWorld(const CLM& clm_model, double fx, double fy, double cx, double cy);

//===========================================================================

2 changes: 1 addition & 1 deletion lib/local/CLM/include/CLM_utils.h
@@ -70,7 +70,7 @@ namespace CLMTracker
// Helper functions for parsing the inputs
//=============================================================================================
void get_video_input_output_params(vector<string> &input_video_file, vector<string> &depth_dir,
vector<string> &output_pose_file, vector<string> &output_video_file, vector<string> &output_landmark_files, vector<string> &output_3D_landmark_files, bool& camera_plane_pose, vector<string> &arguments);
vector<string> &output_pose_file, vector<string> &output_video_file, vector<string> &output_landmark_files, vector<string> &output_3D_landmark_files, bool& world_coordinates_pose, vector<string> &arguments);

void get_camera_params(int &device, float &fx, float &fy, float &cx, float &cy, vector<string> &arguments);

19 changes: 9 additions & 10 deletions lib/local/CLM/src/CLMTracker.cpp
@@ -53,7 +53,7 @@
using namespace CLMTracker;
using namespace cv;

// Getting a head pose estimate from the currently detected landmarks (rotation with respect to camera)
// Getting a head pose estimate from the currently detected landmarks (rotation with respect to a point camera)
// The format returned is [Tx, Ty, Tz, Eul_x, Eul_y, Eul_z]
Vec6d CLMTracker::GetPoseCamera(const CLM& clm_model, double fx, double fy, double cx, double cy)
{
@@ -72,11 +72,11 @@ Vec6d CLMTracker::GetPoseCamera(const CLM& clm_model, double fx, double fy, doub
}
}

// Getting a head pose estimate from the currently detected landmarks (rotation with respect to camera plane)
// Getting a head pose estimate from the currently detected landmarks (rotation in world coordinates)
// The format returned is [Tx, Ty, Tz, Eul_x, Eul_y, Eul_z]
Vec6d CLMTracker::GetPoseCameraPlane(const CLM& clm_model, double fx, double fy, double cx, double cy)
Vec6d CLMTracker::GetPoseWorld(const CLM& clm_model, double fx, double fy, double cx, double cy)
{
if(!clm_model.detected_landmarks.empty() && clm_model.params_global[0] != 0 && clm_model.tracking_initialised)
if(!clm_model.detected_landmarks.empty() && clm_model.params_global[0] != 0)
{
double Z = fx / clm_model.params_global[0];

@@ -107,11 +107,11 @@ Vec6d CLMTracker::GetPoseCameraPlane(const CLM& clm_model, double fx, double fy,

// Getting a head pose estimate from the currently detected landmarks, with appropriate correction due to orthographic camera issue
// This is because rotation estimate under orthographic assumption is only correct close to the centre of the image
// This method returns a corrected pose estimate with respect to the camera plane (Experimental)
// This method returns a corrected pose estimate with respect to world coordinates (Experimental)
// The format returned is [Tx, Ty, Tz, Eul_x, Eul_y, Eul_z]
Vec6d CLMTracker::GetCorrectedPoseCameraPlane(const CLM& clm_model, double fx, double fy, double cx, double cy)
Vec6d CLMTracker::GetCorrectedPoseWorld(const CLM& clm_model, double fx, double fy, double cx, double cy)
{
if(!clm_model.detected_landmarks.empty() && clm_model.params_global[0] != 0 && clm_model.tracking_initialised)
if(!clm_model.detected_landmarks.empty() && clm_model.params_global[0] != 0)
{
// This is used as an initial estimate for the iterative PnP algorithm
double Z = fx / clm_model.params_global[0];
@@ -152,9 +152,8 @@ Vec6d CLMTracker::GetCorrectedPoseCameraPlane(const CLM& clm_model, double fx, d
}
}

// Getting a head pose estimate from the currently detected landmarks, with appropriate correction due to orthographic camera issue
// This is because rotation estimate under orthographic assumption is only correct close to the centre of the image
// This method returns a corrected pose estimate with respect to a point camera (NOTE not the camera plane) (Experimental)
// Getting a head pose estimate from the currently detected landmarks, with appropriate correction due to perspective projection
// This method returns a corrected pose estimate with respect to a point camera (NOTE: not world coordinates) (Experimental)
// The format returned is [Tx, Ty, Tz, Eul_x, Eul_y, Eul_z]
Vec6d CLMTracker::GetCorrectedPoseCamera(const CLM& clm_model, double fx, double fy, double cx, double cy)
{
10 changes: 5 additions & 5 deletions lib/local/CLM/src/CLM_utils.cpp
@@ -98,7 +98,7 @@ void create_directories(string output_path)

// Extracting the following command line arguments -f, -fd, -op, -of, -ov (and possible ordered repetitions)
void get_video_input_output_params(vector<string> &input_video_files, vector<string> &depth_dirs,
vector<string> &output_pose_files, vector<string> &output_video_files, vector<string> &output_2d_landmark_files, vector<string> &output_3D_landmark_files, bool& camera_plane_pose, vector<string> &arguments)
vector<string> &output_pose_files, vector<string> &output_video_files, vector<string> &output_2d_landmark_files, vector<string> &output_3D_landmark_files, bool& world_coordinates, vector<string> &arguments)
{
bool* valid = new bool[arguments.size()];

@@ -107,8 +107,8 @@ void get_video_input_output_params(vector<string> &input_video_files, vector<str
valid[i] = true;
}

// By default use rotation with respect to camera (not camera plane)
camera_plane_pose = false;
// By default use rotation with respect to camera (not world coordinates)
world_coordinates = false;

string root = "";
// First check if there is a root argument (so that videos and outputs could be defined more easily)
@@ -170,9 +170,9 @@ void get_video_input_output_params(vector<string> &input_video_files, vector<str
valid[i+1] = false;
i++;
}
else if (arguments[i].compare("-cp") == 0)
else if (arguments[i].compare("-world_coord") == 0)
{
camera_plane_pose = true;
world_coordinates = true;
}
else if (arguments[i].compare("-help") == 0)
{
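The flag-scanning pattern changed in this hunk can be sketched in isolation as follows. A minimal stand-alone version, with illustrative names rather than the library's API; boolean switches like -world_coord just set a flag, while valued options would also consume the following token:

```cpp
#include <string>
#include <vector>

// Minimal sketch of the -world_coord flag scan shown above.
bool parse_world_coord_flag(const std::vector<std::string>& arguments) {
    // By default use rotation with respect to the camera, matching the hunk.
    bool world_coordinates = false;
    for (std::size_t i = 0; i < arguments.size(); ++i) {
        if (arguments[i] == "-world_coord") {
            world_coordinates = true;
        }
    }
    return world_coordinates;
}
```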
12 changes: 6 additions & 6 deletions matlab_runners/Action Unit Experiments/DISFA_valid_res.txt
@@ -1,12 +1,12 @@
AU1 results - corr 0.740, ccc - 0.729
AU2 results - corr 0.702, ccc - 0.623
AU4 results - corr 0.845, ccc - 0.822
AU2 results - corr 0.703, ccc - 0.624
AU4 results - corr 0.845, ccc - 0.821
AU5 results - corr 0.668, ccc - 0.661
AU6 results - corr 0.638, ccc - 0.624
AU9 results - corr 0.722, ccc - 0.703
AU12 results - corr 0.864, ccc - 0.853
AU6 results - corr 0.638, ccc - 0.623
AU9 results - corr 0.721, ccc - 0.702
AU12 results - corr 0.863, ccc - 0.852
AU15 results - corr 0.652, ccc - 0.638
AU17 results - corr 0.572, ccc - 0.506
AU20 results - corr 0.506, ccc - 0.488
AU20 results - corr 0.505, ccc - 0.487
AU25 results - corr 0.921, ccc - 0.919
AU26 results - corr 0.552, ccc - 0.185
@@ -81,8 +81,10 @@

%% now do the actual evaluation that the collection has been done
f = fopen('DISFA_valid_res.txt', 'w');
au_res = zeros(1, numel(rel_preds));
for au = 1:numel(rel_preds)
[ accuracies, F1s, corrs, ccc, rms, classes ] = evaluate_au_prediction_results( preds_all(:,au), labels_all(:,au));
fprintf(f, 'AU%d results - corr %.3f, ccc - %.3f\n', rel_preds(au), corrs, ccc);
au_res(au) = ccc;
end
fclose(f);
Binary file modified matlab_runners/Feature Point Experiments/results/fps_yt.mat
Binary file not shown.
@@ -1,3 +1,3 @@
Model, mean, median
CLNF: 0.0573, 0.0526
CLNF: 0.0561, 0.0512
CLM: 0.0683, 0.0603
Binary file not shown.
Binary file not shown.
