
Question Regarding Output Files #286

Open
DestroytheCity opened this issue Nov 29, 2022 · 4 comments

DestroytheCity commented Nov 29, 2022

Have successfully gotten FMC off the ground for our project, but we had some questions regarding the output files produced.

  1. In the Mediapipe_body+3d+xyz.csv file produced via the Alpha GUI, what is the unit for time? Our videos are roughly 19 seconds long and contain 2063 frames, yet we have only 1753 measurements of each keypoint. Is this the number of frames during which all cameras were active and tracking?
  2. How are the x,y,z coordinates established? Are they consistent between videos using the same calibration or different calibrations within the same hallway?

EDIT: For the Mediapipe_body+3d+xyz.csv, also wondering what the coordinates represent/what the unit for each measurement is.

@ArcticFox31

following

@jonmatthis
Member

jonmatthis commented Dec 10, 2022

We're building docs to explain a lot of this in detail (and improving the organization of our data output as well), but briefly -

  1. Each row of the csv file is the 3d data from one synchronized frame across your cameras. We should definitely put a 'timestamp' column in there (and will do so soon), but in the meantime you can estimate the timestamps from the mean framerate of the cameras you used to make the recordings.

The number of rows in the csv matches the number of 'synchronized' frames and if things worked well that number should precisely match the number of frames in the videos in the synchronized_videos folder (and all of those videos should have precisely the same number of frames).

If that is not the case in your data, could you share another link and let me know the filenames of the videos that have 2063 frame vs the 1753 rows in the body 3d csv file?
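As a minimal sketch of that timestamp estimate (assuming a constant frame interval, i.e. frame index divided by the mean framerate — until a real timestamp column exists this is only an approximation):

```python
def estimate_timestamps(n_frames, mean_fps):
    """Approximate per-frame timestamps in seconds, assuming a constant
    frame interval equal to 1 / mean_fps."""
    return [i / mean_fps for i in range(n_frames)]

# e.g. 5 frames at 100 fps:
print(estimate_timestamps(5, 100.0))  # [0.0, 0.01, 0.02, 0.03, 0.04]
```

The mean framerate itself can be estimated as (number of synchronized frames) / (recording duration in seconds).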

  2. The xyz coordinates are derived from the anipose-based calibration. If you set the charuco_board size correctly (as the length in mm of one edge of a black square on the board you used in the calibration), then the units should be in mm.

The coordinate system should match if you are using the same calibration (and the cameras have not moved at all since that calibration).

In theory, if the cameras haven't moved and you perform a new calibration, the numbers should still match (assuming the calibration is providing an accurate measurement of the actual positions of the cameras), but that would need to be validated.

You can always open the 'camera_calibration.toml' file as a text file and compare the translation data for each camera to see if the numbers make sense, i.e. whether the distances in mm match the actual camera locations (...but there might be weird compression or rotation happening that would complicate simple measurements, so be careful!).
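If you want to sanity-check those translations numerically, here's a rough sketch (assuming you've already pulled each camera's translation vector out of the toml; the dict below is made-up data, not values from a real calibration):

```python
import math

def camera_distances(translations):
    """Pairwise Euclidean distances (mm) between camera translation vectors."""
    names = sorted(translations)
    dists = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            dists[(a, b)] = math.dist(translations[a], translations[b])
    return dists

# Hypothetical translation vectors (mm), as they might appear per-camera
# in camera_calibration.toml:
t = {
    "cam_0": [0.0, 0.0, 0.0],
    "cam_1": [1000.0, 0.0, 0.0],
    "cam_2": [0.0, 0.0, 2000.0],
}
print(camera_distances(t))
```

Comparing these computed distances against a tape-measure of the real camera-to-camera distances is a quick first check, with the same caveat as above about scale/rotation effects.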

The origin (0,0,0) of the 3D space is based on the location of Camera0 (we're currently working on methods to realign the 3d world so that the XY plane is the ground plane and +Z points opposite gravity).

@DestroytheCity
Author

Thank you so much for the reply! I really appreciate the guidance from your team to help move our project forward. We are trying to harness FMC for analyzing changes in an individual's walk between trials (keeping everything constant besides the patient), so understanding accuracy and precision is quite important to us.

We were looking into validating the accuracy both within a video and between videos by using the keypoint coordinates to 'measure' the length of long bones. Alternatively, we also have access to OPAL ADPM sensors, which are equipped with a 3-axis gyroscope, 3-axis accelerometer, and 3-axis magnetometer, allowing them to detect a plethora of parameters relevant to gait analysis (such as gait cycle duration, speed, elevation at mid-swing, foot strike angle, step duration, stride length, etc.). We are thinking of comparing stride velocity/length generated from FMC keypoint data to the sensor data. I noticed that your team is working on understanding the accuracy of FMC, and was wondering how you are aiming to go about this process.
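A minimal sketch of that bone-length check (the hip/knee trajectories below are made-up data; real coordinates would come from the body 3d csv, and a rigid bone should give a near-constant length with low standard deviation):

```python
import math

def segment_lengths(joint_a, joint_b):
    """Per-frame Euclidean distance (mm) between two 3D keypoints.

    joint_a, joint_b: sequences of (x, y, z) tuples, one per synchronized frame.
    """
    return [math.dist(a, b) for a, b in zip(joint_a, joint_b)]

def mean_and_std(values):
    """Mean and population standard deviation of a list of lengths."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, math.sqrt(var)

# Hypothetical hip/knee trajectories over three frames (mm):
hip = [(0, 0, 1000), (10, 0, 1000), (20, 5, 1000)]
knee = [(0, 0, 600), (12, 0, 601), (22, 4, 599)]
lengths = segment_lengths(hip, knee)
print(mean_and_std(lengths))
```

The per-frame standard deviation of a long-bone length is a handy within-video precision proxy; comparing the mean length across videos gives a between-video check.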

@DestroytheCity
Author

DestroytheCity commented Dec 15, 2022

The translation data numbers do seem pretty far off from the actual camera locations. We also noticed a relatively high standard deviation in the long-bone measurements, as well as abnormal measurements (the same bone length along both the X and Y axes). Perhaps the two are correlated / have to do with the calibration process? Attached is our output data for reference.
