Automatic Edge Error Judgment in Figure Skating Using 3D Pose Estimation from a Monocular Camera and IMUs
Ryota Tanaka, Tomohiro Suzuki, Kazuya Takeda, Keisuke Fujii, Automatic Edge Error Judgment in Figure Skating Using 3D Pose Estimation from a Monocular Camera and IMUs, 6th International ACM Workshop on Multimedia Content Analysis in Sports at ACM Multimedia 2023
This is the official code for "Automatic Edge Error Judgment in Figure Skating Using 3D Pose Estimation from a Monocular Camera and IMUs".
MMSports_tanaka_digest.mp4
You can download the IMU dataset as CSV files from `IMU_data/dataset`, and the video data from Google Drive.
The video data are pre-processed so that each skater is cropped to their bounding box, and the take-off timing is aligned across clips.
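As a sketch of how the IMU CSV files might be loaded with the standard library (the column names below are illustrative assumptions; check the actual header of a file in `IMU_data/dataset` first):

```python
import csv
import io

# Hypothetical IMU CSV layout -- the actual column names in
# IMU_data/dataset may differ; inspect a file's header first.
sample = io.StringIO(
    "time,acc_x,acc_y,acc_z,gyro_x,gyro_y,gyro_z\n"
    "0.000,0.12,-9.78,0.05,0.01,0.02,-0.03\n"
    "0.005,0.15,-9.80,0.04,0.02,0.01,-0.02\n"
)

rows = []
for row in csv.DictReader(sample):
    # Convert every field from string to float for downstream processing.
    rows.append({k: float(v) for k, v in row.items()})

print(len(rows), rows[0]["acc_y"])
```

To read a real file, replace the `io.StringIO` object with `open("IMU_data/dataset/<file>.csv", newline="")`.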
You can validate our paper's data using the following code.
Note: The GIF image in the example command execution below is played back at 20x speed.
![](https://private-user-images.githubusercontent.com/102862947/278212411-b088c223-fbd9-45b7-83ca-f15b496a73c2.gif?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjE2NjI3MzgsIm5iZiI6MTcyMTY2MjQzOCwicGF0aCI6Ii8xMDI4NjI5NDcvMjc4MjEyNDExLWIwODhjMjIzLWZiZDktNDViNy04M2NhLWYxNWI0OTZhNzNjMi5naWY_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjQwNzIyJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI0MDcyMlQxNTMzNThaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT02MzJiZGVjMDRkYWRlZDA5ODA1NjIwNzIwODdmYmFmOWM2MGYzYTliMmNmNDczYTMyMDlhMzM2ODU0MDMwM2ZhJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCZhY3Rvcl9pZD0wJmtleV9pZD0wJnJlcG9faWQ9MCJ9.bFxXlj1zEy0c6DNPeXDVkDuGFmC9-dmajVch0ZHnNB0)
Upload the video you want to judge for edge errors to `Video_data/demo/video`. The video must be recorded at 240 fps from a fixed viewpoint; the `.mp4` and `.mov` file extensions are recommended.
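To see why the 240 fps requirement matters, here is a quick frame-timing calculation (the 100 ms take-off window used below is an illustrative figure, not a value from the paper):

```python
FPS = 240  # required recording rate for the demo video

def frames_for_duration(seconds, fps=FPS):
    """Number of frames covering a time span at the given frame rate."""
    return round(seconds * fps)

# At 240 fps each frame spans roughly 4.2 ms, so even a brief
# take-off phase (say 100 ms, an illustrative figure) is sampled densely.
frame_interval_ms = 1000 / FPS
print(f"{frame_interval_ms:.2f} ms per frame")
print(frames_for_duration(0.1), "frames in 100 ms")
```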
Then run the following command in the `Video_data/` directory:
```shell
python demo/main.py --video sample_video.mov
```
First, the skater who jumps is detected in the video at the bounding-box level. Next, 2D pose estimation is performed on the detected person within that bounding box. Finally, 3D pose estimation is performed from the estimated 2D pose, and edge errors are judged from the resulting 3D pose. The result is displayed as either "EDGE ERROR" or "NOT EDGE ERROR," together with the prediction confidence.
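The three-stage flow can be sketched as follows. Every function here is a hypothetical stub standing in for the real models; the actual implementation lives in `demo/main.py`. The sketch only shows how the stages feed into each other:

```python
# Stubbed sketch of the detection -> 2D pose -> 3D pose -> judgment pipeline.
# All function bodies are dummies; see demo/main.py for the real code.

def detect_jumper_bbox(frame):
    # Stage 1: locate the jumping skater; returns (x, y, w, h).
    return (100, 50, 80, 200)

def estimate_2d_pose(frame, bbox):
    # Stage 2: 2D joint coordinates inside the bbox (17 joints assumed here).
    return [(0.0, 0.0)] * 17

def lift_to_3d(pose_2d):
    # Stage 3: lift each 2D joint to a 3D coordinate.
    return [(x, y, 0.0) for x, y in pose_2d]

def judge_edge_error(pose_3d_sequence):
    # Final stage: binary classification over the 3D pose sequence.
    p_error = 0.13  # dummy probability of an edge error
    label = "EDGE ERROR" if p_error >= 0.5 else "NOT EDGE ERROR"
    return label, max(p_error, 1.0 - p_error)

frames = [None, None, None]  # placeholder for decoded video frames
poses_3d = []
for frame in frames:
    bbox = detect_jumper_bbox(frame)
    poses_3d.append(lift_to_3d(estimate_2d_pose(frame, bbox)))

label, confidence = judge_edge_error(poses_3d)
print(label, confidence)
```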
Ryota Tanaka - tanaka.ryota@g.sp.m.is.nagoya-u.ac.jp