A Python program that puts a jump counter over any jump rope video
During the pandemic, many people turned to at-home workouts to stay fit, and jump roping saw a steep rise in popularity as a result. People were drawn to it because it is one of the most efficient forms of cardio and can be performed just about anywhere. I recently picked up jump roping myself, and I wanted a program that keeps track of exactly how many jumps I perform in each of my sets. I decided to use the OpenCV, Keras, and TensorFlow libraries to build a deep learning model that can count the number of 'jumps' I perform in a set.
The workflow to develop this program was as follows:
- Gather jump roping videos
Before starting, I had to gather videos of people jump roping from across the internet, such as this one over here. To diversify the data, I also asked my friends to send videos of themselves jump roping so that I could have footage from different environments. Making sure the data was diverse was important because different people have slightly different jump roping forms and film against different backgrounds. I ended up using 15 short clips (30 seconds to 1 minute long) of people jump roping.
- Preprocess each video frame by frame using the Farneback Dense Optical Flow Algorithm
Once I had the videos, I used the Farneback Dense Optical Flow algorithm to process each video frame by frame. The code for this process can be found here. An example of what some of the preprocessed frames looked like is shown in Figure 1, and a rough sketch of this preprocessing step follows the figure.
Fig. 1:
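As an illustration of what this step might look like with OpenCV, here is a minimal sketch. The input file name, output folder, and the HSV visualization of the flow are my assumptions rather than details taken from the original code.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("jump_rope_clip.mp4")   # hypothetical input clip
ok, first = cap.read()
prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

# HSV image used to visualize the flow: hue = direction, value = magnitude
hsv = np.zeros_like(first)
hsv[..., 1] = 255

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow between consecutive frames (Farneback's algorithm)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv[..., 0] = ang * 180 / np.pi / 2                               # direction -> hue
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)   # magnitude -> brightness
    flow_bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

    cv2.imwrite(f"frames/flow_{frame_idx:05d}.png", flow_bgr)
    prev_gray = gray
    frame_idx += 1

cap.release()
```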
- Manually sort and label the frames
After obtaining the frames, I needed to sort and label the data. I broke a jump down into three separate movements: the upward movement, the downward movement, and the landing. To train the model accurately, I decided to have at least 1,000 images for each category of movement. An example of what all three of these movements look like in a typical jump can be found in Figure 2, and a sketch of how the labeled frames might be organized and loaded follows the figure.
Fig. 2:
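The frames themselves were sorted and labeled by hand; the sketch below only illustrates one way the resulting folders could be organized and loaded with Keras. The directory names, image size, and validation split are my assumptions, not details from the original project.

```python
import tensorflow as tf

# Assumed layout of the hand-labeled optical-flow frames (folder names are hypothetical):
#   labeled_frames/up/       at least 1,000 frames of the upward movement
#   labeled_frames/down/     at least 1,000 frames of the downward movement
#   labeled_frames/landing/  at least 1,000 frames of the landing

train_ds = tf.keras.utils.image_dataset_from_directory(
    "labeled_frames",
    validation_split=0.2,     # hold out 20% of the frames for validation
    subset="training",
    seed=42,
    image_size=(224, 224),    # assumed input size for the CNN
    batch_size=32,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "labeled_frames",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=(224, 224),
    batch_size=32,
)

print(train_ds.class_names)   # e.g. ['down', 'landing', 'up']
```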
- Train a Convolutional Neural Network using the sorted data
After sorting and manually labeling the necessary number of images, I was finally ready to train a Convolutional Neural Network. Using the Keras and TensorFlow libraries, I could train a model that recognizes each of the different types of movement found in a jump. The code for this model can be found here. The resulting model had an accuracy of 98.45%, which was excellent. Additionally, the model does not appear to be overfitting, since the training loss and validation loss both decrease as training progresses. A graph of the training and validation loss can be found in Figure 3, and a sketch of what such a model might look like follows the figure.
Fig. 3:
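The actual model code is linked above; purely as an illustration, a small Keras CNN for this three-movement classification could look like the sketch below. The layer sizes, number of epochs, and file name are placeholder choices of mine, and `train_ds`/`val_ds` refer to the dataset objects from the loading sketch in the previous step.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Small CNN that classifies an optical-flow frame as up, down, or landing.
model = keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),   # one output per movement class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds come from the loading sketch in the previous step;
# history holds the training and validation loss curves for plotting.
history = model.fit(train_ds, validation_data=val_ds, epochs=15)
model.save("jump_movement_model.h5")         # hypothetical file name
```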
- Develop a pipeline that can count the number of jumps
Once the model could accurately determine which 'movement' someone is performing while jumping, I was ready to create a pipeline to count the number of jumps in a video. The program preprocesses each video frame by frame using the Farneback method, and each preprocessed frame is fed into the model to determine whether the person is moving up, moving down, or landing. The basic formula of a jump is someone jumping up, coming down, and then landing, so once someone lands after jumping up, the program adds one 'jump' to the counter. The model had some trouble classifying the movements people make before and after they actually start jump roping, so I also added some code to prevent random movements from being falsely flagged as jumps. After computing the current number of repetitions, I used OpenCV's VideoWriter to write out a new video with text overlaid showing the current jump count. The full code for this pipeline can be found here, and a sketch of the counting logic is shown below.
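The sketch below shows one way the pieces could fit together, reusing the Farneback preprocessing and the trained model. The file names, the label order, and the simple up/down/landing state machine are my assumptions, and the extra guards against non-jump movements mentioned above are omitted.

```python
import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model("jump_movement_model.h5")   # hypothetical model file
CLASSES = ["down", "landing", "up"]                          # assumed alphabetical label order

cap = cv2.VideoCapture("test_jump_video.mp4")                # hypothetical test clip
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("counted_output.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

jumps = 0
saw_up = saw_down = False
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if prev_gray is not None:
        # Same Farneback preprocessing that was applied to the training frames
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        hsv = np.zeros_like(frame)
        hsv[..., 0] = ang * 180 / np.pi / 2
        hsv[..., 1] = 255
        hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
        flow_bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

        # Classify the movement shown in this frame
        inp = cv2.resize(flow_bgr, (224, 224))[np.newaxis, ...].astype("float32")
        label = CLASSES[int(np.argmax(model.predict(inp, verbose=0)))]

        # One jump = up, then down, then landing
        if label == "up":
            saw_up = True
        elif label == "down" and saw_up:
            saw_down = True
        elif label == "landing" and saw_up and saw_down:
            jumps += 1
            saw_up = saw_down = False

    # Overlay the running count and write the frame to the output video
    cv2.putText(frame, f"Jumps: {jumps}", (30, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 255, 0), 3)
    out.write(frame)
    prev_gray = gray

cap.release()
out.release()
```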
- Run test videos through the pipeline to confirm model efficacy
Finally, I had to test the pipeline to make sure it was accurately counting my jumps. I recorded a video of myself jumping, which can be found over here, to determine the true accuracy of the model. Thankfully, the results were accurate: in that video I jumped a total of 107 times, and the counter counted exactly 107 jumps. Figure 4 shows a screenshot of what the output video looked like; clicking on the image will redirect you to the actual video the program returned.
Fig. 4:
Overall, this program appears to be a success, but there is still work to do. As previously mentioned, it has some trouble recognizing movements that are unrelated to jump roping (such as someone walking into or out of the frame). In future versions, I will gather more data to address this issue. That being said, the program is already useful enough to help people track their jump roping workouts.
As mentioned in the conclusion, I would like to improve the program's handling of movements unrelated to jump roping. I would also like to train the model to recognize different jump rope tricks, such as the double under, the boxer skip, and the alternate foot jump, so that users can get a more detailed breakdown of what their workout session consisted of.