I did martial arts for about 15 years, including 4 years teaching and assistant teaching classes at my studio. Before I had an official teaching role, I helped any classmates who asked. While teaching, I worked with students across all of the classes offered at the studio, ranging in age from 5 and up.
This project was intended for the students of the martial arts studio where I worked. It is aimed mainly at the younger students, since most of the studio's students are under 21 and the supplementary videos on YouTube are hard for them to follow.
The project is a real-time action recognition system for martial arts using machine learning and computer vision to classify human actions from video and provide feedback on whether the sequence of actions is correct.
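At its core, classification works frame by frame: the pose landmarks detected in each video frame are flattened into a feature vector that the classifier consumes. The sketch below illustrates that idea only; the landmark values are made up, and a real pipeline would get (x, y, z) coordinates from a pose estimator such as MediaPipe rather than hard-coded tuples.

```python
# Minimal sketch: turning per-frame pose landmarks into a feature vector.
# The landmark values below are fabricated for illustration; they are not
# from the project's dataset.

def landmarks_to_features(landmarks):
    """Flatten a list of (x, y, z) landmark tuples into one flat feature vector."""
    return [coord for point in landmarks for coord in point]

# Two fake landmark points standing in for a detected pose.
frame_landmarks = [(0.51, 0.32, -0.10), (0.48, 0.45, -0.08)]
features = landmarks_to_features(frame_landmarks)
# features == [0.51, 0.32, -0.10, 0.48, 0.45, -0.08]
```

A vector like this, built per frame (or stacked over a window of frames), is the kind of input a lightweight action classifier can be trained on.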
It was created so that students who are unable to attend classes for long stretches of time can still get feedback on whether they are practicing correctly, since reaching instructors outside of class is not easy. While the studio has YouTube videos to reference, they are not very clear and are difficult for younger students to understand.
The model does not include all of the actions. MediaPipe can detect more than one person at a time, but if they are too close together the landmarks may not map correctly, so at the moment the model only tracks one person's actions and sequences. That is fine for now, but if I wanted to expand the system to check whether two students are interacting correctly for one-steps, I would probably not be able to use MediaPipe.
Future additions would include more actions for the model to detect, improved feedback, and more sequences. Of the sequences the model currently checks, only one is a standard sequence that all students learn; the other two are shorter ones used to verify that the system works. All of the standard sequences should be added, and the feedback needs to be improved so that students know what they did wrong and how to fix it.
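The kind of step-by-step feedback described above can be sketched as a simple comparison between the expected sequence and the actions the classifier observed. The function and the action names below are hypothetical illustrations, not the project's actual code:

```python
# A minimal sketch of sequence checking with feedback, assuming actions are
# plain strings produced by the classifier. Function name, action names, and
# feedback wording are all made up for this example.

def check_sequence(expected, observed):
    """Compare observed actions against the expected sequence.

    Returns (is_correct, feedback), where the feedback points at the
    first mistake so a student knows what to fix.
    """
    for i, (want, got) in enumerate(zip(expected, observed)):
        if want != got:
            return False, f"Step {i + 1}: expected '{want}' but saw '{got}'."
    if len(observed) < len(expected):
        missing = expected[len(observed)]
        return False, f"Sequence stopped early: step {len(observed) + 1} ('{missing}') is missing."
    if len(observed) > len(expected):
        extra = observed[len(expected)]
        return False, f"Extra action '{extra}' after the sequence ended."
    return True, "Sequence performed correctly."

# Example: a hypothetical three-step drill.
form = ["low_block", "front_kick", "punch"]
ok, msg = check_sequence(form, ["low_block", "punch", "punch"])
# ok is False; msg reads "Step 2: expected 'front_kick' but saw 'punch'."
```

Reporting the first mismatched step, rather than just pass/fail, is the direction the improved feedback would need to go so younger students can act on it.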
More camera angles are also needed for each action, since the data collected so far shows the user facing the camera. Angles where the user faces to their left and to their right need to be added for some of the standard sequences.
You can reach me at:
roundtree.olivia@gmail.com