AI Trainer: Video-Based Squat Analysis

Authors

  • Prof. Anuja Garande, Artificial Intelligence & Data Science Engineering, Zeal College of Engineering and Research, Pune, India
  • Kushank Patil, Artificial Intelligence & Data Science Engineering, Zeal College of Engineering and Research, Pune, India
  • Rasika Deshmukh, Artificial Intelligence & Data Science Engineering, Zeal College of Engineering and Research, Pune, India
  • Siddhi Gurav, Artificial Intelligence & Data Science Engineering, Zeal College of Engineering and Research, Pune, India
  • Chaitanya Yadav, Artificial Intelligence & Data Science Engineering, Zeal College of Engineering and Research, Pune, India

DOI:

https://doi.org/10.32628/IJSRSET2411221

Keywords:

Biomechanics, Fitness Training, Joint Angles, MediaPipe Pose Estimation, Physical Therapy, Real-Time Feedback, Squat Form Assessment, User Interface (UI) Design, Video-Based Squat Analysis

Abstract

This research proposes a video-based system that analyzes human squats and provides real-time feedback to improve posture. The system leverages MediaPipe, an open-source pose estimation library, to identify key body joints during squats. By calculating crucial joint angles (knee flexion, hip flexion, and ankle dorsiflexion), the system assesses squat form against established biomechanical principles; deviations trigger real-time feedback messages or visual cues that guide users toward optimal squat posture. The paper details the system architecture, in which a client-side application performs both pose estimation and feedback generation. The methodology covers data collection across several squat variations, system development integrating MediaPipe, and evaluation through user testing compared against expert assessments. Key features include real-time feedback and customizable thresholds for user adaptation. Potential applications span fitness training, physical therapy, and sports training. Finally, the paper outlines future work, including mobile integration, richer feedback mechanisms, and machine learning for automatic threshold adjustment. This research offers a valuable tool for squat analysis, helping users pursue their fitness goals with proper form and reduced injury risk.
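To make the described pipeline concrete, the sketch below computes the interior hip-knee-ankle angle from MediaPipe Pose landmarks on a webcam feed and overlays a simple depth cue. It is a minimal illustration, not the authors' implementation: it assumes MediaPipe's Python `solutions.pose` API, tracks only the left side of the body, and the `KNEE_DEPTH_DEG` threshold and feedback text are placeholders rather than the paper's validated values.

```python
# Minimal sketch of video-based squat feedback with MediaPipe Pose.
# Assumptions: webcam input, left-side landmarks only, and an
# illustrative depth threshold (not the paper's validated value).
import math

import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def interior_angle(a, b, c):
    """Interior angle at point b (degrees) for points a-b-c in image coords."""
    ang = math.degrees(
        math.atan2(c.y - b.y, c.x - b.x) - math.atan2(a.y - b.y, a.x - b.x)
    )
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang

KNEE_DEPTH_DEG = 100.0  # placeholder: below this interior angle counts as deep

cap = cv2.VideoCapture(0)
with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            hip = lm[mp_pose.PoseLandmark.LEFT_HIP]
            knee = lm[mp_pose.PoseLandmark.LEFT_KNEE]
            ankle = lm[mp_pose.PoseLandmark.LEFT_ANKLE]
            knee_angle = interior_angle(hip, knee, ankle)  # ~180 when standing
            cue = "Good depth" if knee_angle <= KNEE_DEPTH_DEG else "Go lower"
            cv2.putText(frame, f"Knee: {knee_angle:.0f} deg - {cue}", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("Squat feedback", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```

Clinical knee flexion is roughly 180 degrees minus this interior angle; hip flexion and ankle dorsiflexion can be derived the same way from the shoulder-hip-knee and knee-ankle-foot landmark triples, each checked against its own configurable threshold.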




Published

07-04-2024

Issue

Vol. 11, No. 2 (April 2024)

Section

Research Articles

How to Cite

[1] A. Garande, K. Patil, R. Deshmukh, S. Gurav, and C. Yadav, “AI Trainer: Video-Based Squat Analysis”, Int J Sci Res Sci Eng Technol, vol. 11, no. 2, pp. 172–179, Apr. 2024, doi: 10.32628/IJSRSET2411221.
