Vol. 12, No. 3, August 2023.  ISSN: 2217-8309

                                                                                                                                                                                                                        eISSN: 2217-8333


TEM Journal



Association for Information Communication Technology Education and Science

Method Development Through Landmark Point Extraction for Gesture Classification With Computer Vision and MediaPipe


Suherman Suherman, Adang Suhendra, Ernastuti Ernastuti


© 2023 Suherman Suherman, published by UIKTEN. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. (CC BY-NC-ND 4.0)


Citation Information: TEM Journal. Volume 12, Issue 3, Pages 1677-1686, ISSN 2217-8309, DOI: 10.18421/TEM123-49, August 2023.


Received: 16 March 2023.

Revised:   04 July 2023.
Accepted: 19 July 2023.
Published: 28 August 2023.




Examining the physical movements of students during their studies holds great significance, as these nonverbal cues can substantially influence academic performance and improve learning outcomes. Consequently, numerous researchers are exploring gesture categorization using machine learning techniques. Initially, we observed students' movements in a virtual learning environment during face-to-face interactions with their teachers. This procedure yielded a list of thirteen motion-based behaviors, encompassing actions such as tilting the head to either side, lowering and lifting the head, gesturing with the right and left hand toward the head and neck area, and positioning the shoulders frontally and laterally. This research offers a technique for establishing a set of criteria for categorizing students' gestures in online learning by using the MediaPipe Holistic library and OpenCV to detect pose and extract salient landmarks. This endeavor culminated in a percentage-based metric of gesture-identification efficacy for the thirteen motion-based activities.
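The kind of landmark-based criterion described above can be illustrated with a small sketch. The function below is not the paper's actual classification rule; it is a hypothetical example of how one of the thirteen behaviors (tilting the head to either side) could be labeled from two MediaPipe-style ear landmarks given in normalized image coordinates (x in [0, 1] rightward, y in [0, 1] downward), using an assumed angle threshold of 15 degrees.

```python
import math

def classify_head_tilt(left_ear, right_ear, threshold_deg=15.0):
    """Label a frame as 'tilt_left', 'tilt_right', or 'neutral'.

    left_ear and right_ear are (x, y) tuples for the subject's ear
    landmarks in normalized image coordinates. The label reflects the
    angle of the inter-ear line relative to the horizontal: if the
    subject's left ear sits lower in the image (larger y) by more than
    threshold_deg, the head is considered tilted to the left, and
    vice versa.
    """
    dx = left_ear[0] - right_ear[0]
    dy = left_ear[1] - right_ear[1]
    angle = math.degrees(math.atan2(dy, dx))
    if angle > threshold_deg:
        return "tilt_left"
    if angle < -threshold_deg:
        return "tilt_right"
    return "neutral"
```

In a full pipeline, the two landmark tuples would come from the pose output of MediaPipe Holistic running on OpenCV-captured video frames; per-frame labels could then be aggregated into the percentage-based recognition metric the abstract mentions.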


Keywords – gesture, machine learning, online learning, MediaPipe.





