Sergey Mukhametov - Profile on Academia.edu

Papers by Sergey Mukhametov

Flipped Classroom im Physikunterricht der Sekundarstufe I – Auswirkungen auf die Veränderung des individuellen Interesses im Bereich der E-Lehre

Die Zukunft des MINT-Lernens – Band 2

Abstract: Interests are central motivational components in the engagement with a learning object and can be regarded as important predictors of learning. The attractiveness of a learning environment can be increased through various so-called catch components, which can trigger situational interest in learners. This interest then needs to be sustained through so-called hold components in order to foster individual interest. In this context, the learners' emotional and value-related valences play a decisive role; they can be positively influenced by the experience of autonomy, competence, and social relatedness. In the flipped classroom, these three basic psychological needs are addressed through self-directed and individualized learning with explanatory videos, interactive quiz questions and tasks, and through lessons increasingly oriented toward cooperative and collaborative work phases in a particular...

Unterstützung von Experimenten zu Linsensystemen mit Simulationen, Augmented und Virtual Reality: Ein Praxisbericht

The impact of an interactive visualization and simulation tool on learning quantum physics: Results of an eye-tracking study

arXiv (Cornell University), Feb 13, 2023

Enhancing STEM-Learning with Augmented Reality in Museums - Embodied Learning and Eye-Tracking

Comparing two methods to overcome interaction blindness on public displays

Proceedings of the 5th ACM International Symposium on Pervasive Displays - PerDis '16, 2016

Overcoming interaction blindness when designing interaction methods for public displays remains a challenging task. In this work, we present a study evaluating the effectiveness of two methods (animation and video) in overcoming people's interaction blindness on gesture-based public displays. Our study shows that an animation-based method attracts more users and thus possibly reduces interaction blindness compared to a video. The study also suggests that the animation may persuade more users to interact but be less effective in teaching them the correct way of interacting. However, further studies are needed to reach a more general conclusion.

Applying Direction Map Criteria to a Complex Large Scale Environment

Mobile Eye-Tracking Data Analysis Using Object Detection via YOLO v4

Sensors

Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities (in comparison to stationary eye trackers) to real settings, such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as this is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities of using object recognition models to assign mobile eye-tracking data to real objects during an authentic student lab course. In a comparison of three different Convolutional Neural Networks (CNN), a Faster Region-Based CNN, You Only Look Once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in ...
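The core step the abstract describes is mapping each gaze point onto a detected real-world object. A minimal sketch of that assignment is shown below; the detection tuple format, the object labels, and the function name are illustrative assumptions, not the paper's actual pipeline (real YOLO output also carries confidence scores and is consumed per video frame).

```python
# Sketch: assign a gaze point to a detected object by bounding-box
# containment. Detection format (label, x_min, y_min, x_max, y_max)
# is a simplifying assumption for illustration.

def assign_gaze_to_object(gaze, detections):
    """Return the label of the first detection whose bounding box
    contains the gaze point (x, y), or None if no object is hit."""
    x, y = gaze
    for label, x_min, y_min, x_max, y_max in detections:
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return label
    return None

# Hypothetical detections for one video frame of a lab-course recording.
frame_detections = [
    ("oscilloscope", 100, 50, 300, 200),
    ("multimeter", 320, 60, 450, 180),
]
print(assign_gaze_to_object((150, 120), frame_detections))  # oscilloscope
print(assign_gaze_to_object((10, 10), frame_detections))    # None
```

Running this per frame turns raw gaze coordinates into a sequence of attended objects, which can then be aggregated into dwell times per object without manual annotation.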

ARETT: Augmented Reality Eye Tracking Toolkit for Head Mounted Displays

Sensors

Currently, an increasing number of head-mounted displays (HMD) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research, for example in the cognitive or educational sciences. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n = 21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the use...
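Angular accuracy and precision, as reported in the abstract, are commonly defined as the mean angular offset from the fixation target and the RMS of sample-to-sample angular deviations, respectively. The sketch below implements one such common definition; it is an illustration of the metrics, not necessarily the exact computation used in the ARETT evaluation.

```python
import math

def angle_deg(v1, v2):
    """Angle in degrees between two 3-D gaze direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # Clamp to avoid domain errors from floating-point rounding.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def accuracy_deg(samples, target):
    """Accuracy: mean angular offset between gaze samples and the target."""
    return sum(angle_deg(s, target) for s in samples) / len(samples)

def precision_rms_deg(samples):
    """Precision: RMS of angular distances between successive samples."""
    diffs = [angle_deg(a, b) for a, b in zip(samples, samples[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical gaze samples while fixating a target straight ahead.
target = (0.0, 0.0, 1.0)
samples = [(0.01, 0.0, 1.0), (0.0, 0.01, 1.0), (-0.01, 0.0, 1.0)]
print(f"accuracy:  {accuracy_deg(samples, target):.2f} deg")
print(f"precision: {precision_rms_deg(samples):.2f} deg")
```

With per-target values like these, averaging over all targets and participants yields summary figures of the kind the abstract reports.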

Interaktion im öffentlichen Raum: Von der qualitativen Rekonstruktion ihrer multimodalen Gestalt zur automatischen Detektion mit Hilfe von 3-D-Sensoren

Using mobile eye tracking to capture joint visual attention in collaborative experimentation

2021 Physics Education Research Conference Proceedings

The Predictive Power of Eye-Tracking Data in an Interactive AR Learning Environment

Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers

Learning through embodiment is a promising concept, potentially capable of removing many layers of abstraction that hinder the learning process. Walk the Graph, our HoloLens 2-based AR application, provides an inquiry-based learning setting for understanding graphs through the full-body movement of the user. In this paper, as part of our ongoing work to build an AI framework to quantify and predict the learning gain of the user, we examine the predictive potential of gaze data collected during app usage. To classify users into groups with different learning gains, we construct a map of areas of interest (AOI) based on the gaze data itself. Subsequently, using a sliding-window approach, we extract engineered features from the collected in-app as well as gaze data. Our experimental results show that a Support Vector Machine with selected features achieved the highest F1 score (0.658; baseline: 0.251) compared to other approaches, including a K-Nearest Neighbor and a Random Forest classifier, although in each case the lion's share of the predictive power is provided by the gaze-based features. CCS CONCEPTS • Applied computing → Interactive learning environments.
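The sliding-window feature extraction the abstract describes can be sketched as follows: slide a fixed-size window over a gaze-derived signal and compute simple engineered features per window. The window size, step, signal, and feature set here are illustrative assumptions; the paper's actual features and classifier inputs are not specified in this excerpt.

```python
def sliding_window_features(values, window, step):
    """Extract simple engineered features (mean, min, max) from each
    window of a 1-D signal, as in a sliding-window approach. Windows
    advance by `step` samples; a trailing partial window is dropped."""
    feats = []
    for start in range(0, len(values) - window + 1, step):
        w = values[start:start + window]
        feats.append((sum(w) / window, min(w), max(w)))
    return feats

# Hypothetical per-event fixation durations (ms) from one session.
durations = [180, 220, 140, 300, 260, 210, 190, 240]
for f in sliding_window_features(durations, window=4, step=2):
    print(f)
```

Feature vectors of this kind, concatenated with in-app event features, would then be fed to a classifier such as an SVM to predict the learning-gain group.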
