This document discusses using multimodal data and machine learning methods for analyzing learning across multiple contexts. It describes several studies that collected eye tracking, physiological, video, and other data from participants in contexts like playing Pacman, self-assessment tests, debugging programs, educational games, and collaborative concept mapping. Machine learning models were developed to predict outcomes like test scores, effort, and performance using features from the multimodal data. The document discusses the value of collecting multimodal data, developing explainable AI pipelines, and generalizing models across different learning contexts and tasks. It concludes by considering opportunities for using online learning system logs and designing more similar learning contexts.
2022-11-11: «AI and ML methods for Multimodal Learning Analytics»
1. AI and ML methods for Multimodal Learning Analytics
Kshitij Sharma
Department of Computer Science
Norwegian University of Science and Technology, Trondheim
24. ML pipeline (Feature extraction)
• Logs: reading-writing (R-W) episodes; use of the debugger; use of the variable view
• E4 wristband: mean and SD of BVP (blood volume pulse), TMP (skin temperature), and EDA (electrodermal activity), plus the mean HR (heart rate)
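The wristband features above can be sketched as a small aggregation step. This is an illustrative reconstruction, not the talk's actual pipeline code: the `extract_e4_features` helper and the dict-of-arrays input format are assumptions.

```python
import numpy as np

def extract_e4_features(signals):
    """Summarize raw E4 wristband streams into the per-session features
    named on the slide: mean and SD of BVP, TMP, EDA, plus mean HR.

    signals: dict mapping signal name -> 1-D array of samples.
    """
    features = {}
    for name in ("BVP", "TMP", "EDA"):
        x = np.asarray(signals[name], dtype=float)
        features[f"{name}_mean"] = float(np.mean(x))
        features[f"{name}_sd"] = float(np.std(x, ddof=1))  # sample SD
    features["HR_mean"] = float(np.mean(signals["HR"]))
    return features
```

In practice each E4 stream has its own sampling rate (e.g. EDA at 4 Hz, BVP at 64 Hz), which is why aggregating each stream separately into summary statistics is a convenient way to align them.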
27. Context 4: Game-based learning
• Learning Context → Motion-based educational games
• 40 Participants
• 30 Minutes of solving mathematics problems
• Multimodal data → eye-tracking, motion, EDA, HRV, system logs
• Outcome → game performance
28. Context 4: Game-based learning
Towards designing an AI agent to support students
30. Context 4: Game-based learning
Most important features for the agent:
• Information Processing Index
• Cognitive load
• Mean HR
• Grab-match differential
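One common way to arrive at a ranking like the one above is permutation importance: permute one feature at a time and measure how much the model's error grows. The sketch below is a hypothetical illustration, using a plain least-squares linear model rather than whatever model the study actually used; the feature names are only placeholders mirroring the slide.

```python
import numpy as np

def permutation_importance(X, y, feature_names, n_repeats=10, seed=0):
    """Rank features by mean increase in MSE when each column is shuffled."""
    rng = np.random.default_rng(seed)
    # Baseline: least-squares linear fit on the intact feature matrix.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    baseline = np.mean((X @ coef - y) ** 2)
    scores = {}
    for j, name in enumerate(feature_names):
        increases = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's information
            increases.append(np.mean((Xp @ coef - y) ** 2) - baseline)
        scores[name] = float(np.mean(increases))
    # Most important (largest error increase) first.
    return sorted(scores, key=scores.get, reverse=True)
```

Permutation importance is model-agnostic, which makes it a natural fit for the explainable-AI pipelines mentioned in the overview: the same procedure works whether the underlying predictor is linear, a random forest, or a neural network.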
31. Context 5: Collaborative Concept Map
• Learning Context → Video-based learning + synthesis
• 82 Participants
• 20 Minutes of concept map creation
• Multimodal data → eye-tracking, audio, dialogues, system logs
• Outcome → collaborative concept map correctness and individual learning gain
32. Context 6: Collaborative ITS
• Learning Context → Intelligent Tutoring Systems
• 50 Participants
• 45 Minutes of concept map creation
• Multimodal data → eye-tracking, audio, dialogues, system logs
• Outcome → Learning gain
38. Generalizability across contexts (individual learning)
Train Using | Test Using | NRMSE on test dataset, mean (SD)
Pacman, Self-Assessment | Debugging | 9.24 (1.6)
Pacman, Debugging | Self-Assessment | 8.27 (2.1)
Self-Assessment, Debugging | Pacman | 8.26 (1.9)
Data Used: Facial videos and wristband data (HRV, EDA)
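For reference, the NRMSE metric reported in these tables can be sketched as follows. Normalizing the RMSE by the range of the observed values (and reporting it as a percentage) is one common convention; the slides do not specify which normalization was actually used, so treat this as an assumption.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Normalized root-mean-square error, in percent.

    Normalizes by the range of y_true -- one common convention among
    several (others divide by the mean or the SD of y_true).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (y_true.max() - y_true.min())
```

Because it is scale-free, NRMSE makes the cross-context comparisons in these tables meaningful even when the outcome variables (test score, performance, learning gain) live on different scales.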
39. Generalizability across contexts (individual learning)
Train Using | Test Using | NRMSE on test dataset, mean (SD)
Pacman, Self-Assessment, Debugging | Motion-based game | 10.94 (1.4)
Pacman, Self-Assessment, Motion-based game | Debugging | 9.74 (1.1)
Pacman, Debugging, Motion-based game | Self-Assessment | 9.27 (0.9)
Self-Assessment, Debugging, Motion-based game | Pacman | 10.08 (1.3)
Data Used: Wristband data (HRV, EDA)
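The train/test scheme in the table above is a leave-one-context-out protocol: fit on all contexts but one, evaluate on the held-out context. A minimal sketch of that loop, assuming a dict of per-context feature/outcome arrays and a placeholder least-squares model (the talk's actual models are not specified here):

```python
import numpy as np

def leave_one_context_out(data):
    """data: dict mapping context name -> (X, y).

    Returns per-context NRMSE (percent, range-normalized) when that
    context is held out and the model is trained on all the others.
    """
    results = {}
    for held_out in data:
        X_tr = np.vstack([X for c, (X, y) in data.items() if c != held_out])
        y_tr = np.concatenate([y for c, (X, y) in data.items() if c != held_out])
        coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)  # placeholder model
        X_te, y_te = data[held_out]
        rmse = np.sqrt(np.mean((y_te - X_te @ coef) ** 2))
        results[held_out] = 100.0 * rmse / (y_te.max() - y_te.min())
    return results
```

The appeal of this protocol is that the test context contributes nothing to training, so the reported error is a direct estimate of how well the features transfer to an unseen learning context.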
41. Generalizability across contexts (collaborative learning)
Temporal features from eye-tracking measurements (cognitive load, information processing index, entropy, stability, anticipation, fixation durations)
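"Temporal features" here typically means summarizing a gaze-derived measurement stream over sliding windows rather than over the whole session. The sketch below illustrates the idea; the window length, step, and chosen statistics are assumptions for illustration, not values taken from the talk.

```python
import numpy as np

def sliding_window_features(signal, win=50, step=25):
    """Summarize a 1-D measurement stream (e.g. a per-sample cognitive
    load estimate) over overlapping windows.

    Returns one row of [mean, sd, range] per window.
    """
    signal = np.asarray(signal, dtype=float)
    rows = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        rows.append([w.mean(), w.std(), w.max() - w.min()])
    return np.asarray(rows)
```

Keeping the temporal profile (instead of a single session-level average) is what lets a model pick up on dynamics such as rising cognitive load late in a collaborative task.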
42. Generalizability across contexts (collaborative learning)
Data Used: Eye-tracking
Train Using | Test Using | NRMSE on test dataset, mean (SD)
All Individual Contexts | Collaborative Concept map | 19.89 (5.4)
All Individual Contexts | Collaborative ITS | 21.16 (6.2)
All Individual Contexts + Collaborative ITS | Collaborative Concept map | 6.7 (1.5)
All Individual Contexts + Collaborative Concept map | Collaborative ITS | 6.5 (1.2)
43. What’s next?
• Online learning → system logs only?
• Online learning → system logs plus generated multimodal data?
• The variance across these contexts was huge → can we design contexts with more similarity between them?