Using Artificial Intelligence to Transform Online Video Lectures into Effective and Inclusive Agent-Based Presentations
NSF RETTL
Students in introductory STEM courses frequently encounter video lectures in which an instructor stands next to a progression of slides. A challenge with these video lectures is that students may lose interest in science when they perceive the instructors as unhelpful (e.g., when they display negative emotions while teaching) or unwelcoming (e.g., when they do not reflect the gender, ethnic, and racial diversity of the audience). This project aims to develop, validate, study, and make publicly available an artificial intelligence (AI)-based framework that takes existing online instructional video lectures on introductory STEM topics created by human instructors and transforms them into instructionally effective and inclusive agent-based presentations. This work is intended to help improve science instruction and attract students from underrepresented groups.
The research team will apply artificial intelligence methods to extract gesture and voice from instructional videos featuring human instructors and transform them into instruction delivered by a diverse set of emotionally and socially sensitive onscreen animated embodied agents. Experimental research studies will investigate: (1) whether students learn better from an AI-transformed version of a video lecture than from the original instructor-made video lecture; and (2) which features of onscreen agents in AI-transformed video lectures produce improved learning outcomes and processes, comparing delivery by an onscreen agent who does or does not match the student’s gender, ethnicity, or race. Overall, results from the project will broaden access to high-quality, inclusive STEM learning experiences.