Homepage

Accelerate Learning Science Research
using MOOClets

We create tools that empower instructors and researchers to improve and personalize real-world courses using adaptive experimentation.

XPRIZE Digital Learning Challenge

We are the Adaptive Experimentation Accelerator team, winner of the XPRIZE Digital Learning Challenge sponsored by the Institute of Education Sciences (IES). The goal of this $1M competition is to modernize, accelerate, and improve how we identify effective learning tools and processes that improve learning outcomes.

Why do adaptive experiments work better?

  1. Better student outcomes
  2. More statistical power for multiple arms
  3. Better ability to serve statistical minority groups

As part of this competition, we are developing a cross-platform infrastructure that supports both traditional and adaptive experiments using MOOClets. Adaptive experiments change how often each condition is assigned as data is collected, using accumulating evidence to give the more effective conditions to a growing share of future students. By integrating machine learning algorithms with statistical models and human decision-making, this approach can accelerate the use of data to enhance and personalize learning for future students.
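To make the mechanism concrete, here is a minimal sketch of one common adaptive-assignment policy, Beta-Bernoulli Thompson Sampling: as evidence accumulates that one version of a piece of content works better, it is assigned to a growing share of future students. This is an illustration only, not the production MOOClet code; the arm names, binary outcome, and simulated success rates are assumptions.

```python
# A minimal sketch (not the MOOClet service itself) of adaptive assignment
# via Thompson Sampling with a Beta-Bernoulli model. Arm names and the
# binary "success" outcome are illustrative assumptions.
import random

arms = {"explanation_A": [1, 1], "explanation_B": [1, 1]}  # Beta(alpha, beta) priors

def assign_condition():
    """Sample a success probability for each arm and assign the best draw."""
    draws = {arm: random.betavariate(a, b) for arm, (a, b) in arms.items()}
    return max(draws, key=draws.get)

def record_outcome(arm, success):
    """Update the chosen arm's posterior with the observed binary outcome."""
    if success:
        arms[arm][0] += 1  # alpha: observed successes
    else:
        arms[arm][1] += 1  # beta: observed failures

# Simulated run: arm B is truly better, so it is assigned more often over time.
true_rates = {"explanation_A": 0.4, "explanation_B": 0.6}
for student in range(500):
    arm = assign_condition()
    record_outcome(arm, random.random() < true_rates[arm])

print({arm: a + b - 2 for arm, (a, b) in arms.items()})  # assignments per arm
```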

Our team integrates expertise in education research, experimental psychology, human-computer interaction, large-scale software development, educational data mining, statistics, and machine learning. We have collectively deployed over 150 field experiments with more than 300,000 learners from 2013 to 2022, working with over 50 instructors and 40 researchers.

Read more about the Intelligent Adaptive Interventions Lab at https://intadaptint.org, the Open Learning Initiative at https://oli.cmu.edu, and the HINTS Lab at https://go.ncsu.edu/hintslab.

Faculty Members

Joseph Jay Williams

Intelligent Adaptive Interventions Lab, UToronto

Web
Norman Bier

Open Learning Initiative, CMU

Web
John Stamper

DataShop, HCII, CMU

Web
Thomas Price

HINTS Lab, NCSU

Web
Anna Rafferty

Artificial Intelligence in Education, Carleton

Web
Sofia S Villar

Design & Analysis of Randomized Trials, MRC, Cambridge

Web
Nina Deliu

Adaptive Experimentation, Statistics, Sapienza

Web

Digital Learning Challenge Development Team

Steven Moore
CMU

PhD Student in Computer Science at Carnegie Mellon University

Web · LinkedIn
Ilya Musabirov
UofT

PhD Student in Computer Science at University of Toronto

Web · LinkedIn
Pan Chen
UofT

PhD Student in Computer Science at University of Toronto

Web · LinkedIn
Mohi Reza
UofT

PhD Student in Computer Science at University of Toronto

Web · LinkedIn
Harsh Kumar
UofT

PhD Student in Computer Science at University of Toronto

Web
Koby Choy
UofT

Alum of the IAI Lab at UofT

LinkedIn
Jiakai Shi
UofT

MEng Student at the University of Toronto, working as a research assistant at the IAI Lab

LinkedIn
Raphael Gachuhi
CMU

Senior Software Engineer at Carnegie Mellon

LinkedIn
Tanvi Domadia
CMU

Project Manager and Learning Engineer at the Open Learning Initiative

LinkedIn
Gene Hastings
CMU

DevSecOps Specialist at Carnegie Mellon

Partner Organizations

The MOOClet Framework

How can educational platforms be instrumented to accelerate the use of research to improve students’ experiences? We show how modular components of any educational interface – e.g. explanations, homework problems, even emails – can be implemented using the novel MOOClet software architecture.
Researchers and instructors can use these augmented MOOClet components for:

  1. Iterative cycles of randomized experiments that test alternative versions of course content.
  2. Data-driven improvement using adaptive experiments that rapidly use data to give better versions of content to future students, on the order of days rather than months. A MOOClet supports both manual and automated improvement using reinforcement learning.
  3. Personalization by delivering alternative versions as a function of data about a student's characteristics or subgroup, using both expert-authored rules and data mining algorithms.

We provide an open-source web service for implementing MOOClets that has been used with thousands of students. The MOOClet framework provides an ecosystem that transforms online course components into collaborative micro-laboratories where instructors, experimental researchers, and data mining/machine learning researchers can engage in perpetual cycles of experimentation, improvement, and personalization.

Mohi Reza, Juho Kim, Ananya Bhattacharjee, Anna N. Rafferty, and Joseph Jay Williams. 2021. The MOOClet Framework: Unifying Experimentation, Dynamic Improvement, and Personalization in Online Courses. In Proceedings of the Eighth ACM Conference on Learning @ Scale (L@S '21). Association for Computing Machinery, New York, NY, USA, 15–26. https://doi.org/10.1145/3430895.3460128
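As an illustration of how experimentation, adaptive improvement, and personalization can share one abstraction, the sketch below models a single course component whose versions are assigned by a swappable policy: uniform random for a traditional experiment, an adaptive policy such as the Thompson Sampling loop sketched earlier, or an expert-authored rule for personalization. This is a conceptual sketch, not the API of the open-source MOOClet web service; the version texts, the learner attribute, and the function names are assumptions.

```python
# A conceptual sketch of the MOOClet idea described above: one course
# component, several versions, and a swappable assignment policy. This is
# an illustration only, not the open-source MOOClet web-service API.
import random

VERSIONS = {
    "concise_hint": "Try isolating the variable first.",
    "worked_example": "Here is a fully worked example of a similar problem...",
}

def uniform_random(learner):
    """Traditional randomized experiment: each version is equally likely."""
    return random.choice(list(VERSIONS))

def expert_rule(learner):
    """Personalization via an expert-authored rule on a learner attribute."""
    return "concise_hint" if learner.get("prior_knowledge") == "high" else "worked_example"

def serve(policy, learner):
    """Assign a version with whatever policy is currently attached."""
    version = policy(learner)
    return version, VERSIONS[version]

# The same component moves from experimentation to personalization by
# swapping the policy, without changing the course content itself.
print(serve(uniform_random, {"prior_knowledge": "low"}))
print(serve(expert_rule, {"prior_knowledge": "high"}))
```

An adaptive policy, such as the Thompson Sampling loop sketched earlier, would slot into `serve` in the same way.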