Accelerate Learning Science Research
We create tools that empower instructors and researchers to improve and personalize real-world courses using adaptive experimentation.
XPRIZE Digital Learning Challenge
We are the Adaptive Experimentation Accelerator, winner of the XPRIZE Digital Learning Challenge sponsored by IES. The goal of this $1M competition is to modernize, accelerate, and improve how we identify effective learning tools and processes that improve learning outcomes.
Why do adaptive experiments work better?
- Better student outcomes
- More statistical power
- Better ability to serve statistical minority groups
As part of this competition, we are developing cross-platform infrastructure that supports both traditional and adaptive experiments using MOOClets. An adaptive experiment adjusts how often each condition is assigned as data is collected, using the accumulating evidence to present more effective conditions to a growing share of future students. By integrating machine learning algorithms with statistical models and human decision-making, this accelerates the use of data to enhance and personalize learning.
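One standard way to implement this kind of adaptive assignment is Thompson sampling on a Beta-Bernoulli model. The sketch below is illustrative only: the function name `thompson_assign`, the two-condition simulation, and the assumed binary (success/failure) outcome are our assumptions for exposition, not part of any MOOClet API.

```python
import random

def thompson_assign(conditions, successes, failures, rng):
    """Sample a plausible success rate for each condition from its
    Beta posterior, then assign the condition with the highest draw.
    Conditions with stronger evidence of helping win more often."""
    draws = {c: rng.betavariate(successes[c] + 1, failures[c] + 1)
             for c in conditions}
    return max(draws, key=draws.get)

# Toy simulation: condition "B" truly helps students more often than "A".
true_rate = {"A": 0.4, "B": 0.6}
successes = {"A": 0, "B": 0}
failures = {"A": 0, "B": 0}
counts = {"A": 0, "B": 0}
rng = random.Random(42)

for _ in range(2000):
    arm = thompson_assign(["A", "B"], successes, failures, rng)
    counts[arm] += 1
    # Simulated student outcome feeds back into the posterior.
    if rng.random() < true_rate[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1
```

Early on the two conditions are assigned at similar rates; as evidence accumulates, the better condition "B" is shown to most future students, which is the behavior the paragraph above describes.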
Our team integrates expertise in education research, experimental psychology, human-computer interaction, large-scale software development, educational data mining, statistics, and machine learning. We have collectively deployed over 150 field experiments with more than 300,000 learners from 2013 to 2022, working with over 50 instructors and 40 researchers.
Read more about the Intelligent Adaptive Interventions Lab at https://intadaptint.org, the Open Learning Initiative at https://oli.cmu.edu, and the Hints Lab at https://go.ncsu.edu/hintslab.
You are also more than welcome to give feedback on this website, parts of which are being adaptively tested using the same MOOClet toolkit that made us a finalist.
Digital Learning Challenge Development Team
AdExAcc Behavioral Interventions Grad Students
Computer Science Education Interventions
Online Learning Tools, CS Education, Mindset Reframing, Reflection Systems
Attribution Theory, Goal Pursuit, Behaviour-Change Interventions
Mental Health Interventions, Mindfulness, CS Education
Stress Reappraisal Interventions, Goal Setting and Planning
Open Education Resource Platforms, CS Education
Mental Health Interventions, Goal Setting
The MOOClet Framework
How can educational platforms be instrumented to accelerate the use of research to improve students’ experiences? We show how modular components of any educational interface (e.g. explanations, homework problems, even emails) can be implemented using the novel MOOClet software architecture.
Researchers and instructors can use these augmented MOOClet components for:
- Iterative Cycles of Randomized Experiments that test alternative versions of course content.
- Data-Driven Improvement using adaptive experiments that rapidly use data to give better versions of content to future students, on the order of days rather than months. A MOOClet supports both manual and automated improvement using reinforcement learning.
- Personalization by delivering alternative versions as a function of data about a student’s characteristics or subgroup, using both expert-authored rules and data mining algorithms.

We provide an open-source web service for implementing MOOClets that has been used with thousands of students. The MOOClet framework provides an ecosystem that transforms online course components into collaborative micro-laboratories where instructors, experimental researchers, and data mining/machine learning researchers can engage in perpetual cycles of experimentation, improvement, and personalization.
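To make the three capabilities concrete, here is a minimal sketch of how one modular component might switch among assignment policies. This is not the published MOOClet web service or its API: the class name `MOOCletSketch`, the method names, and the Beta-Bernoulli adaptive policy are all illustrative assumptions.

```python
import random

class MOOCletSketch:
    """Illustrative sketch (not the real MOOClet API): one modular course
    component, e.g. an explanation or email, with swappable policies."""

    def __init__(self, versions, policy="uniform", rules=None, rng=None):
        self.versions = list(versions)
        self.policy = policy               # "uniform", "thompson", or "rule"
        self.rules = rules or {}           # expert-authored: subgroup -> version
        self.rng = rng or random.Random(0)
        self.stats = {v: {"s": 0, "f": 0} for v in self.versions}

    def assign(self, learner=None):
        """Choose which version the next student sees."""
        if self.policy == "thompson":
            # Adaptive improvement: sample each version's success rate from
            # its Beta posterior; the highest draw wins.
            draws = {v: self.rng.betavariate(d["s"] + 1, d["f"] + 1)
                     for v, d in self.stats.items()}
            return max(draws, key=draws.get)
        if self.policy == "rule" and learner is not None:
            # Personalization: deliver a version keyed to a learner subgroup.
            v = self.rules.get(learner.get("group"))
            if v in self.versions:
                return v
        # Traditional randomized experiment: uniform assignment.
        return self.rng.choice(self.versions)

    def record(self, version, success):
        """Log a student outcome so adaptive policies can learn from it."""
        self.stats[version]["s" if success else "f"] += 1
```

The same component moves through the cycle the abstract describes: start with `policy="uniform"` for a randomized experiment, switch to `"thompson"` once outcomes are flowing in, and use `"rule"` when a subgroup-specific version is warranted.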
Mohi Reza, Juho Kim, Ananya Bhattacharjee, Anna N. Rafferty, and Joseph Jay Williams. 2021. The MOOClet Framework: Unifying Experimentation, Dynamic Improvement, and Personalization in Online Courses. In Proceedings of the Eighth ACM Conference on Learning @ Scale (L@S '21). Association for Computing Machinery, New York, NY, USA, 15–26. https://doi.org/10.1145/3430895.3460128