Of course, as D’Mello puts it, “we can’t install a $20,000 butt-sensor chair in every school in America.” So D’Mello, along with Heffernan, is working on a less elaborate, less expensive alternative: judging whether a student is bored, confused or frustrated based only on the pattern of his or her responses to questions. Heffernan and a collaborator at Columbia’s Teachers College, Ryan Baker, an expert in educational data mining, determined that students enter their answers in characteristic ways: a student who is bored, for example, may go for long stretches without answering any problems (he might be talking to a fellow student, or daydreaming) and then will answer a flurry of questions all at once, getting most or all correct. A student who is confused, by contrast, will spend a lot of time on each question, resort to the hint button frequently and get many of the questions wrong.
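The detection logic described above can be sketched in a few lines of code. This is a minimal illustration of the idea, not Baker and Heffernan's actual detector: the field names, thresholds, and labels here are all invented for the example, and a real system would learn its cutoffs from mined student data rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class ResponseLog:
    """A summary of one student's recent activity in the tutoring system.
    All fields and thresholds below are illustrative assumptions."""
    seconds_since_last_answer: float   # length of the idle gap before the latest answers
    answers_in_last_minute: int        # how tightly clustered the responses are
    avg_seconds_per_question: float    # time spent on each attempted question
    hint_rate: float                   # fraction of questions where the hint button was used
    accuracy: float                    # fraction of recent answers that were correct

def classify_state(log: ResponseLog) -> str:
    # Boredom signature: a long stretch with no answers, then a flurry
    # of responses, most or all of them correct.
    if (log.seconds_since_last_answer > 300
            and log.answers_in_last_minute >= 5
            and log.accuracy >= 0.8):
        return "bored"
    # Confusion signature: a lot of time per question, frequent hints,
    # and many wrong answers.
    if (log.avg_seconds_per_question > 90
            and log.hint_rate > 0.5
            and log.accuracy < 0.5):
        return "confused"
    return "engaged"
```

In practice the cutoffs (five minutes idle, 80 percent accuracy, and so on) would be fitted statistically, which is where the "30 percent better than chance" figure the researchers cite comes from.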
“Right now we’re able to accurately identify students’ emotions from their response patterns at a rate about 30 percent better than chance,” Baker says. “That’s about where the video cameras and posture sensors were a few years ago, and we’re optimistic that we can get close to their current accuracy rates of about 70 percent better than chance.” Human judges of emotion, he notes, reach agreement on what other people are feeling about 80 percent of the time.
Koedinger is convinced that learning is so unfathomably complex that we need the data generated by computers to fully understand it. “We think we know how to teach because humans have been doing it forever,” he says, “but in fact we’re just beginning to understand how complicated it is to do it well.”
As an example, Koedinger points to the spacing effect. Decades of research have demonstrated that people learn more effectively when their encounters with information are spread out over time, rather than massed into one marathon study session. Some teachers have incorporated this finding into their classrooms — going over previously covered material at regular intervals, for instance. But optimizing the spacing effect is a far more intricate task than providing the occasional review, Koedinger says: “To maximize retention of material, it’s best to start out by exposing the student to the information at short intervals, gradually lengthening the amount of time between encounters.” Different types of information — abstract concepts versus concrete facts, for example — require different schedules of exposure. The spacing timetable should also be adjusted to each individual’s shifting level of mastery. “There’s no way a classroom teacher can keep track of all this for every kid,” Koedinger says. But a computer, with its vast stores of memory and automated record-keeping, can. Koedinger and his colleagues have identified hundreds of subtle facets of learning, all of which can be managed and implemented by sophisticated software.
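The scheduling rule Koedinger describes — short intervals at first, gradually lengthening, adjusted to each student's mastery — can be sketched as a simple expanding-interval scheduler. This is an illustration of the general technique, not Koedinger's software; the growth factor and minimum interval are assumptions chosen for the example.

```python
def next_review_interval(prev_interval_days: float, recalled: bool,
                         growth: float = 2.0, floor_days: float = 1.0) -> float:
    """Expanding-interval spacing: lengthen the gap after each successful
    recall, and shrink it back toward a short floor after a failure.
    The growth factor and floor are illustrative, not empirically tuned."""
    if recalled:
        return prev_interval_days * growth
    return max(floor_days, prev_interval_days / growth)

def schedule(outcomes, start_days: float = 1.0) -> list:
    """Trace the review gaps produced by a sequence of recall outcomes."""
    interval, gaps = start_days, []
    for recalled in outcomes:
        interval = next_review_interval(interval, recalled)
        gaps.append(interval)
    return gaps
```

With four successful recalls in a row, the gaps double each time (1 day becomes 2, 4, 8, 16); a lapse halves the gap, so the student sees the item again sooner. The per-item bookkeeping this requires across hundreds of facts is exactly what Koedinger argues a teacher cannot track but a computer can.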
Yet some educators maintain that however complex the data analysis and targeted the program, computerized tutoring is no match for a good teacher. It’s not clear, for instance, that Koedinger’s program yields better outcomes for students. A review conducted by the Department of Education in 2010 concluded that the product had “no discernible effects” on students’ test scores, even as it cost far more than a conventional textbook — a finding that led critics to charge that Carnegie Learning is taking advantage of teachers and administrators dazzled by the promise of educational technology. Koedinger counters that “many other studies, mostly positive,” have affirmed the value of the Carnegie Learning program. “I’m confident that the program helps students learn better than paper-and-pencil homework assignments.”