An NPR Ed article about Knewton, reviewing the company's approach and pondering its future.
[gview file="https://publicservicesalliance.org/wp-content/uploads/2015/10/Meet-The-Mind-Reading-Robo-Tutor-In-The-Sky-NPR-Ed-NPR.pdf"]

Right now, adaptive learning tools occupy a space somewhere between snake oil and revolution. They are incredibly promising, though not yet fully realized.
“Comments, context, facial gestures, eye contact and so on: There is a lot of bandwidth there,” Feldstein says. “Knewton really has a very narrow bandwidth in terms of what they can observe about the student relative to what a human teacher can observe.”
The quote above rings true. The data is just one piece of the puzzle; knowing how to interact with the human is the hard part. Lately, I have been testing this informally with a two-year-old. Henry is very interested in FaceTime with grandma and grandpa, yet he is very active and on the go. IMO, our real time together in the park is what helps him stay with us “in the iPad.” He knows us and wants to connect.
Adaptive learning has a lot of potential, but once the initial excitement fades, we see how much more development is needed. “Reading” the student could be far more elaborate and comprehensive than what is available now.
For example, smartphones and tablets, and now watches, come with a number of sensors that could give richer feedback than Knewton can capture. Is blood pressure or pulse elevated? Are the pupils dilated? Is the person fidgety and moving around a lot? Is their breathing deep or shallow?
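To make the idea concrete, here is a minimal sketch of how such signals might be fused into a single engagement estimate that an adaptive system could consume. Everything in it is hypothetical: the sensor fields, the baselines, the weights, and the `engagement_score` heuristic are placeholders for illustration, not any real device API or Knewton feature.

```python
from dataclasses import dataclass


@dataclass
class SensorSnapshot:
    """Hypothetical physiological readings from a phone, tablet, or watch."""
    pulse_bpm: float          # heart rate
    pupil_dilation_mm: float  # in theory, estimated from a front-facing camera
    movement_index: float     # 0.0 (perfectly still) to 1.0 (very fidgety)
    breaths_per_min: float    # breathing rate


def engagement_score(s: SensorSnapshot, resting_pulse: float = 70.0) -> float:
    """Toy heuristic: combine deviations from a calm baseline into a 0-1 score.

    Real affect detection would need validated models for each signal; these
    baselines and weights are made up purely to show the plumbing.
    """
    arousal = max(0.0, (s.pulse_bpm - resting_pulse) / resting_pulse)
    dilation = max(0.0, (s.pupil_dilation_mm - 3.0) / 3.0)   # >3 mm as "aroused"
    stillness = max(0.0, 1.0 - s.movement_index)              # stillness as focus proxy
    breathing = max(0.0, (s.breaths_per_min - 12.0) / 12.0)   # faster than ~12/min

    raw = (0.4 * stillness + 0.2 * min(arousal, 1.0)
           + 0.2 * min(dilation, 1.0) + 0.2 * min(breathing, 1.0))
    return min(1.0, raw)


snapshot = SensorSnapshot(pulse_bpm=82, pupil_dilation_mm=4.5,
                          movement_index=0.2, breaths_per_min=18)
print(f"engagement: {engagement_score(snapshot):.2f}")
```

The point is less the math than the plumbing: each extra sensor is just one more input to the model, and the hard work is validating that any of these signals actually track learning rather than, say, a toddler spotting a squirrel.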
Will video analysis of facial expressions become viable in real time? No doubt some form of real-time brain scanning will at least be attempted in the coming years. Will we have big data on which brain scans indicate quality learning?
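On the first question, basic real-time video analysis is already practical on commodity hardware. The sketch below uses OpenCV's stock Haar-cascade face detector as a stand-in: it only finds faces, not expressions, and the `classify_expression` step it gestures at is hypothetical, a trained model someone would have to layer on top of a loop like this.

```python
import cv2

# OpenCV ships a pretrained frontal-face Haar cascade. This detects faces,
# not expressions, but it demonstrates the real-time loop an expression
# classifier would plug into.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # A hypothetical classify_expression(frame[y:y+h, x:x+w]) would go here.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

So the camera-and-loop part is solved; whether the expressions it would read mean anything about learning is the open research question.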