Adaptive learning, big data, and the meaning of learning

Knewton defines adaptive learning as “A teaching method premised on the idea that the curriculum should adapt to each user.” In a blog post, Knewton’s COO, David Liu, expanded on this definition. Here is an extract:

You have to understand and have real data on content… Is the instructional content teaching what it was intended to teach? Is the assessment accurate in terms of what it’s supposed to assess? Can you calibrate that content at scale so you’re putting the right thing in front of a student, once you understand the state of that student?

The idea of putting the right thing in front of a student is very cool. That’s part of what we do here at Lectica. But what does Liu mean by learning?

Curiosity got the better of me, so I set out to do some investigating.

What is Knewton’s conception of learning?

In Knewton’s white paper on adaptive learning, the authors describe how their technology works.

To provide continuously adaptive learning, Knewton analyzes learning materials based on thousands of data points — including concepts, structure, difficulty level, and media format — and uses sophisticated algorithms to piece together the perfect bundle of content for each student, constantly. The system refines recommendations through network effects that harness the power of all the data collected for all students to optimize learning for each individual student.

They go on to discuss several impressive technological innovations. I have to admit, the technology is cool, but what is their learning model and how is Knewton’s technology being used to improve learning and teaching?

Unfortunately, Knewton does not seem to operate with a clearly articulated learning model; at least, I couldn’t find one. But based on the sample items and feedback examples shown in their white paper and on their site, what Knewton means by learning is the ability to consistently get right answers on tests and quizzes, and the way to learn (get more answers right) is to get more practice on the kinds of items students are not yet consistently getting right.

In fact, Knewton appears to be a high-tech application of the correctness-focused learning model that has dominated public education since No Child Left Behind. From my perspective, it is yet another example of what it looks like when we throw technology at a problem without engaging in a deep enough analysis of that problem.

We’re in the middle of an education crisis, but it’s not because children aren’t getting enough answers right on tests and quizzes. It’s because our efforts to improve education consistently fail to ask the most important questions, like “Why do we educate our children?” and “What are the outcomes that would be genuine evidence of success?”

Don’t get me wrong. My colleagues and I love technology, and we leverage it shamelessly. But we don’t believe technology is the answer. The answer lies in developing a deep understanding of how learning works and what we need to do to support the kind of learning that produces outcomes we ought to care about.




Award-winning educator, scholar, & consultant, Dr. Theo Dawson, discusses a wide range of topics related to learning and development.
