MindLog™ compared with Lexiles®
Since we first introduced MindLog, several people have asked us how the Lexile tool developed by MetaMetrics relates to MindLog. The short answer is that they are quite different. Here's the longer answer…
The Lexile tool
The Lexile literacy scores included in many student report cards today are designed to match students, based on their reading-test scores, with appropriately challenging texts.
The Lexile tool is grounded in a quantitative analysis of the vocabulary in sets of readings typically assigned to children in different grades. The results of this analysis were used to standardize a reading difficulty scale, known as the Lexile Scale.
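To give a concrete flavor of what a purely quantitative readability analysis involves, here is a minimal sketch in Python. It is not MetaMetrics' proprietary formula; the word frequencies, weights, and function name below are all invented for illustration. It simply shows the general approach such scales are publicly described as taking: combining a text's average sentence length (syntactic complexity) with the rarity of its vocabulary (semantic difficulty).

```python
import re
from math import log

# Toy corpus frequencies -- invented for illustration. A real system would
# use frequency counts drawn from a large reference corpus of graded readings.
WORD_FREQUENCY = {
    "the": 60000, "ocean": 900, "water": 5000, "currents": 150,
    "salinity": 12, "denser": 20, "equator": 60, "surface": 800,
}

def toy_difficulty_score(text: str) -> float:
    """Estimate reading difficulty from sentence length and word rarity.

    Longer sentences -> higher syntactic complexity.
    Rarer words -> higher semantic difficulty.
    The weights (10 and 5) are arbitrary illustrative constants.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    if not sentences or not words:
        return 0.0

    mean_sentence_length = len(words) / len(sentences)

    # Rare or unknown words get a low assumed frequency, hence high rarity.
    rarity = sum(1.0 / log(WORD_FREQUENCY.get(w, 2) + 1) for w in words) / len(words)

    return 10 * mean_sentence_length + 5 * rarity * 100

print(round(toy_difficulty_score(
    "Cold, salty water is denser and sinks toward the ocean floor."
)))
```

A real scale would be calibrated against large graded corpora and anchored to test performance; the point here is only that the analysis is quantitative and text-based.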
In the Lexile system, reading proficiency is determined with tests that consist primarily of conventional multiple-choice items. Scores on these tests are mapped to the Lexile Scale, and Lexile scores are reported to teachers, who use them to select literature in the appropriate Lexile range for individual students. Only texts that have been mapped to the Lexile Scale can be included in these selections.
Most Lexile test items are likely to resemble the sample reading comprehension item that follows:
Passage (Example Lexile Level: 850L)
The ocean’s surface is constantly moving, but beneath the waves, a world of intricate currents operates in relative silence. These deep ocean currents, also known as the global conveyor belt, are driven by differences in water temperature and salinity. Cold, salty water is denser and sinks toward the ocean floor near the poles. This cooler, heavier water then flows along the bottom of the ocean, moving toward the equator. As it travels, it gradually warms, becomes less dense, and rises toward the surface, completing the cycle.
Test Item Example (Multiple Choice)
Based on the passage, what is the primary cause of deep ocean currents?
A. The movement of surface waves
B. The melting of polar ice caps
C. Differences in water temperature and salinity
D. The rotation of the Earth
Based on responses to questions like these, students are awarded a Lexile score, a percentile score (which compares the Lexile score to national norms for the student's grade), and a recommendation for the Lexile range in which they should practice reading.
MetaMetrics claims that tests composed of items like these are measures of reading proficiency. As I have argued elsewhere, tests composed of items with correct answers cannot measure proficiency, unless we're willing to define proficiency as nothing more than the ability to make correct choices on multiple-choice items.
Tests of correctness tend to steer instruction toward a focus on correctness, rewarding forms of teaching that undermine optimal mental development.
For a deeper critique of conventional test items, see: What PISA measures. What we measure.
MindLog
Rather than focusing on a construct like literacy, MindLog is designed to foster and measure mental development. There are no test items in MindLog; it is best described as a reflective learning journal.
In MindLog, learners respond to open-ended prompts designed to stimulate interest and thought—no prompts with correct answers allowed.
Mindloggers make weekly entries, educators or peers provide comments on new entries, and Mindloggers are asked to set tiny learning goals. (If you are already familiar with our work, you may notice that this process supports micro-VCoLing, a learning practice that leverages the brain’s built-in learning mechanisms.)
Here is an example of a MindLog entry by Francis, with a prompt, response, micro-task, and comment from his teacher, Rosa Patel:
MindLog as a learning tool
The MindLog prompt in the example above was designed to get students thinking about values and how difficult it can be to make good decisions when more than one perspective is involved. Rather than trying to write correctly or present a correct answer, Francis uses his mental skills and existing knowledge to grapple with the problem. The micro-task he sets gives him another opportunity to learn by bringing in an additional perspective, and Rosa's feedback affirms and bolsters Francis' self-authored micro-task.
MindLog prompts can be about any subject — math, science, literature, conflict, social science, geography, etc. Whatever the subject, MindLog can enhance learning by allowing students to apply what they are learning to open-ended questions without right answers.
MindLog as a measurement tool
The Lectical Scores awarded to MindLog reflections are measures of the complexity level of responses on a lifespan scale. Lectical Scores are calculated by CLAS, our computerized developmental assessment system.
You may have noticed that Francis’ reflection is informally structured and contains grammatical and punctuation errors. MindLog’s scoring system does not measure these aspects of performance.
Reflections in MindLog are usually scored about once a month (if students are writing weekly reflections). After Francis has received two scores, he will see a chart like the one below. Notice that there are no scores on this chart. Scores are omitted to ensure that all students, regardless of where their scores fall, have an equal opportunity to see themselves as growers.
Rosa (the teacher) is shown a version of the growth chart that also contains scores.
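As a rough mock-up of this two-view design (not MindLog's actual chart; the scores and layout below are invented for illustration), the following Python sketch renders the same hypothetical growth data twice: once with score labels hidden for the student view, and once with scores visible for the teacher view.

```python
import matplotlib.pyplot as plt

# Hypothetical monthly complexity-level scores -- invented for illustration only.
months = ["Sep", "Oct", "Nov", "Dec", "Jan"]
scores = [1052, 1055, 1054, 1059, 1063]

fig, (student_view, teacher_view) = plt.subplots(1, 2, figsize=(9, 3))

for ax, title in ((student_view, "Student view"), (teacher_view, "Teacher view")):
    ax.plot(months, scores, marker="o")
    ax.set_title(title)
    ax.set_xlabel("Month")

# Student view: hide the numeric scores so every student sees growth, not rank.
student_view.tick_params(labelleft=False)
student_view.set_ylabel("Growth")

# Teacher view: the same chart with scores visible.
teacher_view.set_ylabel("Lectical Score")

plt.tight_layout()
plt.show()
```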
Lexiles vs. MindLog
So far, our descriptions make it pretty clear that Lexiles and MindLog differ in fundamental ways. The comparison below provides a more detailed account of these differences and their implications.
Lexiles and MindLog are different KINDS of things
Lexile measurements are intended to represent reading proficiency. Lexile scores and score ranges are determined with high-support, correctness-focused tests that measure how well students can select or write correct answers to questions targeting specific aspects of texts.
The Lexile score awarded to an individual student is used to inform the selection of reading materials for that student.
MindLog is a skill-focused tool designed to support optimal mental development by maximizing the developmental benefits of regular reflective practice. In MindLog, students respond to prompts that ask them to think through a situation, problem, or scenario. There are no correct answers to MindLog prompts.
The system used to score MindLog entries measures the complexity level of student reflections, and results are presented to students and teachers in the form of a growth chart. (At the time of this writing, students are not shown their scores on these charts until they are 15 or 16.) Feedback provided by teachers and peers, along with micro-tasks chosen by students, adds to the learning value of MindLog reflections.
MindLog is further designed to support educator development by making it easy for educators to continuously monitor student development. Repeated exposure to growth charts, in combination with student reflections, accelerates the growth of educators’ expertise in identifying “what comes next” for individual learners.
Interestingly, work on Lexiles began at around the same time we began refining and automating our own developmental assessment system. At the time, there was quite a bit of discussion about the atheoretical, purely quantitative approach taken by the founders of MetaMetrics versus our approach, which is grounded in a well-established developmental theory backed, at that time, by 80 years of research. We knew their approach would go to market quickly and that ours would take longer to scale, but we did not realize we'd end up 30 years behind them!
