Rethinking assessment for diverse learners

chrstphre, flickr.com, CC-BY 2.0

Imagine applying for a driver’s license in a foreign country. The forms are in an unfamiliar alphabet and the road signs are unknown to you. Although you’re an experienced driver, the words and symbols on the driver’s test will likely keep you from getting your license.

This is what school feels like for many learners. Learners who master the sophisticated probability and statistics required for popular games like Pokémon and Magic, but who cannot pass a math test. Learners who may not read or write well, but who can take a car apart and put it back together. Or create beautiful 3D animations, produce music videos, or program a game. These learners aren’t deficient; they just learn and produce in ways that academic tests don’t measure.

Many learners have unrecognized talents that are neither nurtured nor leveraged. Learners with cognitive differences, such as ADD, dyslexia, and autism, may overcome associated difficulties when their own goals are at stake. Interest-driven learning is good for all learners, but is most crucial for those who have executive functioning issues.

Students with self-regulation and impulse control difficulties often will pay attention or keep calm for projects they love because the work itself engages them. Those struggling with tasks like planning, deciphering words and symbols, and remembering sequences of required steps will persist and even excel when motivated by their own passions.

“Interest-driven learning is good for all learners, but is most crucial for those who have executive functioning issues.”

Learners with cognitive differences often bring unique strengths like creativity, divergent thinking, collaborative problem-solving, or extreme thoroughness. This asset model of cognitive differences has become evident to industries wanting these diverse skills in their workforces. Programs like Microsoft’s Autism Hiring Program have led the way for tapping into assets previously viewed as “learning disabled.”

So how do researchers measure these cognitive assets? How do educators leverage them to provide individualized learning for diverse learners? These endeavors require assessments of implicit knowledge—knowledge that learners may not be able to express in words on a test, but that they demonstrate through everyday behaviors.

Chemist and philosopher Michael Polanyi wrote in 1966: “We know more than we can tell.” He argued that implicit knowledge—what we know but cannot tell—provides the foundation for all learning. Researchers study how organizations build collective implicit knowledge and how assembly-line workers gain implicit skills for greater efficiencies, but using implicit learning in educational settings has perplexed researchers.

“How do educators leverage cognitive assets to provide individualized learning for diverse learners?”

Educators require evidence of explicit knowledge—knowledge that is expressed on a test, usually in writing. Unfortunately, such tests, even those used for screening cognitive abilities, do not typically measure learners’ everyday knowledge and abilities.

Last year, I attended a research conference about Dyslexia, STEM, and Learning Differences hosted by the US National Science Foundation and the Institute of Education Sciences. There are promising interventions to help early math learners with dyslexia and dyscalculia (e.g., Fuchs & Fuchs), but researchers often use intricate word problems as pre/post-tests to study their efficacy. In such cases, the assessment itself is a barrier to the very learners it is trying to reach.

A good implicit learning assessment would measure knowledge that is demonstrated naturally through activities rather than a test. This is hard to do, but fortunately the timing is perfect to tackle this challenge now.

Data mining helps to better capture what learners really know

Learners spend more and more time in digital environments. Digital games are where many choose to spend time. A well-designed learning game entices players to build knowledge and skills required to advance in the game. A well-designed learning game also produces evidence that players have learned the knowledge and skills, and thus can serve as an implicit learning assessment.

“A well-designed learning game produces evidence that players have learned the knowledge and skills, and thus can serve as an implicit learning assessment.”

Game-based, implicit-learning assessments leverage the time learners spend in computer games with state-of-the-art machine learning methods that use data mining tools to look for evidence of learning. By logging activity in digital games, researchers can track a player’s every move, along with the corresponding game feedback, and tag each record with a timestamp and an anonymous player ID.
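To make this concrete, here is a minimal sketch of the kind of telemetry record such logging might produce. The field names are illustrative, not the actual schema used by EdGE at TERC.

```python
import json
import time
import uuid

def log_event(player_id: str, action: str, game_feedback: str) -> dict:
    """Build one telemetry record for a single player move (illustrative schema)."""
    return {
        "player_id": player_id,    # anonymous ID, no personal data
        "action": action,          # what the player did
        "feedback": game_feedback, # how the game responded
        "timestamp": time.time(),  # when it happened
    }

# Example: an anonymous player places a puzzle piece and the game rejects it
anon_id = str(uuid.uuid4())
record = log_event(anon_id, "place_piece:row3", "rejected")
print(json.dumps(record))
```

A stream of such records, ordered by timestamp, is enough to reconstruct a full play session later.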

Using specialized tools, such as EdGE at TERC's game-based learning data architecture, DataArcade, digital log data are visualized as a simulated video replay of an individual's gameplay. This playback tool enables remote, efficient labeling of common strategies in the game. Figure 1 shows an example of researchers' labeling of evidence of computational thinking in the logic-puzzle game Zoombinis.

Figure 1: Sample DataArcade playback screen

For each game we study, our research team watches playback videos from a broad range of players and agrees on what constitutes evidence of potential learning. We then build machine-learning models to automatically detect that evidence at scale. This scalable, implicit-learning assessment opens new opportunities for inclusive teaching and learning.
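A deliberately simplified, hand-rolled sketch of what such a detector does, with invented features and thresholds. Real detectors are machine-learned from the human-labeled segments, but the pipeline is the same: summarize log data into features, then map features to a label, automatically and at scale.

```python
def extract_features(events: list) -> dict:
    """Summarize one player's logged events into features (illustrative).

    Each event is assumed to be a dict with "action" and "feedback" keys.
    """
    attempts = [e for e in events if e["action"].startswith("try:")]
    rejections = [e for e in events if e["feedback"] == "rejected"]
    distinct = len({e["action"] for e in attempts})
    return {
        "attempts": len(attempts),
        "rejections": len(rejections),
        "distinct_tries": distinct,
    }

def detect_systematic_testing(features: dict) -> bool:
    """Label a play segment: does it show evidence of systematic testing?"""
    if features["attempts"] == 0:
        return False
    # Many distinct tries relative to total attempts suggests the player is
    # varying one thing at a time rather than repeating the same move.
    return features["distinct_tries"] / features["attempts"] > 0.7
```

Once detectors like this are validated against human labels, they can run over every player's logs without anyone watching a replay.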

Recent research used DataArcade and data mining to build automated detectors of implicit science learning in a physics game called Impulse. This work showed that when teachers bridge game-based learning to classroom content, students’ explicit science learning is improved. When a student uses more force in the game to move heavy particles than light particles, a teacher can say, “Let me tell you what Sir Isaac Newton said about that.”

“When teachers bridge game-based learning to classroom content, students’ explicit science learning is improved.”

By using data mining to look at what learners do, not just what they say, we can better capture what they really know. Automated detectors can see when each individual learner is reaching a crucial “Aha!” moment or is struggling. Using these methods in environments where students choose to persist at complex problem-solving can capture a diverse audience of learners.

Not only can teachers leverage this information for differentiated learning, but designers can make experiences that adapt to each learner’s strengths and weaknesses. For example, information from implicit-learning assessments can be passed back into a game experience in real time to customize learning based on each learner’s behaviors.
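A hypothetical sketch of that feedback loop: a detector's latest output steers the next level's difficulty. The labels and function names are invented for illustration, not taken from any real game engine.

```python
def adapt_next_level(detector_label: str, current_difficulty: int) -> int:
    """Choose the next level's difficulty from the latest detector output."""
    if detector_label == "aha_moment":
        return current_difficulty + 1          # learner just got it: raise the challenge
    if detector_label == "struggling":
        return max(1, current_difficulty - 1)  # back off and offer scaffolding
    return current_difficulty                  # no strong evidence: stay the course

print(adapt_next_level("aha_moment", 3))  # → 4
```

In a live game, this decision would run after each detector update, so the experience tracks the learner rather than a fixed curriculum.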

Implicit learning assessments are not limited to games. Any digital environment where learners’ activity can be tracked offers the basis for a learning assessment. Introductory coding environments, such as Scratch or App Inventor from MIT, could be rich sandboxes where implicit learning is measured.

“Going to where the learning is, and using tools to look under the hood, is likely to reveal all kinds of cognitive assets we never realized were there.”

Going to where the learning is, and using tools to look under the hood, is likely to reveal all kinds of cognitive assets we never realized were there. Imagine when every teacher can see what every learner really knows.

The purpose of the biennial IMBES Conference is to facilitate cross-cultural collaborations in biology, education, and the cognitive and developmental sciences. Our objectives are to improve the state of knowledge in and dialogue between education, biology, and the developmental and cognitive sciences; create and develop resources for scientists, practitioners, public policy makers, and the public; and create and identify useful information, research directions, and promising educational practices. The 2018 conference took place in Los Angeles, California.

The author of this blog post, Jodi Asbell-Clarke, was among the presenters at the conference.
