How can children (or computers) learn abstract relations?

fxxu, Pixabay.com, CC0 1.0

Around the age of 6, a child starts to understand what a synonym or an antonym is, thus laying the foundation for more complex ways of thinking. But how does a child do this? A new computational model offers explanations.

Two centuries ago, the poet S.T. Coleridge remarked that the creative mind needs to pay attention not just to “things”, but to “relations among things”. His point applies not just to poetry, but also to math and science. To understand algebra, children need to learn that a “variable” is something that takes on a value (even if we don’t know what that value is). A variable is defined by its relation to potential values, not by any specific feature.

Similarly, in chemistry, a “catalyst” is a substance that increases the rate of a chemical reaction—the definition refers to a relation, not to any particular feature of a substance. Understanding the relation paves the way to metaphor. To grasp what it means to say that Martin Luther King was a “catalyst for racial justice”, we need to transfer a relational concept from the realm of chemistry into that of social policy.

But how might children begin to learn abstract relations? In early elementary school, children are taught such concepts as “antonym” and “synonym”. Usually we think of a concept as being defined by what its examples have in common. But what (for example) do pairs of words that form antonyms have in common? “Rich” is the opposite of “poor” because they contrast on degree of wealth. “Hot” is the opposite of “cold” because they contrast on temperature. “War” is the opposite of “peace” because they contrast with respect to aggression between people or states.

If we line up these diverse examples of antonyms—rich:poor, hot:cold, war:peace—where is their commonality? Imagine you’re a young child who hasn’t already learned the concept of “antonym”—how would you even get started?

For several years, I’ve been working with my colleague Hongjing Lu (a cognitive psychologist with expertise in computational modeling) and others to build a computer system capable of learning a wide range of abstract relations from examples of word pairs. Besides “antonym” and “synonym”, we’ve tackled relations like “category member” (robin:bird and hammer:tool) and “prevention” (water:thirst and medicine:disease).

We’ve taken the tack of integrating two very different types of learning. On the one hand, children learn implicitly from sheer exposure to the massive amounts of information that flow over them—from looking at the visual world and listening to language. This type of learning has much in common with current work in artificial intelligence called deep learning, which involves algorithms for extracting patterns from huge amounts of data. In our own work, we build upon deep learning systems that learn to represent the meanings of individual words from massive electronic databases of texts.


Children (and adults) can also learn explicitly by carefully analyzing a small number of examples of a concept. To learn a relation between a pair of words, the learner may begin by calculating the difference between their values on semantic features. For the next pair offered as an example of the same relation, feature differences can also be calculated. Then—as the first step toward real abstraction—features that produce similar differences across word pairs can be aligned with one another.

For example, a feature that produces a large difference between rich and poor would be aligned with a different feature that yields a large difference between hot and cold. After processing several such examples, a computer system (and a child) can learn how to decide whether or not a new pair of words—an example never seen during learning—has the pattern of feature differences that signals the concept “antonym”.
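The procedure described above—compute feature differences for each example pair, align the high-contrast features across pairs, then compare new pairs against the learned pattern—can be sketched in a few lines of code. This is a minimal toy illustration, not the authors' actual model: the words, feature names, and values are hand-made assumptions, and sorting the absolute differences stands in for the alignment step (it lines up the big-contrast feature across pairs even when it is a different feature for each pair).

```python
import math

# Hypothetical hand-made semantic feature vectors (illustrative only).
# Dimensions: [wealth, temperature, aggression, animacy]
WORDS = {
    "rich":    [0.90, 0.50, 0.30, 0.80],
    "poor":    [0.10, 0.50, 0.30, 0.80],
    "hot":     [0.50, 0.90, 0.40, 0.20],
    "cold":    [0.50, 0.10, 0.40, 0.20],
    "war":     [0.40, 0.50, 0.95, 0.30],
    "peace":   [0.40, 0.50, 0.05, 0.30],
    "wealthy": [0.95, 0.40, 0.20, 0.70],
    "broke":   [0.05, 0.40, 0.20, 0.70],
}

def signature(word_a, word_b):
    """Absolute feature differences, sorted largest-first.
    Sorting 'aligns' the high-contrast feature across pairs,
    even when it is a different feature for each pair."""
    diffs = [abs(x - y) for x, y in zip(WORDS[word_a], WORDS[word_b])]
    return sorted(diffs, reverse=True)

def learn_relation(example_pairs):
    """Average the example pairs' signatures into a prototype pattern."""
    sigs = [signature(a, b) for a, b in example_pairs]
    return [sum(col) / len(sigs) for col in zip(*sigs)]

def matches(prototype, word_a, word_b, threshold=0.3):
    """Is the new pair's difference pattern close to the prototype?"""
    sig = signature(word_a, word_b)
    dist = math.sqrt(sum((s - p) ** 2 for s, p in zip(sig, prototype)))
    return dist < threshold

# Learn "antonym" from three diverse example pairs.
antonym = learn_relation([("rich", "poor"), ("hot", "cold"), ("war", "peace")])

print(matches(antonym, "wealthy", "broke"))  # pair never seen in learning
print(matches(antonym, "rich", "wealthy"))   # near-synonyms, not antonyms
```

The pair wealthy:broke is correctly accepted even though it never appeared during learning, because its difference pattern (one large contrast, everything else similar) matches the abstraction; the near-synonym pair rich:wealthy is rejected. Real systems replace the hand-made vectors with word embeddings learned from text, but the logic of abstracting over aligned feature differences is the same.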

We’re still far from a complete understanding of how children learn abstract relations. But we’re making progress in finding simple procedures that begin to strip away the superficial differences between examples of the same relation, and focus attention on deeper commonalities. This is the beginning of the road to metaphor and other manifestations of creative thinking.

The purpose of the biannual IMBES Conference is to facilitate cross-cultural collaborations in biology, education and the cognitive and developmental sciences. Our objectives are to improve the state of knowledge in and dialogue between education, biology, and the developmental and cognitive sciences; create and develop resources for scientists, practitioners, public policy makers, and the public; and create and identify useful information, research directions, and promising educational practices. The 2018 conference took place in Los Angeles, California.

The author of this blog post, Keith J. Holyoak, was among the presenters at the conference.
