A new study by Kimberly Brink and her colleagues looks at how children view social robots and what factors build their trust in a robot teacher.

As artificial intelligence and robotic technology continue to advance, the idea of a robot as a teacher or companion for a child is no longer confined to science fiction. Several real-life social robots, or autonomous machines that interact and communicate with humans, are already at work assisting people in schools, hospitals, and homes.

NAO, a humanoid robot designed by French firm Aldebaran Robotics, has been used as a teaching resource for kids with autism in schools since 2013. The program aims to improve their social interaction and verbal communication skills through interactive educational games. And the international research project L2TOR, funded by the Horizon 2020 program of the European Commission, designed a child-friendly tutor robot to teach preschoolers a second language. In the home, social robots, like AvatarMind’s iPal, play with children and act as companions.

But what attributes does a robot need for a child to trust and feel comfortable with it? For adults, a robot that appears too human-like triggers feelings of unease, a well-known phenomenon called the Uncanny Valley. Does the Uncanny Valley exist for young children as well, or does it develop later in childhood?

“The biggest factor that influenced children’s willingness to trust the accurate robot seemed to be the robot’s agency — its ability to think, make decisions, or know things.”

Learning more about the way children perceive social robots could help engineers and designers develop better teaching tools. And uncovering the origins and mechanisms of the Uncanny Valley, previously unstudied in children, could inform the design of robots that avoid triggering negative reactions.

Kimberly Brink, a doctoral candidate in developmental psychology and a co-coordinator of the Living Lab Program at the University of Michigan, explored these questions in two new studies. The first, published in December 2017 in Child Development, asked how children’s perception of robots affected their comfort level with them. The second, presented at last year’s Society for Research in Child Development (SRCD) Biennial Meeting, tested children’s willingness to learn from robots and what attributes made a robot teacher more trustworthy.

Younger children don’t mind if a robot appears human-like

For the first study, Brink and her colleagues recruited 240 children ranging in age from 3 to 18 at the Natural History Museum in Ann Arbor. Children watched videos of either a machine-like robot or a human-like robot. Afterwards, the kids were asked questions such as “Do you feel the robot is nice or creepy?” and “Does the robot make you feel weird or happy?”

Up to age 8, the children rated both the machine-like and human-like robots as not very creepy. But from age 9 onwards, a difference emerged and grew with age: the more human-like robot was rated as much creepier. These results suggest that the Uncanny Valley develops around the age of 9.

“The findings have interesting implications for the design of robots, in that these features might matter for different age groups,” said Brink. “Maybe you want your robots to be more agent-like for younger children, but you might want to be careful if you’re going to use those for older kids.”

“In light of these findings, it will be intriguing to see whether and how robot teachers will establish themselves in educational settings in the future.”

In the second experiment (not yet published), the researchers had 67 three-year-olds watch videos of two robots naming different objects. The children first watched four videos in which the robots named familiar objects, such as a baby doll or a brush. One robot always got the answers right, while the other always got them wrong. Then the robots began naming objects that were novel and unfamiliar to the children. Brink and her colleagues wanted to know: which robot would the children trust to give them the real name for a new toy?

The children decidedly trusted the accurate robot, believing that the word it gave for the novel object was its true name.

Brink and colleagues also wanted to know if children considered the minds of robots when they decided which one to trust. So they asked the children if they believed the robots had human-like minds. For example, did children think the robots could think, make decisions, feel hungry, or feel afraid? The biggest factor that influenced children’s willingness to trust the accurate robot seemed to be the robot’s agency — its ability to think, make decisions, or know things. As perceptions of agency increased, the children were more likely to trust the accurate robot.

In light of these findings, it will be intriguing to see whether and how robot teachers will establish themselves in educational settings in the future.
