More and more schools are teaching children the five social-emotional competencies: self-awareness, self-management, social awareness, relationship skills, and responsible decision-making. Cultivating these competencies can help children become lifelong learners, caring friends, and productive and independent members of society.

While children have traditionally learned these skills through practice with teachers and peers, technology has played an increasingly prominent role over the last several decades. For example, educational television that poses questions directly to the viewer can develop children’s social-emotional skills. We wondered whether Conversational Agents (CAs) such as Alexa, Siri, and Google Home, which emulate a person-to-person experience, could also support children’s social-emotional learning (SEL). CAs offer an enriched version of this pseudo-interactive approach: they can ask questions, provide feedback, and initiate voice-based interaction. And with a predicted 42% of the US population using a CA or voice assistant in 2022, this seemed a promising approach to explore.

In the US, there are currently over 77,000 Alexa “Skills” – applications created by developers that allow users to interact with Amazon Echo and Alexa devices. We wondered if any of these are designed to support children’s SEL. If so, how effective are they? And how do parents feel about them?

What commercial products are currently available?

Of the 3,767 Alexa Skills designed for children, we found only 42 that attempted to support interpersonal abilities or self-awareness in a way that might foster SEL – and these Skills involved interaction styles that we believe are insufficient to support learning. We saw the same simplistic interaction patterns over and over again, which we labeled The Bulldozer, The One-Track Mind, The Delegator, and The Lecturer.

“We saw the same simplistic interaction patterns over and over again.”

Bulldozer Skills prompted the user to provide input, but then continued on their conversational path no matter how the user responded. One-Track-Mind Skills compelled users to engage in a narrow conversation on a single topic, for example through forced-choice responses. If the user’s reply deviated from the defined script, the CA would respond that it did not understand, or repeat the question, or stop working entirely. Delegator Skills prompted interactions between users, but did not actually participate in those interactions. And lastly, Lecturer Skills talked at users without interacting with them.

We developed a method we called the hamburger test. When chatting with the CA, we responded nonsensically with the word “hamburger” to see how the device would react. If the CA moved on seamlessly, acting as if the user had responded appropriately, we concluded that it had failed the hamburger test. Of the 42 Skills, 52% failed that test.

A CA failing the hamburger test.
Alexa: You’re perfect just the way you are. Want to know something else?
User: Hamburger.
Alexa: Wonderful! Want to hear another good thing?
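
To make this failure concrete, here is a minimal, hypothetical Python sketch contrasting a Bulldozer-style handler with one that checks the user’s reply before moving on. The function names and the yes/no check are our own illustrative assumptions, not code from any Skill we reviewed:

# A Bulldozer-style handler: the user's words are never inspected,
# so "hamburger" is treated like any appropriate answer.
def bulldozer_reply(user_utterance: str) -> str:
    return "Wonderful! Want to hear another good thing?"

# A minimal repair: validate the reply and ask for clarification
# when it falls outside the expected answers.
def checked_reply(user_utterance: str) -> str:
    if user_utterance.strip().lower() in {"yes", "no"}:
        return "Wonderful! Want to hear another good thing?"
    return "I didn't catch that. Would you like to hear another good thing, yes or no?"

print(bulldozer_reply("Hamburger."))  # fails the hamburger test
print(checked_reply("Hamburger."))    # passes: asks the user to clarify

The difference is simply whether the user’s words ever influence what happens next; in the Skills we reviewed, more often than not they did not.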

What do parents see as opportunities and barriers?

We asked 26 parents how they felt about the possibility of using Conversational Agents for social-emotional learning. Many saw CAs as an opportunity to foster and support kids’ boundless curiosity about the world around them. Parents reported that they loved their children’s curiosity, but sometimes felt burdened by it. They imagined that CAs might provide answers and opportunities for exploration as children asked their endless questions.

Parents also imagined that CAs could help children practice conversational turn-taking or active listening. They saw potential for CAs to help children develop an awareness of their own emotional states, suggesting that CAs might prompt kids to self-reflect or check in on their own feelings. Parents hoped that doing so could help children learn to attune themselves to others – to become more profoundly aware of and responsive to other people. This might involve picking up on subtle cues, becoming conscious of others’ emotions, and developing an intuitive understanding of others’ feelings and perspectives.

“We believe that parents should have a say in shaping the technology that is meant to help their kids.”

Yet parents also voiced concerns. They pointed out that conversation requires more than just speech, and worried that CAs could never be effective teachers without emotion, tonal shifts, and body language. Some also worried that CAs would promote behaviors and norms that might conflict with their values. Some even feared that devices might take on parenting responsibilities and pose a threat to parent-child bonding. One parent explained that she wanted to prioritize “having that connection with [her] kid versus, like, kind of shoveling [her child] off to Alexa.” These worries reflect a concern about devices undermining human-to-human relationships.

What next for children’s conversational technology?

Our research leaves us with many questions. Is it a good idea to allow CAs to wade into this delicate, high-stakes context? Is it possible to design systems that foster curiosity and attunement without overriding parents’ values or delivering flat interactions that lack tone and interpersonal awareness? Does the lack of existing products for this purpose mean that they can’t be built, or just that they haven’t been built yet?

Given the opportunities parents envision, it is worth continuing to investigate whether Conversational Agents can be used to enhance children’s social-emotional learning. We have begun exploring these questions by creating a working prototype, informed by parents’ feedback, that children can use at home with their siblings. Will it be acceptable to families? Only time will tell. We believe that parents should have a say in shaping the technology that is meant to help their kids. Regardless of what we find, we hope the lessons drawn from our studies will be helpful in creating the next generation of conversational technology for children.

Footnotes

This article is based on a 2022 paper.
