Tomer Ullman is a cognitive psychology researcher at Harvard University. Tomer studies people’s everyday reasoning about the physics of objects and the psychology of other people. He hopes to have an impact on education through his study of human-like machines and machine-like humans. Annie Brookman-Byrne talks with Tomer about the most exciting questions he’s working on, and the connection between AI and cognitive development.
Annie Brookman-Byrne: What are you trying to understand about common-sense reasoning?
Tomer Ullman: I study the kind of everyday thinking we all share. I’m particularly fascinated by intuitive physics and intuitive psychology – how we interact with and intuitively understand objects and people. Think of it this way: if you see me throw a mug across the room, even if you’re not a physicist, you can think about the physics of the situation: Where will the mug land? Will it shatter? If the mug shatters, how far will the shards scatter? And even if you’re not a psychologist, you’ll probably be thinking about my mental state: Why did Tomer do that? Is he angry? Is he trying to prove a point?
Reasoning about these questions seems so obvious that we don’t realize it’s kind of magical, and seems to be based on sophisticated computations happening automatically and quickly in our minds. This kind of intuitive reasoning also appears to develop relatively early, so in my work I trace it back to early childhood, to examine what is there from the beginning and what is learned.
ABB: What first drew you to study this, and has the field changed over time?
TU: My father is a computer scientist who studies cognitive processes, and my mother is a clinical psychologist, so my interest in cognitive science is not a big mystery. I’ve always been interested in science in general, and in how the mind works in particular.
But cognitive development and intuitive reasoning specifically weren’t something I was even thinking about when I started my PhD. Not because I was opposed to exploring that area, but because I didn’t even realize it was a topic to be studied. It was during the first years of my PhD that my eyes were opened to this part of the scientific world, as I attended seminars and heard talks by people in the field. I guess there’s a lesson here for people thinking of doing a PhD – it’s good to have a plan, but it’s more than fine to change it.
“What do children come into the world with, and how do they learn the rest?”
The field has always been an exciting place to be, but the past decade or so has seen a rekindling of interest in interactions between AI and cognitive development. These two fields have always been relevant to one another: Back in 1950, when Alan Turing introduced the idea of the Turing Test to assess a machine’s ability to exhibit intelligent behavior akin to that of a human being – before we even had terms like “cognitive development” and “artificial intelligence” – he suggested that the way to pass that test was to build a machine that learns like a child. And that raises certain questions: What do children come into the world with, and how do they learn the rest? These questions echo the questions posed in AI and machine learning: What should I build into my machine, and how will it learn the rest?
Connections between the fields of AI and cognitive development have ebbed and flowed over time, but now seems to be a time of great resurgence and excitement: Many researchers in AI are looking to the recent findings in cognitive development for inspiration as they explore how machines should learn and think, and those who study cognitive development are looking at the latest machines as models for how children’s minds might work. I feel quite fortunate to be part of these fields as the stars are aligning again.
“Researchers in AI are looking to the recent findings in cognitive development for inspiration.”
ABB: How will your research affect education?
TU: I hope that my research on computational models, which is inspired by how children learn, will help others build more human-like machines that will inhabit the world our children grow up in, and help them learn. I’m also excited to investigate what might be described as the opposite of human-like machines: machine-like humans. Have you ever interacted with someone who was behaving in a “robotic” way? Someone who seemed to be following a script without really thinking about what they were doing? If that robotic person was a teacher, I’m sure you paid less attention, and you may have disengaged from whatever they were trying to teach you.
So, some current questions for me are: When others behave like automatons, how do we notice this, why do we care, and what are the consequences for learning? It’s not that being on ‘auto-pilot’ is always bad – there are many situations where we expect other people to behave that way and it’s fine. But if your teacher seems robotic, and that means you stop caring about what they say and do, then it’s a problem for the current rush to make learning more automated. I hope my research can highlight potential dangers of learning in an automated environment, as well as identify ways to address them.
“Having children has changed how I think about research.”
ABB: You told me you have children – has working in this field changed how you parent?
TU: This is a bit cliché, but it’s probably been the other way around – having children has changed how I think about research. Certainly, I’ve tried all kinds of classic experiments on my kids, and that’s been fun, but the bigger influence has been when I think about weird stuff my children do, things I don’t have a good psychological explanation for. Several of my current research projects have been directly inspired by my children, so thanks for that, kiddos.
ABB: What ideas are you pursuing next?
TU: I’m quite keen to explore pretense, imagination, and more specifically visual pretense. If I hold a block up and say, “This is a truck, okay?” children and adults immediately accept that premise. It’s easy to imagine that a block is practically anything… right? Well, sort of. It seems to make more sense to pretend that a block is a truck than that a block is a pile of spaghetti, for example. In our research we’re finding that people have systematic preferences when it comes to visual imagination, mostly about physical properties like shape, size, and orientation. This is another brick in the research wall I’m building, which examines intuitive physics as the basis for a lot of imagery and imagination.
Another project has to do with “loopholes.” Imagine the following: A child is watching a movie on their phone. The parents say, “Time to put the phone down!” And sure enough, the child puts the phone down on the table… but keeps watching the movie. This kind of intentional misunderstanding is called a loophole, and it’s everywhere. It happens in law, literature, fable, history, and everyday life. It’s also becoming more of a big deal for AI. We’re currently looking at the development of this behavior. When does it emerge and why? We’re finding that it starts around the age of 5 or 6, and it’s a suspicious coincidence that I first became interested in loopholes when my son was that age.
Footnotes
This work would not be possible without the generous support of institutions that recognize the value of asking new questions.
Tomer Ullman is an Assistant Professor at Harvard University’s Department of Psychology and a Jacobs Foundation Research Fellow 2022-2024. Ullman’s work focuses on common-sense reasoning and building formal models that explain how children and adults learn new theories to explain the world around them. In particular, he’s interested in how people develop an intuitive sense of the behavior of objects and other persons. A better understanding of this process will go a long way towards explaining the basic cogs and springs of human reasoning, and help build a bridge between foundational intuitive concepts and formal learning in math, physics, logic, and science more broadly.
Website: www.tomerullman.org
Twitter/X: @tomerullman
Bluesky: @tomerullman
This interview has been edited for clarity.