Cognitive Artificial Intelligence: Building better machines (and babies!)

Ramanella, Pixabay.com, CC0 1.0

Imagine a car travelling 60 mph. In the back seat, a baby is sound asleep; in the front, the baby’s parents are asleep too. One day soon such a scene will not make the hairs on the back of our necks stand up. Instead, we will rest just as easy as these parents, knowing that AI (Artificial Intelligence) has given us self-driving cars and the safest roads in human history.

AI promises to take humans and our flawed intelligence out of machines. Machines are meant to replace us — but only where they can do better, of course! Sometimes we program them to do certain tasks, but increasingly machines can learn on their own, faster than we could ever teach them.

Why, then, do I think we should put babies and their inchoate intelligence into machines? I am a cognitive scientist who studies human cognitive development, and my research in CogSci (Cognitive Science) convinces me that babies — like the one in the back of the car — have a lot to teach machines and will help them learn better. Indeed, one of the most exciting collaborations in the coming years will be between CogSci and AI.

“Indeed, one of the most exciting collaborations in the coming years will be between CogSci and AI.”

Not only will babies help us build better machines, but machines will help us build better babies! OK, that’s a bit of an exaggeration. Still, AI promises to help us scientists better probe the origins and development of human thought. With what scientists learn, we may then design educational programs that, in a sense, help us build better babies.

Putting the baby in the machine

Contemporary cognitive science understands a baby’s intelligence as founded on at least three cognitive capacities. The first is a series of domain-specific knowledge systems that allow us to recognize and interact with particular facets of human life, such as physical objects, other agents with their own goals, and the spaces we navigate. The second is a set of learning mechanisms that enables us to build efficiently and effectively on this rudimentary knowledge. And finally, there is our readiness for language.

These three capacities emerge early in human development — they may even be innate — and are the foundation of our intellectual and cultural flourishing. I suggest using them as a starting point to develop AI from CogSci.

Why? Well, one of the challenges of building AI from scratch is deciding what knowledge to start with. Some believe that AI is most elegant or powerful when it emerges from nothing, written on a blank slate, coded only with ideal learning mechanisms. When humans learn, we sometimes use something like Bayes’ rule, a mathematical way to update our beliefs about the world given new information. Even babies do this! This rule operates in every human mind but also exists in the abstract realm of mathematics, which means it can be programmed into a computer. With such mathematical tools, the best AI should be able to learn anything and everything, and to do so simply.
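The kind of belief updating described above can be sketched in a few lines of code. This is a minimal illustration, not anyone's actual research model: the coin-flip scenario, the function name, and the numbers are all my own toy example of how Bayes' rule revises a belief as evidence arrives.

```python
# Toy illustration of Bayes' rule: an agent updates its belief that a
# hidden coin is biased toward heads (P(heads) = 0.9) rather than fair
# (P(heads) = 0.5) after observing each flip.

def bayes_update(prior, likelihood_h, likelihood_alt):
    """Return the posterior P(H | evidence) via Bayes' rule."""
    numerator = likelihood_h * prior
    evidence = numerator + likelihood_alt * (1 - prior)
    return numerator / evidence

belief = 0.5  # start undecided between "biased" and "fair"
for flip in ["heads", "heads", "tails", "heads"]:
    p_if_biased = 0.9 if flip == "heads" else 0.1
    p_if_fair = 0.5
    belief = bayes_update(belief, p_if_biased, p_if_fair)
    print(f"after {flip}: P(biased) = {belief:.3f}")
```

Each observation shifts the belief up or down in proportion to how much better one hypothesis predicts it than the other; a string of heads pushes the agent toward "biased", and a tails pulls it back toward "fair".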

“AI promises to help us scientists better probe the origins and development of human thought. With what scientists learn, we may then design educational programs that, in a sense, help us build better babies.”

But our most foundational knowledge isn’t learned; it has already been “learned” for us through evolution. Our evolutionary inheritance is a gift of knowledge — knowledge about objects, agents, and spaces, for example. As babies learn, their starting point is this common sense human intelligence. If we want AI to have human intelligence, it too should start with our inherited knowledge. We should give AI both mathematical and cognitive tools.

Building better machines

But wait: Is our goal really for AI to have human intelligence? In some cases, no: We want machines to perform better than humans, like self-driving cars with infrared vision and perfect traffic prediction.

Other cases are not so clear. What if a self-driving car faces the moral dilemma known as the Trolley Problem? Perhaps an impersonal algorithm would provide consistent fairness in such impossible situations. Or perhaps cold calculations are too inhuman, or at least inappropriately non-human, for moral decisions. If so, modeling human moral reasoning will be just as important as modeling impersonal physics.

“As babies learn, their starting point is this common sense human intelligence. If we want AI to have human intelligence, it too should start with our inherited knowledge.”

I argue that there are at least two areas where it’s clear that we should want AI to look like human intelligence, allowing AI to better understand us and us to better understand AI.

AI that understands us could better capture the complex behavior of human societies, from business transactions to international relations. This AI could predict more precisely what markets or nations, and the humans who make them run, will really do. Likewise, AI that we can understand could better explain such complex behavior to us. The goal of science has traditionally been to explain the world rather than merely predict its behavior. AI can do all the complicated computation it wants, but without a common vocabulary grounded in a common intelligence, we may not be able to understand its results.

Building better babies

AI modeled after human intelligence may allow us to better understand and perhaps improve human cognition. By taking theories from basic research, like the three capacities I outlined above, cognitive scientists will be able to actually test whether human knowledge can be built from the foundations our developmental theories postulate.

“A first step will be to move beyond controlled laboratory settings to the environments in which human knowledge actually grows.”

Our efforts will be most effective if we test CogSci-based AI and babies’ natural intelligence in tandem at a large scale, with nearly identical stimuli and outcome measures. A first step will be to move beyond controlled laboratory settings to the environments in which human knowledge actually grows. With portable or online developmental labs like Lookit, we can also overcome the challenge of large-scale data collection with babies and reach larger, more diverse populations.

As we refine our knowledge of foundational human cognitive capacities, we can build those capacities into AI, generating tests for both machine and baby. And we can use results from one to understand the other. Let’s encourage AI and CogSci to toddle together, driving each other forward.
