When IBM’s Watson supercomputer triumphed over two top Jeopardy champions in February 2011, the media buzzed with talk of artificial intelligence (AI), just as it had fourteen years earlier when Watson’s predecessor, IBM’s Deep Blue, defeated world chess champion Garry Kasparov. Bloggers, journalists, and radio hosts were asking a question as old as the field of computer science itself: When will computing machines surpass human intelligence?
But both Deep Blue and Watson exemplify a “disembodied” approach to intelligence that has been strongly challenged by cognitive scientists, especially roboticists, in recent years. In my last article, I explored the evolving recognition that human and animal cognition is an embodied affair, involving not only our brains but our entire bodies and even the surrounding environment. This new understanding of cognition has enormous implications for the design of intelligent machines. Truly intelligent machines must also possess an embodied intelligence that goes beyond the skills of Deep Blue and Watson and enables machines to perceive and interact with their environments.1
After its predecessor Deep Thought lost to Kasparov in 1989, Deep Blue fared somewhat better in its first match against him in 1996, winning one game. In their 1997 rematch, Deep Blue returned to beat Kasparov, using software that Kasparov denounced as targeted specifically at his style of play. Kasparov told Reuters that he felt as though his “alien opponent” showed “signs of intelligence”, although of a kind that is not exactly human. Indeed, Deep Blue determined its next move using methods that no human player could match, searching combinations of moves at a rate of up to 200 million positions per second.
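Those methods center on brute-force game-tree search. The sketch below shows plain minimax, the textbook core of such search; it is purely illustrative and far simpler than Deep Blue’s actual massively parallel alpha-beta search with hand-tuned evaluation functions. The names here (moves, apply_move, evaluate) are hypothetical placeholders:

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Score a position by assuming both sides choose their best replies,
    searching the game tree down to a fixed depth."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal]
    return max(scores) if maximizing else min(scores)

# A deliberately trivial "game" to exercise the search: players alternately
# add 1 or 2 to a running total, and the maximizer wants the total high.
moves = lambda s: [1, 2]
apply_move = lambda s, m: s + m
evaluate = lambda s: s

print(minimax(0, depth=3, maximizing=True,
              moves=moves, apply_move=apply_move, evaluate=evaluate))  # -> 5
```

A human grandmaster prunes the tree with intuition; Deep Blue simply explored it at a scale no human could approach.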
Watson beat its competition in similarly alien ways. The primary new skill it displayed lay in deciphering the often cryptic Jeopardy clues in order to pursue a relatively straightforward search for factual information in a huge database. Although Watson did not have direct access to the Internet during the competition, engineers had built much of its database by searching the World Wide Web and storing a local copy of the information for later use.(a)
What Watson, Deep Blue, and many other attempts to build intelligent computers have in common is that they represent a disembodied approach to intelligence. Chess, with its closed set of rules and limited space of movements on a board that can easily be represented on a computer, and Jeopardy, with its ritual formula of answers and questions, provided engineers with programming challenges that did not require any coordination of complex physical movements. The only truly mechanical part of either design was Watson’s ability to press the buzzer.
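To see how naturally a chess position fits inside a machine, note that the entire game state reduces to symbols in a grid. This is a toy representation, not how Deep Blue actually stored positions (which used far more compact encodings):

```python
# The complete state of a chess game is just symbols in an 8x8 grid plus
# whose turn it is; nothing about the physical world needs to be modeled.
EMPTY = "."
initial_board = [
    list("rnbqkbnr"),                    # black back rank (lowercase = black)
    list("pppppppp"),                    # black pawns
    *[[EMPTY] * 8 for _ in range(4)],    # four empty middle ranks
    list("PPPPPPPP"),                    # white pawns (uppercase = white)
    list("RNBQKBNR"),                    # white back rank
]
side_to_move = "white"

for rank in initial_board:
    print(" ".join(rank))
```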
This disembodied approach to intelligence has been favored by scientists since the early days of computer development. In 1950, Alan Turing proposed his now-famous “Turing Test” of human-machine equivalence: if an astute judge cannot tell a machine and a human apart after a series of open-ended written exchanges, the machine is judged to possess human-level intelligence.(b) The machine participants in Turing’s “Imitation Game” were disembodied by design in order to remove what he regarded as irrelevant cues. Turing believed that limiting the interaction to typewritten text had “the advantage of drawing a fairly sharp line between the physical and intellectual capacities of a man.”2
But designing a computer that can, for example, carry on an ordinary conversation has turned out to be a particularly tough nut to crack. Much of what we do in conversation reflects deep knowledge of how we are situated in physical space, as well as exquisite sensitivity to the nuances of spoken language. Watson lacked live speech recognition; instead, it received the Jeopardy clues as digitally encoded text while the human contestants heard them read aloud by host Alex Trebek. Apple’s Siri has voice recognition capability, but is notoriously prone to contextual misinterpretation (although its programmers also seem to have a good sense of humor). These purely verbal approaches to machine cognition seem destined to produce machines with an uncannily alien intelligence that does not resemble human cognition.
In the early days of AI, it was assumed that the hardest jobs for robots would be things like playing chess and responding to questions – quintessentially human activities that no dog or ostrich can achieve. It turned out, however, that walking and finding food were just as difficult, if not harder.(c) Roboticists learned that what had taken evolution so long to accomplish was not going to be achieved within just a few years of hardware and software engineering. Careful observation of animals revealed that nature had a lot of tricks at its disposal and that both human and animal intelligence involve our entire bodies, not just our brains.
This is why the kinds of artificial intelligence found in Deep Blue, Watson, and the iPhone’s Siri are so “alien”. Although two of them talk (albeit a bit strangely), none of them walk. Nor are they capable of seeking out the basic materials and energy they need to sustain themselves. They cannot interpret sensory information beyond narrow types of input, such as human speech, that they are programmed to recognize. In other words, these systems lack some of the most fundamental properties of intelligent life.
Existing artificial systems lack not only these sensory capacities, but also some of the most rudimentary forms of short-term memory, with very little ability to flexibly link what they are currently doing to what they did ten seconds, ten minutes, or ten hours ago. Without sensory perception or the demands of moving to find food and mates and avoid danger, Deep Blue, Watson, and Siri are far less human than dogs or even ostriches.
Fortunately, robotics has changed since Turing’s time. Our growing understanding of the physical embodiment and environmental embeddedness of human and animal cognitive systems is providing scientists with inspiration for the design of artificially intelligent machines. The field of applied robotics has been making steady progress on embodied cognition, with goals that are arguably more practical than producing a machine that can hold a convincing conversation or triumph in a television game show.
Autonomous robots are designed to be freestanding, mobile, and capable of operating in a variety of environments. These are not the industrial robots that perform repetitive tasks in fixed factory settings. Whether it is iRobot’s Roomba vacuum or Google’s self-driving car, the autonomous robot has to sense physical events in its environment and react appropriately to changes that cannot be predicted more than a few moments in advance.
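A minimal sketch of such a sense-react loop makes the contrast with a fixed factory program concrete. The RangeSensor and Drive classes below are hypothetical stand-ins for real hardware interfaces; the point is only that each action is chosen from the latest sensed state rather than from a pre-scripted sequence:

```python
import random
import time

class RangeSensor:
    """Hypothetical distance sensor; a real robot would poll hardware here."""
    def read_cm(self) -> float:
        # Simulate an unpredictable environment: obstacle distance changes
        # in ways that cannot be known more than a moment in advance.
        return random.uniform(5.0, 200.0)

class Drive:
    """Hypothetical motor interface."""
    def forward(self) -> None:
        print("driving forward")
    def turn(self, degrees: int) -> None:
        print(f"turning {degrees} degrees")

def sense_react_loop(sensor: RangeSensor, drive: Drive, steps: int = 10) -> None:
    """Each action depends on the latest reading, not on a fixed script."""
    for _ in range(steps):
        distance = sensor.read_cm()               # sense
        if distance < 30.0:
            drive.turn(random.choice([-90, 90]))  # react: avoid the obstacle
        else:
            drive.forward()
        time.sleep(0.1)                           # the world changes between readings

sense_react_loop(RangeSensor(), Drive())
```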
The embodied approach to intelligence has also spawned robots that can glean information about objects through sight and touch and even rearrange them to visualize and solve problems. Examples include the Massachusetts Institute of Technology’s Ripley robot, designed for social interactions so that it can better learn language, and the European robot ARMAR-III, created by scientists at Germany’s Karlsruhe Institute of Technology to interact with humans in common household situations.(d) By interacting with objects in their environment, Ripley and ARMAR-III learn the physics of everyday actions and are able to obey commands.
The groundwork for robots with real-time reactivity to their surroundings was originally laid in the field of cybernetics, centered on the analysis of feedback loops in the control of animal behavior.(e) Cyberneticists developed the idea that a balance of positive and negative feedback loops is necessary to maintain a system within a functional range. Feedback loops allow the results of previous actions to influence future actions; for example, shivering raises body temperature, which can stop the shivering.
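A toy simulation of that shivering example shows a negative feedback loop holding a variable within a functional range; the numbers here are illustrative, not physiological:

```python
def regulate_temperature(body_temp: float, set_point: float = 37.0,
                         steps: int = 20) -> float:
    """Negative feedback: the result of the action (shivering warms the body)
    feeds back to suppress the very action that produced it."""
    for t in range(steps):
        shivering = (set_point - body_temp) > 0.2  # shiver only when too cold
        if shivering:
            body_temp += 0.3                       # shivering generates heat...
        body_temp -= 0.1                           # ...while heat steadily leaks away
        print(f"step {t:2d}: temp={body_temp:.2f}  shivering={shivering}")
    return body_temp

regulate_temperature(35.0)  # start cold; the loop settles near the set point
```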
A key idea from cybernetics is that cognition involves the precise coordination of sensorimotor feedback loops in constant interaction with the environment. These loops exploit the temporal dynamics not only of nervous systems, but also of the physical bodies and environments in which they are embedded. As an example of the importance of modifications to the environment in such feedback loops, seeing food diminish in front of you may be a stronger cue to stop eating than an internal signal of being full. People tricked with a bottomless soup bowl eat on average two-thirds more but don’t report being any more full.
Looking at human cognitive capacities from an embodied and embedded perspective changes both our understanding of the human mind and our view of how to replicate human-like intelligence in machines. In Moral Machines, the book I coauthored with Wendell Wallach, we suggest that our everyday moral capacities depend not so much on abstract calculations of utility or conformity to rules as on the ability of embodied and embedded agents to detect and respond to signs of discomfort and distress caused by their own actions, and perhaps even to have some version of emotion.3 The development of machines with such capabilities awaits progress in our understanding of how biological intelligence is embodied. As these developments continue, the machines we build may come to seem less alien and perhaps become at least as comprehensible to us as we are to one another.
Colin Allen is Provost Professor of Cognitive Science and of History & Philosophy of Science in the College of Arts and Sciences at Indiana University, Bloomington, where he has been a faculty member since 2004. He also holds an adjunct appointment in the Department of Philosophy and is a faculty member of IU’s Center for the Integrative Study of Animal Behavior and Program for Neuroscience. He became director of IU’s Cognitive Science program in July 2011. Allen’s main area of research is on the philosophical foundations of cognitive science, particularly with respect to nonhuman animals. Allen has also published on other topics in the philosophy of mind and philosophy of biology and artificial intelligence. He coauthored Moral Machines: Teaching Robots Right from Wrong (Oxford University Press 2009) with Wendell Wallach and Species of Mind: The Philosophy and Biology of Cognitive Ethology (MIT Press 1998) with Marc Bekoff.