My workday is over. What do I want to do now? I picture calling my wife to suggest dinner at that nice Italian restaurant and imagine the taste of gnocchi quattro formaggi. Then I remember promising to look after the grandchildren that evening. My colleague at the desk next to me has no idea of this rich world unfolding in my head. This is the world that I call my consciousness, my mental state, my mind, or my thoughts. It is comfortably private, and I cannot imagine having an intelligent life without it. My behavior and pronouncements are but the tip of the iceberg that is my consciousness. Is this consciousness something that can be replicated? In our quest to build smarter computers and robots, will we one day develop machines that have an internal mental life comparable to our own?
Defining the Mystery of Consciousness
Were it possible to have an agreed-on definition of consciousness, its implementation could be attempted by any self-respecting artificial intelligence (AI) scientist. But for centuries philosophers and, more recently, psychologists and neuroscientists have struggled to settle on a definition.
My own philosophy for understanding consciousness has been to start by trying to describe what I mean when I say that I am conscious, and then ask what would be necessary for a machine endowed with language to report similar internal sensations. When I say I am conscious, I refer to a collection of mental states and capabilities that include:

1. Presence: I feel that I am an entity at the center of a world that is “out there.”
2. Imagination: I can recall past experiences and picture things I have never actually seen.
3. Attention: I attend selectively to the parts of the world that matter to me at a given moment.
4. Volition: I can plan actions by imagining their consequences and choosing among alternatives.
5. Emotion: feelings evaluate my plans and guide my choices.
I call these “axioms,” as they are commonly felt and largely agreed-on internal constituents of what we describe as our consciousness. They became the basis of a definition of consciousness that informs my efforts to design conscious machines.
Unfortunately, the AI created over the last 60 years has largely ignored the mental world we call consciousness. Today’s machines don’t have minds of their own; their so-called intelligence is achieved through the blood, sweat, and tears of armies of brilliant human programmers. Humans write the indispensable rules that cause machines to recognize sounds, respond to visual patterns, make the next move at chess, and even suggest which shares to buy on the stock market.
While these machines are limited to the tasks they are designed to perform, a conscious being has something else: a complex system of internal states instantiated through its neural mechanisms. Since our brains allow for the five qualities of consciousness described above, they provide one major advantage over today’s computers and robots – autonomy. A machine with self-directed internal states influenced by its surroundings and needs can develop strategies in complex environments without waiting for a programmer to provide it with new rules.
The autonomy of a human being cannot be imagined without consciousness to continuously evaluate the surrounding world and make choices. In learning to drive a car, for example, one is conscious of needing to know how to stop or how to go around corners, of one’s current surroundings, of the consequences of driving poorly, and of one’s past experiences in cars. A robot without consciousness needs an expert scientist to build its driving competence, but a conscious robot would simply call for a driving instructor.
Building Conscious Machines
There is clear value in moving beyond rules and programs to develop machines capable of autonomous, human-level thought. But how do we create such consciousness? The field of machine consciousness (MC) focuses on analyzing, modeling, and exploiting the way that living entities (human and animal) are conscious, and then applying these discoveries to machines. Developers use what we know about existing forms of consciousness to provide machines with the ability to represent sensory, motor, and internal experience and to produce appropriate reactions to both familiar and novel situations.
The idea of a conscious machine began to move from the realm of speculation and science fiction into a sensible research program in part as a result of a 2001 workshop organized by the Swartz Foundation called “Can a Machine Be Conscious?” The leading philosophers, neurologists, and computer scientists at this event produced the following statement: “We know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artifacts designed or evolved by humans.”2 The challenge was then clear: Who was going to build a machine with subjective feelings, and what form would it take?
Notable progress in developing conscious machines was achieved both before and after this seminal conference. One of the oldest and best-known MC systems is the Intelligent Distribution Agent (IDA) designed in the late 1990s by cognitive scientist Stan Franklin to replace human advisors assigning jobs to U.S. Navy sailors.3 Traditionally, a sailor would get in touch with a living agent to discuss preferences and the available range of jobs, eventually agreeing on a future posting. IDA completely replaced this human agent with an artificial intelligence that communicated with sailors over email.
The design of IDA was based on neuroscientist Bernie Baars’s notion of the Global Workspace, wherein consciousness occurs through competition among several distinct computing (i.e. thought) processes. These processes include autobiographical memory, current situation monitoring, episodic memory, and long-term memory. The process that wins the competition enters the “global workspace” area, which broadcasts this “win” back to the competing processes to update them. This system produces what one might call a “stream of consciousness.”
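To make this cycle concrete, here is a minimal Python sketch of the competition-and-broadcast loop. The process names, the salience rule, and the data are invented for illustration; they are not IDA’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """One specialist process competing for the global workspace."""
    name: str
    knowledge: dict = field(default_factory=dict)

    def bid(self, situation):
        # Toy salience score: how relevant this process's knowledge
        # is to the current situation (a set of topic keys).
        return sum(1 for key in self.knowledge if key in situation)

    def receive_broadcast(self, content):
        # Losing processes absorb the winner's broadcast content.
        self.knowledge.update(content)

def workspace_cycle(processes, situation):
    """One step of the 'stream of consciousness': the most salient
    process wins the competition, and its content is broadcast back
    to all the other processes to update them."""
    winner = max(processes, key=lambda p: p.bid(situation))
    broadcast = dict(winner.knowledge, source=winner.name)
    for p in processes:
        if p is not winner:
            p.receive_broadcast(broadcast)
    return winner.name

# Illustrative competitors, loosely echoing the processes named above.
processes = [
    Process("situation_monitor", {"sailor_request": "Pacific posting"}),
    Process("episodic_memory", {"last_conversation": "prefers sea duty"}),
    Process("long_term_memory", {"rotation_policy": "every three years"}),
]
# The situation mentions a sailor's request, so the monitor wins this round.
print(workspace_cycle(processes, {"sailor_request", "deadline"}))
```

Repeating this cycle as the situation changes yields a sequence of winning contents, which is the sense in which the architecture produces a “stream of consciousness.”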
Users reported that the IDA system appeared to be conscious of the sailors’ needs and intentions, much the way a caring and interested human would be. In addition to this subjective assessment of consciousness based on external behaviors, Baars and Franklin argue that IDA is conscious because it is driven by simulations of the psychological and neurological elements considered essential for a human being to be conscious.4 These include weighing up alternatives (deliberation), planning, perception, agency, and the ability to report perceptions for external evaluation (verifiable reportability).
This raises the question of what makes someone believe that a machine is conscious. We have no tests to tell us that humans are conscious; we simply believe they are because they are human. A machine, on the other hand, is often assumed not to be conscious just because it is not a living human. Baars and Franklin argue that possession of the cognitive processes without which a human could not be conscious is reason enough to believe a machine is also conscious. Personally, I remain skeptical that demonstrating human-like cognitive processes is necessarily equivalent to experiencing consciousness as humans do.
In a more recent advancement of the MC field, roboticist Owen Holland designed a human-like robot called Cronos. It has an internal model of its own body as well as of the surrounding world, and it simulates the results of interactions between the two in order to select its future actions. This process starts with building a close replica of aspects of our own physical bodies, together with a computational system that has the same relationship to the control of this body as our brains have to the control of ours. The next step is to provide the computational system with the ability to model the world so that potential strategies and outcomes can be tested entirely internally. It is this internal activity that constitutes the consciousness of the system. Holland suggests that the system may be tested for consciousness against the five axioms mentioned earlier.
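The loop Holland describes (model your own body, model the world, rehearse candidate actions internally, then act on the best one) can be sketched in a few lines. The toy models and scoring below are assumptions made for illustration; Cronos itself relies on learned models of its human-like body, not these one-line functions.

```python
def body_model(state, action):
    """Predict the robot's own next state if it took `action`.
    State is (position, energy); moving costs energy."""
    position, energy = state
    return (position + action, energy - abs(action))

def world_model(robot_state):
    """Score the predicted outcome in the world: here, closeness to a
    goal at position 10, with a small bonus for conserving energy."""
    position, energy = robot_state
    return -abs(10 - position) + 0.1 * energy

def choose_action(state, candidate_actions):
    """Rehearse each candidate action entirely internally and
    return the one whose simulated outcome scores best."""
    return max(candidate_actions,
               key=lambda action: world_model(body_model(state, action)))

state = (0, 100)  # at position 0 with full energy
print(choose_action(state, [-2, 0, 2, 5]))  # -> 5 (largest step toward the goal)
```

The design choice that matters is that no action is taken in the world until it has been tried in the head, so to speak; it is this inner rehearsal that Holland identifies with the system’s consciousness.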
Replicating the Mind’s Eye
My own research tests the hypothesis that a robust artificial intelligence could develop the ability to be visually conscious without depending on rules invented by a programmer. I have sought to replicate visual consciousness with VisAw, a system that, like the animal visual system, recreates a whole visual scene from the minimal view obtained by the fovea at the center of the retina.5 I modeled the extrastriate cortex, which combines the visual input provided by the fovea with the neural signals that drive the eye around a visual scene. I then exposed these neural networks to many images of faces, all of which are stored in memory.
VisAw’s only built-in features are the abilities to look around a face, to receive a single-word name for the image, and to switch attention from an observational to an imaginational mode. In humans, the coexistence of these two modes allows us to use our visual perception (for example, looking around a room for a pair of eyeglasses) while we also visually imagine things (like what the glasses look like). A machine can be said to be “imagining” when, in the networks that produce a sensation of “seeing” objects in the world, objects or scenes appear that have been seen in the past or perhaps have never been seen before. The machine “knows” that it is mainly imagining because the balance of neural activity between the external drive (perception) and the internal drive (imagination) can be detected and displayed as a label on the screen. In VisAw, the imagination behavior can be triggered by voice input or can emerge spontaneously.
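A toy sketch can make the two drives concrete. Everything below (the stored patterns, the voice trigger, the small chance of spontaneous imagination) is an invented stand-in for VisAw’s trained networks, but it shows how a system can both report what it “sees” and label whether that content is perceived or imagined.

```python
import random

# Stored patterns standing in for faces the system has seen before.
# These names and contents are invented for illustration.
MEMORY = {"anna": "stored-face-A", "ben": "stored-face-B"}

def step(fovea_input, spoken_name=None):
    """One update: return the scene the system 'sees' plus a label
    reporting whether the activity is externally or internally driven."""
    if spoken_name in MEMORY:
        # Voice input triggers recall of a stored pattern.
        return MEMORY[spoken_name], "imagining"
    if fovea_input is None or random.random() < 0.1:
        # With no input (or occasionally, spontaneously), imagination
        # takes over and a remembered face appears.
        return MEMORY[random.choice(list(MEMORY))], "imagining"
    # Otherwise the networks are driven by the fovea's current input.
    return fovea_input, "perceiving"

print(step("live-face-pixels"))        # usually ('live-face-pixels', 'perceiving')
print(step(None, spoken_name="anna"))  # ('stored-face-A', 'imagining')
```

The second element of each result plays the role of the on-screen label described below: the system’s report of which drive is dominating its activity.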
VisAw’s private “thoughts” are made public in a program display, as shown in the screenshot below. The large square at the top left shows the visual data flowing into the system. Each small box represents a neural network. The large square on the right displays the machine’s overall “mental state,” produced from the combination of over 20,000 simulated artificial neurons. The bright areas in that box are the sections of the face that the machine is actively looking at with its artificial fovea, while the darker areas represent sections that are being imagined.
One important advantage of studying conscious machines rather than humans or animals is that whatever the machine is “thinking” can be displayed on a screen or expressed in words. Because the content of a machine’s “mind” can be made transparent and observable, attributing consciousness to it may be no more outlandish than attributing consciousness to a living creature.
So should we consider VisAw conscious? We can evaluate its activities in terms of the five axioms of consciousness I laid out previously:

1. Presence: VisAw depicts the face it examines as part of a world “out there,” placing each foveal glimpse within a whole recreated scene.
2. Imagination: in its imaginational mode, it recalls faces from memory, including views it is not currently receiving from its fovea.
3. Attention: it selects where to direct its artificial fovea next and switches between observation and imagination.
It may not be possible to verify that VisAw is conscious, but even if axioms 4 and 5 are not addressed in this particular experiment, it is possible to say that systems such as VisAw can be used to test hypotheses about the relationship between neural structures and consciousness. For example, VisAw is based on theories about the visual cortex developed by Crick and Koch, and its use can inform clinical neuroscientists about the plausibility of these theories.6
What Machines Can Teach Us About Human Consciousness
Consciousness and the relation of mind to body have been debated in philosophy since the days of ancient Greece. Even fifteen years ago, the idea of approaching the sensitive topic of consciousness using computational methods would have been seen as an act of philosophical ignorance or perhaps sheer charlatanism. Now, it is becoming commonplace in many research laboratories.
At a practical level, machine consciousness holds the promise of producing more autonomous computers and robots. Conscious, self-aware machines are likely to interact better with humans and may even develop a sense of responsibility toward humanity. For example, a robot may be able to say, “I am conscious of having caused embarrassment to Johnson, the astronaut with whom I am working.” In the years to come, machine consciousness will be yet another tool in the armory of computational systems that act in intelligent ways for the benefit of their human users.
In a broader sense, though, machine consciousness may be just as important for what it teaches us about ourselves. MC approaches are breaking down the idea that consciousness is a mysterious notion for philosophers to argue about, and instead suggesting that consciousness can be understood in scientific terms. This is good news at a time when humanity is tackling physical diseases such as cancer and heart disease, but is becoming ever more exposed to distortions of consciousness such as Alzheimer’s disease and other debilitating mental illnesses. While the mystery of consciousness may never be completely unraveled, we are certain to deepen our understanding of it as we work to develop conscious machines.
This article is part of a series on robots and their impact on society.
Igor Aleksander is an Emeritus Professor in the department of Electrical and Electronic Engineering at Imperial College in London. There he holds the active post of Senior Research Investigator. He has been working in artificial intelligence since the 1970s, creating the world’s first neural pattern recognition system in 1981 and studying machine consciousness during the last 15 years.