Machine consciousness: Fact or fiction?


by Igor Aleksander and Footnote
10 March 2014


My workday is over. What do I want to do now? I picture calling my wife to suggest dinner at that nice Italian restaurant and imagine the taste of gnocchi quattro formaggi. Then I remember promising to look after the grandchildren that evening. My colleague at the desk next to me has no idea of this rich world unfolding in my head. This is the world that I call my consciousness, my mental state, my mind, or my thoughts. It is comfortably private, and I cannot imagine having an intelligent life without it. My behavior and pronouncements are but the tip of the iceberg that is my consciousness. Is this consciousness something that can be replicated? In our quest to build smarter computers and robots, will we one day develop machines that have an internal mental life comparable to our own?

Defining the Mystery of Consciousness

Were it possible to have an agreed-on definition of consciousness, its implementation could be attempted by any self-respecting artificial intelligence (AI) scientist. But for centuries philosophers and, more recently, psychologists and neuroscientists have struggled to settle on a definition.1(a)

(a) A dictionary might offer a definition of consciousness like “the quality or state of being aware of an external object or something within oneself,” or, in contrast, “a sense of one’s personal or collective identity,” neither of which is particularly helpful for the development of conscious machines.

My own philosophy for understanding consciousness has been to start by trying to describe what I mean when I say that I am conscious, and then ask what would be necessary for a machine endowed with language to report similar internal sensations. When I say I am conscious, I refer to a collection of mental states and capabilities that include:

  1. a feeling of presence within an external world,
  2. the ability to remember previous experiences accurately, or even imagine events that have not happened,
  3. the ability to decide where to direct my focus,
  4. knowledge of the options open to me in the future, and
  5. the capacity to decide what actions to take.

I call these “axioms,” as they are commonly felt and largely agreed-on internal constituents of what we describe as our consciousness. They became the basis of a definition of consciousness that informs my efforts to design conscious machines.

Unfortunately, the AI developed over the past 60 years has largely ignored the mental world we call consciousness. Today’s machines don’t have minds of their own; their so-called intelligence is achieved through the blood, sweat, and tears of armies of brilliant human programmers. Humans write the indispensable rules that cause machines to recognize sounds, respond to visual patterns, take the next move at chess, and even suggest which shares to buy on the stock market.

While these machines are limited to the tasks they are designed to perform, a conscious being has something else: a complex system of internal states instantiated through its neural mechanisms. Since our brains allow for the five qualities of consciousness described above, they provide one major advantage over today’s computers and robots – autonomy. A machine with self-directed internal states influenced by its surroundings and needs can develop strategies in complex environments without waiting for a programmer to provide it with new rules.

The autonomy of a human being cannot be imagined without consciousness to continuously evaluate the surrounding world and make choices.(b) In learning to drive a car, for example, one is conscious of needing to know how to stop or how to go around corners, of one’s current surroundings, of the consequences of driving poorly, and of one’s past experiences in cars. A robot without consciousness needs an expert scientist to build its driving competence, but a conscious robot would simply call for a driving instructor.

(b) NASA scientists, for example, would like their space exploration vehicles equipped to deal with surprising events and to respond to their environments independently. Media coverage of the Curiosity Mars rover stressed the importance of this kind of autonomy. While Curiosity has AI rules that allow it to calculate a safe path without communicating with controllers back on Earth, more advanced governing principles, such as emotional preferences as to which path to follow, will be difficult to produce without consciousness.

Building Conscious Machines

There is clear value in moving beyond rules and programs to develop machines capable of autonomous, human-level thought. But how do we create such consciousness? The field of machine consciousness (MC) focuses on analysing, modelling, and exploiting the way that living entities (human and animal) are conscious, and then applying these discoveries to machines. Developers use what we know about existing forms of consciousness to provide machines with the ability to represent sensory, motor, and internal experience and produce appropriate reactions to both familiar and novel situations.

The idea of a conscious machine began to move from the realm of speculation and science fiction into a sensible research program in part as a result of a 2001 workshop organized by the Swartz Foundation called “Can a Machine Be Conscious?” The leading philosophers, neurologists, and computer scientists at this event produced the following statement: “We know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artifacts designed or evolved by humans.”2 The challenge was then clear: Who was going to build a machine with subjective feelings, and what form would it take?

Notable progress in developing conscious machines was achieved both before and after this seminal conference. One of the oldest and best-known MC systems is the Intelligent Distribution Agent (IDA) designed in the late 1990s by cognitive scientist Stan Franklin to replace human advisors assigning jobs to U.S. Navy sailors.3 Traditionally, a sailor would get in touch with a living agent to discuss preferences and the available range of jobs, eventually agreeing on a future posting. IDA completely replaced this human agent with an artificial intelligence that communicated with sailors over email.

The design of IDA was based on cognitive neuroscientist Bernard Baars’s notion of the Global Workspace, wherein consciousness arises through competition among several distinct computing (i.e., thought) processes. These processes include autobiographical memory, current situation monitoring, episodic memory, and long-term memory. The process that wins the competition enters the “global workspace” area, which broadcasts this “win” back to the competing processes to update them. This system produces what one might call a “stream of consciousness.”
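
To make the competition-and-broadcast cycle concrete, here is a toy Python sketch of a Global Workspace step. The process names, contents, and salience scores are hypothetical illustrations of the idea, not part of Franklin’s actual IDA software.

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    content: str
    salience: float  # how strongly this process bids for attention

    def receive_broadcast(self, content: str) -> None:
        # Each competing process updates its own state from the broadcast.
        print(f"{self.name} updates itself with: {content!r}")

def global_workspace_cycle(processes):
    # The most salient process wins the competition...
    winner = max(processes, key=lambda p: p.salience)
    # ...and its content is broadcast back to all processes, updating them.
    # Repeating this cycle yields a "stream of consciousness."
    for p in processes:
        p.receive_broadcast(winner.content)
    return winner

processes = [
    Process("episodic_memory", "last posting was in Norfolk", 0.4),
    Process("situation_monitor", "sailor prefers the West Coast", 0.9),
    Process("long_term_memory", "three suitable billets are open", 0.6),
]
print("conscious content:", global_workspace_cycle(processes).content)
```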

Users reported that the IDA system appeared to be conscious of the sailors’ needs and intentions, much the way a caring and interested human would be. In addition to this subjective assessment of consciousness based on external behaviors, Baars and Franklin argue that IDA is conscious because it is driven by simulations of the psychological and neurological elements considered essential for a human being to be conscious.4 These include weighing up alternatives (deliberation), planning, perception, agency, and the ability to report perceptions for external evaluation (verifiable reportability).

This raises the question of what makes someone believe that a machine is conscious. We have no tests to tell us that humans are conscious; we simply believe they are because they are human. A machine, on the other hand, is often assumed not to be conscious just because it is not a living human. Baars and Franklin argue that possession of the cognitive processes without which a human could not be conscious is reason enough to believe a machine is also conscious. Personally, I remain skeptical that demonstrating human-like cognitive processes is necessarily equivalent to experiencing consciousness as humans do.

Cronos

In a more recent advance in the MC field, roboticist Owen Holland designed a human-like robot called Cronos. It has an internal model of its own body as well as of the surrounding world, and it simulates the results of interactions between the two in order to select its future actions. This process starts with building a close replica of aspects of our own physical bodies and a computational system that has the same relationship to the control of this body as our brains do to our bodily control. The next step is to provide the computational system with the ability to model the world so that potential strategies and outcomes can be tested entirely internally. It is this internal activity that constitutes the consciousness of the system. Holland suggests that the system may be tested for consciousness against the five axioms mentioned earlier.
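
A rough sketch of this “test internally, then act” loop is shown below. The one-dimensional body state, toy dynamics, and candidate actions are invented for illustration; Cronos’s actual body and world models are far richer.

```python
def simulate(body_state: float, action: float) -> float:
    # Internal model: predict the next body state in the modeled world,
    # rather than acting in the real one. (Toy dynamics for illustration.)
    return body_state + action - 0.1 * body_state

def select_action(body_state: float, goal: float, candidates):
    # "Imagine" each candidate action and keep the one whose predicted
    # outcome lands closest to the goal; only that action gets executed.
    return min(candidates, key=lambda a: abs(simulate(body_state, a) - goal))

best = select_action(body_state=0.0, goal=1.0, candidates=[-0.5, 0.2, 0.9, 1.5])
print("chosen action:", best)  # 0.9: tested internally before execution
```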

Replicating the Mind’s Eye

My own research tests the hypothesis that a robust artificial intelligence could develop the ability to be visually conscious without depending on rules invented by a programmer. I have sought to replicate visual consciousness with VisAw, a system that, like the animal visual system, recreates a whole visual scene from the minimal view obtained by the fovea at the center of the retina. I modeled the extrastriate cortex, which combines the visual input provided by the fovea with the neural signals that drive the eye around a visual scene. I then exposed these neural networks to many images of faces, all of which are stored in memory.
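
The sketch below gives a minimal, hypothetical picture of this kind of gaze-indexed integration: each small foveal patch is combined with the eye-position signal that produced it, so that a whole scene accumulates internally. The array sizes, pixel canvas, and scan pattern are my illustrative assumptions, not the VisAw networks, which use learned neural representations rather than a stored image.

```python
import numpy as np

SCENE = np.random.rand(64, 64)   # stand-in for the external visual world
FOVEA = 8                        # side length of the high-resolution patch

canvas = np.zeros_like(SCENE)            # the system's internal reconstruction
seen = np.zeros_like(SCENE, dtype=bool)  # which regions have been fixated

def glimpse(gx: int, gy: int) -> None:
    # Combine the foveal patch with the eye-position signal that produced
    # it: the gaze signal says *where* in the scene the patch belongs.
    canvas[gy:gy + FOVEA, gx:gx + FOVEA] = SCENE[gy:gy + FOVEA, gx:gx + FOVEA]
    seen[gy:gy + FOVEA, gx:gx + FOVEA] = True

# Drive the "eye" across the scene; each saccade fills in more of the canvas.
for gx in range(0, 64, FOVEA):
    for gy in range(0, 64, FOVEA):
        glimpse(gx, gy)

print("fraction of scene reconstructed:", seen.mean())  # 1.0 after a full scan
```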

VisAw’s only built-in features are the abilities to look around a face, to receive a single-word name for the image, and to switch attention from an observational to an imaginational mode. In humans, the coexistence of these two modes allows us to use our visual perception (for example, looking around a room for a pair of eyeglasses) while we also visually imagine things (like what the glasses look like). A machine can be said to be “imagining” when, in the networks that produce a sensation of “seeing” objects in the world, objects or scenes appear that have been seen in the past or perhaps have never been seen at all. The machine “knows” when it is mainly imagining because the balance of neural activity between the external drive (perception) and the internal drive (imagination) can be detected and displayed as a label on the screen. In VisAw, this imagination behavior can be triggered by voice input or can arise spontaneously.
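
As a toy illustration of how such a label could be produced, the sketch below simply compares an externally driven signal with an internally driven one. The variable names and the threshold rule are my assumptions, not the VisAw code.

```python
def mode_label(external_drive: float, internal_drive: float) -> str:
    # The machine "knows" whether it is perceiving or imagining by
    # detecting which kind of neural activity dominates, and it reports
    # that judgment as an on-screen label (verifiable reportability).
    return "PERCEIVING" if external_drive >= internal_drive else "IMAGINING"

print(mode_label(external_drive=0.8, internal_drive=0.2))  # PERCEIVING
print(mode_label(external_drive=0.1, internal_drive=0.7))  # IMAGINING
```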

VisAw’s private “thoughts” are made public in a program display, as shown in the screenshot below. The large square at the top left shows the visual data flowing into the system. Every small box represents a neural network. The large square on the right displays the machine’s overall “mental state,” produced by the combined activity of over 20,000 simulated neurons. The bright areas in that box are the sections of the face that the machine is actively looking at with its artificial fovea, while the darker areas represent sections that are being imagined.

Screenshot of VisAw

One important advantage of studying conscious machines rather than humans or animals is that whatever the machine is “thinking” can be displayed on a screen or expressed in words. Because the content of a machine’s “mind” can be made transparent and observable, attributing consciousness to it may be no more outlandish than attributing consciousness to a living creature.

So should we consider VisAw conscious? We can evaluate its activities in terms of the five axioms of consciousness I laid out previously:

  1. The machine was not given any rules, but was able to create its own internal states that represented the faces I exposed it to, thus demonstrating the inner awareness of the external world demanded by axiom one.
  2. As described in axiom two, the machine’s inner states include an imagination mode created from neural activity representations of past experiences.
  3. A process of attention controls the movement of the machine’s “eye,” as demanded by axiom three.(c)

(c) The VisAw system was also able to “imagine” one of its learned faces, reconstructing an image of the whole face from just a partial glimpse. A model of exiting from “sleep” – simulated by “paralyzing” the attentional mechanism and blanking out the visual input, as is thought to happen during human sleep – was also tested, as was the ability to report on both perceptual and imaginative acts and to distinguish between the two.5 These specific capacities do not feature in the axioms but are facilitated by the processes outlined in axioms one through three.

It may not be possible to verify that VisAw is conscious, and axioms four and five are not addressed in this particular experiment, but systems such as VisAw can be used to test hypotheses about the relationship between neural structures and consciousness. For example, VisAw is based on theories about the visual cortex developed by Crick and Koch, and its use can inform clinical neuroscientists about the plausibility of these theories.6

What Machines Can Teach Us About Human Consciousness

Consciousness and the relation of mind to body have been debated in philosophy since the days of ancient Greece.(d) Even fifteen years ago, the idea of approaching the sensitive topic of consciousness using computational methods would have been seen as an act of philosophical ignorance or perhaps sheer charlatanism. Now it is becoming commonplace in many research laboratories.

(d) Current philosophical opinion on machine consciousness is divided. Philosophers like Raymond Tallis feel that the idea of a conscious machine is misleading and invalid, a perspective with which I disagree. Others, like Susan Stuart, embrace MC as a challenge to develop and clarify the philosophy of consciousness.

At a practical level, machine consciousness holds the promise of producing more autonomous computers and robots. Conscious, self-aware machines are likely to interact better with humans and may even develop a sense of responsibility toward humanity.(e) For example, a robot may be able to say, “I am conscious of having caused embarrassment to Johnson, the astronaut with whom I am working.” In the years to come, machine consciousness will be yet another tool in the armory of computational systems that act in intelligent ways for the benefit of their human users.

(e) In the short story “Runaround” in the collection I, Robot, science fiction author Isaac Asimov imagined a set of laws for the design of robots so that they could not hurt human beings. The rules state that robots must not injure humans or allow them to come to harm; they must obey human orders unless these orders conflict with the first law; and robots must protect their own lives unless this protection conflicts with another law.

In a broader sense, though, machine consciousness may be just as important for what it teaches us about ourselves. MC approaches are breaking down the idea that consciousness is a mysterious notion for philosophers to argue about, and instead suggesting that consciousness can be understood in scientific terms. This is good news at a time when humanity is tackling physical diseases such as cancer and heart disease, but is becoming ever more exposed to distortions of consciousness such as Alzheimer’s disease and other debilitating mental illnesses. While the mystery of consciousness may never completely be unraveled, we are certain to deepen our understanding of it as we work to develop conscious machines.

This article is part of a series on robots and their impact on society.

Igor Aleksander is an Emeritus Professor in the Department of Electrical and Electronic Engineering at Imperial College London, where he holds the active post of Senior Research Investigator. He has been working in artificial intelligence since the 1970s, creating the world’s first neural pattern recognition system in 1981 and studying machine consciousness for the last 15 years.

ENDNOTES

  1. The problem of definition is well set out in Robert Van Gulick (2014) “Consciousness,” The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta.
  2. Christof Koch (2001) “Final Report of the Workshop ‘Can a Machine Be Conscious?’” Cold Spring Harbor Laboratory: The Swartz Foundation.
  3. Bernard J. Baars and Stan Franklin (2007) “An architectural model of conscious and unconscious brain functions: Global Workspace Theory and IDA,” Neural Networks, 20: 955–961.
  4. Bernard J. Baars and Stan Franklin (2009) “Consciousness is Computational: The LIDA Model of Global Workspace Theory,” International Journal of Machine Consciousness, 1(1): 23–32. For another formulation of the basis of consciousness, see Giulio Tononi (2008) “Consciousness as Integrated Information: A Provisional Manifesto,” Biological Bulletin, 215(3): 216–242.
  5. Igor Aleksander and Helen Morton (2012) Aristotle’s Laptop: Discovering our Informational Mind, World Scientific Press.
  6. Francis Crick and Christof Koch (1995) “Are we aware of neural activity in primary visual cortex?” Nature, 375(6527): 121–123.

 





Igor Aleksander is Emeritus Professor of Electrical and Electronic Engineering at Imperial College London.

Footnote is an online media company that unlocks the power of academic knowledge by making it accessible to a broader audience.




