Taking measure of artificial intelligence and the Turing Test

31 August 2016



Image source: Neil Crosby

As far as party games go, the Imitation Game is a pretty clever and fairly entertaining one. A man and a woman answer questions as if they were each other, trying to fool the party guests into guessing the wrong gender when matching the answers to their authors. The idea of impersonation is too intriguing to pass up in a fun context.

This game, with its basic premise and mystery, is the basis of the infamous Turing Test. British mathematician Alan Turing used the game to describe and develop a test that measures a machine's ability to imitate a human. The question long asked by early computing pioneers was "Can machines think?" But Turing reframed it to ask instead, "Can machines show they can think like a human?"
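For readers who want the structure of the test spelled out, here is a minimal, purely illustrative sketch, not taken from Turing's paper or any real competition: a judge poses the same questions to two hidden players, one human and one machine, and then guesses which anonymized transcript came from the machine. The `human_respond` and `machine_respond` callables are hypothetical stand-ins.

```python
import random

def run_imitation_game(questions, human_respond, machine_respond):
    """Pose the same questions to two hidden players and return anonymized
    transcripts for a judge, plus the hidden mapping used to score a guess."""
    players = [("human", human_respond), ("machine", machine_respond)]
    random.shuffle(players)  # the judge must not know who is Player A or B

    transcripts = {"Player A": [], "Player B": []}
    identities = {}
    for anon, (identity, respond) in zip(transcripts, players):
        identities[anon] = identity
        for q in questions:
            transcripts[anon].append((q, respond(q)))

    # The judge reads only `transcripts`; `identities` reveals the truth
    # afterwards, so repeated trials can measure how often the judge is fooled.
    return transcripts, identities
```

In this framing, the machine "passes" only if, over many such trials, the judge's guesses are no better than chance.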

His test revolutionized how machine intelligence was viewed in relation to human thinking and functionality. The philosophy behind his question and test has guided the progress made in advancing modern artificial intelligence (AI).

While the Turing Test has historical significance and contributed a great deal to the study and growth of the technology, it shouldn't be the sole standard or measure of artificial intelligence. As the goals of artificial intelligence have evolved, the Turing Test is becoming, with a few caveats, a relic of the past and an irrelevant measure of AI's effectiveness.

Complete human behavior

The Turing Test was developed to measure whether a machine could act like a human, meaning it would imitate both intelligent and unintelligent behavior. Turing's test was not a way to perfect a machine at solving a complicated problem, but to emulate the human brain, imperfections and all. In the context of the test, a machine might imitate human typing errors to try to prove it was human and not machine.
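As a purely hypothetical illustration of that tactic, the small sketch below degrades a perfectly computed answer with keyboard-neighbor typos so it reads less machine-like; the typo rate and the key map are invented for the example, not taken from any real Turing Test entrant.

```python
import random

# Invented, tiny map of "nearby" keys used to fake plausible typing slips.
NEARBY_KEYS = {"a": "qs", "e": "wr", "i": "uo", "o": "ip", "t": "ry", "n": "bm"}

def humanize(text, typo_rate=0.03):
    """Randomly replace a few letters with adjacent keys, imitating human typos."""
    out = []
    for ch in text:
        if ch.lower() in NEARBY_KEYS and random.random() < typo_rate:
            out.append(random.choice(NEARBY_KEYS[ch.lower()]))
        else:
            out.append(ch)
    return "".join(out)

# A machine aiming for the "most human" answer might also pause before replying;
# this only shows the typo part.
print(humanize("The square root of 152399025 is 12345."))
```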

It’s important to remember that the Imitation Game, both the original party favorite and the technical version, is a game of deception, not accuracy. The machine is trying to be identified as the human, and the woman is trying to be identified as the man. With honest answers and the element of imitation removed, it becomes a question-and-answer game with very little point.

To be the ultimate player of Turing's game, a machine is not programmed to produce the best answers, but the most human ones. If the question is simply Turing's reframed one, "Can machines behave like humans?", the answer is most certainly yes.

The Loebner Prize competition, founded in 1991 as a glorified showcase of the Turing Test, proved that. Derided as "artificial stupidity," the Loebner competition is reviled by AI experts because, instead of focusing on improving AI at what it is actually meant to do, entrants compete to prove a bot can replicate human fallibility.

Updated standards

Now, as scientists and engineers try to craft artificial intelligence for practical purposes and daily use, the importance of the Turing Test is shrinking, but not disappearing. The goal of AI today is, in fact, to solve complicated problems and equations that humans would normally struggle with. Machine learning, and advances in how the technology understands and fits into the world, are becoming the standard. Machines are here to help, not simply to produce more human-like behavior.

On the other hand, including human-like elements in robotics and AI keeps the technology grounded in human reality. As Professor Sethu Vijayakumar FRSE, who holds a Chair in Robotics within the School of Informatics at the University of Edinburgh, told Sputnik International, human forms and human behavior in robots, like the humanoid Vijayakumar is working on for a mission to Mars, help the people using them understand their functions in a relatable context.

His point is that we should use artificial intelligence as a way to enhance the human experience. Using the technology to accomplish tasks too risky for human life, and to refine discoveries, is an ideal application of AI and robots.

Developing AI should still involve the Turing Test, to ensure there is a human element included in the design. It shouldn't be the be-all and end-all of AI's effectiveness or ability to succeed in the world, but it should be a small part of determining how well it will fit into society's reality.









