Thinking machines are here, or not. The debate continues while developments in artificial intelligence create increasingly sophisticated machines that grow more humanlike all the time.
When does a machine become intelligent? That provokes a more general question: What is intelligence? The two questions are equally complex and debatable.
There is no definite yes-or-no answer to the question, “Is this machine intelligent?” Many different kinds of intelligence occur in people, animals and machines, and intelligence itself is not well defined in any of them.
Alan Turing, one of the earliest computing geniuses, stated the problem succinctly: Can a machine think? He proposed that an intelligent machine would have to fool a human by posing as one. In what became known as the Turing test, a machine passes if human judges, posing the same questions to a machine and to a person, cannot tell which of the two is answering.
There are machines that might appear intelligent by outperforming humans in a game such as chess. Machines perform mathematical operations and calculations thousands of times faster than humans. However, is this intelligence? Can that computer also sing “The Star-Spangled Banner,” present a poster on the origin of the song, sketch a picture or write a poem in response to it?
Amazing advances in artificial intelligence, or AI, have made us increasingly wary of the so-called Evil Computer. Movies like “Terminator” portray a world where machines become autocrats by exterminating people. Other stories show the brighter side of intelligent machines. In one of my favorite sci-fi novels, silicon life in a distant solar system had evolved to an advanced state, with intelligence not unlike ours but in machine form. A movie from the ’80s stars a computer humanized by a random jolt of electricity.
Could computers take over? Could they decide that humans were superfluous and eliminate us?
Computers require programs and can do no more than what they are programmed to do. Computers can learn, but they can only learn what they are programmed to learn.
Computers require external power. Humans do, too, but we generate our power metabolically at the cellular level, and we feed ourselves to keep the cellular fires burning. Computers require electrical power. To compete with humans, computers would have to repair themselves, change or charge their own batteries, and ultimately produce those batteries from raw materials.
Can a computer be human? In “Star Trek: The Next Generation,” an intelligent, humanlike android longs to be human but lacks the emotional ability to do so. In virtually all literary representations of androids, the emotional component is lacking.
Computers cannot deal with paradoxes. Statements similar to “I am lying” can put a computer into a continuous and unending loop. In the “halting problem,” Turing showed that no algorithm can decide, for every possible program, whether that program will eventually stop or run forever; any candidate algorithm can be made to contradict itself.
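A rough sketch of Turing’s argument, written here in Python purely for illustration (the names halts() and contrary() are hypothetical, invented for this example), shows why a universal halting checker cannot exist:

    # Hypothetical decider: claims to report whether `program`, run on
    # `data`, would ever finish. Turing's result is that no such
    # general decider can actually be written.
    def halts(program, data):
        raise NotImplementedError("no general halting decider exists")

    # A program built to do the opposite of whatever halts() predicts
    # about a program examining its own source.
    def contrary(program):
        if halts(program, program):
            while True:      # predicted to halt, so loop forever
                pass
        else:
            return           # predicted to loop, so stop at once

    # Asking halts(contrary, contrary) is self-defeating: whichever
    # answer it gives, contrary() does the opposite.

Whatever answer the checker gives about contrary(), the program does the reverse, so no checker can be right about every program.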
Poetry, art, philosophy, history and similar endeavors require a human dimension, both to create and to appreciate. This entails more than the mere recursion or logic used to follow an algorithm.
As with the halting problem, there is no way to say for sure that AI will never take over from humans, but we can say with confidence that it is extremely unlikely.
Richard Brill is a professor of science at Honolulu Community College. His column runs on the first and third Fridays of the month. Email questions and comments to brill@hawaii.edu.