Nearly 65 years ago, British mathematician Alan Turing proposed that a defining test for machine intelligence would be a computer's ability to convince a group of humans that it, too, was human. Earlier this month, a computer program achieved that, convincing one-third of a panel of human judges in London that it was a 13-year-old boy from Ukraine named Eugene Goostman.
Now that a computer has passed the Turing Test, get ready for computers that experience emotions, computers that learn from their mistakes and computers that possess all five human senses. If a computer were capable of all of these feats, let's face it, we would be forced to concede that machines had developed a form of consciousness.
In short, we are probably at the beginning of a new era for artificial intelligence as researchers come up with new ways to imbue computers with traits that used to belong only to humans. "Thinking" was just the beginning. But it's a very important beginning: just ask Descartes. It's easy to see how "thinking" computers inspired by "Eugene Goostman" could revolutionize fields such as health care and education, or indeed any field that relies on expert knowledge.
In health care, you could conceivably have an online doctor that knows the answers to all of your medical questions and could deliver those answers in whatever format you would find most convincing. Instead of a 13-year-old named Eugene, maybe you'd prefer a teenage medical genius named Doogie Howser? As long as your diagnosis was being delivered online or via a smartphone app, you'd never know it was just a bot rather than a real doctor.
In education, you could create a super-teacher for a MOOC (massive open online course) that could be personalized enough to recognize when you were falling behind or unable to grasp certain concepts and then respond online in such a way that you'd believe you were getting mentored by a human teacher.
For now, we probably don't need to worry about a robot-overlord scenario in which thinking machines enslave humanity. There are, however, a number of worrisome possibilities, such as cybercriminals using a "thinking" computer to defraud you. What if a bot called you up, convinced you it was a customer service representative from your local bank and proceeded to get you to hand over valuable personal information such as your Social Security number? Exactly such a scenario was hinted at by one of the judges of the "Goostman" test: "Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime."
What may not be so obvious, though, is that our experiments to make computers "think" are changing us in subtle ways. As Brian Christian pointed out in his book about the Turing Test, "The Most Human Human," it's not just that computers are becoming more like humans as they use conversational quirks to mimic us. It's also that humans are becoming more like computers.
Our thinking patterns are changing in subtle ways, Christian says, so that we can mimic the thought processes of computers. One obvious example is in chess: Instead of relying solely on flashes of grandmaster brilliance, we now recognize that the optimal way to win at chess is to memorize as many historical games as possible and then choose the moves with the highest computed value.
Where all this is headed is anyone's guess. The convergence of humans and computers may inevitably lead to a situation where we begin to form relationships with our computers, mistaking them for humans. (This goes way beyond just tweeting to a bunch of Twitter bots every day or accepting friend requests from fictitious Facebook bots.) In the future, the line between machine and human will continue to fade. Doctors, educators, lawyers — they could all just be bots one day.
Even the skeptics who quibble about the details of the "Eugene Goostman" competition have to admit that Turing was brilliant for coming up with a logical, easy way to determine whether computers can "think." As computers continue to be imbued with the characteristics we typically regard as human, the trend will have profound philosophical and moral implications. If "thinking" is just an algorithm, what about "love" or "memories" or "consciousness"? Indeed, the more we discover about computers, the more we may discover about what makes us truly human.