Here’s a piece – iRobot – that I wrote for Bang! science magazine in the Irish Times. It’s on one of my favourite subjects: robots.
Tell me a joke, Siri. “Two iPhones walk into a bar . . . I forget the rest.”
It turns out that Siri, the new virtual digital assistant on the iPhone 4S, has a (bad) sense of humour. Does the ability to tell jokes make Siri intelligent in any sense of the word? Or is it just a clever piece of software with a vast bank of canned responses?
Can machines think?
Alan Turing, the father of modern computer science, addressed this question as far back as 1950 when he asked, “Can machines think?” To devise the first measure of machine intelligence, Turing tweaked a Victorian parlour game in which the player had to guess, from responses alone, whether the concealed person they were talking to was a man or a woman.
If you converse with someone behind a computer screen and you can’t tell if this is a person or a machine, then the machine has truly demonstrated “artificial intelligence”, he concluded. This became known as the Turing Test and is still used to gauge how naturally an artificial agent can converse.
Each year the Loebner Prize competition gathers a panel of expert judges to see if they can be tricked by AI (artificial intelligence) software known as chatbots, which are programmed to mimic human conversation. To date, despite using quite sophisticated programming, these chatbots tend to give away their silicon-based nature. Their responses range from slightly off to utter gibberish, and most humans can spot that their sentence structure isn’t quite right.
Try that for size!
Here’s an example of one of the many bizarre exchanges that a judge had with a chatbot known as Do-Much-More at the 2009 Loebner Prize contest. Judge: “What do you make of the Arctic Monkeys?” Do-Much-More: “Well, here’s a clue: I make what a keeper in a zoo would make. Try that for size!”
Despite its seemingly playful nature, Do-Much-More hasn’t spotted that in this context the judge was talking about a band, and the brave chatbot doesn’t even consider admitting that it doesn’t understand. It simply ploughs ahead.
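The canned-response approach that betrays such chatbots is easy to sketch. Here is a toy, ELIZA-style example in Python (purely illustrative, not the code behind any Loebner entrant; the patterns and replies are invented for this article):

```python
import re

# Toy ELIZA-style chatbot: canned responses keyed on patterns.
# When nothing matches, it bluffs rather than admit confusion --
# the same behaviour that gives Loebner entrants away.
RULES = [
    (re.compile(r"\bjoke\b", re.I),
     "Two iPhones walk into a bar... I forget the rest."),
    (re.compile(r"what do you make of (.+)", re.I),
     "Well, here's a clue: I make what a keeper in a zoo would make!"),
]
FALLBACK = "How very interesting. Do go on."

def reply(utterance: str) -> str:
    for pattern, response in RULES:
        if pattern.search(utterance):
            return response
    return FALLBACK  # bluffs; never admits it doesn't understand

print(reply("Tell me a joke"))
print(reply("What do you make of the Arctic Monkeys?"))
```

A human spots the trick within a few exchanges: the fallback line fits any sentence, which is precisely why it fits none of them convincingly.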
A test like this rewards guesswork and trickery rather than gauging true intelligence, according to Jason Hutchens, an academic who has entered the Loebner Prize twice. There are, however, many other measures of machine intelligence, and the iPhone’s Siri doesn’t just try to hold conversations. The AI behind Siri is quite complex and its roots lie in US military research.
Adam Cheyer created Siri after working as chief architect of CALO (Cognitive Assistant that Learns and Organises), one of the largest artificial intelligence projects in US history. Siri listens to voice commands and tries to make sense of them. The first step is voice-recognition software, but once Siri “hears” what you’re saying it must then work out what you mean. This is where the AI comes in.
Siri likes to learn
The information passed along to Siri is put in the context of a process or request that it must evaluate and carry out, which is not very different from how most computer programs work. On the surface, intelligent agents like Siri appear to be making decisions; in reality, it is the complexity of the programming that creates that appearance.
Perhaps the most important piece of AI that Siri has is adaptability.
Explaining how it works on Quora.com Cheyer said: “Siri learns over time (new words, new partner services, new domains, new user preferences, etc).”
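That kind of learning can be caricatured in a few lines. The sketch below is a loose illustration only: the intent names and keywords are invented, and Siri’s real pipeline is vastly more sophisticated. It shows an assistant matching an utterance to an “intent” by keyword overlap, then “learning” a new word from the user:

```python
# Toy sketch of a Siri-like assistant: map a recognised utterance to
# an "intent" by keyword overlap, and "learn" new vocabulary over time.
# Intent names and keywords are invented; Siri's real AI is far richer.
INTENTS = {
    "set_alarm": {"alarm", "wake", "remind"},
    "send_text": {"text", "message", "tell"},
    "get_weather": {"weather", "rain", "forecast"},
}

def classify(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Choose the intent sharing the most keywords with the utterance.
    best = max(INTENTS, key=lambda i: len(INTENTS[i] & words))
    return best if INTENTS[best] & words else "unknown"

def learn(intent: str, word: str) -> None:
    # "Learning" here is simply growing a keyword set from corrections.
    INTENTS[intent].add(word.lower())

print(classify("wake me at seven"))   # matches set_alarm
print(classify("ping my brother"))    # unknown word at first
learn("send_text", "ping")
print(classify("ping my brother"))    # now recognised
```

The point is only the shape of the idea: the vocabulary grows with use, so the same sentence can fail today and succeed tomorrow.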
Robots that walk the walk
Artificial intelligence, however, isn’t all about the software. Some intelligent agents talk the talk while others walk the walk. The most interesting and cutting edge robots of this kind aren’t just programmed to walk; they’re programmed to learn to walk.
Josh Bongard from the University of Vermont in the US has designed robots that start out a bit like human babies; they begin by crawling, slithering or dragging their bodies along the floor. Over time, they learn to balance better, graduate to walking confidently on two legs and can travel much faster.
Interestingly, these robots experience a form of “super evolution”: in the beginning they use anguilliform locomotion (they wriggle like eels) but then progress to many legs and finally two.
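The core idea of evolving a gait rather than programming one can be sketched as a toy hill-climbing loop. Everything below is invented for illustration: a “gait” reduced to two numbers and a crude made-up fitness function. Bongard’s robots evolve entire body plans and controllers, but the keep-what-works principle is the same:

```python
import random

random.seed(42)

# Toy "learning to walk" loop: a gait is just (stride, frequency),
# fitness is distance covered in a made-up model, and random
# mutations survive only when they improve it.
def fitness(stride: float, freq: float) -> float:
    if not (0.0 <= stride <= 1.0 and 0.0 <= freq <= 3.0):
        return 0.0  # over-ambitious gait: the robot falls over
    return stride * freq  # distance covered

gait = [0.1, 0.1]  # starts out barely able to crawl
best = fitness(*gait)
for _ in range(2000):
    mutant = [g + random.gauss(0, 0.05) for g in gait]
    new = fitness(*mutant)
    if new > best:  # survival of the fitter gait
        gait, best = mutant, new

print(f"evolved stride={gait[0]:.2f}, freq={gait[1]:.2f}, "
      f"distance={best:.2f}")
```

After a couple of thousand mutations the clumsy starting gait has crept towards the fastest one the toy model allows, with nobody ever telling the “robot” how to walk.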
One of the more famous real-life robots is decidedly more appealing due to its humanoid form. Asimo is a robot developed by Japanese company Honda Robotics and was first created in 2000. Its name is an acronym for Advanced Step in Innovative Mobility, which is appropriate as it can both walk and run.
The most recent version of Asimo was unveiled earlier this month and is probably one of the most advanced robots in the world. Standing at a mere 4ft tall, this Hobbit-like robot has advanced AI that allows it to navigate around people by predicting where they will move next. This is something that you and I do without thinking every day when we walk down a crowded street but is an amazingly complex task for a robot.
Thanks to tactile and force sensors embedded in each palm and finger, Asimo can now open bottles and pour drinks. Intriguingly, it can also run backwards and hop on one or two legs. When this diminutive robot eventually becomes available to buy, it will be like having Rosie the maid from The Jetsons, but with legs instead of a set of wheels.
Honda Robotics has suggested that we are now one step closer to having an office robot as Asimo can perform simple tasks while being able to navigate around a stream of people walking about.
Emotional and mechanical
But what about that which makes us human? Something that no robot, it seems, may ever be able to replicate is human emotion. Emotion is hard-wired into the human experience and is evolutionarily advantageous to species survival.
For humans to really bond with machines, they must connect on an emotional level, says Dr Cynthia Breazeal, a roboticist at the Massachusetts Institute of Technology.
In 1999 Breazeal created Kismet, the first emotionally intelligent robot. Kismet doesn’t have a body, but its head is kitted out with sensors, cameras and motors. It not only interprets what you are saying but also reacts in quite a human fashion. The robotic head swivels towards the human participant and, through the movement of its lips, eyebrows and ears, and even how it hunches or hangs its head, conveys surprise, happiness, anger or disgust.
Kismet’s AI is busy interpreting the tone of your voice, your eye movement and body language to figure out the emotional context of your conversation. It then attempts to respond in kind.
These fields of robotics will have therapeutic benefits, according to Breazeal: children with autism can experience pressure-free social interaction with Kismet-like creatures.
If robots are too human-like, however, we can enter what is known as the Uncanny Valley, a phrase coined by Japanese roboticist Dr Masahiro Mori. This is a situation where robots look almost but not quite human.
Psychologically speaking, this kind of robot tends to scare or disgust us more than something that looks like Optimus Prime or WALL-E.
“There are good reasons why robots shouldn’t look too human,” says futurologist Prof Michael Hulme. “Recent research on avatar images showed that we prefer to look at faces that look clearly like an avatar rather than a pale imitation of a human being.”
An example of a creepy-looking humanoid robot is the Actroid, developed by Osaka University. Robots like this may mimic blinking, nodding and even breathing but it is likely that we will always know that there is something not quite human about them.
The future of robots…
Robots and other artificially intelligent machines will come in various forms in 10 years’ time, says Prof Michael Hulme: “I’m very interested in the notion of specific robots for specific purposes; the idea of a robot as a companion, or one that helps with housework. Take guide dogs for example, they’re very important to the individual and perform a single task extremely well; this is how I see robots fitting into society in the future.”
There will also be emotional robots in the future, he says, but they will be context-based. Science fiction scenarios of robots programmed with emotions often end in disaster, the most famous being HAL 9000 from 2001: A Space Odyssey. Perhaps HAL should not have been given emotions or the ability to acquire feelings; Hulme says that emotional behaviour will inevitably be assigned to robots that need them as part of their function.
One of the most important issues in 10 years’ time will be the world’s ageing population, and this is where caring, emotionally aware robots come in. We’re all living longer, and part of elderly healthcare will inevitably involve robot aides.
“Given the demographics this is one of the areas where robotics will become very significant,” Hulme says.
There are already prototype units that can carry people up stairways, issue reminders to take medication and take blood pressure. There are also robots like Paro, a robotic baby seal that promotes social interaction among the elderly and is being tested by the National Institute of Advanced Industrial Science and Technology in Japan.
Hulme also thinks that AI will come into its own in the era of TMI (too much information). New research predicts that the total amount of information created in 2011 will reach 1.8 zettabytes (or 1.8 trillion gigabytes).
If this data were stored on 32GB Apple iPads, it would fill 57.5 billion of them: enough to build a Great Wall of China out of iPads at twice the height of the original. In 10 years’ time we may not be able to cope with this data, but we could have intelligent agents doing so on our behalf.
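A quick back-of-envelope check of that figure, assuming decimal storage units (1 zettabyte = 10²¹ bytes, 1 gigabyte = 10⁹ bytes):

```python
# Back-of-envelope check on the "Great Wall of iPads" figure,
# assuming decimal units (1 ZB = 10**21 bytes, 1 GB = 10**9 bytes).
total_bytes = 1.8 * 10**21      # 1.8 zettabytes
ipad_bytes = 32 * 10**9         # one 32GB iPad

ipads = total_bytes / ipad_bytes
print(f"about {ipads / 1e9:.0f} billion iPads")  # roughly 56 billion
```

The small gap between this rough figure and the 57.5 billion cited presumably comes down to how the original study rounded its units.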
This AI would be “representative of the individual” and “almost performing as if it was part of the human being”, says Hulme, asking me to imagine a virtual facsimile of myself that would find the information I want on my behalf.
This kind of complex AI is more likely 50 years down the line than 10, but it has its roots in “recommendation” systems like the one Amazon uses.
Will we have truly intelligent machines in 10 years’ time? Probably not, although John McCarthy, the computer scientist who coined the phrase “artificial intelligence”, estimated that it could be anywhere between five and 500 years before real AI emerged. So while we may well have our robot butler, just don’t expect it to be any good at telling jokes.