They're not super-human but they're set to change our way of life anyway...
When it comes to robots, the threat of being wiped from the face of the earth by a super-intelligent android is the last thing humans should be worrying about. Robots pose a greater danger to our jobs, thanks to a proliferation of robotic automation, and to our culture, through a growth in mediocre machines.
It's no secret that the poor state of the global economy is driving companies to cut costs and seek efficiency savings.
However, the trimming of such corporate fat, coupled with the rise of robotics technologies that cut costs through automation, is a recipe for job destruction, according to Illah Nourbakhsh, professor of robotics at the Robotics Institute at Carnegie Mellon University.
"Companies are using robotics technologies to increase their productivity per dollar and productivity per worker.
"World populations are going up - it doesn't take a maths genius to see that as companies discover more productivity [through robotics] one day when the recession ends they're not going to hire more people because that will reduce their profit margins," he told a panel debate organised by the Imperial College Robotics Society this week.
"We're heading into a world in which the level of productivity of the companies will increase through robotics technologies and there will be no reverse course, which means we're going to have more people in fewer jobs for the foreseeable future," he added. "That's a huge problem - and nobody is trying to address that at the governmental and institutional level."
Earlier this year, the FT reported plans by electronics manufacturer Foxconn to dramatically increase the number of robots in its factories in China, from 10,000 now to one million by 2013 - the same number as its current human workforce. It's unclear what will happen to Foxconn's human workers as the robots proliferate.
"We're thinking about the future - 100 years away - and we've stopped caring about the near future," said Nourbakhsh. "As a result we're sleepwalking into a world in which robots are going to have a direct impact on society and on culture and we're not tracking it."
"These mediocre robots that we're going to have next year and the year after - they're not terribly good, they're much, much sub-human as compared to you and me and our friends, but they're going to be out there, they're going to interact with us and they're going to take our jobs," he added.
Alan Winfield, professor at the Bristol Robotics Lab, who was also speaking at the event, said the level of societal misunderstanding about the capabilities of the current generation of robotics is so great it's akin to "a mass delusion".
Too many people expect too much of robots, said Winfield - an expectation gap he blamed on media hype and sci-fi films. "We're drowning in robo-hype. There are only ever two headlines about robots - one is robots are taking over the world and the other is that robots are crap, so there's no place for people like us who make robots that are quite interesting."
This lack of understanding has dangerous implications wherever robots or robotics are deployed, according to both Winfield and Carnegie Mellon University's Nourbakhsh - who cite the use of unmanned aerial vehicles (UAVs), or drones, by the US military in places such as Afghanistan as an area where not very clever robots can have lethal consequences.
"[Today] I reckon it's hard to find a robot on the planet who you can seriously argue is smarter than an ant," said Winfield. "Now would you be happy about something with the intelligence of an ant pulling the trigger? I certainly wouldn't."
"The major issue is we don't understand, as we enable and let loose into the wild more [robots]... how can we control the impact that robots will have?" added Carnegie Mellon University's Nourbakhsh.
Another risk attached to robotics is that accountability for any mistakes these "mediocre" robots make is not clearly defined.
"The issue with accountability is that fundamentally robotics technologies broaden the hand of impact that any one individual [human] has," Nourbakhsh said, citing the long chain of human engineers, designers, technicians and operators involved in producing and operating a single robot.
"We've diluted the sense of accountability to the point where it's no longer even clear from a legalistic, ethical or normative point of view who's responsible for a mistake when a robot makes a mistake."
"The predator drone is not very smart but it changes the rules of war and it changes the rules of engagement. It changes the international laws governing assassination and it's not a very smart robot. Our problem isn't superhuman intelligence, our problem is mediocre robots used by people from corporate business."
Nourbakhsh also argued there are serious privacy risks attached to robotics technologies being used to track and respond to human behaviour.
"Robotics technologies are transforming the way marketing happens in companies," he said, "because they can observe us, they can see the gaze - the direction of our eyeballs, the smirks we make with our lips when we see a price tag, what we say to our friend when we're looking at a shop window. All the shopping and lurking that we do when we're on the internet, on our mobile device and with our feet in town, all of that allows them to data-mine the precise ways in which they can maximise the flow of money from our pocket to their pocket - because they can market it to us just right.
"And that optimal marketing has no end. They get better and better at that thanks to robotic technologies that allow them to do behaviour analysis on us - people - very well. So as we become greater and greater victims of productivity increases due to robotics technologies [taking our jobs], we also become greater and greater consumers with less money in our pockets thanks to the very same companies.
"So win-win for the companies, loss for society."
Nourbakhsh singled out the Siri natural language-processing voice assistant app in Apple's latest iPhone as an example of a robotics-enabled collision between utility and privacy.
He quoted Siri's online feature description on Apple's website: 'Using location services, Siri looks up where you live, where you work and where you are... From the details in your contacts it knows your friends, your family, your boss and your co-workers'.
"The problem we have," according to Nourbakhsh, "is there is no design ethos vis-a-vis privacy".
"The marketing department doesn't even get that there's an issue with privacy that might come up here as you create a proactive agent who knows everything about you, has access to all your apps and knows who your boss is," he added.
The debate also touched on the question of how long it might take humans to create a super intelligent AI or robot.
"Human-level AI will not happen within [a near-term] timeframe - it's far too far in the future to even make a prediction," said the Bristol Robotics Lab's Winfield.
"Robotics is not covered by Moore's Law," added Nourbakhsh, pointing to the rate of progress in areas such as battery, energy and motor technology as linear at best.
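The gap Nourbakhsh describes can be made concrete with a little arithmetic. The sketch below (illustrative only - the figures are hypothetical, not from the debate) contrasts a Moore's-Law-style capability that doubles roughly every two years with a capability, such as battery energy density, that improves by a modest fixed amount each year:

```python
# Hypothetical illustration of exponential vs linear technology progress.
# Starting values and rates are assumptions for the sake of the comparison.

def exponential_growth(start: float, years: float, doubling_period: float = 2.0) -> float:
    """Capability that doubles every `doubling_period` years (Moore's Law style)."""
    return start * 2 ** (years / doubling_period)

def linear_growth(start: float, years: float, annual_gain: float = 0.05) -> float:
    """Capability that gains a fixed 5% of its starting value per year."""
    return start * (1 + annual_gain * years)

for years in (0, 10, 20):
    chips = exponential_growth(1.0, years)      # e.g. transistor counts
    batteries = linear_growth(1.0, years)       # e.g. energy density
    print(f"after {years:2d} years: chips x{chips:7.1f}, batteries x{batteries:4.2f}")
```

Under these assumed rates, twenty years multiplies the exponential technology roughly a thousandfold while the linear one merely doubles - which is why, as Nourbakhsh argues, progress in robots (which depend on motors and batteries as much as on processors) cannot simply ride Moore's Law.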
"We get left-turns in technology development," he added. "Trying to draw lines - to say that's where we're headed - turns out more or less always to be wrong because primarily what companies end up spending money on is what sells.
"We're going to take these turns and I can't possibly predict [where that takes technology] - that's where the money is going to push this forward."