I was clearly wrong. We don't merely need robots to turn on our lights and heat our houses. We need them to tell us what's right and what's wrong.
I've learned this from a pulsating idea being expressed by some academics.
Presented to a recent conference of the Association for Computing Machinery by Marija Slavkovik, an associate professor at the University of Bergen in Norway, the notion is that the likes of Amazon Echo and Google Home should be equipped with moral and ethical dimensions.
What next? Congresspeople equipped with something similar?
As the New Scientist has it, the idea is for (allegedly) moral and ethical AI to judge the behavior smart speakers see and hear and decide whether your speaker should, well, inform one authority or another.
Each machine might enjoy several separate AIs, each with their own moral and ethical angle.
Imagine, then, that your Amazon Echo is equipped with the moral compasses of, say, dad, mom, Herbert the 16-year-old Juul-toking jackal of a son and Matilda, the 17-year-old aspiring astronaut daughter. Regretfully, Erasmus the Rottweiler doesn't get a moral compass, despite PETA's artificial protests.
When your Echo hears, sees or, in some glorious future world, smells something it deems untoward, its various AI constituencies get together to decide whether anything should be done.
Hey Alexa, you hear snorting noises coming from a bedroom. Do you:
a). Immediately dial 911 and reveal Herbert might be under the influence of a nasally ingested drug?
b). Tell mom and dad that you believe Herbert might be under the influence of a nasally ingested drug?
c). Inform Erasmus in your most soothing voice that he should stop trying to get Herbert into trouble?
I'd always thought human morals and ethics were rather shifty beasts. For example, one minute it's definitely not alright for world leaders to make money out of their world leadership. The next minute, it's just fine.
Slavkovik, though, seems to complicate her own simplicity. She told the Mail: "If we want to avoid Orwellian outcomes it's important that all stakeholders are identified and have a say, including when machines shouldn't be able to listen in. Right now only the manufacturer decides."
If we want to avoid Orwellian outcomes, some might think, we shouldn't have created the Internet of Things.
An elemental problem with all these supposedly world-bettering inventions is that they bring with them the creation of a whole new system that needs to be understood.
Sadly, the system instantly demands that we participate fully in it, long before we ever understand it.
Remember that Facebook thing at the turn of the century?
I'm sure some dream of the day when they reach into the fridge to get some chocolate -- no fridge should exist without a chocolate drawer -- only to hear Alexa purr: "What do you think you're doing? Pick it up and I'm calling 911."
But the mere thought that we'll trust machines with moral judgment -- when those very machines will likely have been created by highly amoral humans -- borders on both torrid twaddle and sheer comedy.
Not too long ago, filmmakers John Carlucci and Brandon LaGanke wondered what would happen when the likes of Amazon Echo and Google Home become more knowing than humans. Their portrayal was both blisteringly disturbing and painfully imaginable, as the AI began telling a family what to think and do.
Personally, I'm not sure I can wait for the headline: "Alexa reports burglar to police. Burglar turns out to be drunk homeowner."
Subhead: "Alexa receives suspended sentence."