No stars for Internet of Things security

At this week's AusCERT 2016 conference, an embedded device security specialist proposed a 'Security Star' rating for consumer IoT devices. It's a great idea, but it'll never happen.

There are three types of security threats, according to Andrew Jamieson, who revels in the job title Security Oompa Loompa at safety science company UL. He called the taxonomy "Security DIY": not do-it-yourself, but Deliberate, Ignorant, and Yet to be discovered.

Deliberate threats, such as back doors and remote data transmission, you fix with code reviews.

Ignorant threats, such as poor security configurations or bad design choices, you find through penetration testing.

And the Yet to be discovered threats? These are the unknown or unreported vulnerabilities in the software that's being used. They could be bugs, or entirely new types of attacks. You just have to pray.

Security evaluations take time, cost money, and always fail to find every possible problem, Jamieson told the AusCERT 2016 information security conference on Australia's Gold Coast on Wednesday.

These evaluations are expensive enough for big-ticket items. With the Internet of Things (IoT), devices are getting cheaper, vendors are adding more functions, and the ever-accelerating design-prototype-production cycles leave little time for proper testing anyway.

Consumers may say they care about security, but they vote with their wallets, Jamieson said. And why wouldn't they? If there are two devices with the same functionality, and nothing else to differentiate them, why would you buy the more expensive one?

Vendors don't have any incentive to spend time and money building security into products, and even less incentive to patch them once they ship. No, they want to sell you the new model next year.

Jamieson's key observation was that IoT security is therefore a commercial problem, so it needs a commercial fix. And that fix has to take as little time and money as possible.

Jamieson's proposed solution is a "Security Star" rating, along the lines of the energy and water-use ratings for consumer whitegoods, or the US Department of Transportation's 5-Star Safety Ratings for cars.

There are some obvious problems. How do you compare different products, different architectures, and different security requirements objectively? Without costly code review and pen testing of every release, that is?

Jamieson's answer is to keep it simple.

Devices can be defined by three things, Jamieson said, all of which affect the overall attack surface: the number and type of interfaces, both input and output; the processing attack surface; and the system architecture. We "just" need to wrap some metrics around this, he said.

The joy of metrics is that you can simply make them up, just like the kilogram.

Jamieson proposed a metric which he called the Logical Security Posture (LSP). A device scores points for security features, and loses points for features that increase the attack surface. And if there's no commitment to update the firmware, well, that's an automatic zero-star fail.

Stars are issued based on "number of years secure", so "4 stars until 2018" is different from "5 stars until 2015", and regular on-site inspections by a follow-up service can wipe a product's stars if the vendor has surreptitiously changed the design.
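As a thought experiment only, the scoring scheme described above could be sketched in code. Every feature name, point value, and threshold below is invented for illustration; Jamieson's slides define the actual metric. Only the two structural rules from the talk are preserved: attack-surface features subtract from the score, and no firmware-update commitment means an automatic zero-star fail.

```python
# Hypothetical sketch of a Logical Security Posture (LSP) calculator.
# Feature names and point values are invented, not from Jamieson's slides.

SECURITY_FEATURES = {           # points gained for security features
    "signed_firmware": 2,
    "encrypted_storage": 2,
    "authenticated_api": 1,
}

ATTACK_SURFACE_FEATURES = {     # points lost for a larger attack surface
    "open_debug_port": 2,
    "bluetooth": 1,
    "cloud_telemetry": 1,
}

def lsp_score(features, firmware_update_commitment):
    """Return a toy LSP score for a set of device feature names."""
    if not firmware_update_commitment:
        return 0  # no update commitment: automatic zero-star fail
    gained = sum(v for k, v in SECURITY_FEATURES.items() if k in features)
    lost = sum(v for k, v in ATTACK_SURFACE_FEATURES.items() if k in features)
    return max(gained - lost, 0)

def stars(score):
    """Map a score onto a 0-5 star band (thresholds invented)."""
    return min(score, 5)

# Example: signed firmware (+2) and Bluetooth (-1), with an update
# commitment, yields a score of 1, hence one star.
print(stars(lsp_score({"signed_firmware", "bluetooth"}, True)))
```

The point of the sketch is how brittle such a scheme is: the moment a feature's point value is revised, every previously issued rating computed from it is stale, which is exactly the re-rating problem discussed below.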

The details of how this might work are in Jamieson's presentation slides.

This is all an excellent idea, provided there's some education to ensure consumers realise that a 5-star rating doesn't mean a device is impregnable, merely that it's broadly better than a 4-star device, and that the rating lets them differentiate between otherwise identical devices. They could then choose a less secure device if they're not too worried about, say, a camera in a farm shed, or a more secure one for their children's play room.

The processes for running a star rating system are well established in other industries, so there are bureaucracy models to copy, and there are companies like UL who know how to do the ratings.

But I see some key problems.

As Constellation Research vice-president for digital safety and privacy Steve Wilson pointed out, some aspects of security don't sit neatly on a scale.

"I am at the doctor; see a three star power rating on the air con. I'm a bit sad but still, they might be great recyclers," Wilson tweeted. "At the doctor's, the autoclave might have three of five stars for sterility. No wait, it can't. Safety is pass/fail," he added.

Can consumers really know how to trade off security for price or convenience? Such problems exist for healthy food ratings, for example. What's the security equivalent of deciding that because you're not eating any carbs at breakfast, you can have as many sausages as you like?

Can the vast, independently minded, libertarian-leaning IoT industry cooperate well enough for this to work?

I also see one big problem that might well be unfixable. The security threat landscape changes fast. Very fast.

In the automotive industry, say, product safety recalls are usually limited to a single manufacturer's design or production process, and maybe just a single model. But software is vastly more complex than a car's physical design.

If one of those yet-to-be-discovered threats emerges, it might invalidate an entire category of LSP scoring. That in turn might invalidate every design that uses the now-failed design patterns, and potentially an entire category of devices.

You'd need a comprehensive notification, re-rating and recall regime, and it'd have to work fast. That sounds expensive. Very expensive, for a $30 wristband. And yet such a flaw could provide a pathway via Bluetooth through a consumer's smartphone, with all its personal data, with obvious results.

I really do like the idea of a Security Star rating. I just don't see it happening.

Disclosure: Stilgherrian travelled to the Gold Coast as a guest of AusCERT.