If you’re a regular reader of this or other technology news sites, your knee-jerk reaction to this headline should be “nope”. That’s a good instinct. Because just as new hardware tends to outpace the legislation that governs it, the people and companies behind that hardware tend to prioritise features, functions and time to market over security.
With the pace of innovation and the potential penalties for launching later than a rival, when it comes to consumer technology, speedy execution trumps most concerns. Someone’s likely to make a smart oven first and then worry about securing it against nefarious parties later. That this is a poor way to go about things is obvious. But it’s also the nature of the technology beast.
“When planes get connected I won’t fly,” says Andrey Nikishin, head of future technologies projects at Kaspersky Lab. Pressed to elaborate, Nikishin says he’s overstating his position. After all, aircraft have been sending and receiving telemetry since the early days of radio broadcasts.
His point, though, is that people tend to be a little too enthusiastic about adding connectivity to things without making sure they’re sufficiently locked down, too. And in some cases, they’re also embracing the “internet of things” without stopping to ask whether connectivity is solving an existing problem, or merely creating the potential for new ones to arise. Do you need a connected toaster? Sure, it could save you two minutes in the morning. But it could also be used to burn down your house.
The hands-off future needs hands-on security
Particularly worrying to Nikishin’s mind is the rate at which autonomous vehicles have been developed with security playing second fiddle to sensors, mapping and the other problems that need to be solved before we can take our hands off the wheel, confident our automated people carrier won’t carry us into a tree (or another vehicle, autonomous or otherwise).
“Connectivity definitely isn’t a bad thing,” Nikishin says, “but it has to be treated correctly”. To keep safety and security at the forefront it should be considered at the blueprint stage, and constantly assessed and updated, rather than retrofitted.
“It’s impossible to take one thing, add connectivity, and claim to be ‘safe and secure’ and be finished,” Nikishin explains. “The notion of ‘a safe device’ is one where every algorithm is verified and it never harms its environment or the humans that use it.” In other words, checks and measures and smart design can go a long way to ensure safety across the life of the product. Think insulated cables, fuses and other failsafe mechanisms.
But security is a different beast. “You can only say a connected device or piece of software is secure at a certain moment in time. In a minute, or tomorrow, or next week, a vulnerability could be found,” Nikishin says earnestly. And that vulnerability might not be found by someone with good intentions.
“Safety is a protection against unintentional actions. Security? Well, that’s protection against intentional – and often malicious – actions.”
In the race to get products on shelves before rivals do, Nikishin says some companies inevitably decide that security corners are the ones that get cut. More worrying still is that, when it comes to smart-home tech or other IoT (Internet of Things) devices, it’s often impossible as an end user to accurately assess a device’s security chops.
Nikishin has a smart-heating system in his home that he can control remotely from his smartphone. But because he can’t be sure of the security of the proprietary system, he’s opted not to use that feature.
“I’m probably more paranoid than most people because of my work,” he says with a grin. “But my point is that people need to consider the risks versus the benefits, especially when the benefits aren’t especially useful or impressive.” The temptation for many users is to enable more features than they need, in case they want them later or simply because they’re there. Nikishin suggests that’s the wrong way to go about it.
How paranoid should consumers be? Nikishin says, “reasonably”. Don’t connect devices you’re unsure of. Don’t enable features you may never use. Don’t just click “accept all” and continue when granting apps or other services permissions to access data on your devices. And, when it comes to social media, “Ask yourself, ‘What might a malicious agent be able to do with this thing that I’ve shared?’” Nikishin says. “And when all else fails, remember: if it looks suspicious, it almost definitely is.”