"Security by design" has been on the agenda recently as the government unveiled new measures intended to reduce cyber risk in connected devices.
In collaboration with the National Cyber Security Centre (NCSC) and industry specialists, the government released its Security by Design report, a review of IoT security which states that companies should take greater responsibility for implementing security mechanisms in their products, and which recommends guidelines on how to achieve this.
The government has outlined 13 steps to improve the security of consumer IoT devices. Here are a few highlights:
No default passwords: all IoT device passwords must be unique.
It should be easy for customers to delete personal data.
Systems must be resilient to outages.
Software should be updated automatically with clear advice for consumers.
Companies should provide a point of contact so that security researchers can report issues immediately, and disclosed vulnerabilities should be acted on in a timely manner.
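The "no default passwords" guideline can be met by generating a unique credential for each device at provisioning time rather than shipping a shared factory default. The sketch below illustrates one way to do this; the function and field names are illustrative, not taken from the report.

```python
# Illustrative sketch: provision each device with a unique random password
# instead of a shared factory default, as the guidelines require.
import secrets
import hashlib

def provision_device(serial_number: str) -> dict:
    """Generate a unique credential for one device at manufacture time."""
    password = secrets.token_urlsafe(12)  # ~96 bits of randomness per device
    # Store only a salted hash on the backend, never the plaintext.
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {
        "serial": serial_number,
        "password": password,  # printed on the device label, never reused
        "salt": salt.hex(),
        "hash": digest.hex(),
    }

record = provision_device("SN-000123")
other = provision_device("SN-000124")
assert record["password"] != other["password"]  # no shared default
```

In a real provisioning pipeline the plaintext would go only to the device label or first-boot setup flow, while the backend keeps the salted hash.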
TEISS asked experts in the field, Adam Shostack, author of Threat Modeling: Designing for Security, Ed Moyle, Director of Thought Leadership and Research for ISACA, and Hadi Hosn, Director of Cyber Security Solutions EMEA at SecureWorks, about why security by design matters and what questions businesses should ask themselves when designing products and apps.
According to Moyle, this isn’t a new concept. "Over the years, it has been referred to variously as “security by design”, “secure by design”, “build security in”, among others. But really, the target end state and mentality is the same: build a process where the software itself is both designed to be secure and built in such a way as to minimize flaws that could compromise security," he says.
Security by design, Shostack states, means thinking about security at the start of a project, the same way you might think about scalability, reliability, or other properties you might want your system to have. "That's in contrast to the approach, common today, of using 'penetration testing' as the project gets ready to ship," he adds.
Why does it matter?
Shostack explains that it matters because as a system develops, it becomes harder to add security. "If you design in floor to ceiling windows in a bad neighbourhood, it's hard to put bars over them later. You need to understand what the requirements are, and design appropriately. The rules for changing content on Wikipedia are not the same as those for Parliament. If you don't notice that until you're close to delivering, you might have trouble changing it," he states.
Moyle adds that it's also important for society at large as, "fostering more robust software development practices can have a long term beneficial impact on economics and can improve quality of life by reducing the impact of security issues to individuals within that society."
What should businesses think about when creating processes or apps?
As a former developer, Moyle advises making it easier for developers to do the right thing than the wrong thing. Human nature, he feels, will always choose the 'path of least resistance', and you should use that trait to your advantage. Recalling the rigorous coding standards often handed to development teams, he says: "Human nature is such that, immediately after these developers read that, they would go back to doing what they were doing before. That is, the path of least resistance in that case was for them to do what they'd always done, which was non-optimal from a security outcome point of view."
Moyle continues: "If, on the other hand, you can make it such that doing 'right thing' is easier, then you use human nature to your advantage."
As an example, he says, "if I write an encryption API that wraps the underlying OS or library API’s cryptographic components to handle, for example, key management, then I actually make the developer’s life easier. They’d rather call the wrapper API which is simpler because they don’t have to worry about all the key generation/storage/management code. As a security-focused resource, I can now make sure the wrapper code does exactly the right thing and conforms to policy. I’ve now made laziness fight for me rather than against me because choosing security saves them time."
He says that this approach always works. "The super-rigorous, grinding coding standard approach tends to lose effectiveness over time whereas less work is always maximally compelling," he explains. The same thing works for design, he adds.
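Moyle's wrapper-API idea can be sketched in code: the security team owns key generation and storage behind a simple interface, so calling the wrapper becomes the path of least resistance. This is a minimal illustration, not his actual implementation; since Python's standard library lacks an encryption primitive, it wraps HMAC message authentication, but a real wrapper would expose encryption the same way.

```python
# Sketch of a security-team-owned wrapper: key generation, storage and
# comparison policy live inside the class, not in application code.
import hmac
import hashlib
import secrets

class SecureTokens:
    """Wrapper that hides key management from the calling developer."""

    def __init__(self):
        # Key generation (and, in practice, rotation policy) happens here.
        self._key = secrets.token_bytes(32)

    def sign(self, message: bytes) -> str:
        return hmac.new(self._key, message, hashlib.sha256).hexdigest()

    def verify(self, message: bytes, tag: str) -> bool:
        expected = hmac.new(self._key, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)  # constant-time comparison

# The app developer's "easy path": one call each way, no key handling.
tokens = SecureTokens()
tag = tokens.sign(b"device-update-v2")
assert tokens.verify(b"device-update-v2", tag)
assert not tokens.verify(b"tampered-update", tag)
```

Because the wrapper is less work than hand-rolling key handling, developers choose it by default, which is exactly the "laziness fights for me" effect Moyle describes.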
What steps should businesses implement for security by default?
Shostack thinks that businesses (or open source projects) should consider four key questions:
1) What are we working on?
2) What can go wrong?
3) What are we going to do about it?
4) Did we do a good job?
"These questions are the heart of threat modelling, which is a collection of techniques for thinking about the security of a system, and thus designing security in," he adds.
But is it enough?
Even though the advice is timely, there are no penalties or fines against organisations that do not meet these requirements. According to Hadi Hosn, the government's guidelines are not enough. "IoT devices will not be certified by the government before going into the market," he states.
He fears that organisations that "prioritise speed to market and cost of production of the IoT device over security will continue to produce devices that are insecure."
Hosn continues: "I would have liked the government to come out with a certification scheme around IoT devices, whilst maybe not live now but planned in the near future to give consumers confidence that they are acquiring secure devices that are not going to spy on their babies through the baby monitor or share their personal data and habits with a malicious organisation."
In November last year, consumer rights group Which? warned people about serious security loopholes in popular Bluetooth-enabled toys such as the Furby Connect, I-Que Intelligent Robot, Toy-fi Teddy, and CloudPets …