Knowing who might want to attack you and what they might be seeking is the key to designing effective security. In the 2014 Sony hack, as the New York Times reported at the time, the company's more than 7,000 employees arrived at work one day in late November to find macabre images of their CEO's severed head displayed on their computer screens. Shortly afterwards, Sony shut down all its digital systems.
Sony's threat model probably looked something like this: pirates want to steal our intellectual property; hackers want to steal our customers' personal and financial data; journalists want to embarrass us and snoop on the talent. It's highly unlikely anyone imagined that a hacker group tacitly or overtly supported by a nation-state wanted to wipe the company out because of a tasteless movie. Now, of course, they will.
And yet, no one has unlimited resources to devise systems that counter every conceivable (and inconceivable) threat. Choices must be made, based on a realistic understanding of risk. This is where Threat Modeling: Designing for Security, Adam Shostack's thorough guide to thinking through security planning, is intended to help. Shostack's history in the industry includes start-ups such as the Canadian privacy tool specialist Zero Knowledge Systems and the security specialist Netect, as well as helping to found the International Financial Cryptography Association and the Privacy Enhancing Technologies Symposium. He also served for some years as the program manager for Microsoft's SDL threat modeling tool.
Much of Threat Modeling works methodically through the basics: early brainstorming, structured planning, creating checklists, working with attack trees to find and represent threats, managing and mitigating potential attacks, and finding tools and methods for validating the results of all of the above. Later chapters also consider account management and the relatively new vector of the cloud.
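To give a flavour of one of those techniques: an attack tree represents an attacker's goal as a root node, refined into sub-goals joined by AND (all steps required) or OR (any path suffices). The sketch below is purely illustrative — the node names and the feasibility flags are invented for this example, and it is not Shostack's own notation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One goal in an attack tree; leaves carry a feasibility flag."""
    goal: str
    kind: str = "leaf"          # "leaf", "or", or "and"
    feasible: bool = False      # only meaningful for leaf nodes
    children: List["Node"] = field(default_factory=list)

    def is_feasible(self) -> bool:
        # OR nodes succeed if any child path works; AND nodes need every child.
        if self.kind == "leaf":
            return self.feasible
        results = [c.is_feasible() for c in self.children]
        return any(results) if self.kind == "or" else all(results)

# Hypothetical tree: steal customer data via phishing,
# OR by combining a VPN exploit with a cracked admin password.
tree = Node("steal customer data", "or", children=[
    Node("phish an employee", feasible=True),
    Node("breach the perimeter", "and", children=[
        Node("exploit VPN flaw", feasible=True),
        Node("crack admin password", feasible=False),
    ]),
])
print(tree.is_feasible())  # True: the phishing branch alone reaches the root goal
```

Walking such a tree tells the defender which branches must be cut — here, blocking the phishing leaf would force the attacker onto the harder AND branch.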
More than a manual
If Shostack stopped there, his book would still be a useful manual. What makes it stand out is his attention to privacy and usability.
He devotes a chapter to finding privacy threats, warning that dismissing privacy concerns as a waste of time is the wrong approach: consumers have repeatedly shown that they do care about privacy once they understand the threat. Shostack discusses, for example, the 'nymity slider' conceived by Ian Goldberg to assess how privacy-threatening a protocol may be ('nymity' is "the amount of information about the identity of the participants that is revealed [in a transaction]").
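Goldberg's slider orders transactions from unlinkable anonymity up to verinymity (full legal identity). A minimal sketch of the idea, with the example transactions being my own hypothetical labels rather than drawn from the book:

```python
# The nymity slider as an ordered scale: later entries reveal more
# about a participant's identity. Example transactions in the comments
# are illustrative, not taken from Goldberg or Shostack.
NYMITY_LEVELS = [
    "unlinkable anonymity",     # e.g. paying with cash
    "linkable anonymity",       # e.g. a loyalty card with no name attached
    "persistent pseudonymity",  # e.g. a long-lived forum handle
    "verinymity",               # e.g. a credit-card purchase tied to a legal name
]

def more_nymous(a: str, b: str) -> str:
    """Return whichever of two levels reveals more identity."""
    return max(a, b, key=NYMITY_LEVELS.index)

print(more_nymous("linkable anonymity", "verinymity"))  # verinymity
```

The design lesson Goldberg draws is asymmetry: a protocol can always be made more identifying by bolting on credentials, but moving down the slider after the fact is hard, so designers should start as low as the application allows.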
Usability, typically ignored by security practitioners, is of key importance: security systems that are cognitively impossible for humans to follow, or that add an extra layer of difficulty to the tasks a staff member has been hired to complete, are properly viewed as threats in their own right. That is how workarounds are born. Shostack offers a variety of research-derived approaches to modeling people, arguing that the informal models in many security professionals' heads are "often full of contempt for normal folks".
Two more chapters are noteworthy: one on threats specifically to cryptosystems; and one on how to bring threat modeling into your organization. That last may be the hardest part.