Beyond Fear

I finished Bruce Schneier’s new book over New Year’s, while passing through various airports around the country, wondering what the recently raised Orange Alert actually means and reading the Al Qaeda Threat of the Month online. This arguably wasn’t the best book to read while navigating the new air travel universe, especially when you reach the passage about how security screening bunches a large number of people (who haven’t yet been screened) into a small space.

I hadn’t flown much before this trip, but the TSA seemed very professional in handling the long lines, even if they could only process everyone with a certain deliberateness. Of course, we get silly stuff every once in a while, like the Scissors Babe episode at Newark, and lines can be catastrophically long at airports that aren’t prepared. But if the line itself becomes a target, the TSA screeners working it won’t be at fault; instead, they’ll be victims, as much as anyone else, of a flawed security policy that displaced a minor risk to create a larger potential danger.

And that’s basically what Schneier talks about: where does security policy come from? What are the costs and benefits of a given policy? A lot of these topics appeared in his previous book, Secrets and Lies, though applied more specifically to IT security than to the world at large. Beyond Fear is his post-9/11 book; it attempts to analyze real-world security policy in general, while giving us the tools to do the same on our own.

The main lessons to draw from Beyond Fear are probably these: security policy tends to be political, security failures are inevitable, security systems are best characterized by how they break, and there will always be criminals, whose motives haven’t really changed for millennia.

  • Security policy is political: this is relatively obvious, but should be firmly restated. Consider TSA screening policies, especially the early ones where anything vaguely pointy or sharp got confiscated. This came about because of the need to do something right after 9/11, mainly to restore public confidence in air travel, and it resulted in a lot of silly items on the prohibited list. Note that, because there are always costs and trade-offs with any given security policy, there will be a political process deciding on the policy, whether it’s a CEO asking that holes be poked in the firewall to increase convenience, or the choice to deploy air marshals on flights. The hope is to manage the political process so that all interests are well represented (air travellers were not well represented when airport security policy was decided).

    But even a well-managed process may result in contradictory policy. Different interested parties will want to secure different things. The Club in a car is perfect for the individual driver, since a thief will just go over to an un-Clubbed car, but it is not useful for a police department interested in cutting down auto theft overall. Something like LoJack would be preferred to reduce overall theft, but it is relatively costly to individuals.

  • There have always been, and always will be, assets that need protection from attack, be it oneself (a repository of good eats to a tiger) or one’s possessions. Defenses can be pierced by sufficient or unexpected force, or may be made obsolete by improved technology or by reconceptualizations (high castle walls being vulnerable to cannon fire, the West Coast Offense, Al Qaeda’s 9/11 plan). What is secure now may not be secure in the future, and we have to plan for that possibility.
  • When a breach happens, how do the security systems respond? Are they brittle, where a single breach compromises everything? Or do they bend, so that eventual breaks are localized? Techniques such as defense-in-depth and chokepoints help to improve security by localizing breaches and giving the security system time to respond. One characterization of the brittleness of a system is how many secrets are required to secure it: do we require an algorithm to remain hidden in order to keep the password secret? The more required secrets, the riskier the system (the first sketch after this list makes this concrete).

    Note that, given the absence of flexible machine intelligence, humans are always, always, always required in the system, because they can better respond to unexpected situations. Security systems should be built with human response to security breaches in mind. For example, safes are rated by how long it would take a skilled safecracker with the right tools to breach them (30 minutes, 1 hour, 1 day, etc.). You then have to design your security system so that guards will come by and check on the safe within that time period (the second sketch after this list works through the arithmetic). Any security system will have humans at the heart of it, with technology used to leverage human response rather than replace it.

  • Basically, there have always been people who want to kill you or steal your stuff. The only difference now is that they are no longer required to get up close to you with a sword to do it. Schneier argues that the motivations of the bad guys can be broken down into greed, a desire for fame or publicity, revenge, curiosity, and so on. Knowing why someone wants to break your security helps in planning security. You would erect different barriers against thieves than against people who are merely curious. You would take different measures against people who are out to kill you than against those who would steal from you.
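On the counting-secrets point, here’s a minimal sketch of my own (the function names and values are hypothetical, not from the book). The first scheme’s security rests entirely on keeping its algorithm hidden, so a single leak of the code breaks every user at once; the second uses a public, standard construction (HMAC-SHA256) and keeps only a small per-user key secret, so a breach is localized and the secret is easy to replace:

    import hashlib
    import hmac
    import os

    # Brittle: the algorithm itself is the secret. One leaked copy of this
    # source (or one reverse-engineered binary) compromises everyone at once,
    # and there is nothing left to rotate afterwards.
    def homebrew_digest(password: str) -> str:
        return password[::-1].upper() + "42"   # the "hidden" transformation

    # Resilient: the algorithm is public and well studied; the only secret
    # is a per-user random key. A leaked key compromises one user and can
    # be replaced without redesigning the system.
    def keyed_digest(password: str, key: bytes) -> str:
        return hmac.new(key, password.encode(), hashlib.sha256).hexdigest()

    key = os.urandom(32)                        # one small, replaceable secret
    print(keyed_digest("correct horse", key))

The fewer and smaller the secrets, the easier they are to guard, audit and replace when something goes wrong.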
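And here’s the safe-and-guards point as a toy calculation (the numbers are invented for illustration): the patrol interval has to fit inside the safe’s rating after subtracting the time it takes to notice a break-in and get a guard to the scene:

    # Hypothetical numbers: sizing guard patrols against a safe's rating.
    safe_rating_min = 30        # rated time for a skilled cracker to get in
    detection_lag_min = 5       # time before a break-in attempt is noticed
    response_time_min = 10      # time for a guard to reach the safe

    # Guards must pass by at least this often, or the rating is wasted.
    max_patrol_interval = safe_rating_min - detection_lag_min - response_time_min
    print(max_patrol_interval)  # 15 minutes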

There are also sections on the elements of security policy, such as detection, response and audit, and on the differences between authentication (are you who you say you are?), identification (who are you?) and authorization (what are you allowed to do?), and how confusing authentication with identification leads to security lapses. One example of the latter is the attempt to use face recognition technology to identify terrorists in airports, which failed miserably. Face recognition is useful as an authentication technique (we do it naturally when we look at people), but not for identification. Further, we run into the base rate fallacy if we sound the alarm on every hit.
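A bit of back-of-the-envelope arithmetic shows why; the numbers below are my own illustrative ones, not Schneier’s:

    # Illustrative numbers only, chosen for round arithmetic.
    passengers_per_day = 1_000_000      # screenings at a busy hub
    false_positive_rate = 0.001         # wrongly flags 0.1% of innocents
    base_rate = 1 / 10_000_000          # real terrorists among passengers
    detection_rate = 0.99               # catches 99% of real terrorists

    false_alarms = passengers_per_day * false_positive_rate       # 1000.0
    true_hits = passengers_per_day * base_rate * detection_rate   # ~0.1

    # Chance that any one alarm points at a real terrorist
    print(true_hits / (true_hits + false_alarms))                 # ~0.0001

Even with a system that’s 99.9% accurate, roughly ten thousand innocent travellers get flagged for every real hit, so in practice every alarm ends up being ignored.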

There are, of course, a number of anecdotes on security. Schneier recounts a flight he was on, where one of the pilots went to the lavatory: after the stewardess blocked the aisle with a beverage cart and let him into the lavatory, she took his place in the cockpit, so there’d always be two people in there (even if she didn’t know how to fly). While this was happening, a husky gentleman in the third row took off his headphones to watch over the process. Schneier saw it as good procedure, though he wasn’t sure where it came from. Possibly it arose because of EgyptAir 990, so that the remaining pilot can’t take the plane down by himself without at least someone trying to stop him immediately (armored cockpit doors won’t work if the screening process for flight crews is faulty). Alternatively, two people in the cockpit make it more difficult to take by external assault. Or perhaps both.

Schneier makes a reference to Martin Shubik’s remark that technological eras can be characterized by how many people ten determined men can kill before being killed themselves; industrial and post-industrial technology has raised this number exponentially. However, he doesn’t quite follow up this frightening thought beyond saying that we should think more clearly about security than we do now. In particular, he doesn’t seem to apply it to Al Qaeda and the swamp from which Al Qaeda arises, a swamp which seems infinitely capable of sending forth murderous, determined men.

While Schneier is correct that there will always be bad guys, no matter what we do, he seems to be mistaken about the best way to deal with Al Qaeda, whose terrorism he ascribes to publicity-seeking on a grand and deadly scale. Yes, we should create security systems that can help against the suicide hijacker — air marshals, armored cockpit doors, etc. — and we should be more conscious of the trade-offs we’re now unconsciously making. Schneier thinks it’s hopeless to expect any better, that there will always be people trying to kill other people. But we can drain the swamp: Al Qaeda seems to be driven by a mass pathology, a cult of death bent on apocalyptic murder. Are there any strategies available to us that can address this mass pathology?

We’ve done it before, when we purged Japan and Germany of their cults of death, though it cost a great deal of blood, treasure and time (war between, say, Germany and France is now unthinkable, in the far better sense of the thought never arising, rather than of its being too horrible to contemplate). And now, after 9/11, we will have to do it in the Islamic world, hopefully with less blood, though probably with more treasure and time. Steven Den Beste has an overview of this war and what we’re doing in it (or at least should be doing in it) with which I agree in general. In a sense, the War on Terror is misnamed; a better description of what’s happening may be to see it instead as a War on Bad Philosophy: our safety in the long term hinges on our ability to convince the populations of nations that apocalyptic murder cannot be permitted, to free them from the mass pathologies that make such thoughts thinkable and reasonable. This will be the work of generations, a change in psychology similar to but greater than our domestic Civil Rights struggle, carried out over many more people, using fewer instruments of compulsion, against a far deeper pit of hatred. And we won’t know if we’ve won.

We can’t stop everything, and there will still be individuals bent on mass murder; Tim McVeigh is an example. This is where Schneier’s security policy recommendations would be useful: we can harden ourselves against the “small” attacks, so they become more difficult to pull off and cause less damage when they get through. But we may be able to do something about the willingness and ability of hundreds of individuals to band together to commit mass murder. We may be able to do this by freeing their societies — freedom for others is safety for ourselves. The alternative may be far darker.
