When you give a businessperson a choice between security and convenience, be prepared for convenience to win every time. It’s like tossing a stone and expecting it to hover instead of fall; sure, that might be ideal behaviour, but it’s not realistic. Businesspeople function the same way: when the metrics of your success are based on efficient delivery of goods and services, anything that increases efficiency and/or decreases drama is worth purchasing.
This is why mature organisations have some sort of “cyber risk” function that acts as a safety brake on procurement. Call it what you will – “security engineering,” “trusted architecture,” “Jennifer,” doesn’t matter – the “cyber risk” team scrutinizes new technical solution requests, imagines what might go wrong once the solution enters production, and figures out what might be done to pre-emptively mitigate the risks. If the potential for disaster remains after all necessary controls are implemented, then the new “solution” gets rejected.
A great example of what can happen when your “cyber risk” function fails appeared on the AP news wire last Thursday. In an article titled “Military units track guns using tech that could aid foes,” reporters James Laporta, Justin Pritchard, and Kristin Hall highlighted a killer new technological tool that violent criminals can use to find, track, and attack American soldiers.
In brief, military police don’t take their duty firearms home with them at the end of their shift like civilian police do. The military is much more disciplined about weapon accountability than … well … everyone else. When an MP comes on duty, they report to their unit armoury and check out specific firearms and ammunition. At the end of their shift, they return their ordnance to the armoury and formally check it back in. The armoury staff are required to keep meticulous records of everything, which includes many inventories … all of which must be 100% accurate.
This is a good thing. It ensures that military weapons don’t get “lost” and wind up on the open market or get used by criminals. The last thing anyone wants is a violent scumbag dragging a belt-fed machine gun into their local grocer’s. I’m a big fan of weapons accountability. So are the MPs. I’ve never met an MP who felt ambivalent about proper weapons discipline.
The thing is, arms inventories take a looooooooonnnng time. You don’t just count the number of rifles on the rack; you’re required to cross-check every single item’s serial number against unit property records and make sure that every item is in the right location within the armoury.
So, when some enterprising defence contractors saw an opportunity to cultivate a lucrative new customer base for their “Internet-powered logistics solutions,” they leaped on it. Vendors pitched military bases and MP units on the efficiency benefits of putting Radio Frequency ID tags on weapons and equipping customers with computerized scanners. As Laporta, Pritchard, and Hall put it: “RFID … is infused throughout daily civilian life … When embedded in military guns, RFID tags can trim hours off time-intensive tasks, such as weapon counts and distribution.”
There are, obviously, some downsides to adding a gizmo to your military rifles that will “squawk” its identity and location when interrogated by a radio signal. If you know how the technology works, you can easily and invisibly locate every single RFID-tagged weapon in the area. If you were a bad guy wishing to attack a military base, knowing exactly where all the armed defenders were in real time would give you a deadly tactical advantage.
I didn’t need to read the article to immediately know what the reporters meant. I’d rejected this exact solution ten years prior, back when I was the IT approval authority for my Air Force unit. Our local MP squadron had been pitched the RFID-enabled inventory solution and wanted to overhaul their armoury. “No more manual check out and check in steps,” their chief cop gushed. “A sensor on the door records when each weapon leaves and returns.” Their “inventories” could be performed in minutes rather than hours with guaranteed perfect accuracy. The time savings alone would pay for the solution in a year by freeing up cops for other duties.
I completely agreed. The technology was fabulous. The potential time savings were tantalizing. The increase in accuracy alone seemed irresistible. I empathized with their stated need … and still rejected the purchase because the unmitigable risks remained too danged high.
The MPs were furious with me. This one decision probably did more to sour our professional relationship than any other spat we’d had over the decade I’d supported them. Nonetheless, both our enterprise IT risk model and our force protection regulations forbade us from handing an adversary a cheap, easy, and effective tool for finding and tracking all our base defenders. I denied their request.
The reporters closed their AP article by quoting Weathered Security founder Dale Wooden right after a presentation on RFID risks at DEFCON 2010: “If the disease is missing weapons and the cure is RFID tags, then you have a cure that is worse than the disease. They’re prioritizing convenience over service members’ lives.”
I agreed with Dale then, and I still agree with him now. It didn’t matter that I deeply empathized with our MPs. I wanted to help them find ways to reduce the drama inherent in the arms inventory process … just not at the cost of squaddies’ lives. Sure, the risk probability was remote (thankfully!), but the potential risk impact was catastrophic. Military doctrine prohibited accepting such risks in circumstances that weren’t already a dire emergency.
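The decision logic above — a catastrophic potential impact overriding even a remote probability — can be sketched as a simple screening rule. To be clear, the level names, the scoring threshold, and the function below are my own illustrative assumptions, not language from any actual risk doctrine:

```python
# A minimal sketch of a probability-versus-impact risk screen.
# The level names and acceptance threshold are illustrative
# assumptions, not quoted from military or corporate doctrine.

PROBABILITY = ("remote", "occasional", "likely", "frequent")
IMPACT = ("negligible", "moderate", "critical", "catastrophic")

def risk_decision(probability: str, impact: str) -> str:
    """Return 'accept' or 'reject' for a proposed solution."""
    p = PROBABILITY.index(probability)  # 0 = least likely
    i = IMPACT.index(impact)            # 0 = least severe
    # A catastrophic impact is never acceptable outside a dire
    # emergency, no matter how remote the probability.
    if impact == "catastrophic":
        return "reject"
    # Otherwise, reject only when the combined score crosses
    # an (arbitrary, illustrative) threshold.
    return "reject" if p + i >= 4 else "accept"

# The RFID armoury case: remote probability, catastrophic impact.
print(risk_decision("remote", "catastrophic"))  # reject
```

The point of structuring it this way is that the catastrophic-impact check runs before any probability weighting, which is exactly why “but it’ll probably never happen” doesn’t rescue the purchase request.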
That’s a large part of what a “cyber risk” program is about: investigating new solutions, working out whether the technological, procedural, and incidental risks can be acceptably mitigated or not, and then communicating a recommendation to the ultimate approval authority (like the CSO, CIO, or CTO), who makes the final decision.
Thankfully, most corporate “cyber risk” decisions don’t have to factor murdered employees into their planning. In some ways, that’s an impediment to their function. People have an easier time understanding horrifying potential outcomes like “death,” whereas an “API compromise” is difficult to visualize. That makes “cyber risk” a Cassandra profession: you’re constantly warning people about a doom they’ve set in motion, but the people you warn don’t believe you and take out their frustrations on you for getting in their way.
It is, however, necessary for the survival of an organisation. Given the choice between security and convenience, the latter is always preferred. Saving work now is always preferable to preventing a possible problem at some hypothetical future date. We do this as individuals (e.g., “should I eat this bag of pork rinds now because they’re delicious or have an apple instead to prevent future coronary artery disease?”). We do the same thing in groups. If anything, it’s easier to accept potential future risk in a group, where blame for a possible negative outcome can be diffused among many different stakeholders.
That’s why someone has to be the “bad guy” in the procurement process and say “no.” It’s a thankless job, but a darned necessary one.