A practical guide to busting the “perfect security” myth

There’s no such thing as “perfect security,” just as there’s no such thing as a “real” unicorn. Most people have come to accept these truths as they’ve matured. Some haven’t, for whatever reason. As security professionals, we have a duty to politely correct this utopian misunderstanding whenever it manifests, so the users we serve can make sound, rational, and realistic decisions about how best to secure sensitive information.

It’s hard to believe that it’s been a year. My first TEISS article was published on 3rd May 2018, under a remit to explore cyber security (in general) and security awareness topics (in particular). This is my twenty-fifth column in the series. As such, I’d like to start a discussion. Please answer me honestly: is your organisation’s critical information “secure”?

Take a moment and think about it. Please take the broadest possible perspective. Consider all the things that your organisation does (and doesn’t do) to keep your critical information protected from unauthorised disclosure, safe from unauthorised deletion or modification, and available on-demand. [1] When all is said and done, is your critical information – right this minute – secure?

The only credible answer to this question is “no.” There are no other answers; anyone who says differently is trying to sell you something. [2] Usually something expensive that requires a multi-year support contract and possibly professional installation.

Back when I was an IT consultant, I discovered that clients really hated hearing this news. I drove several clients incandescent with rage by telling them that honest, undeniable fact: their critical information was not completely secure, and no one could “make” it secure. Not even me. I said that I could (and would) do everything appropriate in my power as a cyber security expert to make their critical information as secure as possible within the limits they imposed, but I could not achieve and would not guarantee absolute security. To claim otherwise would make me a liar and a liability.

Irritatingly, a surprising number of DotCom salespeople in the late 90s and early 2000s were more than happy to say whatever their client wanted to hear to make a sale. The “grow fast, lose money, cash out with an IPO” mindset led some salespeople to engage in some shockingly unethical behaviour.

I don’t mean to get all epistemological with this column, but I feel strongly this is a critical cyber security concept that must be reiterated specifically because it’s inherently unsettling. I’m not trying to split hairs with the definitions. We want our systems and networks to be impregnable. We want to be confident that our critical data is (and will remain) unsullied by hackers. We want to be confident that all our defences will work perfectly, every time, and we can therefore sleep worry-free each night.

We want a lot of things. My clients told me they wanted a perfectly secure network. I replied that I wanted a unicorn; however, neither of our wants was going to be realised in the real world. [3]

It surprised me at first how some people would grow personally offended by this news. [4] I saw successful professionals grow outraged after I refused to promise that I could install some box-with-blinky-lights-on that would somehow make their network perfectly and permanently secure. Telling the hard truth was often, I freely admit, a career-limiting move.

It was, however, the truth. As a degreed, certified, and experienced professional, I believed then (as I believe now) that I have a moral, ethical, and functional obligation to report honestly to my superiors, stakeholders, and clients about the state of the programs and systems I’m responsible for. I consider lying about security to be abhorrent and irresponsible. I carried that principle forward when I entered civil service in an IT leadership role. I committed myself to only ever telling the verifiable truth. No obfuscation and no over-promising. That … didn’t always go over well.

I have never understood why a business owner would feel that it’s acceptable to aggressively berate a vendor for delivering unwelcome news.

As an example, there was one senior supervisor at our installation – let’s call him ‘Bob’ – who blasted me during a briefing when I told him the organisation’s network could never be made “perfectly secure.” Bob laid into me before I could frame the problem for discussion.

“You have to secure our network,” he snarled, wagging an angry finger at me. “That’s what we pay you for!”

I politely explained to Bob that he didn’t pay his doctor to make him immortal; he paid her to keep him as healthy as possible for as long as possible, all while considering how much he was willing to pay (in time, treasure, and self-denial) to pursue his personal quality and quantity of life goals. It would be churlish and juvenile of him to demand that she produce for him divinity-grade results.

It’s the same situation when it comes to cyber security: we practitioners do the best that we can with the equipment, people, and processes that we have, considering the threat environment, our defensive capabilities, and our superiors’ willingness to support necessary control measures. We pursue our own version of immortality – absolute security – knowing full well that we’ll never achieve it no matter how grand our budget grows. We’ll come as close as we can, though … if the business allows us to.

It’s a running joke in cyber security circles that we should benchmark the TSA’s airport screening model as a “best practice” for the corporate world. Everyone knows that making workers queue for an hour, take their shoes off, and get wanded before they enter the office would be ridiculous, impractical, and utterly corrosive to morale. It’s a great example of a security control that technically solves a niche problem but does so at an unacceptable cost and therefore shouldn’t be implemented.

Sometimes the business doesn’t allow it. Sometimes, the relative value of our proposed solution isn’t enough to justify the tremendous burden it imposes on the business. Sometimes the cost to remediate a vulnerability so far outweighs the potential cost of that vulnerability being exploited that it’s just not cost-effective to address.

Sometimes interoperability demands a “good enough” approach rather than a perfect one. Some vulnerabilities can’t be addressed because no one knows they exist. Cyber security is a methodical, cross-disciplinary process, not a “magic box.”

That explanation didn’t satisfy Bob, and I understood why. Bob was at the tail end of his career. He’d come up through the ranks during a time when “computers” were the size of entire rooms, required a staff just to turn on, and could be “secured” by simply locking the “computer room” door.

He was a bit fuzzy on the idea of networked PCs and didn’t grok the Internet at all. That wasn’t a failing on his part; he was exceptionally talented at his specialty and had never previously needed to know how computers work. In Bob’s career, computers had always been someone else’s problem. He wasn’t keen on making difficult decisions that might cause him to look foolish later.

That was nearly twenty years ago, and the business world has changed for the better by leaps and bounds. I have no doubt the person now filling Bob’s shoes in that organisation is much more technologically sophisticated. That’s because all of us have seen Internet tech utterly transform every aspect of our lives since the so-called “Dot Com Revolution.” Most of us literally can’t do our jobs without working tech … nor would we want to.

These days, most businesspeople understand and accept that there are no “magic box” solutions for cyber security needs, just as there aren’t any unicorns. Nice as it would be, we all must make do with what we have. It might be a slightly gloomier world, but it’s a stronger and more resilient one.

As we grapple with complex security needs in the light of business requirements, we’ve learned to make more practical decisions. We’re better at securing critical information. Maybe not perfectly secure, but often secure enough.

Effective security measures need to wisely balance cost and complexity on the one hand against probability and impact on the other. An organisation’s goal is to be secure enough to perform its essential functions at an acceptable price in effort, budget, and aggravation.
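To make that balancing act concrete, here’s a minimal back-of-the-envelope sketch in Python using the classic annualised loss expectancy (ALE) model from risk assessment. Every figure in it is hypothetical, invented purely for illustration; it’s a sketch of the arithmetic, not a substitute for a proper risk analysis:

```python
# A minimal sketch of the "secure enough" arithmetic, using the classic
# annualised loss expectancy (ALE) model. All figures are hypothetical.

def annualised_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """ALE = expected cost of one incident x expected incidents per year."""
    return single_loss * annual_rate

# Hypothetical vulnerability: a breach would cost roughly £200,000 and we
# estimate it occurring once every four years (rate = 0.25 per year).
ale_without_control = annualised_loss_expectancy(200_000, 0.25)  # £50,000/yr

# A proposed control costs £30,000 per year and cuts the incident rate by 80%.
control_cost = 30_000
ale_with_control = annualised_loss_expectancy(200_000, 0.25 * 0.2)  # £10,000/yr

net_benefit = ale_without_control - ale_with_control - control_cost
print(f"Net annual benefit of the control: £{net_benefit:,.0f}")  # £10,000

# With these numbers the control is worth buying. If it cost £60,000 per
# year instead, the same arithmetic would say "accept the risk", which is
# exactly the trade-off described above.
```

The hard part, of course, isn’t the multiplication; it’s estimating the inputs honestly. That’s precisely where “secure enough” judgement, rather than “perfect security” wishful thinking, earns its keep.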

Note that I said “most businesspeople” and not “all businesspeople.” There are still some folks in every organisation who haven’t gotten the message. That’s okay. This is why we have security awareness programs: to establish common baselines for core cyber security skills, knowledge, and abilities. To help people understand how and why their individual contributions affect the big picture. To help inculcate a sense of personal and collective responsibility for keeping our systems, networks, and sensitive information as secure as possible, given the totality of circumstances.

One of the best ways to accomplish this goal is to find out where your users are, mentally and emotionally. Figure out where their heads are at (so to speak). One of the most effective ways I’ve found to gauge this is to simply ask your users: is your critical information – right this minute – secure? If they answer “yes,” then you know you have some supplemental teaching to do.

[1] In cyber security terms, this is another way of explaining the “Confidentiality-Integrity-Availability triad.”

[2] Apologies to the Dread Pirate Roberts for borrowing one of his greatest quips.

[3] The unicorn comment is a running gag from my civil service days. Someone would ask me for a ridiculous policy change or service, and I’d snark that I wanted a unicorn. As a reward for faithful service, my employees bought me a gaudy toy unicorn as a farewell gift when I retired.

[4] The inherent insecurity of systems part, not the unicorn part. *That*, I get.


Keil Hubert

Keil Hubert is the head of Security Training and Awareness for OCC, the world’s largest equity derivatives clearing organization, headquartered in Chicago, Illinois. Prior to joining OCC, Keil was a U.S. Army medical IT officer, a U.S.A.F. Cyberspace Operations officer, a small businessman, an author, and several different variations of commercial sector IT consultant. Keil deconstructed a cybersecurity breach in his presentation at TEISS 2014, and has served as Business Reporter’s resident U.S. ‘blogger since 2012. His books on applied leadership, business culture, and talent management are available on Amazon.com. Keil is based out of Dallas, Texas.
