A guide to phishing emails and how they work
10 June 2019
The most effective phishing attacks are precisely crafted messages that use our known psychological attributes against us, stimulating us to act against our own training and better judgment. Security Awareness writer Keil Hubert describes a mysterious email message that could well have been an insidiously clever spear phishing attack.
There’s something “fishy” in my inbox and I’m at a loss for what to think about it. I was originally convinced that it wasn’t “phishy,” as in a cyberattack masquerading as a legitimate email. I was confident that it better fit the old-fashioned definition of “fishy,” as in “improbable” or “unlikely.” Then I had to reconsider. It wasn’t a phish, and yet in its own way it was: dangerous for its potential rather than for its actual substance.
Let me explain. Late one Friday afternoon, a new automated message appeared at the top of my unread queue from one of our internal ticketing systems. The subject line read “Service Request XYZ12345 has been assigned to Keil Hubert.” I opened it out of mild curiosity even though my group doesn’t use service tickets for anything. I figured that someone must have made a mistake selecting names for the assignment field.
The only information contained in the ticket was a brief description: “Tracking ticket for [common industry best practice]” … and nothing else. No attachments, no explanation, no context, and (worst of all) no actual task to perform that might allow me to close the ticket. Intrigued, I tracked down the fellow who’d assigned the ticket to find out what to do with it now that I had it.
This turned out to be my first encounter with a wholly new departmental business process that had just started its field testing. Sometime earlier, there had been an event or incident that could only be closed if certain tasks were completed. Fair enough. I understood the logic governing the decision even if the new tracking process came off as a bit … unfinished. Well-intended, just needing polish.
I could see the general intent of the process, just not where it was going.
As I explained it to my colleague: “Seeing this ticket in my inbox is like waking up to find a trout on my pillow: I have no idea how it got there, no idea where it came from, no clue what message it’s supposed to convey, and no idea what I’m supposed to do with it now that I have it. My first instinct is to bin it and pretend it never happened. It’s obviously someone else’s trout, yeah?”
My mate laughed. He understood; he’d done his job as instructed and recognized that the new process needed improvement. Ideally, a work ticket is supposed to trigger … well … work. Logically, a work ticket with no actual work requirement is little more than a distraction. We agreed to toss the proverbial trout onto the owning manager’s desk once he returned from holiday.
It wasn’t until after our exchange that I realized the insidious potential of this email as a method for spear phishing. I’d copped to it subconsciously; I called it a “trout” instead of … anything else. My instincts were hammering me that this email had great potential to “work” as a real phish.
In a phishing attack, the aggressor is attempting to provoke an emotional response in their victim. The attacker adopts the veneer of legitimacy and invokes a sense of urgency to override their victim’s natural defensive instincts. This provokes the target into doing something they would not normally do, like open a malware-saturated attachment or click on a hyperlink to a malicious server masquerading as a real host.
It is important to teach colleagues to actively search out the “call to action” in every suspicious message. What is the sender trying to get you to do? Is anything in the message written to imply urgency? Are there stated or implied consequences for failing to take the directed action?
Examples include “your account has been disabled, click here to re-activate your login credentials” and “a warrant has been issued for your arrest, call this number to pay your fine.” A successful phishing defence requires a sharp eye and a cynical reading of all unexpected messages.
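That “sharp eye” can be partly mechanised. The sketch below is a deliberately minimal illustration of scanning a message body for common urgency and call-to-action cues; the phrase list is my own illustrative assumption, not a real filter, and any production defence would need a far richer model than keyword matching.

```python
import re

# Illustrative urgency/call-to-action phrases only -- an assumed, toy list,
# not drawn from any real spam filter.
URGENCY_CUES = [
    r"act (now|immediately)",
    r"account has been (disabled|suspended|locked)",
    r"verify your (account|identity|credentials)",
    r"warrant has been issued",
    r"within \d+ (hours|days)",
    r"click here",
]

def find_calls_to_action(message: str) -> list[str]:
    """Return the urgency/call-to-action cue patterns found in an email body."""
    lowered = message.lower()
    return [pattern for pattern in URGENCY_CUES
            if re.search(pattern, lowered)]

suspicious = "Your account has been disabled. Click here to re-activate."
print(find_calls_to_action(suspicious))
```

A message that trips several cues at once deserves exactly the cynical second read described above; a message that trips none, as we’ll see, can still be dangerous.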
We encourage our colleagues to listen to their instincts. Their subconscious mind will often react to a subtle defect or inappropriate tone in a message before their conscious mind realizes where the attacker made an error.
In comparison, my mysterious work ticket came across as the exact opposite of a typical phish; it suggested no sense of urgency and didn’t contain any instructions. It was plain text, with no attachments, and just a single hyperlink to a known and verifiable host. The message appeared to be perfectly safe. The one thing it did that resembled a real phishing attack was to imply that SOMETHING MUST BE DONE … eventually. Specifically, that I – as the assigned worker on ticket XYZ12345 – was required to somehow close out a new service request.
This had me thinking … for a high-strung, Type A personality like me, a vague-but-unactionable notice like this was far more effective at getting my attention than a conventional phishing attack. Even though my suborganisation doesn’t use service tickets for anything, the very notion of having an open ticket with my name on it was impossible to ignore.
I talked with my partner in security awareness and we agreed this type of “unfinished process” notification could be a very effective lure. If you could infiltrate a target organisation’s network and create a new user account (or, better yet, suborn an existing legitimate user’s account), and could further gain access to the organisation’s ticketing system, you could then create a new service request record with a maddeningly vague subject and no “ask” in the ticket body. This perfectly safe message is your bait.
If you’ve done your pre-attack reconnaissance well, you’ll know which users are likely to be enticed by this. Someone might see the message, react to its lack of detail, and then contact the sender. That’s the implied ask: contact me if you want to know what this really is.
If the attacker has control of the ticket-assigning account, the target is initiating contact with a colleague whose legitimacy is assumed. After all, they had access to the ticketing system! Can’t do that without verified credentials. Once contact is made, there’s probably not going to be any attempt at peer-to-peer informal authentication; the ticket initiator is assumed to be a trusted insider. At that point, it is possible to social engineer the target into whatever you want.
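One defensive counter to this pattern is to refuse to extend that assumed trust automatically: treat any ticket from an unfamiliar issuer, or any ticket with no actual task in it, as unverified until confirmed out of band (by phone or in person). The sketch below assumes hypothetical ticket fields and addresses; it is an illustration of the policy, not a real ticketing-system API.

```python
# Hypothetical allow-list of accounts known to issue legitimate tickets;
# the addresses and field names here are invented for illustration.
KNOWN_TICKET_ISSUERS = {"service-desk@example.com", "it-ops@example.com"}

def needs_out_of_band_check(ticket: dict) -> bool:
    """Flag tickets the assignee should verify through a separate channel
    before engaging with the sender inside the ticketing system."""
    issuer_unknown = ticket.get("created_by", "").lower() not in KNOWN_TICKET_ISSUERS
    no_task = not ticket.get("description", "").strip()
    return issuer_unknown or no_task

mystery_ticket = {"created_by": "it-ops@example.com", "description": ""}
print(needs_out_of_band_check(mystery_ticket))
```

The point of the check is that my “trout” ticket, despite coming from a legitimate-looking internal account, would still be flagged, because it asked for nothing.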
A talented social engineer can convince an intelligent, well-trained person to take self-destructive actions without a trace of self-awareness simply by drawing on the victim’s sense of shared belonging, assumed trust, and organisational need. By the time the victim realizes they’ve been duped, the attacker has achieved their objective and disappeared.
It’s a complicated attack method that requires both network compromise and target analysis; it’s very clever, and it’s highly likely to work if all the steps are followed. It applies pressure against a target’s psychological hang-ups and stimulates the target into compromising themselves. I can see this method working during an APT-style deep infiltration of a target network. There’s probably a name for the tactic already; until corrected at the coffee machine by my colleagues, I plan to call it “the beguiling trout.”
The key takeaway is that all of us have personality characteristics and attitudes an adversary can influence to make us take actions not in our best interests, assuming they can reach us to “push” our proverbial buttons. That’s what phishing attacks in general (and spear phishing in particular) are all about: manipulating the target’s emotions before their rational mind can recognize the signs they are being tricked.
A critical aspect of cyber defence is knowing ourselves: we need to understand our own emotional triggers so we are prepared to recognize and avoid attacks designed to exploit our own intrinsic vulnerabilities. It’s not enough to scan incoming messages for spelling errors or to check URLs for dodgy domains. Everyone needs to apply introspection and some candid self-awareness to avoid a clever manipulator with a sinister agenda … or a beguiling trout.