Bread Alert! Did a stale culture at Panera Bread contribute to data breach?
3 May 2018
Eight months passed between the time that Panera Bread was notified of a serious data loss problem and the time that they took the vulnerable function offline. I want to know what cultural factors contributed to such an inexplicably long delay.
If you hadn’t heard, one of our neighbourhood eateries reported a significant data breach this past month. Panera Bread is a US-based bakery-café chain with about 2,100 locations across the USA and Canada.
We have one of their locations a ten-minute drive east of our Dallas complex; there are four Panera locations within easy walking distance of our Chicago headquarters. When we heard the news that Panera’s public website may have been breached, we paid attention.
Security journalist Brian Krebs (of KrebsOnSecurity.com fame) summarised the problem in his blog post from 2nd April:
‘… the Web site … leaked millions of customer records — including names, email and physical addresses, birthdays and the last four digits of the customer’s credit card number — for at least eight months before it was yanked offline earlier today ...
‘The data available in plain text from Panera’s site appeared to include records for any customer who has signed up for an account to order food online via panerabread.com.’
Our cybersecurity engineers analysed the reports and replicated the original security researcher’s findings. Sending a specific query to the delivery.panerabread.com server yielded my customer name, registered phone number, and the last four digits of the credit card I’d originally associated with the account (one that’s no longer active, thankfully).
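The class of flaw described in the reports — an endpoint that hands back a customer’s record when given a guessable numeric ID, with no check that the requester actually owns that record — is commonly called an insecure direct object reference (IDOR). The sketch below is entirely hypothetical: the data, field names, and function names are invented for illustration and are not taken from Panera’s actual API. It simply contrasts the flawed lookup pattern with the authorisation check that prevents it.

```python
# Hypothetical illustration of an IDOR-style flaw; all names and data invented.
CUSTOMERS = {
    1001: {"name": "A. Smith", "phone": "555-0100", "card_last4": "4242"},
    1002: {"name": "B. Jones", "phone": "555-0101", "card_last4": "1881"},
}

def get_customer_vulnerable(customer_id):
    """Flawed pattern: returns any record to any caller.

    An attacker can simply iterate customer_id values (1001, 1002, 1003, ...)
    and enumerate the entire customer table.
    """
    return CUSTOMERS.get(customer_id)

def get_customer_fixed(customer_id, session_user_id):
    """Fixed pattern: the authenticated session must own the requested record."""
    if customer_id != session_user_id:
        return None  # in a real service, raise/return an authorisation error
    return CUSTOMERS.get(customer_id)
```

In the vulnerable version, walking the ID space dumps every record; in the fixed version, each lookup is tied to the identity of the authenticated session, so enumeration yields nothing.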
The fact that this information was available to anyone on the Internet was disquieting; the fact that Dylan Houlihan, the researcher who first discovered the vulnerability, had noticed it and reported it to Panera eight months earlier is even more disturbing. On that, Krebs wrote:
‘A long message thread that Houlihan shared between himself and Panera indicates that … Panera’s director of information security, initially dismissed Houlihan’s report as a likely scam. A week later, however, those messages suggest that the company had validated Houlihan’s findings and was working on a fix.’
Someone had to have reacted to the news with the appropriate level of concern. Who responded first and what was done to mitigate the threat?
Eight months. That’s a really long time for a company – any company, anywhere – to be leaking customer records. It’s not clear (at time of writing) how many customer accounts may have been exposed or compromised in some way. It’s not clear yet why the vulnerable code was allowed to remain exposed for so long, or what was being done to correct things.
There will probably be a lot more explained about this incident over the coming weeks, and it’s likely that the final explanation will include some interesting stories from inside Panera’s ops and security arms.
That’s why I find this story immensely interesting: I want to know how the vulnerability was processed inside of Panera after it was reported. How was it classified? How were the risk and potential impact assessed? How was the remediation strategy determined? What was actually happening behind the scenes before the story broke in public and the affected site was taken offline?
Understand, I’m not an engineer – not a security specialist or any other kind. While I’m keen to listen to my engineer mates explain how Panera’s data was exposed, that part of the incident doesn’t help me do my job. My job function is to support Security Awareness, Training, and Education (and yes, I treat those as separate-but-related disciplines).
My primary area of study has always run to behaviour – both individual and organisational – not to technology for its own sake. I want to understand the social, political, and operational dynamics behind what occurred within Panera’s HQ. I suspect that there will be lessons that we can all take away from this event.
Abstracting the issue from Panera’s event (since it’s still unfolding), this issue of responding to an externally-detected vulnerability affects every organisation that uses technology, bar none. No matter how small your operation, the fact that you use phones, PCs, Whatevers-as-a-Service, and such means that you’re always potentially vulnerable. This is, like it or not, How Things Work™ and we have to deal with the risk.
That’s why most Technology- and Security Governance models place special emphasis on establishing, optimising, and overseeing formal protocols for how issues are identified, processed, prioritised for correction, and tracked to completion.
As an example, the U.S. government’s National Institute of Standards and Technology catalogue of information security controls (NIST SP 800-53) devotes an entire control enhancement just to the process of overseeing the handling and correction of a vulnerability like the one affecting Panera:
SI-2 (1) FLAW REMEDIATION | CENTRAL MANAGEMENT
The organization centrally manages the flaw remediation process.
Supplemental Guidance: Central management is the organization-wide management and implementation of flaw remediation processes. Central management includes planning, implementing, assessing, authorizing, and monitoring the organization-defined, centrally managed flaw remediation security controls.
I know that may sound like a lot of dry, bureaucratic lecturing. Dry as it may be, trust me: this is fascinating stuff to an organisational behaviour researcher. Control language like this is very conceptual and high-level; the actual, practical implementation of this idea can be amazingly nuanced and challenging.
For SI-2(1), the organisation is expected to get a bunch of different experts (or groups of experts, depending on the size of the organisation) to not just ‘work together,’ but to accurately and consistently track responsibility for planning, decision making, change control, implementation, and communication.
Everything has to get done swiftly, but in a regimented and auditable fashion. Such a process, by virtue of its scope, is inherently vulnerable to both mistakes and non-technical influences.
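As a concrete (and entirely hypothetical) sketch of what ‘regimented and auditable’ can mean in practice, the record below tracks a reported flaw through a fixed set of lifecycle states, stamps every transition with who made it and when, and flags any report still open past a remediation deadline – the kind of simple control that makes an eight-month-old open report hard to overlook. The state names, transitions, and the 30-day SLA are illustrative assumptions on my part, not requirements from NIST SP 800-53.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Allowed lifecycle transitions (illustrative; not prescribed by NIST SP 800-53).
TRANSITIONS = {
    "reported": {"triaged"},
    "triaged": {"remediating", "rejected"},
    "remediating": {"verified"},
    "verified": set(),
    "rejected": set(),
}

@dataclass
class FlawReport:
    title: str
    reported_at: datetime
    state: str = "reported"
    history: list = field(default_factory=list)  # the audit trail

    def transition(self, new_state, actor):
        """Move to a new state, recording who did it and when."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.history.append((datetime.utcnow(), actor, self.state, new_state))
        self.state = new_state

    def overdue(self, now, sla=timedelta(days=30)):
        """True if the flaw is still open past its remediation deadline."""
        open_states = self.state not in ("verified", "rejected")
        return open_states and (now - self.reported_at) > sla
```

A periodic report of every record where `overdue()` returns true is exactly the kind of central oversight SI-2(1) gestures at: no single manager’s judgement call can quietly park a validated vulnerability for eight months.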
Or, put another way in deference to the Panera case, people can get in the way of well-designed processes. That’s what makes both consulting and security training so endlessly interesting. You can have the best technology in the world. The best processes and rules. All of the academic models and notional controls. None of that will protect your organisation if your people either can’t or won’t follow your processes. In short, a tool is only functional when its wielder chooses to employ it.
That’s what I want to learn about the Panera incident in the coming months: what happened during the eight months that elapsed between the day that Dylan Houlihan informed Panera HQ that they had a serious data loss vulnerability exposed to the Internet and the day that the public learned about it … and discovered that it hadn’t been fixed?
For a non-cyber example, consider the chain of preventable process failures that led to the loss of the space shuttle Challenger in 1986.
Keil Hubert