Hello, good morning, everybody. It's Thursday morning at 10:00, therefore, it must be time for another teissTalk. We are live every Thursday morning at 10:00, every Tuesday afternoon at four when my co-host, Jenny Radcliffe, takes the stage. And we also archive these talks online, so if you want to catch up with them later, you can do. But of course, what's the point? You want to be here live. So we give you a taste of what we're going to be talking about over the next month.
Next Tuesday, Jenny will be talking about the future role of the CISO. We're going to talk about cyber career paths, the skills shortage, working in a hybrid environment, that's a big one, 5G, passwords and authentication. All the major topics coming up on teissTalk. Please put these in your diary. We welcome back our continuing listeners and viewers, and if you're new, a very, very warm welcome to you. If you like teissTalk and the stuff that we're doing, please share it amongst your friends, colleagues, family, maybe not family, and spread the word. We love having more attendees. There's obviously the chat tab there, you can get involved; we've got some people saying hello and good morning to everybody. We've also got the questions tab. You can ask a question at any stage during today's discussion, so do use the questions tab to get in touch with us.
We are going to be talking about resilience today, measuring your organisation's cyber resilience. And this obviously comes off the back of one of the more interesting cyber security stories of recent times, the Colonial Pipeline hack. So we're going to be addressing that, gossiping about what went on there, and also addressing how Colonial Pipeline could have been a bit more resilient, should have been a bit more resilient, and can get back on its feet after hitting the news in such a catastrophic way.
We're also going to give away a teissTalk mug, by the way. Yeah, tea tastes better from a teissTalk mug. There's a song in my mind, and it's a song about American critical national infrastructure. And that's the only clue I'm going to give you: American critical national infrastructure. If you can guess the song, you win a teissTalk mug winging its way to you. I might be stunned if anybody gets that.
But listen, we have got some great panellists to talk about this resilience issue today. I will be hearing later from Martyn Booth, who's chief information security officer at Euromoney Institutional Investor, and Ben Lindgreen, who's head of Cyber Resilience at Pay.UK. If you don't know who Pay.UK are, stay tuned, because they are absolutely massive. As are Euromoney Institutional Investor. I want to welcome our first guest, who is a risk and resilience expert. Sandra Bell is founder and chief executive of The Business Resilience Company. Sandra, how are you doing?
I'm very well, thank you. And how are you?
I'm alright. Yes. Yes. Somebody's had a go at the American Critical National Infrastructure song and has guessed 'American Pipe' as in American Pie. Sorry, Richard, not correct. No mug for you.
Good try, though.
Worth a try, yes. Thank you for joining us. Where are you joining us from? Where's home for you?
I'm in the U.K. I'm in Basingstoke. Somebody has to be.
Nothing wrong with Basingstoke, very convenient location. Tell us a bit about yourself. There's an intriguing thing on your website that I found, that I thought was interesting. Apparently you identified the injury mechanism associated with explosive blast and patented a material system that now forms the basis of all modern bomb disposal equipment.
Gosh, yes. That's going back a bit, Geoff. Yes, absolutely. I started my career as a scientist; my PhD is in personnel blast protection. And so, as it says there, I worked as part of a multidisciplinary team to identify what happens to people when they encounter improvised explosive devices. Essentially we identified that it was a stress wave going through the body, not squashing things together, that ruptures the alveoli, and then you sort of drown in your own fluid. So what we did, for the acoustics and physics people in the audience, was put in a notch filter and got rid of the particular frequency. And it's used all over the place now. It's in the bomb disposal suits; it's the same physics behind submarine stealth, the same physics behind car airbags and indeed footballers' shinpads as well. So, yeah, I have taken a mould of Michael Owen's leg in my past.
Wow. That is a claim to fame. That's one of those things where you tell somebody three things about yourself and only one of them is true, and that's the one that's true. So if you can isolate the particular frequency and get rid of it, the whole thing gets mitigated?
Absolutely. Absolutely. If you think of your lungs, they're tiny, tiny, tiny. And your body is made out of what is essentially construction material, and any construction material is really, really good in compression but not so good in tension. So essentially you get a compressive stress wave going through; it hits the interface with the air, gets reflected back as a tensile wave, and then things just rip apart.
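(A signal-processing aside from the editor: the notch-filter idea Sandra describes, carving one damaging frequency out of a wave, can be sketched in code. This is purely an illustrative sketch using the standard RBJ biquad notch formula; the sample rate, frequency and Q value below are arbitrary choices, not values from her research.)

```python
import math

def biquad_notch(samples, fs, f0, q):
    """Apply a second-order IIR notch filter that attenuates
    frequency f0 (Hz) in a signal sampled at fs (Hz)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    # Coefficients from the RBJ audio-EQ cookbook notch design:
    # zeros sit on the unit circle at +/- w0, so that frequency is removed.
    b0, b1, b2 = 1.0, -2 * math.cos(w0), 1.0
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# A pure 440 Hz tone at an 8 kHz sample rate, notched at 440 Hz:
fs = 8000.0
tone = [math.sin(2 * math.pi * 440 * n / fs) for n in range(8000)]
filtered = biquad_notch(tone, fs, f0=440, q=5.0)
# Once the filter settles, the tone is almost entirely gone.
peak = max(abs(v) for v in filtered[4000:])
```

The same principle, designing a system so that one destructive frequency cannot pass through, is what makes the protective material work.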
Good God. Wow. So how did you get from that to risk and resilience? What's been your?
Well, I worked as a scientist because obviously I was in the counter-terror world, and then moved into a whole range of different areas. I was working as a technical director for innovation strategy in the UK's MoD when we had 9/11. And so I jumped out into the Whitehall world, into a think tank, to try to create a debate around homeland security and how we cohere as a nation, or nations, around that threat. So I worked in policy and strategy, then jumped into the operational sphere, not just advising on strategy but actually doing it for real. And that's where I am.
Interesting. We're going to be coming to the cyber point of view. But obviously in that career you'll have dealt with risk and resilience in various different areas. Are there things about the cyber aspect of risk and resilience that stand out compared to, for example, a terror attack or another kind of adversary?
Yeah, I think the biggest thing, as you said, we started with a pipeline. The biggest thing is that IT is an enabler for business, just like the pipeline is an enabler for society, providing oil, petrol and so on, and allowing things to happen. So you can't look at cyber attacks and cyber resilience just in the IT sphere; you've got to look at the wider whole. I think the pipeline attack is going to bring that back into stark focus. You can only go so far with putting security in and doing what I'd call engineering resilience, duplicating things, having high availability, removing single points of failure, all the things we talk about in this space. But actually, you're never going to stop it. It's going to happen. If somebody wants to get in, they will get in, regardless of what you put in place. So what you've also got to do is look at mitigating, or at least managing, those impacts. And it strikes me, reading some of the news over the past few days, that number one, the company didn't realise what those impacts could be, and then again, society as a whole was thinking it'll never happen, we've invested a lot in security, it's a super-duper, resilient IT system. Actually, just having a resilient IT system and super security is not cyber resilience. It's got to be the whole thing.
Interesting stuff. Oh, we've had another guess at the song. Sorry, Mark, 'The Times They Are a-Changin'' is not correct. We're looking for a song about critical national infrastructure; think about what critical national infrastructure can mean in the US. Anyway, I'd be surprised if anyone gets it. Sandra, it's interesting, this. I wonder about one of the issues here being what you talked about, this idea that it's when, not if. And I think that's an accepted aspect of cybercrime. The thing is, if you're an IT security manager, if you're a CISO, it's not a very good message to have: whatever you put in place, it's going to fail, so let's think beyond that to when it does fail. I wonder whether the cyber security community struggles with this resilience thing because it means acknowledging that you will fail, that cyber security will fail. That's the whole point of resilience: you've got to be resilient when you're attacked and when your defences fail. Do you think that's part of it? Do you see that in organisations, where the IT security guys go, look, we've got this covered, it's not going to happen?
Well, I do. I do see that defensiveness on the cyber security side of things, you're absolutely right. But actually, if you go to the CEO and the board and say, you know what, I'm managing the risk, that's what it's all about, it's just about managing risk, and it won't be 100 percent, they'll go, well, yeah, I'd worked that one out, because that's what I do. I'm running a business and I know about risk. What I know about risk is that you can defend against the negative impact, but also stack the odds in favour of a positive outcome. That's being resilient as an organisation, and balancing risk is what the CEO and the board do. So if you can get the CISO to go up and say, actually, I've taken it this far and this is the residual risk you've got in place, I think you'll find they're really pleasantly surprised. Everyone I've helped get into that boardroom to deliver that message has been thanked. They won't be thanked, however, if they leave it until the attack has happened and then say, well, it was going to happen anyway, et cetera. You've got to do it beforehand. But if you get in there beforehand, explain how far you've taken it, what measures you've got in place and what the residual risk is, the business has then got something to plan from. They can go, okay, we've got that worst case scenario, I can go and put something in place, I know what my responsibility is. At the moment they're left in the dark. They'll be pleasantly surprised.
It's interesting. So in your experience, when you have those conversations, when the chief executive or the risk managers in an organisation get to hear from the tech security folks, how willing are those senior risk people to get their hands dirty in understanding the tech? How much detail do they want when the tech security folks say, look, here's what we've done? How far does the conversation have to go?
If they really trust the tech security person, and that person goes up and says, this is what it is and this is the residual risk, and they're credible, then they don't actually need to go into the detail, because that's the expert. As a CEO you employ experts right across your business, and if you've got confidence in them, which you should have, then there's no need to get out what I like to call the long-handled screwdriver and dig into exactly what you've got. You've got to be brutally honest as the infosec person and say, this is what I've got in place, therefore the residual risk is this and the impact on the business is that. This could happen; over to you. So, for example, a lot of organisations have gone down the high availability route for their IT. You've got automatic failover between various data centres, but in order to hit the recovery time objectives or get to the uptime, they've flattened that infrastructure. That's fabulous for fault tolerance and for small, isolated incidents, but what it does is increase the chance of a big catastrophic failure. It's going to be low probability, but it is going to be high impact. And you just go up and say, well, actually, this is what I've done over here, and the business goes, yeah, that's what I want. Now you've told me I need to be prepared for catastrophe. OK, I'll take it from here. So get in there early and start having those conversations. Whenever I go into an organisation and work with them, I always like to point out to everybody: look at your payslip. We've all got the same logo at the top of it. We're on the same team. It's not a them and us; we're all going for the same aim.
Yeah, let's talk about testing all of this, because part of resilience is simulating an attack. How many of those exercises have you been involved with, and how successful are those sort of desktop tests?
It's interesting. You've got to be realistic in testing. So in a cyber scenario, you've got to simulate the stress that people are going to be under. I'm not a great fan of all the super-duper bells and whistles, the full realistic simulations and so on, because what you're actually trying to do is simulate the stress and the decision making within the business. As long as you can create that stress, there are all sorts of things you can play with to create those decision-making issues in the boardroom. But it is about people rehearsing some of those really big decisions: do I pay the ransom or not? Do I turn off the IT or not? Do I recover over to somewhere else? Do I communicate this to the stakeholders? If you haven't already thought those through, they become really difficult in a stressful situation. And I often find organisations, especially at board level, have put a lot of personal development into their business-as-usual behaviours, but they are not aware of how those translate into a stressful situation. I heard a fabulous quote a couple of days ago on a podcast: if you put a carrot into hot water, it goes soft; if you put an egg in hot water, it goes hard; if you put coffee into hot water, it changes the chemical composition. People are the same. If you put them in hot water, they will do one of those different things and react in those different ways. And you need to know beforehand which it is, so that you can develop yourself accordingly. For example, I know that under stress I can be quite directive. In a stressful situation, if I'm a crisis leader, I have to hold myself back from just telling people what to do, because that doesn't always work.
Interesting. Interesting. By the way, Jenny, my co-host, has had a pop at the infrastructure song: 'Route 66', down the highway. No, not quite what we're looking at. Infrastructure, telecoms infrastructure, maybe. Sandra, that's really interesting. I wonder how.
'Hanging on the telephone'?
Oh, it's close. It's close. That might be, it's not there, but it's close.
I'm showing my age, aren't I now?
Now Jenny's got it: 'Wichita Lineman' was the song I was thinking of. But I think Jenny's probably already got a teiss mug, so maybe we'll save that one for next time. Sandra, I'm interested in this. How do you simulate stress? Give us some tips: how do you put people under stress, in the kindest possible way of course, in those simulations?
OK, so you've got two types of stress when you're in a crisis situation. You've got job stress, which is about the environment you're in, and you create that by providing richness to the scenario you're presenting. You say this has happened, that's happened, and make it in their minds as realistic as possible. You don't actually need to have bombs going off in the background, smoke coming in and people running around like headless chickens; you just need to present one thing after another after another, and then you get people into that heightened discomfort. And then you've got decision stress. Decision stress is about actually making the decision, obviously, it's in the title. What you tend to find is that the higher the stakes of the decision, the more psychological discomfort people are in, and when you're making crisis decisions at a strategic level, the stakes are really high indeed. When you're doing simulations you often ask for lots and lots of little decisions, because you've got time to record what the responses are. But if you're doing a test, no, you need to hit them with some really big ones really quickly. What you find is that people get into a mode where they will simply make a decision, any old decision, it doesn't actually matter which, just to relieve that psychological discomfort. So it's about simulating that psychological discomfort and taking them up to that level until you start encountering the behaviours you potentially don't want, and then you go: time out. What have we done here?
OK, let's force a proper decision-making model and actually take your time when you're doing it. Don't do that knee-jerk reaction. Yes, it's uncomfortable. Get used to that discomfort, and you get acclimatised to it like anything else.
And presumably you also get to learn how the other people in the organisation behave. For example, if you know you tend to take the lead, you know that's how that person will respond, how your team members will respond.
Interesting. I'm interested by this. I wonder if we could do a poll; we have our teiss elf who works away in the background. The question would be: are cyber simulations genuinely effective, yes or no? Let's have a poll on that and get your views; I'm quite interested to see where people stand on this kind of thing. But while people are mulling that, Sandra, let's have a think about this Colonial Pipeline attack. It's obviously hit the news. Here's what we know so far: Colonial Pipeline got hit by a ransomware gang, we think the Darkside gang. The ransomware takes effect, the company is then forced to take its pipeline offline, has to get permission from the government to ship the petrol by tanker, and there are now queues at petrol stations. The thing I find remarkable about this is that I read these security companies' reports, and every now and again they map out a hypothetical scenario of what could happen, people queuing at the petrol stations, and now it's actually happening. That is remarkable, an absolutely remarkable attack. From a resilience point of view, what do you make of it?
Well, as I say, it strikes me that people have had a bit of an over-reliance on 'it won't happen'. They haven't thought through what the worst case scenario is; they've been lulled into a false sense of security because the security is there. The other thing that strikes me is that there's not much coordination. When you're responding to something with very wide-scale impact, affecting lots of people, from emergency services to businesses and so on, there needs to be coordination of that. At the moment everybody's running around doing the best they can, looking after themselves, people filling up their cars even when they're being told not to, tankers coming in. It really shows that you do need to coordinate these things. Number one, you need to think through what the worst case scenario actually is, and then just walk it through. You're never going to encounter exactly it; whatever scenario you think up, reality will be slightly different. But it gives you a framework to move forward with.
Interesting stuff. On our poll, by the way, on whether cyber simulations are effective: seventy-three percent said maybe sometimes, twenty-seven percent said yes, always, but nobody said never effective. So there's some value in those. It's interesting, Sandra, with the Colonial Pipeline thing, that nobody thought ahead to the point where there might be fuel shortages and people might panic. That seems to be the bit that wasn't in the plan. But listen, let's bring in the rest of our panellists now. I'm delighted to introduce Martyn Booth, who's CISO at Euromoney Institutional Investor, and also to welcome Ben Lindgreen, who's head of Cyber Resilience at Pay.UK. They're both going to be joining us on the teissTalk stage imminently, if indeed their Internet connections hold up. Ben, good to see you, you're first up. Tell us about Pay.UK then. What's the business doing? How big is it?
So we run the UK's national retail payment systems, so Faster Payments and cheque and credit clearing with the Image Clearing Service. We're part of the UK's critical national infrastructure, and altogether in 2019 we processed over nine billion payments, which had a value of over seven and a half trillion pounds.
Astonishing. So basically, when you go into a shop and you put your card into the machine, is that part of that?
So those payments are typically through Visa, Mastercard, American Express; they're card payments. We do the pipes for payments like direct debits, or when you make a bill payment through your online banking. We're the infrastructure that enables the individual banks to connect to each other and transfer that money from one party to another.
Right, a really critical part of the national infrastructure in that case. We're very glad you could spare some time for us. Martyn, from Euromoney Institutional Investor: again, tell us a bit about Euromoney. What's the size of the business and what are its various branches?
Sure. First things first, though: I'm on Virgin Media, so apologies if I drop off. My internet for the last three weeks has been poor on every call I've been on.
Looking good so far, Martyn. It's fine.
So, yeah, Euromoney Institutional Investor is a complicated business, really. It's a B2B data business now; it's kind of moved away from publishing. We've got about 80 businesses or so that make up the group, and as the group's CISO I look after all of them. We've got businesses that vary from pricing businesses, so if you wanted to know how much to pay for tungsten steel in Australia on any given day, they'll tell you that, through to second-hand aircraft. We've got an asset management strategy business that does publications around where to put money. And then we've got a large events division; we run about six hundred events a year from it. So quite a mixed business, really.
Yeah, definitely. So there's a sort of Bloomberg-esque aspect to it, where Bloomberg is a publisher but also has a massively important business information side. It sounds like that's roughly the territory you're in.
One of our competitors is Informa. We've got quite a few.
Good, good. It's interesting. After this Colonial Pipeline attack, the FBI and CISA, the Cybersecurity and Infrastructure Security Agency in the US, put out some updates about this; I think there's a link in the chat there. What's really interesting, if you look at that link, is that you can sort of see what happened to Colonial Pipeline from the stuff that CISA is warning about. But what I found really interesting was that the remediation was things like patching, not allowing your network to accept Tor connections, blocking known bad addresses, looking out for phishing emails. This all seems like very basic stuff. I mean, Ben, is there anything from the Colonial Pipeline story that you've taken that's new?
So I think the first thing I'd say is that we frequently talk about security basics as if they're easy, and they're not, particularly in organisations like Colonial. I think they've merged a number of times over the last 30 or 40 years, so there are probably disparate systems connected together. Getting those security foundations right is very difficult, and an attacker only needs to find one weakness to exploit. So no, from the things that I've read, I don't think there was anything particularly new or innovative in the attack. Indeed, I read one article that said that in 2017/18 they had an appallingly bad audit that showed fundamental weaknesses in their cyber security. And I don't think they were necessarily targeted by Darkside; from some of the things I read, I think they were just an opportunistic target. That's always the challenge for organisations. People frequently ask, why would anyone want to attack us? Because you're on the Internet and somebody found a weakness they could exploit, I think, is the answer.
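(An editorial aside: one of the "basic" mitigations Geoff mentions, blocking known bad addresses, is conceptually simple even if it's operationally hard at Colonial's scale. A minimal sketch of the idea, with invented addresses and a hypothetical log format, might look like this.)

```python
# Hypothetical sketch: sort connection-log lines by whether the source
# IP is on a known-bad list. Addresses and log format are invented.
KNOWN_BAD = {"203.0.113.7", "198.51.100.23"}  # e.g. published Tor exit nodes

def filter_connections(log_lines):
    """Split connection-log lines into (allowed, blocked) by source IP.

    Each line is assumed to look like 'SRC_IP DEST_IP PORT'.
    """
    allowed, blocked = [], []
    for line in log_lines:
        src = line.split()[0]
        (blocked if src in KNOWN_BAD else allowed).append(line)
    return allowed, blocked

log = [
    "192.0.2.10 10.0.0.5 443",
    "203.0.113.7 10.0.0.5 22",      # known-bad source: should be blocked
    "198.51.100.23 10.0.0.8 3389",  # known-bad source: should be blocked
]
allowed, blocked = filter_connections(log)
```

The hard part in a real merged-many-times estate, as Ben notes, isn't the lookup; it's knowing every ingress point the lookup has to cover and keeping the bad-address list current.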
Well, that's the thing, isn't it? The scary thing about ransomware is that if you've got money and you can be forced to hand over money, you are a target. We've had a comment: Ben has said it's the basis for war games in the military, purple team exercises, and cyber contingency planning in general. As Eisenhower said, plans are of no particular value, but planning is indispensable: the actual activity of planning. It's interesting, Sandra, that goes back to the idea of going through the exercise regardless of what plan you come out with. The fact that you've been through the planning process, I suppose, is the key thing.
Yes, I like the Eisenhower one, but the other one I quite like is the Mike Tyson one: everyone has a plan until they get punched in the face. It is all about planning. Plans for me are simply the audit trail of the planning process, and a nice, convenient place to put information you might want to find on the day. You don't want to be searching around for the name or telephone number of the plumber when your house is flooded, et cetera. But it absolutely is the planning. You've got to think it through: what would I do in this situation? You're never going to encounter exactly that situation. The other thing I often find about cyber exercises, and not just cyber exercises but exercises in general, is that they tend to only look at one scenario. And I think anybody who's been on the operational side for any length of time knows that disasters are like buses: they don't come one at a time like they're meant to, they all come at once. I've been on the receiving end of a DDoS attack on the same day as a power outage, et cetera. That's just life. Sometimes you have a really bad day at the office, and you've got to be able to pick your way through what is and isn't important, that prioritising.
Indeed. Yeah, thanks for that. We've had a couple of comments, by the way. Federico says ransomware gangs do research on targets before any attack, which is almost certainly true, although in this case, as far as I'm aware, the Darkside gang just made the ransomware available; it's not necessarily them who decided to target Colonial. Also Simon Yakan, I think that's the right way to pronounce it, says: like The Lazarus Heist, which is my podcast, lots of planning. A very good podcast. Thanks, Simon.
So with regards to the research, what I meant was they look for a vulnerability in the first instance and then they do the research on the organisation. I wouldn't have said Colonial Pipeline would have been particularly high on their list of initial targets if they were pulling together a list beforehand; indeed, they apologised for the impact on society as a result of hacking them. But they do their research once they get that initial vector. As they say themselves, they're in the business of making money, so they will have seen that as an oil firm Colonial would be cash rich. Once they'd identified weaknesses they could exploit, they attacked. Rather than the oil firm being their initial target, I think they were just looking for a weakness to exploit.
Interesting stuff. Martyn, can we come to you on this? I'm interested. I mean, you're a busy CISO. How often do you get the chance to unplug from that and do an exercise or think about resilience? It strikes me as one of those things that you have to clear time in your diary to do. Is that right?
Obviously, it's something that we specifically clear time in the diary to do. We have to run these exercises on at least an annual basis; we probably do twice-yearly exercises around incident response in areas that we're worried about, and ransomware was the last one. There are a number of reasons to do that: so that the people who run things operationally are ready and know the kinds of things that are going to come up, but also to make sure that the board, because we run them with the board as well, are aware of the types of issues we're managing and the part they play in that. From the conversations earlier, I'd say I've had that conversation with my board about how big a target we are. Euromoney is a half a billion pound firm, so it's not a small firm, but because it's a B2B firm and not a massive household name, a lot of the board members, when I joined the business, were under the impression that we weren't going to be heavily targeted by cyber attacks. The defences that we have in place, our web application firewalls and so on, would indicate that isn't the case, and I'd agree with Ben on that. The sophistication of attacks now, the maturity of the attack model, is based around scanning IP blocks: scanning everything out there, finding what comes back as a potential weakness and then going after it. I think attackers have worked out that if you start with the research, you can research something to the nth degree, waste a lot of time and money on it, eventually breach it and find nothing there. People have found it easier to find the weaknesses first, work their way in, and then see what's inside. Maybe there's nothing there, but they haven't expended much effort to find that out.
Shoot first and work out who you shot later on.
I think that's the way they do it. It is certainly the case with us. I mean, we see a phenomenal number of people attempting to get on our network, and you wouldn't think that from the kind of size of brand that we are.
Martyn, that's really interesting. So you went through a sort of exercise and ransomware was your focus. If you had to pull out a couple of the key lessons from that, what were the key ones that you came away thinking, right, that's exactly the key thing we have to do?
Well, in terms of what to do on ransomware, or in terms of what to do in managing that process?
Just in terms of having been through the exercise, either preventive measures or remedial measures once it's happened. After the exercise, what were the things you realised were the key things?
Well, I think that the recovery from ransomware should arguably be relatively straightforward. Right. You want to stop the spread of a ransomware attack, and then you need to recover from backups. We've had a few ransomware incidents, the same as most companies. Luckily they haven't been massive; I think we lost a file share for a couple of days, which was significant for the business involved, but in the grand scheme of things could've been a lot worse. So I think the outcome of those, frankly, is preparation and control assessment. Are there things you can do just to stop ransomware? Yes. Are there things that you can do to prevent the spread of ransomware? Yes. And you need to make sure that those component parts are being put in place appropriately, that you have assurance that they're working correctly and that they cover the right areas. So we have been doing that. I think the biggest thing that's come out of it for us is that we're now deploying endpoint detection and response capabilities, so that if an endpoint becomes infected, we have a much improved ability to lock that down automatically before it spreads to other devices through the network. There's a belief that people have got everything in separate network segments and everything's very locked down; most businesses don't operate like that. We certainly don't. We really need to make sure that a compromised device is not able to start spreading through the network. And I think there are technologies that can help with that, which are sometimes AI-powered and sometimes backed by analysts, by quality analysts.
I was going to ask about that, Martin. The fact that Euromoney is so, as you said, is lots of lots of different businesses. I wonder whether that made the task easier in that you can segment those businesses more effectively. From the sounds of it, that's not the case. It's not that easy.
No, it's a nightmare. We integrate into a larger central cloud tenant, so we have some ability to control those businesses because they're all in the same place. But the businesses we acquire sit on their own infrastructure, and we've made three acquisitions already this year, which means doing due diligence and then integrating. So no, it makes it more complicated. We're effectively running separate businesses from a small central team.
Picking up on Martyn's point in terms of the exercising and lessons learnt. Everybody within an organisation is going to have to do stuff in order to solve it. So Martyn, from a CISO perspective, will be doing stuff on the security side. There'll be IT people doing stuff, there'll be business operational people doing stuff, and maybe executives doing stuff, and everybody has got to work together. And what I often find is that people haven't thought through who is leading the response to one of these things. Clearly, if it's a large scale cyber attack, it needs to be led from the board, but they need to be aware of what IT have actually got to do. Martyn talks about how it should be straightforward coming back from backups, one or two days. But does the business know it's one or two days? Because it's the board's job to provide the top cover, to allow IT to do that restore from backup without constantly having the telephone go with somebody from the business asking: is it there yet? Why is it not back? Can I have it back? You've got to set those expectations up early and manage them.
Ben could I come to you on this. I mean, obviously, as you say, you're critical national infrastructure. Just talk us through the kind of obligations you're under from a regulatory point of view in terms of resilience. What are you expected to do and how often?
So we're regulated by the Bank of England's financial markets infrastructure division. And recently they've produced a joint supervisory approach with the PRA and with the FCA on organisational resilience. And I think it's a really good approach that they're suggesting for organisations to look at, because it takes away a little from considering it just as a cyber issue, and I don't think it is. It's about your whole organisation. It's about understanding what your important business services are and, more importantly, how long you can be without them. So the concept of impact tolerances. You then ensure that the controls that you have, and the mechanisms for recovery and response, reflect and hopefully adhere to those impact tolerances. Going back to the Colonial Pipeline issue, what that incident suggested was that they hadn't really considered the impact that an attack would cause on their delivery of oil to the Eastern Seaboard, which has caused all of the subsequent fallout. I was surprised to learn that the pipeline industry, even though it's critical national infrastructure in the US, isn't actually formally regulated. So they haven't got a lot of pressure from regulators to ensure that these fundamentals are looked at. And I think our regulators' approach in financial services is a really good way to make sure that organisations consider it holistically. It's not just a case of: if our systems have been up all the time, then they're resilient. Frankly, they may just have been well designed in the first place, but all of the people that were involved in those engineering design decisions may have moved on. What happens if you need to bring something up from bare tin? Have you tested that? And going on to the points that Sandra makes, the other thing I'd say is that frequently when we test, we test in perfect circumstances.
Everyone in your gold team for incident response, you always make sure that they're in and everyone knows their places. Well, that doesn't happen in the real world. You're almost guaranteed to lose a couple of those people to holidays, or to being on a plane, travelling, in normal times. And organisations need to test those things as well.
This goes back to Sandra's point about everything happening at once, Ben. How far do you go to prepare for that kind of black swan incident, where your data centre's hit by a plane and you're hit by ransomware and the CISO has fallen ill? You know, how far do you go to prepare for the almost outlandish?
So for organisations like ourselves, the requirement from the Bank of England is extreme but plausible scenarios, and I think any of those that you've just mentioned are extreme but plausible scenarios. The PRA's view is that the scenarios should be severe but plausible. But I think it is valid. I mean, what the global pandemic over the last year has shown is that these incidents do occur. Arguably a black swan, ransomware like we saw with NotPetya taking out practically every single piece of technology that you have: these things happen frequently. And I'd argue that if, like ourselves, you've got to consider extreme but plausible scenarios, then if it's happened somewhere else in the world, that's extreme but plausible. So you need to consider these things.
Those high level things. If you've just got a little incident, you know, a little outage or an operational outage, that's fine. That can be dealt with from the operational perspective. That's not a crisis. If your building's burnt down, that shouldn't be a crisis; that's just an operational issue that you should have thought through and should have managed. It's those really big ones that the board should be thinking about. That's their job: to think and prepare the organisation to do that. Picking up on Ben's point in terms of the regulation, for me that regulation is very much like an MOT on your car. It's saying to financial organisations that you need to be resilient and you can be resilient. So there's the need to make sure that you don't harm customers or the financial markets; that's one part. Then there's the board's risk appetite and its desire to keep growing despite something going on, and you can set that risk appetite wherever you like. But what the PRA and the FCA are asking us to do is demonstrate that lower level once a year through self-assessment, for the really important business services, i.e. those whose failure would harm people, and just demonstrate that you've got it. So it's just like doing the braking test on a car once a year. It's an MOT. You can put your appetite wherever you like.
Interesting stuff. Thanks, Sandra. The MOT analogy is an interesting one. Some interesting discussion in the chat. Tony Pamphlet is asking: what's the point of planning your response to a cyber incident if ill-prepared board members are the ones to suddenly lead it? Sandra says you have to prepare them. Martyn says: in my experience, boards do not lead responses, senior operational resources do, but boards need to understand the process and the potential damage to the business while recovery is being enacted. Interesting stuff. Can I come back to you, Ben, just quickly on that? I mean, one thing about your business is, as you say, the amount of transactions you're processing is amazing. And I think about the Colonial Pipeline attack and the people queuing at gas stations in the US. To what extent does your planning go towards telling the public: look, don't worry, your direct debit will go through? That public fallout of sudden panic, is that part of the resilience plan that you do?
Absolutely. Communications has to be a key element of it. And because of the nature of our organisation and the service that we provide, it's not just communications within ourselves, but also with all of the participants, the banks and the building societies that are actually sending these transactions. So we need to ensure that people are made aware and that the communications process is robust. And again, that goes back to testing. I think also, going back to the point that Tony made about ill-prepped board members being the ones that suddenly lead it, sometimes that's what happens in an emergency. I think Sandra touched on that really well earlier. You can have a very clear response plan. You can have a gold, silver, bronze team. But sometimes what you get when a major crisis does happen is that individual board members or individual executives think that they suddenly need to be part of it, and all of those well designed plans and preparations get thrown out of the window. And that's something that organisations really need to test themselves on, to make sure that they don't fall into that trap. I did hear of one organisation doing tests where one of their senior executives was always the type that felt he needed to be involved in every decision. So when they were doing one of the tests, they had it that he was on a plane travelling back from India for the whole of the test, so they had to have other people making those decisions. That's how you test your resilience: by making sure that you don't have key individuals that are always at the centre of things, because, frankly, some of these incidents can last for multiple days, if not multiple weeks, and you'll very quickly get burnout of those key people.
Interesting. Martyn, could I come to you on this? There has been an interesting back and forth in the chat. Rich is saying the best responses are where the board members take active involvement and engage with the process and don't delegate. Martyn, there's a balance here, isn't there? On the one hand, you're saying that's great, board members engaging. On the other hand, you're saying, look, the operational people just need to get on with this. How easy is it to strike that balance?
Well, I think incident response, and managing incident response, is a skill that not everyone has. I think your security leader and probably your IT leader should have that skill and be able to manage major incidents. For a critical incident, our CEO, who is certainly quite hands-on, would want to be heavily involved in the management of the incident. And we have formal escalation paths so that, for instance, he gets informed, and he's quite involved with these processes. He sometimes sits in on the incident response rooms, in the war room. So I think it varies, as long as you've got somebody that is the delegated leader for critical incidents, or a number of people, ideally dedicated leaders, and they're involving the right people through the escalation paths. I think that's the best that you can really do to strike that balance.
I think for me, it's the difference between an incident, an emergency and a crisis. A lot of the time we're talking about operational incidents, and absolutely right, operational incidents should be managed by the operational people with your plans, et cetera. But a crisis is when that has got to the point where you're threatening the strategic objectives of the organisation, and that's an awful lot higher. That's when you do need the input from the board to say, well, actually, it's threatening the whole strategy or existence of the organisation. I think what we're finding too often is that things are being categorised as crises when actually they're not; they're just major incidents and are best managed at the operational end. Because they've been categorised as crises, the board go, oh, it must threaten the strategic nature of the organisation, I must go and do this. And then they find themselves trying to manage the operational bit, which they've got experts to do. So I think if we can be very clear in our terminology and those escalation paths, then I tend to find that things calm down quite a bit.
Interesting stuff. We're pushed for time, so we're going to have to bring things to a close. Final comment from Neil King: the other area you need to plan for is restraining over-helpful employees who are too customer-focused, which is an interesting point. Thanks for that, Neil. My final point on the Colonial Pipeline thing: one really interesting thing in that CISA report is that they don't think there was any intrusion into the operational technology side of Colonial Pipeline, so why did they shut the pipeline down? I think the distinction between what's IT and what's OT is now out the window, because if OT isn't compromised but you've still shut the pipeline down, well, what are we really talking about anyway? Final thought from me there. I want to thank our guests today. It's been a fantastic panel. Thank you, folks, for your time. Sandra Bell, risk and resilience expert at The Business Resilience Company, Martyn Booth, CISO at Euromoney Institutional Investor, and Ben Lindgreen, head of cyber resilience at Pay.UK.
Next week, 4:00 p.m. on Tuesday, my co-host Jenny is doing the future role of the CISO, which should be really interesting, particularly for a lot of our attendees. Thanks to all of you folks. Remember, we're live 10:00 a.m. Thursday, 4:00 p.m. Tuesday. Have a great rest of the week. We'll see you again soon.