Many organisations focus on quantity over quality, with CTI teams generating more reports rather than building an understanding of the threats relevant to their environment.
Cyber-criminals are using AI to create convincing fake websites, making it ever harder to tell the difference between a genuine offer and a malicious trap.
teissTalk host Geoff White was joined by Hans-Peter Bauer, Senior VP EMEA Cybersecurity at BlackBerry; Neil King, European Business & Information Security Specialist at Canon Europe; and Robin Lennon Bylenga, Human Factors in Information Security expert, co-founder of the global Human Factors cybersecurity council and HFACS-Cyber specialist.
Ransomware attacks targeting the education sector are on the rise: 57% of ransomware incidents reported to the FBI in August and September 2021 involved K-12 schools, compared with 28% of incidents from January through July.
More than three-quarters of cyber security decision-makers believe that automation is important, according to research from ThreatQuotient. However, many organisations are struggling to move ahead with automation, and this was certainly the case with attendees at a recent virtual Teiss event.
Executives are under pressure to roll out AI, but without proper governance, secure systems, and cultural safeguards, the rush to deploy risks doing more harm than good. IT teams can deploy tools to safeguard networks and lock down laptops, but human-centred attacks and behavioural risks are a trickier problem. With the rise of AI tools, said James Moore, CEO of behavioural security company CultureAI, the human layer of security is becoming harder to manage than ever.