Is AI the answer to cyber complacency?

Ade Taylor, head of security services at Roc Technologies, urges focus and a renewed approach to cyber posture

Image: Ade Taylor, Roc Technologies

October means another Cyber Security Awareness Month (CSAM) and, while I wholeheartedly support the intent, I can’t help thinking that this year’s theme (Secure Our World) leaves something to be desired.

We’re being urged to:

- use strong passwords (and a password manager)
- turn on multi-factor authentication (MFA)
- keep our software up to date

All very worthy, but all very 2010. And, for the most part, still not proactively observed by enough of us.

The problem here is critical mass. Herd immunity applies to malware as it does to vaccines: when we protect as many machines as possible from malware, it makes it harder for threats to propagate. Yet we’re still not at the point where enough people take enough measures to secure enough machines to achieve a minimal likelihood of malware spread.

But why?

It’s not just one thing. There are lots of reasons for this, but for me the absolute killer is complacency. The majority of users know very well what they should do – we’ve all done the hours of mandatory training, after all – but for whatever reason they don’t. And it’s through that complacency that malware spreads.

Let’s accept, then, for the sake of argument, that another cyber-awareness month on the themes of passwords, MFA and updates is not going to be given the attention it deserves. Given this context, what can we, as a community of cyber-security professionals, do about it?

The ball is back in our court. As an industry we’ve been talking about user awareness and insider risk for literally decades. It may be that we’ve finally made as much progress with that approach, more or less, as we’re ever likely to. What next?

It’s taken me nearly half this blog to mention AI, but it is going to become a big part of the solution. Next-Gen (sorry) continual, task-oriented “nudge” training is unobtrusive, doesn’t break the user experience, is easy to understand and, importantly, is in context. We’re talking, then, about user support – continually helping people to do their job while remaining compliant with best practice and corporate policy. AI gives us the ability to build tools that understand the context of a user’s actions, not just a list of good and bad stuff with which to compare those actions.
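To make that concrete, here’s a minimal sketch of the difference between a static good/bad list and a context-aware nudge. It is illustrative only: the names, labels and the single rule below are invented for the example, not a description of any real product or API.

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical illustration only: the names, labels and rule below are
    # invented to show the shape of a context-aware nudge, not a real API.

    @dataclass
    class UserAction:
        activity: str          # e.g. "share_file" or "send_email"
        data_label: str        # e.g. "public", "internal", "confidential"
        recipient_domain: str  # where the data is heading
        app: str               # the application the user is working in

    TRUSTED_DOMAINS = {"ourcompany.example"}  # assumed corporate domain

    def nudge_for(action: UserAction) -> Optional[str]:
        """Return a short, in-context nudge, or None if nothing looks risky."""
        going_external = action.recipient_domain not in TRUSTED_DOMAINS
        sensitive = action.data_label in {"internal", "confidential"}

        if going_external and sensitive:
            # Advice plus a safe route, rather than a bare allow/deny verdict.
            return (
                f"This {action.data_label} file is heading outside the company. "
                "If the recipient genuinely needs it, share a managed link with "
                "an expiry date rather than attaching the document."
            )
        return None

    if __name__ == "__main__":
        action = UserAction("share_file", "confidential", "gmail.com", "Excel")
        print(nudge_for(action) or "Carry on - nothing risky here.")

The point is the shape of the output: instead of allow or deny, the user gets advice tied to what they were actually trying to do, in the moment they were trying to do it.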

It’s much more powerful to intervene at the precise moment a user is doing something risky, regardless of the application they’re using, and offer some sage words of advice, followed by a suggestion of how to do the task safely, than it is to speculatively pop a giant window in the middle of an in-progress spreadsheet saying, “looks like you’re about to do something bad, probably, so you need to have a think about things and see if you agree.”

More commonly the message says, “That’s probably a bad thing to send, so you can’t”, with a suggestion that you phone the helpdesk to find out why. It’s a deeply frustrating user experience and – as we all know – stopping people from doing something without a clear explanation of why, and of how to do it properly, forces them to figure out their own way of sharing “secret_document.docx” with Bill from catering, which leads quickly to a listing on a stolen-password marketplace.
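Even where a hard block is genuinely unavoidable, the same principle applies: the message should carry the reason and the safe route, not an instruction to ring the helpdesk. A tiny, purely illustrative sketch:

    # Purely illustrative: a block that still explains itself and points at
    # the safe way to finish the task, instead of "you can't - ring the helpdesk".

    def block_message(reason: str, safe_alternative: str) -> str:
        return (
            f"We've paused this because {reason}. "
            f"To finish the task safely: {safe_alternative}"
        )

    print(block_message(
        reason="the attachment is marked confidential and the recipient is external",
        safe_alternative="share it as a view-only managed link with an expiry date.",
    ))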

One last thing. Phishing.

Phishing campaigns researched with AI tools and written with generative AI (to say nothing of the wider social-engineering opportunities offered by deepfake video and voice) are very hard for both systems and humans to detect. Our corporate humans should still be shown the ways in which they might guess correctly and not click on the link, but ultimately it’s down to the cyber-security industry to create the tools that relieve users of that responsibility and let them get on with their day jobs.