AI Risks: Net-New or Amplifier?

Is AI a net-new risk or an amplification of existing risk? A year ago, most of the CSOs I talked to said the latter (“it just amplifies our existing data governance issues”). Now, most say “both”…the amplification issues persist, but things like almost-undetectable deepfakes are net-new challenges we’ve not seen before.

But the question itself is wrong. AI is not a “risk” at all; it’s a technology. The actual risks are things like unauthorised access to, or disclosure of, sensitive data. So real risk exists, just not necessarily where we think it does. This matters because what we misdiagnose, we mistreat.

Remember that risk is not a thing; it’s an ecosystem of interconnected (and often interdependent) elements. Threats and vulnerabilities, for example, are two building blocks of risk, but they are not risks in and of themselves. If/when AGI lands, AI could rightly be described as a threat, in the same way that human attackers are threats. For now, the threats are human.

AI itself is not a vulnerability, either…things like inadequately protected training data, or improperly validated inputs and outputs, are the actual vulnerabilities being exploited. Lumping all of this together as “AI risk” is convenient, but it obscures the real problem, and who is responsible for fixing it.

Our corporate risk registers are littered with not-risks: threats, vulnerabilities, audit findings, non-compliances that make us nervous, things we don’t like and want to keep an eye on, and so on. I dare you to revisit your risk register and re-shape it around the actual risks…here’s what you will find if you do: 1) it gets a lot shorter, because 2) many of the things being tracked separately are in fact the same underlying risk. But now you know the actual problems you’re trying to solve.
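
To make that consolidation concrete, here’s a minimal sketch. The entries, field names and groupings are entirely hypothetical (not drawn from any real register or GRC tool); the point is simply that several “not-risks” collapse into a much shorter list of underlying risks.

```python
# Illustrative sketch only: hypothetical register entries, grouped by the
# underlying risk each one actually points at.
from collections import defaultdict

register = [
    {"item": "Phishing emails targeting staff",     "type": "threat",        "underlying_risk": "Unauthorised access to sensitive data"},
    {"item": "Unpatched internet-facing servers",   "type": "vulnerability", "underlying_risk": "Unauthorised access to sensitive data"},
    {"item": "Audit finding: stale admin accounts", "type": "finding",       "underlying_risk": "Unauthorised access to sensitive data"},
    {"item": "Deepfake-enabled payment fraud",      "type": "threat",        "underlying_risk": "Fraudulent disbursement of funds"},
    {"item": "Single-approver wire transfers",      "type": "vulnerability", "underlying_risk": "Fraudulent disbursement of funds"},
]

# Re-shape the register around the actual risks: the list gets shorter, and
# each risk carries the threats/vulnerabilities/findings that previously sat
# in the register as separate entries.
by_risk = defaultdict(list)
for entry in register:
    by_risk[entry["underlying_risk"]].append(entry["item"])

for risk, contributors in by_risk.items():
    print(f"RISK: {risk}")
    for item in contributors:
        print(f"  - {item}")
```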

The Arup deepfake scam, often held up as an example of “AI risk”, is a case in point. Without doubt, the use of convincing AI-generated deepfake video was critical to the scam’s success. But look deeper…the fact that a finance employee was able to wire £20m without any additional layers of approval or controls was the actual vulnerability exploited. There were cultural and human vulnerabilities too. But good luck finding anyone focusing on that element…the AI FUD gets far more clicks.

The lesson for Arup (and probably for all of us): worry less about AI deepfakes and more about why the financial controls were deficient. If we don’t, we will buy the latest deepfake detection tools and still get owned, because we never fixed the actual problem. It’s the same reason security teams should worry less about users clicking phishing links (which we cannot prevent) and more about why a single mouse click can precipitate an existential event for the organisation. That is a failure of adequate, layered security controls and resilient architecture, not of attackers or naughty users.
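
For illustration, here’s a minimal sketch of what “additional layers of approval” might look like. The threshold, roles, checks and function names are all invented for the example (this is not Arup’s actual process); the point is that no single convincing instruction, deepfake or otherwise, should be enough on its own to release a large payment.

```python
# Illustrative sketch only: a hypothetical layered payment control. The
# absence of checks like these, not the deepfake itself, is the vulnerability.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount_gbp: float
    requested_by: str
    approved_by: list            # names of approvers recorded so far
    verified_out_of_band: bool   # e.g. callback to the requester on a known number

DUAL_APPROVAL_THRESHOLD_GBP = 50_000

def authorise(payment: PaymentRequest) -> bool:
    """Large payments require independent approvers and out-of-band verification."""
    if payment.amount_gbp >= DUAL_APPROVAL_THRESHOLD_GBP:
        independent = [a for a in payment.approved_by if a != payment.requested_by]
        if len(independent) < 2:
            return False
        if not payment.verified_out_of_band:
            return False
    return True

# A £20m transfer raised on the strength of one video call fails both checks.
print(authorise(PaymentRequest(20_000_000, "finance_clerk", ["finance_clerk"], False)))  # False
```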

So track the risks. Just make sure that the risks are the actual risks, so that the problems you fix are the actual problems. There’s probably a great AI tool that will help you with that.
