This article is reactionary security/media FUD at its absolute worst. A quick summary: researchers found that GPT-4 can exploit real-world security vulnerabilities using CVE advisories.
Here’s why it’s FUD you should ignore: there’s no such thing as a generically critical vulnerability. There’s only a vulnerability that is critical because it can be exploited in your environment, specifically. Sure, the application of AI is novel, but it doesn’t change the mechanics of risk: if a vulnerability is not exploitable in your environment, there is no risk, no matter what the CVSS score says.
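To make the point concrete, here is a toy triage rule in Python. All the names (`Vuln`, `exploitable_here`, `prioritise`) are hypothetical illustrations, not any real tool’s API: vulnerabilities that have no exploitable path in your environment are filtered out entirely, and only then does CVSS severity matter as a tiebreaker.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float
    # Is there a reachable, unmitigated attack path in *your* environment?
    exploitable_here: bool

def prioritise(vulns):
    """Exploitability-first triage: a non-exploitable CVE carries no
    immediate risk regardless of its CVSS score."""
    actionable = [v for v in vulns if v.exploitable_here]
    return sorted(actionable, key=lambda v: v.cvss, reverse=True)

inventory = [
    Vuln("CVE-2024-0001", 9.8, False),  # "critical" on paper, unreachable here
    Vuln("CVE-2024-0002", 7.5, True),   # lower score, but actually exploitable
]
print([v.cve_id for v in prioritise(inventory)])  # ['CVE-2024-0002']
```

The 9.8 never even makes the list: severity only ranks what exploitability has already admitted.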
It does, however, keep everyone running on The Hamster Wheel of Futility, which is what seems to power much of the vulnerability management industry.
This is why the world so sorely needs evolved exposure management tools that bring together vulnerability management, attack surface management, CSPM, and other isolated domains into a single cohesive posture management capability, one that can properly quantify exposure (and therefore risk). XSPM promises this, on paper at least; we’ll see.
Until then, stay focused on exploitability in your specific environment, not generic severity, as the primary metric for prioritising the remediation of vulnerabilities.