Andy Swift, Cyber Security Assurance Technical Director at Six Degrees
In many situations, cybercriminals look for the path of least resistance, targeting security where it is weakest. This is particularly true for those with a profit motive, for whom it makes little financial sense to plough resources into complex strategies when there are easier routes to success. In this context, AI is an ideal automation and augmentation tool for threat actors focused on executing effective strategies at scale.
Back in January last year, for example, it was reported that OpenAI’s tools were being used by cybercriminals with “no development skills at all” to launch cyber-attacks. The widespread use of AI by threat actors is indicative of a new phase in the cyber security arms race, one in which the barriers to entry have fallen.
At the same time, however, AI has quickly become an enabling technology, dramatically increasing the scope to mount more sophisticated and effective attacks. This places security teams in a very difficult position: they must face down adversaries armed with highly effective tools, in an environment where attacks occur with ever-increasing frequency.
Exploiting vulnerabilities
So, how are cybercriminals using AI to improve their reach and impact? Until recently, identifying potential vulnerabilities and writing the code required to exploit them could be a complex and time-consuming process. The arrival of AI has made it much easier for threat actors to speed up, or even automate, parts of the exploit-building process.
In practical terms, AI-based tools can be used to intelligently review source code and software libraries, or even fuzz binaries, quickly detecting patterns or anomalies that could indicate a potential exploitation path.
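The same review capability is, of course, available to defenders. As a rough illustration of how a large language model can be pointed at a code snippet to flag risky patterns, the sketch below uses the OpenAI Python client; the model name, prompt wording and example snippet are illustrative assumptions, not a description of any specific attacker or vendor workflow.

```python
# Minimal sketch: asking a large language model to review a code snippet
# for patterns that might indicate an exploitable weakness. The model name,
# prompt and snippet are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = """
char buf[64];
strcpy(buf, user_input);   /* no bounds check on attacker-controlled data */
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[
        {"role": "system",
         "content": "You are a code reviewer. List any memory-safety or "
                    "input-validation issues in the snippet and rate their severity."},
        {"role": "user", "content": SNIPPET},
    ],
)

print(response.choices[0].message.content)
```

The point is less the specific tooling than the workflow: a task that once required an experienced exploit developer to read code line by line can now be triaged in seconds.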
Landing bigger phish
Threat actors are also using AI to create highly convincing phishing campaigns that use Natural Language Processing (NLP) and deepfake technologies to make attacks much more believable. Emails that were once easy to dismiss as fakes are being replaced by highly targeted messages that draw on social media or other publicly available data.
This is translating into growing concern among cyber security leaders. One recent industry report revealed that nearly two-thirds are concerned about the use of deepfakes in cyber-attacks, and a similar number are “worried about cybercriminals using generative AI chatbots to enhance their phishing campaigns.” The same report pointed out that between January and March this year there was a 52% increase in attacks evading detection by Secure Email Gateways – the very technologies designed to protect organisations from email-based threats.
Fighting back
Thankfully, this isn’t a one-way street. The cyber security industry and the organisations it protects worldwide are investing heavily in AI to meet these threats head-on. This is reflected in significant sector-specific growth, with the global market for AI in cyber security predicted to reach nearly $61 billion by 2028 – up from $22 billion last year.
A huge number of innovative technologies coming to market lend themselves very well to AI. Take log parsing, for example, where AI is being employed to speed up analysis by identifying complex attack patterns and behaviours. This enables defensive teams to react in near real time – insight that can make the difference between an attack succeeding or failing.
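As a simple illustration of what this looks like in practice, the sketch below assumes log lines have already been parsed into numeric features and uses an off-the-shelf anomaly detector (scikit-learn’s IsolationForest) to surface unusual events for an analyst; the feature set and sample values are made up for the example.

```python
# Minimal sketch of AI-assisted log analysis, assuming logs have already been
# parsed into numeric features. Feature choices and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [failed_logins_per_min, distinct_accounts_targeted, mb_transferred]
events = np.array([
    [1, 1, 0.2], [0, 1, 0.1], [2, 1, 0.3], [1, 2, 0.2],   # routine activity
    [40, 25, 0.1],                                         # credential-stuffing burst
    [1, 1, 950.0],                                         # unusually large transfer
])

detector = IsolationForest(contamination=0.3, random_state=0)
labels = detector.fit_predict(events)   # -1 = anomalous, 1 = normal

for row, label in zip(events, labels):
    if label == -1:
        print("Flag for analyst review:", row)
```

In a production pipeline the same idea runs continuously over streaming telemetry, which is what makes near real-time reaction possible.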
In addition, the ability of AI systems to continuously learn and adapt to emerging attack vectors is adding a much-needed proactive capability to defensive toolkits as it finds its way into common SIEM (Security Information and Event Management) and SOC (Security Operations Centre) tools.
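To make the “continuously learn and adapt” idea concrete, the sketch below shows an incremental classifier being updated batch by batch as newly labelled security events arrive, rather than being retrained from scratch; the features, labels and scikit-learn model choice are assumptions for illustration, not a description of how any particular SIEM works.

```python
# Minimal sketch of incremental (online) learning on labelled security events.
# Feature values and labels are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # 0 = benign, 1 = malicious

def update_with_new_events(features, labels):
    """Fold a fresh batch of labelled events into the existing model."""
    model.partial_fit(features, labels, classes=classes)

# Day 1: initial batch of labelled events
update_with_new_events(np.array([[0.1, 2], [0.9, 40]]), np.array([0, 1]))

# Day 2: a newly observed attack pattern is folded in without full retraining
update_with_new_events(np.array([[0.2, 3], [0.8, 55]]), np.array([0, 1]))

print(model.predict(np.array([[0.85, 50]])))  # likely flagged as malicious
```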
The result is a significant improvement in threat detection and in the overall efficiency of security teams, who now have an ally for analysing large volumes of security data at speed. This frees up time for the more complex tasks that benefit from human insight and experience.
Where this all leads remains to be seen, but what is clear is that the next few years will continue to see the cyber security arms race accelerate as adversaries increase their reliance on AI.