AI: Enabler and conqueror of emergent fraud
By Ramprakash Ramamoorthy, director of research, ManageEngine
AI is no longer science fiction. All kinds of AI applications have made their way into our daily lives, from high-profile generative machine learning (ML) models like ChatGPT and DALL·E to less headline-grabbing but equally transformative fraud detection algorithms ticking away in the background of our banking apps. AI is transforming everything from search engines to cars. Suddenly, generative models, lower entry barriers, and quantum leaps in data availability are bringing all kinds of advanced use cases within reach.
This won’t come as a surprise if you’ve been paying even the most cursory attention to the tech world (or, indeed, the news) in the last few years. But the sensational headlines hide a fundamental challenge. The more powerful AI becomes, the more dangerous it is in the hands of threat actors. Fraud has undergone an eye-wateringly rapid evolution since generative AI became widely available. No more quaint emails from mysterious princes with deep pockets and bad grammar—now we’re in the age of machines that can speak, write, and act like humans.
That means governments and businesses need to act fast. Tech always evolves faster than legislation, but lawmakers and sector leaders must take the initiative to put sensible restraints on AI. It will take collaboration between both groups to deliver the necessary controls: a dual defence in which regulators and businesses join forces to build the legislation and security practices needed to keep pace with the scale and sophistication of attacks.
The depths of AI-driven fraud
First, though, let’s remind ourselves of the scale of these threats. Cyber fraudsters craft AI-powered scams, scale them up massively through automation, and capitalise on the results. Malicious actors use AI to create synthetic identities and to run social engineering attacks through chatbots, voice cloning, and deepfakes, extracting confidential data from people without their knowledge or consent.
Fraud as a Service (FaaS) is an unsettling trend in which cybercriminals sell fraudulent tools and services through sites on the open internet or the dark web, mirroring the SaaS model. AI-powered FaaS offers a broad spectrum of tools, including phishing and malware kits, synthetic identity theft packages, and botnets. All of this goes to show that you don’t need to know how to build AI in order to wield it—all you need is money.
With all the ill-gotten gains up for grabs, there’s plenty of money moving around. AI has made automated spear phishing and whaling attacks increasingly common, allowing imposters to access sensitive information. According to the Cyber Security Breaches Survey 2023, “across all UK businesses, there were approximately 2.39 million instances of cybercrime and approximately 49,000 instances of fraud as a result of cybercrime in the last 12 months.”
Phishing occurs not just through emails but also through websites. Every day, people visit harmful sites that harvest their data, and worryingly, the domains of these phishing sites change so often that it’s almost impossible to maintain a manual blocklist. On top of common phishing attacks, there’s also the risk of zero-day attacks, where hackers exploit newly discovered vulnerabilities. AI has lowered the bar there, too; it wouldn’t take long for an advanced AI model to map an application’s code to find vulnerabilities.
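As an illustration of how detection can move beyond manual blocklists, here is a minimal sketch of a machine-learning phishing-URL classifier. It assumes scikit-learn, and the URLs and labels are invented for the example; a production system would train on a large, continuously refreshed corpus.

```python
# Minimal sketch: a character n-gram classifier for phishing URLs.
# Toy, hypothetical data; a real system would train on millions of
# labelled URLs and retrain continuously as domains churn.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

urls = [
    "https://accounts.example-bank.com/login",        # legitimate
    "https://www.example-shop.co.uk/checkout",        # legitimate
    "http://examp1e-bank.secure-login.xyz/verify",    # phishing
    "http://login-examplebank.com.account-check.ru",  # phishing
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = phishing

# Character n-grams capture the "look" of a URL (odd TLDs, digit
# substitutions, long subdomain chains) without a manual blocklist.
model = Pipeline([
    ("ngrams", TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(urls, labels)

# Probability that a new, unseen URL is phishing.
print(model.predict_proba(["http://secure-examp1e-bank.top/update"])[:, 1])
```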
Across the board, the growing challenge is that large language models and ML algorithms learn extensively from both public databases and breached datasets obtained during past attacks. They use this scraped data to fuel personalised phishing attacks, exploiting users’ trust and familiarity with certain brands to drive them to phishing sites that resemble the legitimate sites of those brands.
AI-assisted content generation also brings both promise and peril. Language models can produce startlingly plausible fake product reviews, using stealthy tactics to mimic genuine users. These counterfeit reviews entice or dissuade buyers, skewing decision-making and undermining a product’s authenticity and credibility.
If these sound like small-fry problems, the AI revolution has also brought a new wave of serious financial crimes that are more sophisticated than ever before. Market manipulation, for example, exploits shareholders and arbitrageurs to cause price fluctuations and flash crashes. Scammers also employ tactics such as spoofing and layering, placing and cancelling orders to create a false impression of market activity and so influence prices.
Innovations in combating sophisticated fraud
In the face of these growing risks, the old fight-fire-with-fire approach is quickly becoming a daily reality: You have to set an AI to catch an AI. Here are some of the key technologies and techniques being deployed to counter the flood of AI-enabled fraud:
Anomaly detection
Datasets have patterns, and any sudden change in those patterns or data points warrants scrutiny. Anomaly detection identifies and flags such unusual behaviour, and it is used in several industries, from healthcare to finance. In healthcare, it’s used to find anomalous readings in a patient’s reports and to identify unusual health conditions; here, AI not only detects anomalies but also offers explanations of what they indicate. In finance, banks use anomaly detection to find suspicious transactions and curb fraud, while electricity providers implement it to monitor electricity consumption, detect irregularities, and prevent outages.
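As a minimal sketch of the idea, the example below uses an isolation forest (one common anomaly detection algorithm, chosen here purely for illustration) to flag an unusual card transaction. It assumes scikit-learn and NumPy, and all of the transaction data is synthetic.

```python
# Minimal sketch of anomaly detection on transaction amounts and
# times, using an isolation forest. Data is synthetic and invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" card activity: amount (£) and hour of day.
normal = np.column_stack([
    rng.normal(60, 20, size=500),   # typical purchase amounts
    rng.normal(14, 3, size=500),    # mostly daytime transactions
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# Score some new transactions; -1 flags an anomaly for review.
new = np.array([
    [55, 13],      # ordinary afternoon purchase
    [4800, 3],     # large transfer at 3 a.m.
])
print(detector.predict(new))  # e.g. [ 1 -1 ]
```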
Natural language processing
Natural language processing (NLP) is used to generate content and understand the behaviour of target audiences. In the field of banking and insurance, NLP has proven to be highly useful in detecting fraudulent insurance claims. According to the Association of British Insurers, “in 2022, insurers detected 72,600 dishonest insurance claims valued at £1.1 billion. It is estimated that a similar amount of fraud goes undetected each year. This is why insurers invest at least £200 million each year to identify fraud.”
Banks operate in various regions and must abide by the rules of each, which can make it challenging to detect fraudulent activity from documents alone. Banks do, however, hold information about their customers through the Know Your Customer framework and its customer due diligence requirement, and NLP can apply predictive models and text mining to this data to produce a customer’s fraud risk score in real time.
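A hedged sketch of what such text-based risk scoring might look like appears below. It assumes scikit-learn; the claim descriptions and labels are invented, and a real deployment would draw on far richer KYC and claims-history data.

```python
# Minimal sketch: a supervised text model that scores claim
# descriptions for fraud risk. Texts and labels here are invented.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

claims = [
    "Rear bumper damaged in supermarket car park, photos attached",
    "Phone screen cracked after being dropped on pavement",
    "Entire home contents lost, no receipts or photos available",
    "Third theft claim this year, reported weeks after the incident",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = flagged as fraudulent

scorer = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
scorer.fit(claims, labels)

# The predicted probability acts as a real-time fraud risk score
# that an investigator or downstream rule engine can act on.
new_claim = ["High-value jewellery stolen, no crime reference number provided"]
print(round(scorer.predict_proba(new_claim)[0, 1], 2))
```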
Deepfake video identification
Deepfakes are becoming ever more realistic, making it difficult to identify fake images, audio, and videos. To mitigate this, Intel recently introduced a deepfake detector called FakeCatcher, which can detect fake videos with a 96% accuracy rate. FakeCatcher works by assessing the “blood flow” in the pixels of a video: “When our hearts pump blood, our veins change colour. These blood flow signals are collected from all over the face, and algorithms translate these signals into spatiotemporal maps. Then, using deep learning, we can instantly detect whether a video is real or fake.”
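The sketch below is a drastically simplified, hypothetical illustration of the general idea of pulse-based signals in video, not Intel’s implementation: it averages a face crop’s green channel over time and checks how much of the signal’s energy falls in a plausible heart-rate band. It assumes NumPy and uses synthetic frames.

```python
# Toy illustration of a blood-flow-style (photoplethysmography) cue:
# average a facial region's green channel per frame and measure how
# much of that signal sits in a plausible pulse band. Not FakeCatcher.
import numpy as np

def pulse_band_power(frames, fps=30.0):
    """frames: array of shape (n_frames, h, w, 3) for a face crop."""
    # Mean green-channel intensity per frame approximates blood flow.
    signal = frames[..., 1].mean(axis=(1, 2))
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)   # roughly 42-240 beats/min
    return spectrum[band].sum() / (spectrum.sum() + 1e-9)

# Synthetic stand-ins: a "real" clip with a faint 1.2 Hz pulse vs. pure noise.
t = np.arange(300) / 30.0
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)
real = np.random.rand(300, 8, 8, 3) * 2 + pulse[:, None, None, None]
fake = np.random.rand(300, 8, 8, 3) * 2

# The "real" clip concentrates far more energy in the pulse band.
print(pulse_band_power(real), pulse_band_power(fake))
```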
Keystroke dynamics
Cyberattacks are prevalent across industries. According to Statista, the global cost of cybercrime is estimated to increase by USD 5.7 trillion (+69.94%) by 2028. Two-factor authentication and CAPTCHAs are among the most common ways to verify the authenticity of a user, and AI is now integrated into these systems to analyse biometric signals. In keystroke dynamics, a unique biometric template of an individual is built by analysing the rhythm and timing of their typing. That data is fed to a neural network, which can tell whether it’s the genuine user or a bot at the keyboard. This technology “offers a promising hybrid security layer against password vulnerability.”
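Below is a minimal sketch of how keystroke timing features might feed a small neural network. It assumes scikit-learn; the hold and gap times are synthetic, and a real system would enrol each user with their own typing samples.

```python
# Minimal sketch: keystroke timing features (key hold and gap times
# in milliseconds) fed to a small neural network that separates the
# enrolled user from anyone, or anything, else. Data is synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)

def sessions(hold_mean, gap_mean, n=200, keys=10):
    holds = rng.normal(hold_mean, 12, size=(n, keys))      # key hold times
    gaps = rng.normal(gap_mean, 25, size=(n, keys - 1))    # between-key gaps
    return np.hstack([holds, gaps])

genuine = sessions(hold_mean=95, gap_mean=180)    # the enrolled user
other = sessions(hold_mean=70, gap_mean=320)      # impostors / bots

X = np.vstack([genuine, other])
y = np.array([1] * len(genuine) + [0] * len(other))

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(X, y)

# Score a fresh typing sample against the enrolled template.
probe = sessions(hold_mean=96, gap_mean=175, n=1)
print(model.predict(probe))  # 1 = matches the genuine user's rhythm
```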
The way forward
In the past, fraud prevention relied on static, predefined rules. By contrast, AI learns from past data to predict future trends, helping businesses improve the accuracy of their fraud detection. AI can also discover and mitigate risks in real time rather than taking days or weeks of analysis. As the cherry on top, although AI used to be considered an impenetrable, complicated black box by everyone but computer scientists, it now offers explanations for its decisions, thereby improving accountability.
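As a simple, hypothetical illustration of that kind of explanation, the sketch below trains a linear fraud model on invented features and reports each feature’s contribution to one flagged transaction’s score; real explainability tooling is considerably more sophisticated.

```python
# Minimal sketch of explainability for a linear fraud model: the
# per-feature contributions (coefficient x value) show why a single
# transaction was scored as risky. Features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["amount_gbp", "hour_of_day", "new_device", "foreign_ip"]

# Synthetic training data: fraud is loosely tied to amount and new devices.
X = np.column_stack([
    rng.exponential(80, 1000),        # amount
    rng.integers(0, 24, 1000),        # hour of day
    rng.integers(0, 2, 1000),         # new device flag
    rng.integers(0, 2, 1000),         # foreign IP flag
])
y = ((X[:, 0] > 300) & (X[:, 2] == 1)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Explain one flagged transaction: which features pushed the score up?
tx = np.array([450.0, 3, 1, 1])
contributions = model.coef_[0] * tx
for name, c in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"{name:>12}: {c:+.2f}")
```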
But therein lies the problem. We’re approaching the point where AI becomes both indispensable and beyond the ability of most people to control. AI can benefit society, but only a close alliance between technology developers, government, and industry regulators will enable its safe, responsible use.
Regulations and frameworks for AI need to be developed quickly if we’re to facilitate a safer cyberspace for all. They must be specific and granular, addressing the myriad ways AI may be developed and may impact different industries. We should take an industry-specific approach to problem-solving, encouraging collaboration between government bodies, industry regulators, and companies, and helping organisations stay one step ahead of the threats they face. There’s good reason to be optimistic about AI’s impact, but we need to do the work to ensure it progresses safely.