
AI and APP fraud: How to fight fire with fire

By Kathy Gormley, AML Product Manager at Resistant AI


APP fraud is a hot topic right now, not least because of high-profile cases like the Tinder Swindler, who seduced women online, conned them out of millions of dollars, and was immortalised in a feature-length documentary last year. Because the payments are authorised – nobody has forced the victim to make a payment; they have been tricked into making it – such scams are particularly difficult to prevent, track and police.

But this is not just the stuff of Netflix documentaries. Look at the numbers, and it is a fast-growing problem in financial services. A report by payments software firm ACI Worldwide and analytics firm GlobalData predicts that fraud losses in the UK could double between 2021 and 2026, from $789.4m to $1.56bn.

And with each leap forward in technology, fraud evolves too. Criminals running APP scams are often innovative, organised and professional, but they do not face the same hurdles to adopting new tech as financial services firms. That means they are using cutting-edge techniques like artificial intelligence (AI) and deepfake technology without having to grapple with the lengthy procurement processes, GDPR obligations or regulatory requirements that rightly apply to legitimate businesses.

Criminals are getting increasingly creative, and generative AI tools such as OpenAI’s ChatGPT remove obstacles like language barriers. Before, potential victims could spot telltale signs like grammatical errors in scam messages. Now, a scammer can produce a clear and concise email in practically any language, targeted at a very specific set of victims, in seconds. In other words: the criminals are at an advantage.

Convenience: a double-edged sword

That is a particular issue for neobanks and digital payments firms, which specialise in providing a seamless user experience compared to the relatively clunky interfaces of legacy financial institutions. The reason APP scams have become so widespread in recent years is the move to instant, real-time payments – and customers’ expectation of them. Criminals set up so-called mule accounts, which can then be used to launder scammed funds at lightning speed.

For mules to be effective, you need a lot of them. They can come in the form of synthetic, fake accounts controlled by criminals. Or they can be real people, often young: the Cifas national fraud database shows that 23% of cases indicating money mule behaviour involve people aged under 21. Cash-strapped students in particular are sometimes vulnerable – or willing – participants in schemes like this, taking a small commission to receive funds and quickly move them further along the chain. The more accounts the money moves through, the more difficult it is to track and freeze.

Add AI and machine learning into the mix, and it complicates matters further for AML teams. Scammers can use AI to create ever-more convincing fake identity documents such as driving licences, and use automation to produce them in vast quantities, in a bid to evade fintechs’ know-your-customer (KYC) protocols. For some of our larger customers, as much as 1-2% of document intake was found to be fraudulent.

Forward-thinking neobanks are looking to add further controls to verify that users are legitimate, for example by asking new account holders to take videos as part of the verification process. But this is no silver bullet. If deepfakes like the recent one of consumer finance expert Martin Lewis show us one thing, it is that mocking up a photo or video of somebody has never been easier.

These fintech firms, meanwhile, often hang their hat on rapid growth – and who can blame them? It is a marker of success, and it gets investors excited. But fraud tends to be concentrated in new accounts, meaning a higher proportion of neobanks’ accounts are likely to be fraudulent than at traditional banks, which are dominated by legacy users. Traditional banks may be on the receiving end of similar numbers of fraud attempts, but because of the larger size of their customer base, it poses less of a threat. It means that the very things which make neobanks attractive to consumers also open them up to a whole new world of risk.

The result is an environment that poses an existential threat to neobanks. Starting in 2024, UK regulations will make customer reimbursement mandatory for both the sending and receiving institutions. If your company houses the accounts of fraudsters who receive the funds, you have to reimburse 50% of the losses from identified scams. That brings a huge amount of balance sheet pressure on firms to improve their controls, not only to protect their own consumers from being victims of fraud, but also to keep out bad actors.

Reputational damage, too, is a much bigger issue for fintechs, whose customers are far more footloose than those of traditional banks. Sure, it is convenient to open an account with your favourite neobank – but if you get scammed, it’s just as easy to switch to a competitor. And if other financial institutions perceive fintechs to be too high-risk, they will cut them off from the financial system entirely. This is why accurate assessment of threat is so essential to the future of fintechs, who must demonstrate their security and reliability. 

AI can also provide the solution

So what can be done about it? Striking the balance between customer friction and risk management is a near-constant conundrum for firms. Nobody wants their account to be frozen just because they quickly transferred their salary out of one account and into another. Catching a single criminal transaction while making sure it is not a false positive is often a nuanced process – doing it at scale can be extremely challenging.

That is where AI can help fintechs fight fire with fire. Solutions which use machine learning can quickly pick up and apply insights into what activity you might expect of a certain type of customer, and what you might not. Is that £2,000 transfer typical of a certain type of account? Could the customer just be moving their rent payment to the right place, for example? Or perhaps there is something darker at play. AI can answer these questions on an individual basis and then combine the data on large numbers of transactions, pinpointing when funds are leaving several accounts for a common counterparty which might be part of an APP fraud operation.
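To make that idea concrete, here is a deliberately simplified sketch – not any particular vendor’s model – of the aggregation step: flagging a beneficiary account when unusually many distinct senders pay into it within a short window. The field names, thresholds and the Transfer structure are invented for illustration; a real system would learn these patterns per customer segment rather than hard-code them.

```python
# Minimal sketch: flag beneficiaries that receive funds from many distinct
# senders in a short window - the hallmark of a possible mule account.
# All names and thresholds here are hypothetical, for illustration only.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transfer:
    sender: str
    beneficiary: str
    amount: float
    timestamp: datetime

def flag_possible_mule_beneficiaries(transfers, window=timedelta(hours=24),
                                     min_distinct_senders=5):
    """Return beneficiaries that received funds from unusually many senders."""
    by_beneficiary = defaultdict(list)
    for t in transfers:
        by_beneficiary[t.beneficiary].append(t)

    flagged = {}
    for beneficiary, txns in by_beneficiary.items():
        txns.sort(key=lambda t: t.timestamp)
        start = 0
        # Slide a time window over the transfers and count distinct senders.
        for end in range(len(txns)):
            while txns[end].timestamp - txns[start].timestamp > window:
                start += 1
            senders = {t.sender for t in txns[start:end + 1]}
            if len(senders) >= min_distinct_senders:
                flagged[beneficiary] = senders
                break
    return flagged

# Five victims paying the same account within a day is suspicious in aggregate,
# even though each individual payment might look routine on its own.
now = datetime(2023, 9, 1, 12, 0)
transfers = [Transfer(f"victim_{i}", "acct_999", 2000.0, now + timedelta(hours=i))
             for i in range(5)]
print(flag_possible_mule_beneficiaries(transfers))
```

The point of the toy example is the shift in perspective: no single £2,000 payment is alarming, but viewed across accounts and time, the common counterparty stands out.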

Better still is to stop fraudulent accounts at the sign-up stage. Resistant AI’s solution, and others, can analyse a digital document in more than 500 different ways to detect signs that it has been forged, or that a genuine document has been tampered with. Anomalies in something like the background of a driving licence might be obvious enough for a trained human reviewer to spot – but what if a photo was taken by the same kind of camera as another fake in the database, from exactly the same angle, or with exactly the same shadows?
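As a purely illustrative example of one such signal – and not a description of Resistant AI’s actual checks – the sketch below compares the capture “fingerprint” of a newly submitted document against documents already flagged as fraudulent. The extract_capture_features helper is a hypothetical placeholder for real forensic feature extraction (camera metadata, angle and shadow estimation).

```python
# Illustrative sketch: does a new ID photo share capture traits with known fakes?
# extract_capture_features is a hypothetical stub; real systems derive these
# signals from EXIF data, perspective estimation and lighting analysis.
from typing import Dict, List

def extract_capture_features(document_path: str) -> Dict[str, str]:
    # Stub returning made-up traits for demonstration purposes.
    return {"camera_model": "PhoneCam X1", "angle": "top-down-12deg",
            "shadow_direction": "upper-left"}

def shares_signature(candidate: Dict[str, str],
                     known_fakes: List[Dict[str, str]],
                     min_matches: int = 2) -> bool:
    """True if the candidate shares enough capture traits with any known fake."""
    for fake in known_fakes:
        matches = sum(1 for key, value in candidate.items()
                      if fake.get(key) == value)
        if matches >= min_matches:
            return True
    return False

known_fakes = [{"camera_model": "PhoneCam X1", "angle": "top-down-12deg",
                "shadow_direction": "upper-left"}]
candidate = extract_capture_features("new_licence.jpg")
print(shares_signature(candidate, known_fakes))  # True: same capture setup as a known fake
```

A human reviewer would never spot that two applications months apart were photographed with the same rig at the same angle; a system that remembers every fake it has seen can.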

Of course, for AML and compliance professionals, there are barriers to using AI. If you work at a financial institution, integrating new software with your legacy system is never simple. Fintechs, meanwhile, tend not to like spending much on headcount or training compliance teams just for a new piece of kit. Plug and play add-ons like ours can enhance existing systems without having to rip and replace what you already have. That way, AML teams can focus on value-add work, rather than going through a complex change management programme.

Like cloud technology before it, then, adoption of AI to mitigate threats from criminals has thus far been slow. People don’t understand it and therefore don’t trust it, so they say ‘I need to see greater adoption before I make the leap myself’. Regulation and regulatory guidance, too, are at an early stage.

But for the avoidance of doubt, nobody is saying don’t use AI – quite the contrary. FCA chief executive Nikhil Rathi said in a recent speech that one of AI’s major benefits is the “ability to tackle fraud and money laundering more quickly and accurately and at scale”. With the right guardrails in place, he said, AI can offer opportunity. In the same speech, he said the regulator itself is using AI to “identify risky behaviour” in the market.

When it comes to APP scams, perhaps the best case to be made is that AI introduces multiple new layers of defence for financial crime teams at neobanks and payments firms. Using an AI solution at the origination stage forces scammers to raise their game to avoid detection, making it more challenging and expensive to create new mule accounts, for example – and that is already making a big difference. Combine that with large-scale analysis of transactions to identify when money might be being laundered, and you significantly raise the barriers for criminals.

There’s no way you can do either of these as effectively with manual, static, people-based controls, so exploring how AML teams can use AI now could save you some serious trouble further down the line. Looking ahead, we’d all be silly to think it will not play a key role.