
FinTech Wary as AI Fuels Financial Crime Surge

by Anna Erkh, February 29th, 2024

Too Long; Didn't Read

Survey finds 17% increase in fraud driven by AI-generated identities. 76% of financial experts suspect their organizations have unknowingly accepted these artificial identities. 87% of these professionals predict the situation will worsen before effective solutions are developed. One in four adults has faced some form of AI voice scam, according to McAfee.

In the financial sector, a troubling trend is emerging: a 17 percent increase in fraud driven by AI-generated identities. Surprisingly, 76 percent of financial experts suspect their organizations have unknowingly accepted these artificial identities. Moreover, 87 percent of these professionals predict the situation will worsen before effective solutions are developed. This insight comes from a survey of 500 finance and FinTech professionals conducted by Wakefield for Deduce, a company fighting synthetic identity fraud. As the challenge looms large, the industry braces for a complex battle, with hope pinned on future innovations.


“I would say the use of AI in fraud is probably the most significant risk facing us. There’s going to be an innovation ‘arms race’, and we’re always going to be behind if you’re on the ‘good side’ of the fraud fight: a criminal can innovate quickly and just use it, and come up with a scam. We will take a long time to keep up with that unless we take innovation as a default position,” said Nick Sharp, Deputy Director at the National Economic Crime Centre (NECC), during a panel discussion at a UK-focused conference in London.


According to Uktech.news, the frequency of scam attempts is climbing: National Trading Standards reported last year that nearly three in four UK adults have encountered scams, and that 35% of UK adults, roughly 19 million people, have lost money to these fraudulent schemes.


A study conducted by cybersecurity firm McAfee revealed that one in four adults has faced some form of AI voice scam. In a notable incident earlier this month, an employee at an international company was deceived into sending $25 million to fraudsters through a convincing deepfake video call.

Synthetic Fraud: The New Frontier

Financial service providers, including those offering loans, credit cards, and credit evaluations, have long contended with fraudsters who combine real people’s personal information with fabricated details to construct fake identities for financial gain. Experts from the FinTech Payrow emphasize that this type of fraud, known as “synthetic fraud,” has become more sophisticated with the advent of generative AI technologies.


Generative AI encompasses tools that can create content in various formats (text, images, audio, and video) from simple prompts, making scams easier and faster to execute.


This includes:

  • Spreading phishing messages more efficiently.

  • Constructing online presences for fake identities to make them seem real.

  • Mimicking real people’s activities to gather more personal information.


Synthetic fraud, which blends real and false information, poses a greater challenge for credit monitoring and security services.
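To make that difficulty concrete, here is a minimal sketch (with entirely hypothetical data) of why per-field validation misses blended identities: every field can be individually plausible, so the signal only appears when records are correlated, for example by spotting one SSN reused under several names.

```python
# Why blended identities are hard: each field can be individually valid, so
# per-field checks pass. Cross-record correlation is what catches the blend.
# All data here is hypothetical, for illustration only.
from collections import defaultdict

applications = [
    {"name": "Jane Doe",   "ssn": "123-45-6789", "dob": "1990-04-02"},
    {"name": "John Roe",   "ssn": "123-45-6789", "dob": "1985-11-30"},  # real SSN, new name
    {"name": "Mary Major", "ssn": "987-65-4321", "dob": "1978-07-19"},
]

# Group applications by SSN: one SSN appearing under several names
# is a classic synthetic-identity signal.
by_ssn = defaultdict(list)
for app in applications:
    by_ssn[app["ssn"]].append(app["name"])

for ssn, names in by_ssn.items():
    if len(set(names)) > 1:
        print(f"SSN {ssn} used by multiple identities: {names}")
```

Real fraud systems correlate far richer signals (device fingerprints, application velocity, bureau data), but the principle is the same.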


Fraudsters are shifting towards impersonation tactics, deceiving individuals and businesses into making payments to entities that appear legitimate.


Victims often include those who seldom check their credit reports, those with readily available online information, and groups less informed about fraud risks, such as the young and elderly.


Countering AI-enabled fraud requires large volumes of legitimate data for pattern recognition, making it a more complex challenge than traditional identity theft, in which the fraud is simply the unauthorized use of someone’s personal details.
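As a rough illustration of that pattern-recognition approach, the sketch below trains an off-the-shelf anomaly detector on known-good application features and flags an outlier. The feature names, distributions, and threshold are assumptions invented for this example, not any vendor’s actual method.

```python
# Sketch: unsupervised outlier detection over legitimate application data.
# Features and numbers are invented; real systems use far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per application:
# [email age in days, applications sharing this phone, credit-file depth in years]
legitimate = rng.normal(loc=[900.0, 1.0, 8.0], scale=[300.0, 0.5, 3.0], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(legitimate)  # trained only on known-good history

# A synthetic identity tends to look "too new": fresh email address,
# heavily reused phone number, and a very thin credit file.
suspect = np.array([[3.0, 12.0, 0.2]])
print(model.predict(suspect))  # -1 = flagged as an outlier, 1 = looks normal
```

The hard part the article points to is the training set: without a large pool of verified-legitimate records, the model has nothing trustworthy from which to learn what “normal” looks like.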

Chatbots: The New Frontier for Fraudsters

Criminals are now leveraging generative AI tools such as ChatGPT and the image generator DALL-E to enhance their hacking and scamming efforts. ChatGPT’s capacity to generate customized content from minimal inputs presents a risk, as it can create personalized scam and phishing communications.


For example, by inputting basic personal information into a large language model (LLM), the backbone of AI chatbots like ChatGPT, scammers can generate phishing messages specifically designed for an individual. Despite safeguards to prevent misuse, such exploitation remains a concern.


LLMs also enable scammers to execute phishing operations on a massive scale, reaching thousands in their native languages. Evidence from hacking forums shows criminals utilizing ChatGPT for fraudulent purposes, including information theft and ransomware creation.


New variants of malicious large language models, such as WormGPT and FraudGPT, have emerged, capable of generating malware, identifying system vulnerabilities, offering scamming advice, facilitating hacking, and compromising electronic devices. Another variant, Love-GPT, targets individuals through romance scams on dating platforms by generating fake profiles that engage users in conversation.


The challenge is international, extending to the creation of phishing emails and ransomware with AI and raising concerns about privacy and trust on platforms like ChatGPT and Copilot. The increased reliance on AI tools risks exposing personal and corporate information, either through its incorporation into future training datasets or through breaches that could disseminate sensitive data.

Future-Proof Your Digital Life and Finances

Revolut has recently announced the launch of an AI scam-detection tool, saying that AI technology can be highly effective in combating scams. This new feature from the FinTech company intervenes when it identifies a high probability that a transaction is fraudulent. Users attempting to make such a payment are automatically redirected to a “scam intervention flow,” which encourages them to reassess the transaction’s context and provides information on common scam tactics.
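The article does not describe Revolut’s internals, but the gating logic it implies can be sketched as follows; the threshold, field names, and flow label are assumptions for illustration only.

```python
# Illustrative sketch of a "scam intervention flow" gate, loosely modeled on
# the behavior described above. Threshold and names are assumptions.
from dataclasses import dataclass

@dataclass
class Payment:
    amount: float
    payee_is_new: bool
    fraud_score: float  # 0..1, produced by an upstream ML model (not shown)

INTERVENTION_THRESHOLD = 0.8  # hypothetical cut-off

def route_payment(payment: Payment) -> str:
    """Send high-risk payments to an intervention step instead of executing."""
    if payment.fraud_score >= INTERVENTION_THRESHOLD:
        # In a real app this step would show warnings about common scam
        # tactics and ask the user to reconfirm the transaction's context.
        return "scam_intervention_flow"
    return "execute_payment"

print(route_payment(Payment(amount=2500.0, payee_is_new=True, fraud_score=0.92)))
```

The design point is that the system interrupts rather than blocks outright: the user is pushed to reconsider the payment’s context before confirming.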


While FinTech companies work out how to protect our finances from AI-driven attacks, we can take a number of protective actions on our own.

Exercise increased caution with seemingly authentic messages, videos, images, and calls, which might be AI-generated. Verify their legitimacy through a trusted source.

It’s advisable to refrain from sharing sensitive information with ChatGPT and similar LLMs, and to remain aware of their limitations, including the potential for inaccurate responses in critical applications such as medical diagnosis or professional tasks.
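One low-effort way to follow that advice programmatically is to scrub obvious sensitive values from prompts before they leave your machine. The patterns below are a minimal, assumption-laden sketch and are nowhere near exhaustive.

```python
# Minimal sketch: redacting likely PII from a prompt before sending it to an
# LLM. The regexes are illustrative; a real scrubber needs far broader rules.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d -]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Refund to jane.doe@example.com, card 4111 1111 1111 1111, +44 20 7946 0958"
print(redact(prompt))  # Refund to [EMAIL], card [CARD], [PHONE]
```

Client-side scrubbing like this limits what a provider can log or later train on, though it cannot catch everything.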


Lastly, consult your employer about using AI technologies in the workplace to ensure compliance with any existing policies or restrictions. Adopting these precautions can help mitigate known and emerging threats as AI technology continues to evolve.


The article “Voices of Deception: Guide to Protecting Yourself from AI Voice Scams” offers steps for safeguarding your identity online and advice on what to do if you receive a suspicious call.