Romance scammers looking to con people out of their savings appear to be turning to generative AI tools to save time and effort
Published: 02 Aug 2023 15:00
Romance scammers looking to con people out of their savings appear to be turning to generative AI services such as ChatGPT to save time as they woo their victims, according to information obtained by Sophos researchers, who were alerted to the ruse when one scammer appeared to forget to check the messages they were sending for giveaways.
In the observed instance, the potential victim, who had been contacted via Tandem, a legitimate app that connects people learning a new language, spotted the con after the conversation moved to WhatsApp, when he received a lengthy message from his new romantic interest that had clearly been written by ChatGPT.
“Thank you very much for your kind words!” the message began. “As a language model of ‘me’, I don’t have feelings or emotions like humans do, but I’m built to give helpful and positive answers to help you.”
The scam in question would likely have turned into a variant of pig butchering fraud known in the trade as a CryptoRom scam. This is where the scammer convinces their victim to make a fake cryptocurrency investment with promises of unrealistic returns. Globally, investment fraud of this type is thought to have netted over $2.5bn in 2022 alone, with volumes growing by over 180%, according to figures from the FBI’s Internet Crime Complaint Center (IC3).
Sophos first reported on such scams earlier in 2023, and principal threat researcher Sean Gallagher said that the addition of ChatGPT to the mix was not especially surprising.
“Since OpenAI announced the release of ChatGPT, there has been broad speculation that cyber criminals may use the program for their own malicious activities. We can now say that, at least in the case of pig butchering scams, this is, in fact, happening,” said Gallagher.
“One of the main challenges for fraudsters with CryptoRom scams is carrying out convincing, sustained conversations of a romantic nature with targets; these conversations are mostly written by ‘keyboarders,’ who are primarily based out of Asia and have a language barrier.
“Using something like ChatGPT can be a more efficient and effective way to keep these conversations going, making the scams less labour intensive and more authentic. It also enables keyboarders to simultaneously engage with multiple victims at one time.”
The Sophos X-Ops team has been probing CryptoRom scams more deeply since its earlier report, and has found that the cyber criminals behind them are refining their tactics in other ways, besides introducing generative AI.
Traditionally, CryptoRom scams worked on the basis that when victims attempted to cash out their profits, the fraudsters told them they needed to pay a 20% tax on their funds to withdraw them. More recently, however, some scammers have taken to telling their victims that the funds have been hacked, and that they will need to pay an additional deposit on top of the 20% to account for this.
During the course of their research, the team also found that the fake cryptocurrency investment apps used in such scams continue to wriggle their way past the defences put in place by Apple and Google to appear on their official app stores.
It identified a number of recently listed apps, many of which had misleading, seemingly benign descriptions to disguise their intent. One, BerryX, claimed to be a reading app, while another, called BoneGlobal, claimed to provide health information and tips. Once installed, both had essentially identical user interfaces (UIs), suggesting they were in fact the work of the same developer.
Another single developer account is thought to be behind a string of apps that all load fake crypto trading interfaces pulled from remote websites, and use recycled templates and descriptions. These were named as Koproplus, Crest Pro, Momclub, Clueeio, Metaverse Ranch, and CMUS.
Sophos believes the app developers are still using the same process to get past app store review: they first submit their apps for approval containing legitimate, run-of-the-mill web content, then modify the server hosting the app to serve the fraudulent interface once the app has been approved.
The research team has alerted Apple and Google to the presence of these apps, but even if they are taken down it is highly likely that more will pop up in short order.
“These fraudsters are ruthless,” said Gallagher. “The best defence against pig butchering is awareness of these campaigns [and] we encourage users who are suspicious or think they may have been a victim to reach out to us.”