AI, fake CFOs drive soaring corporate payment-fraud attacks

Written on Feb 9, 2024

The vast majority (96%) of U.S. companies were targeted with at least one payment fraud attempt in the past 12 months, with attacks up 71% from the prior year as criminals step up their tactics, according to automated fraud prevention services provider Trustpair. 

To dupe organizations, fraudsters primarily used text messages (50%), fake websites (48%), CEO and CFO impersonations (44%), social media (37%), hacking (31%), business email compromise (BEC) scams (31%) and deepfakes (11%), according to a report on the findings, released Tuesday. More than 260 senior finance and treasury leaders were polled, Trustpair said. 

“Our research shows fraudsters are becoming increasingly more sophisticated in their tactics and their reach is expanding,” Trustpair CEO Baptiste Collot said in a press release. 

Total potential losses from cyberattacks and cyber fraud rose 48% in 2022 to $10.2 billion from $6.9 billion in 2021, according to the FBI. The FBI’s Internet Crime Complaint Center received 21,832 complaints involving fraud attempts via “business email compromise” scams in particular, with adjusted losses totaling over $2.7 billion. 

Such attacks have accelerated as generative artificial intelligence tools like ChatGPT have made it much easier for scammers to create “close-to-perfect” texts, emails, phishing websites, and deepfake voices at scale, according to Trustpair. 

“ChatGPT-generated text messages, hacked websites, and deep-fake phone calls are now the norm as fraudsters use cutting-edge technology and AI to move faster and better than ever before,” the report said. 

With a small sample of audio, cybercriminals can clone the voice of nearly anyone and send bogus messages by voicemail or voice messaging texts, according to a 2023 report from cybersecurity firm McAfee. 

“The aim, most often, is to trick people out of hundreds, if not thousands, of dollars,” the report said. Of 7,000 people surveyed by McAfee, one in four said they had experienced an AI voice-cloning scam or knew someone who had. Seven in 10 respondents (70%) said they weren’t confident they could tell the difference between a cloned voice and the real thing. 

AI can also be used by criminals to review large volumes of data for the purpose of identifying potential targets and tailoring scam content, according to a report released last December by PricewaterhouseCoopers. “There is no hard evidence that this is currently happening, but there was a belief amongst some of those that we spoke to that this risk will increase in prevalence over time,” the report said. 

Of the companies that were targeted by fraud attempts in the past year, most (90%) were hit with at least one successful attack, according to the Trustpair study. For 25% of companies, the average financial loss of successful fraud attacks was more than $5 million. 

Financial loss isn’t the only potential risk of such attacks, the report said. Half (50%) of finance and treasury leaders cited the possibility of reputational damage with customers or investors as a concern. 

Even as payment fraud escalates, many companies aren’t adequately prepared to counter it: just over half (56%) of respondents reported an increase in anti-fraud technology spending in the last six to 12 months, according to Trustpair. 

“For budget and prioritization reasons, as well as a lack of awareness about market solutions, companies aren’t shifting to automation quickly enough and are still lagging behind fraudsters,” the report said. 
