AI fraudsters are targeting work email with convincing financial scams

February 14, 2024
1 min read

TLDR:

  • Criminals are using generative AI to create convincing financial scams, including phishing and spear phishing emails.
  • Generative AI makes it harder to distinguish between what is real and what is not, as criminals can create realistic videos and deepfakes.
  • Larger companies are particularly vulnerable to these scams, and the proliferation of APIs and payment apps widens the opportunities for fraud at scale.
  • The financial industry is using its own generative AI models to fight against fraud.

Even companies that ban the use of generative artificial intelligence (AI) are falling prey to financial scams that deploy the technology. Criminals armed with AI tools can easily create realistic videos, fake identities, and convincing deepfakes using the voices and images of company executives. A recent scam that cost a Hong Kong-based company over $25 million demonstrates the level of sophistication that these scams have reached.

Generative AI makes it harder to tell what is real and what is not. Cybercriminals can use AI models to create convincing phishing and spear phishing emails that resemble trusted sources. These emails can trick employees into sharing sensitive information or making fraudulent payments. Larger organizations with annual revenues of $1 billion or more are particularly susceptible to email scams.

The problem is compounded by the growing scale of financial fraud, driven by automation and the increasing number of websites and apps handling financial transactions. The proliferation of payment solutions and the use of APIs in the financial services industry give criminals more opportunities to carry out these scams at a larger scale.

The financial industry is fighting back by developing its own generative AI models to detect fraud. For example, Mastercard has built an AI model to identify “mule accounts” used by criminals to move stolen funds. However, as generative AI technology continues to advance, the challenge of distinguishing between real and fake identities becomes more difficult.

Cybersecurity experts recommend that companies develop specific procedures for money transfers and find ways to verify requests through channels other than email or messaging platforms. They also suggest implementing more detailed authentication processes, such as requesting users to perform specific actions during identity verification.
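As a rough illustration of the out-of-band verification these experts describe, the sketch below (hypothetical names, not any vendor's API) holds an emailed transfer request until it is confirmed through a second, independent channel:

```python
# Minimal sketch of out-of-band approval for money transfers (hypothetical names).
# A transfer requested by email is not paid until it is confirmed through a
# second channel, e.g. a phone callback logged by the finance team.

from dataclasses import dataclass, field


@dataclass
class TransferRequest:
    request_id: str
    amount: float
    beneficiary: str
    requested_via: str                      # channel the request arrived on, e.g. "email"
    confirmations: set = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        """Record a confirmation obtained through an independent channel."""
        self.confirmations.add(channel)

    def is_approved(self) -> bool:
        """Approve only if at least one confirmation came from a different channel."""
        return any(ch != self.requested_via for ch in self.confirmations)


# Usage: an emailed request stays on hold until verified by phone callback.
req = TransferRequest("TR-001", 250_000.0, "Acme Supplies Ltd", "email")
print(req.is_approved())      # False: the email alone is not enough
req.confirm("phone_callback")
print(req.is_approved())      # True: verified through a second channel
```

The point of the design is simply that the approving signal must arrive over a channel the attacker does not control, which is what defeats a spoofed or deepfaked email thread.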

In short, generative AI is fueling a surge of highly convincing financial scams, and companies need to adapt their cybersecurity strategies to keep pace with this growing threat.
