
Mitigate AI-driven fraud: Trends and next steps for financial institutions

Terri Luttrell, CAMS-Audit, CFCS
August 29, 2024

Technology advances lead to increased AI-driven fraud 

Are traditional methods of detection enough to protect your financial institution from AI-driven fraud losses?

You might also like this webinar, "How to confidently navigate AI: 5 Ways to leverage at your financial institution."

Watch now

 

Introduction

Artificial intelligence and its impact on fraud

Artificial intelligence (AI) has become a global game-changer, offering innovative solutions and efficiencies that were once unimaginable. While AI provides tremendous benefits to financial institutions—streamlining operations, enhancing customer experiences, and bolstering security—it also presents new opportunities for criminals. According to Deloitte, AI-enabled fraud losses in the U.S. are projected to soar to $40 billion by 2027, a significant jump from $12.3 billion in 2023. This sharp increase signals an urgent need for financial institutions to strengthen their fraud detection measures and stay ahead of these sophisticated threats. This blog explores some of the more common AI methods fraudsters use to prey on their victims.

New developments

Understanding AI-driven fraud trends

Extortion and spokesperson deepfakes

Deepfake fraud, which involves creating highly realistic but fake videos and audio, is one of the most alarming developments in AI-driven fraud. According to the FBI, deepfake images are used for extortion: creating sexually explicit photos, demanding ransom, and threatening to expose the fake images. This type of extortion often targets young adults.

Criminals increasingly use deepfakes to impersonate well-known figures, making scams more convincing and widespread. For example, a deepfake video of Elon Musk was widely circulated online, promoting a cryptocurrency scam. The video used footage from a real TED Talk featuring Musk, misleading viewers into believing it was a legitimate endorsement and leading to financial losses for some investors.

In another recent example, a retiree thought he had found an opportunity to secure a better future for his family when he stumbled upon a video of Elon Musk endorsing a promising investment. Convinced by the deepfake pitch, the man opened an account with an initial deposit of $248. Over the following weeks, he invested everything he had—more than $690,000—draining his entire retirement savings in the process. What began as a hopeful investment quickly turned into a devastating financial loss, illustrating the effectiveness of today’s AI-powered fraud.

High-profile incidents like these highlight the power of deepfakes, but the threat isn't limited to celebrities. Recently, a business email compromise scam lured a Hong Kong finance officer into transferring $25 million to criminals who had used deepfake technology to impersonate his company's Chief Financial Officer during a video call. The scam was so convincing that it bypassed multiple security checks, demonstrating the serious risks deepfake fraud poses to financial institutions.

Voice cloning with AI

Voice cloning is another AI ability that can be used to make traditional scams more effective. An Arizona mother answered a call from an unfamiliar number, only to hear what she believed was her 15-year-old daughter in distress, supposedly being held by kidnappers demanding $50,000. The voice sounded exactly like her daughter’s, but AI had generated it. Fortunately, the mother’s concerned friends quickly contacted 911 and her husband, leading to the discovery that her daughter was safe.

Similarly, a Taylor Swift voice clone was used in an advertising scam in which the pop star appeared to endorse a giveaway of a popular brand of cookware. Fans were directed to a fake website where they were charged for nonexistent products. This scam exploited Swift's popularity and known fondness for the cookware brand, making it particularly effective in deceiving her followers.​

AI-driven phishing

Phishing has long been a significant threat to financial institutions, but AI is taking these scams to a new level. A recent survey conducted by the Harvard Business Review showed that 60% of participants fell victim to AI-generated phishing. AI has bolstered phishing tactics, enabling scammers to pull in over $2 billion in 2022 alone. Since the arrival of ChatGPT in late 2022, there has been a staggering 1,265% surge in malicious phishing emails, according to cybersecurity experts at SlashNext. By analyzing a target’s communication patterns, AI can generate phishing emails that closely mimic the writing style of trusted colleagues or companies, making them incredibly convincing. AI phishing is expected to increase drastically in quality and quantity over the coming years.

Synthetic identity fraud

According to Forbes, identity theft has become a major concern across the globe, impacting more than 42 million individuals and accounting for around $52 billion in losses in the U.S. With the emergence of generative AI, the banking sector and other businesses are facing a new and more complex risk: synthetic identity fraud, where criminals combine real and fake information to create new, fictional identities. The convenience of digital banking and other online services has made personal details more easily accessible. Fraudsters are using AI to generate realistic names, Social Security numbers, and other identifying details that appear legitimate but do not correspond to actual people. These synthetic identities are used to open bank accounts, apply for credit, or commit insurance fraud. AI makes this process more seamless, increasing the volume of identity fraud.

Staying on top of fraud is a full-time job. Let our Advisory Services team help when you need it.

Connect with an expert

Prevention methods

Strategies for preventing AI-driven fraud losses

As AI-driven fraud becomes more prevalent, financial institutions must proactively protect themselves and their clients. Here are five key strategies for loss mitigation:

  1. Adopt advanced fraud detection systems: Fraud detection solutions leverage AI to combat AI, making them one of the most effective tools available for detecting fraud before it escalates. Some fraud detection solutions use machine learning algorithms that can analyze transaction data to detect and flag unusual patterns indicative of fraud. Utilizing technology like this is essential for staying ahead of sophisticated fraud typologies and mitigating hard dollar losses.
  2. Enhance employee training: Continuous education is vital to ensure that employees can recognize the latest fraud trends and signs of AI-driven fraud. This includes being able to identify phishing attempts, deepfake videos, and other AI-powered scams. Front-line staff must understand the importance of robust know-your-customer (KYC) procedures and fraud detection at onboarding to prevent fraudsters from breaching your bank or credit union.
  3. Strengthen verification processes: Financial institutions should implement multi-factor authentication for client verification to counteract threats like deepfakes and synthetic identity fraud. These measures can help prevent fraudsters from gaining unauthorized access to accounts.
  4. Bolster governance and compliance frameworks: Establish a dedicated committee to oversee AI deployment, ensuring that fraud detection technologies are used ethically and comply with regulatory requirements. Robust compliance reporting is crucial for maintaining transparency and readiness for regulatory examinations.
  5. Raise client awareness: Launch a comprehensive fraud awareness campaign for your community. Host informational meetings in your branches or local community centers. Educate customers about the risks of generative AI through seminars, email newsletters, and social media. Emphasizing the importance of vigilance and fraud reporting will strengthen your trusted advisory relationship with clients.
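To make the pattern-flagging idea in strategy 1 concrete, here is a toy sketch—not Abrigo's product or any production fraud model—that flags a transaction whose amount deviates sharply from an account's recent history using a simple z-score rule. The function name, threshold, and sample amounts are illustrative assumptions; real systems rely on far richer machine-learning features (device, location, velocity, counterparty) rather than amount alone.

```python
# Toy illustration of rules-based transaction flagging.
# Production fraud detection uses trained ML models on many features;
# this shows only the core "deviation from normal pattern" idea.
from statistics import mean, stdev

def flag_unusual(history, new_amount, threshold=3.0):
    """Return True if new_amount deviates from recent history
    by more than `threshold` standard deviations."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_amount != mu
    z_score = abs(new_amount - mu) / sigma
    return z_score > threshold

recent = [42.50, 88.00, 65.25, 120.00, 75.10, 90.00]

print(flag_unusual(recent, 95.00))    # False: a typical purchase amount
print(flag_unusual(recent, 9500.00))  # True: sudden large transfer, flagged
```

A rule this simple would generate both false positives and misses in practice; its only purpose here is to show why baselining "normal" behavior per account is the foundation that ML-driven detection builds on.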

AI is undoubtedly a powerful tool that offers significant benefits to the financial sector but also introduces new risks. As fraudsters become more sophisticated in using AI, financial institutions must be vigilant and proactive in their approach to fraud prevention. By investing in the right technologies and continuously updating security protocols, banks and credit unions can protect their assets and maintain the trust of their clients, even as the threat of AI-driven fraud continues to grow.

Find out how Abrigo Fraud Detection stops check fraud in its tracks.

About the Author

Terri Luttrell, CAMS-Audit, CFCS

Compliance and Engagement Director
Terri Luttrell is a seasoned AML professional and former director and AML/OFAC officer with over 20 years in the banking industry, working in medium and large community and commercial banks ranging from $2 billion to $330 billion in asset size.

Full Bio

About Abrigo

Abrigo enables U.S. financial institutions to support their communities through technology that fights financial crime, grows loans and deposits, and optimizes risk. Abrigo's platform centralizes the institution's data, creates a digital user experience, ensures compliance, and delivers efficiency for scale and profitable growth.

Make Big Things Happen.