
Mitigating AI-enhanced cybersecurity risks for financial institutions

Edward Callis, CPA, CISSP, CCSP
October 25, 2024

Helpful info for Cybersecurity Awareness Month 

Helping financial institutions mitigate AI-enhanced cybersecurity risks is the focus of a recent letter from the N.Y. Department of Financial Services.

Cyber threat warnings for banks & credit unions 

Whether intentionally timed to coincide with Cybersecurity Awareness Month or not, the New York State Department of Financial Services (DFS) recently issued an industry letter addressing the cybersecurity risks associated with artificial intelligence (AI). This guidance is particularly relevant for banks and credit unions, as it aims to help them understand and manage evolving AI-driven threats and offers strategies to mitigate AI-specific risks.

“AI has improved the ability for businesses to enhance threat detection and incident response strategies, while concurrently creating new opportunities for cybercriminals to commit crimes at greater scale and speed,” said DFS Superintendent Adrienne A. Harris.

These advancements in AI have been highlighted in a number of industry announcements, whether in fintech or consumer media. While AI offers enhanced capabilities for threat detection and incident response, it also introduces new opportunities for cybercriminals to exploit vulnerabilities at a greater scale and speed. This dual-edged nature of AI necessitates a comprehensive approach to cybersecurity.

The DFS industry letter described three primary risks posed by AI: 

  • Social engineering
  • Cybersecurity attacks
  • Data theft


Social engineering threats 

One of the most significant threats highlighted in the DFS letter is AI-enabled social engineering. Traditional social engineering attacks, such as phishing, have been a persistent issue in cybersecurity. However, AI has amplified these threats by enabling the creation of highly personalized and sophisticated content. For instance, AI can generate realistic audio, video, and text deepfakes that can deceive individuals into divulging sensitive information.

A notable case involved a deepfake video call in which company officials, including the Chief Financial Officer, instructed a finance worker to transfer funds to a fraudulent account. The employee, convinced the call was authentic, complied, resulting in a substantial financial loss for the company.

Other notable examples include human resources and payroll personnel updating employee direct deposit information based on AI-generated phishing emails. Financial institutions and other organizations can mitigate this threat with self-service human resource information system (HRIS) platforms that direct personnel to update their own direct deposit details.

AI-enhanced cybersecurity attacks 

AI also enhances the potency, scale, and speed of cyberattacks. Threat actors can use AI to scan and analyze vast amounts of data quickly, identifying and exploiting security vulnerabilities more efficiently than ever before. This capability allows them to conduct reconnaissance, deploy malware, and exfiltrate nonpublic information (NPI) at an unprecedented rate.

AI has also been used to develop new ransomware variants that can bypass traditional security controls. In one such incident, the ransomware spread rapidly across the network, encrypting critical data and demanding a hefty ransom. Financial institutions should ensure corporate systems are isolated from backend operations, including through the use of separate network layers and authentication platforms. These controls can limit the spread of ransomware should an employee endpoint be compromised.
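The default-deny segmentation described above can be sketched in a few lines. This is a toy illustration only; the zone names and allowed flows are hypothetical assumptions, not drawn from the DFS letter or any specific firewall product.

```python
# Toy sketch of a default-deny network segmentation policy.
# Zone names and rules are illustrative assumptions.

# Traffic is permitted only between explicitly allowed zone pairs.
ALLOWED_FLOWS = {
    ("corporate", "internet"),
    ("backend", "backend"),
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Deny by default: allow only explicitly listed zone pairs."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# A compromised corporate endpoint cannot reach backend operations,
# limiting how far ransomware can spread laterally.
assert flow_permitted("corporate", "internet")
assert not flow_permitted("corporate", "backend")
```

The key design choice is the default-deny posture: any flow not explicitly listed is blocked, so a newly compromised segment gains no reach it was not already granted.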


Preventing AI-related data theft

AI systems often require large datasets, including NPI, to function effectively. This necessity increases the risk of data breaches, as threat actors are incentivized to target entities with substantial amounts of sensitive information. Additionally, the storage of biometric data, such as facial and fingerprint recognition, poses further risks. Stolen biometric data can be used to bypass multi-factor authentication (MFA) and gain unauthorized access to information systems.

A financial institution experienced a data breach in which AI was used to manipulate biometric data, allowing attackers to impersonate authorized users and access confidential information. It is still too early to recommend specific mitigation strategies for this type of attack, although general access management controls based on the principle of least privilege are strongly encouraged.
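Least-privilege access management amounts to granting each role only the permissions it strictly needs and denying everything else. The sketch below illustrates the idea; the role and permission names are hypothetical examples, not part of the DFS guidance.

```python
# Minimal sketch of least-privilege, role-based access checks.
# Role and permission names are illustrative assumptions.

# Each role is granted only the permissions it strictly needs.
ROLE_PERMISSIONS = {
    "teller": {"read:customer_profile"},
    "loan_officer": {"read:customer_profile", "write:loan_application"},
    "it_admin": {"read:system_logs", "write:system_config"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: grant access only if explicitly assigned."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Even an attacker impersonating a teller gains no access to
# system configuration, because that role was never granted it.
assert is_allowed("teller", "read:customer_profile")
assert not is_allowed("teller", "write:system_config")
```

Under this model, a stolen or spoofed credential exposes only the narrow set of permissions tied to that one role, rather than the entire system.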


Strategies to mitigate AI-related risks

In addition to highlighting AI-enhanced risks, the DFS recommended several mitigation strategies. Most of these represent administrative control enhancements rather than technical controls, which means many community financial institutions can adopt certain AI risk mitigation techniques without deploying new systems.

Specifically, the DFS letter outlines four strategies to mitigate the cybersecurity risks associated with AI:

  1. Risk assessment and management: Institutions should conduct thorough risk assessments to identify potential AI-related threats and implement appropriate controls. This includes regular reviews and updates to security policies and procedures. Members of internal audit teams can leverage resources from ISACA, the international professional association focused on IT governance, including complimentary AI audit programs that come with membership.
  2. Employee training and awareness: Training programs should be designed to educate employees about the risks posed by AI and the measures to mitigate these risks. This includes training to recognize AI-enabled social engineering attacks and respond appropriately. Updating annual training to include AI-specific threats, particularly around phishing, can assist both frontline and back-office staff in identifying and reporting suspicious emails at the institution.
  3. Collaboration and information sharing: Institutions should collaborate with industry peers and regulatory bodies to share information about emerging threats and best practices. This collective approach can help improve the financial services sector’s overall cybersecurity posture. Information technology and information security staff at institutions can benefit through membership with information sharing associations, such as the Financial Services Information Sharing and Analysis Center (FS-ISAC).
  4. Advanced security technologies: Leveraging advanced security technologies, such as AI-driven threat detection systems, can enhance an entity’s ability to detect and respond to cyber threats. These systems can analyze patterns and anomalies in real time, providing early warnings of potential attacks. While implementing new vendor technologies can come at a cost, many institutions find that their existing vendors already include AI-driven detection in their threat detection solutions, since doing so has become a competitive requirement.
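The real-time pattern and anomaly analysis mentioned in the fourth strategy can be illustrated with a simple statistical sketch: flag observations that deviate sharply from a learned baseline. The z-score approach, threshold, and failed-login data below are invented for illustration; production detection systems are far more sophisticated.

```python
import statistics

# Toy illustration of anomaly flagging (not a production detector):
# flag observations whose z-score against a baseline is extreme.

def flag_anomalies(baseline, observed, threshold=3.0):
    """Return observed values deviating more than `threshold`
    standard deviations from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mean) / stdev > threshold]

# Hourly failed-login counts: a quiet baseline, then a sudden burst
# that a monitoring system would surface as a potential attack.
baseline = [4, 5, 6, 5, 4, 6, 5, 5]
observed = [5, 6, 250]
print(flag_anomalies(baseline, observed))  # the burst of 250 is flagged
```

Even this crude baseline-versus-observation comparison captures the core value proposition: surfacing the unusual burst early, before an analyst would spot it in raw logs.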

Proactive, comprehensive approaches

The evolving landscape of AI in cybersecurity presents both opportunities and challenges. Banks and credit unions must stay vigilant and adopt a multi-layered approach to safeguard their information systems and protect sensitive data from cyber threats.

The DFS industry letter underscores the importance of a proactive and comprehensive approach to managing AI-related cybersecurity risks. By understanding the unique threats posed by AI and implementing robust mitigation strategies, financial institutions can better protect themselves and their customers from cyber threats.

About the Author

Edward Callis, CPA, CISSP, CCSP

Vice President of IT Risk & Assurance
Edward Callis is a Vice President of IT Risk & Assurance at Abrigo. He leads a team of IT professionals who assess Abrigo’s vendor and partner ecosystem and provide comprehensive due diligence documentation so financial institutions can make an informed choice when selecting software platforms.

About Abrigo

Abrigo enables U.S. financial institutions to support their communities through technology that fights financial crime, grows loans and deposits, and optimizes risk. Abrigo's platform centralizes the institution's data, creates a digital user experience, ensures compliance, and delivers efficiency for scale and profitable growth.

Make Big Things Happen.