Considerations for AI Adoption at Community Financial Institutions

October 24, 2023

By: Mike Detrow, CISSP 

You have probably seen the headlines claiming that artificial intelligence (AI) models such as ChatGPT will soon replace many human jobs. Marketing campaigns are also touting the use of AI by vendors to improve the effectiveness of their data analysis tools. If you have not already started to think about the application of AI for banking operations, you will likely be evaluating it soon. Just as with any other risk management practice, it is best to evaluate new technologies proactively rather than waiting until your vendors force you to use them or your employees begin using them without your knowledge. 

The purpose of this article is to identify the risks associated with machine learning and generative AI that you should consider as you evaluate use cases for AI at your financial institution. Machine learning is the use of training data and algorithms that allow computers to learn patterns and improve at a task without being explicitly programmed to perform it. Generative AI uses machine learning to allow a computer to generate new content, such as text, images, video, or audio, based on specific input provided by a user.

The Role of AI in Financial Institutions: A Look at Practical Applications 

First, let’s explore potential use cases for AI in community financial institutions. Some of the applications that we have seen so far include:

  • Document development, such as job descriptions, policies, and marketing materials
  • Writing scripts or macros to automate routine tasks
  • Customer-facing website virtual assistants or chatbots
  • Data analysis for functions such as AML monitoring and loan underwriting

Risk Factors for AI Implementation in Community Financial Institutions 

Next, let’s examine some of the potential risks associated with the use of AI in community banks and credit unions. One of the biggest concerns is the security of non-public information. Entering such data into an AI model that is not under the complete control of the financial institution or one of its vendors introduces the risk that this sensitive information will be disclosed and misused.

In addition to security concerns, there are other risks to consider. Results provided by AI-driven decision-making models could be biased based on the data used to train the model. Also, the information provided by AI models may be inaccurate or misleading; if output is not thoroughly vetted, an employee could inadvertently disseminate incorrect information.

Building a Strong Foundation for AI Risk Management within Your Financial Institution 

Now that you are aware of the risks associated with AI, what should you do to evaluate its potential within your bank or credit union? To safeguard your financial institution in the era of rapid AI adoption, it’s imperative to set guidelines early. The first step is to establish a group within your institution to provide oversight for AI. If you already have an IT Steering Committee, AI oversight will likely be assigned to that committee, as it should already include the appropriate employees for the task. If you do not have an IT Steering Committee, consider establishing a cross-functional group of employees drawn from across the institution to handle AI oversight.

The first initiative for your AI oversight group should be a discovery process to identify any existing use of AI at the financial institution. Employees may already be using ChatGPT to help develop marketing materials or write scripts and macros, or they may be using web browser plugins to improve productivity. Some of your vendors may also be using AI for tasks associated with delivering services to your financial institution or its customers, such as AML models, loan underwriting, and website virtual assistants or chatbots.

This group should develop a plan to identify any employee use of AI, whether through conversations with employees or through web traffic analysis, as in the sketch below. Keep in mind that your IT staff may not be the only employees using AI within your financial institution.
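If your institution retains web proxy or DNS logs, even a small script can surface which employees are reaching known AI services. The following is a minimal sketch, assuming a CSV proxy log with timestamp, user, and domain columns and a hand-maintained list of AI service domains; both the log format and the domain list are illustrative assumptions that would need to be adapted to your own logging tools.

```python
# Minimal sketch: flag outbound requests to known AI services in a web proxy log.
# The log format (CSV with "timestamp,user,domain" columns) and the domain list
# are assumptions for illustration; adapt both to your proxy or DNS logging tool.

import csv
from collections import defaultdict

# Example domains associated with popular AI services; extend as needed.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def find_ai_usage(log_path: str) -> dict[str, set[str]]:
    """Return a mapping of user -> AI service domains they accessed."""
    usage = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower().strip()
            # Match the domain itself or any subdomain of it.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                usage[row["user"]].add(domain)
    return usage

if __name__ == "__main__":
    for user, domains in sorted(find_ai_usage("proxy_log.csv").items()):
        print(f"{user}: {', '.join(sorted(domains))}")
```

A domain list like this will age quickly as new AI services appear, so treat the output as a starting point for periodic review rather than a complete inventory of AI usage.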

Additionally, your AI oversight group should review vendor documentation and, if necessary, reach out to vendors to determine how they may be using AI. The purpose of this discovery process is to determine whether any current or prior use of AI by employees or vendors has put non-public data at risk, so that appropriate actions can be taken to address any potential data misuse and prevent further inappropriate AI usage.

Once the AI oversight group has identified existing use of AI by employees and vendors and addressed any potential security concerns, the next step is to formally establish the institution’s risk appetite for AI. This is done by documenting it in a policy that is approved by the board and provided to employees for their acknowledgement. You should consider the following criteria within your policy:

  • Definitions: What constitutes AI and the risks associated with its use.
  • Authorization Process: Clearly defined IT Steering Committee approval requirements for new use cases. 
  • Vendor Risk Management: Due diligence practices for new vendors and ongoing monitoring of existing vendors to understand their AI usage and the potential risks involved. 
  • Acceptable Use: Employee guidelines for the usage of AI models such as ChatGPT and browser plugins, data security, output verification process, etc. 
  • Ethical and Legal Requirements: Guidelines for nondiscrimination, regulatory compliance, and adherence to other institution policies. 
  • Intellectual Property Protection: Measures to safeguard intellectual property rights and copyrighted material. 
  • Incident Response: Procedures to detect and report any suspected security incidents. 

Note that an outright policy ban on AI is likely not feasible, especially as some of your vendors are probably already using AI or will be in the near future.

With the use of AI expected to increase rapidly over the next few years, management should establish guidelines for its use as early as possible to limit the potential for misuse at your institution.

Y&A’s Solution for Secure AI Adoption and Risk Preparedness within Financial Institutions 

In the rapidly evolving landscape of AI integration within the financial sector, striking a balance between reaping the potential benefits of this technology and practicing effective risk management can be challenging. It’s crucial to adopt a risk-ready approach to scaling AI integration in order to safeguard the future of your institution. The proliferation of AI applications shows no signs of slowing, making it wise to proactively address risks before regulatory measures come into effect. 

To streamline the process of addressing AI risk, Young & Associates offers a customizable AI policy that you can tailor to your financial institution’s specific needs.

Should you have any questions about this article, please reach out to Mike Detrow, Director of Information Technology, at [email protected] or contact us on our website. 
