
Empowering Employee AI Adoption in Banking through Ethical Governance Policies

By Derek Frost

The capabilities of AI-enabled tools are vast and transformative, with the potential to significantly enhance productivity, elevate customer experiences, and pave the way for innovative products and services. In the banking sector, employees are optimistic about AI’s impact on their job security but unconvinced their banks can use AI responsibly.

According to a Gartner survey, a significant 74% of banking professionals across diverse functions anticipate that AI will bolster the value of and demand for their roles. Additionally, 63% foresee an expansion of their role’s scope due to AI, while only 14% perceive AI as a threat to their job security. Yet while they may be confident in their own job security, only 38% believe that AI will increase job availability for everyone else. Moreover, only about half of bank employees surveyed think AI will strengthen privacy rights or create a more equal society.

This suggests that the challenge of adoption is not rooted in concerns over job security. Instead, employees express greater concern about the ethical implications of AI on their customers and colleagues. Therefore, it is important that chief information officers (CIOs) at banks understand this sentiment and work with other leaders to set responsible AI policies and practices that will improve AI tool adoption among employees.

To improve employee adoption of AI, Gartner recommends that bank CIOs implement the following steps:

  • Ensure that privacy protections and explainability are included in IT development and deployment standards for AI by promoting them on an ongoing basis, such as through governance agendas and project reporting. Such guardrails can help make AI decisions intelligible and assure staff that all stakeholder concerns are being considered and addressed.
  • Monitor and prevent AI ethics breaches by maintaining a regular dialogue with senior business leaders on the use of AI, and by expecting the same dialogue between their own direct reports and those reports’ business counterparts. This vigilance will help instill confidence among users and leaders.
  • Co-sponsor governance oversight by collaborating with legal and compliance to operationalize AI ethics inside IT and across the bank.

Bank CIOs and other AI leaders still have significant work to do to earn the trust of their workforce. They can help secure that trust by making privacy and explainability fundamental requirements for AI tools.

Sensitive customer and employee data is essential for training AI models for many banking use cases. However, as AI becomes more ubiquitous in the industry, opportunities for this sensitive data to leak will increase dramatically.

To tackle those data privacy challenges, CIOs need to adopt a multi-faceted approach. They should mandate regular reviews of vendor contracts for privacy controls and security, while also staying abreast of emerging AI security tools to guard against adversarial prompting and vector database attacks. To mitigate the impact of data breaches, CIOs should advocate for investment in synthetic data capabilities, enabling models to be trained on synthetic data wherever feasible. In situations where the use of real data is unavoidable, guidelines should be established for IT and business teams to train models using a federated approach. Additionally, CIOs should implement policies for in-house or vendor-supplied tools that filter generative AI prompts so that sensitive information is not captured by large language models (LLMs) outside the bank’s control.
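For illustration, the last of those controls might look something like the minimal sketch below: a prompt-redaction step that masks likely sensitive tokens before a request leaves the bank for an externally hosted LLM. The patterns and function names here are hypothetical placeholders, not a production-grade data loss prevention design.

```python
import re

# Hypothetical redaction patterns for illustration only; a real data loss
# prevention layer would use a much richer, locale-specific set (national IDs,
# IBANs, card numbers, names, addresses) and likely an ML-based PII detector.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ACCOUNT_NUMBER": re.compile(r"\b\d{8,12}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Mask likely sensitive tokens before the prompt leaves the bank's
    environment for an externally hosted LLM."""
    redacted = prompt
    for label, pattern in REDACTION_PATTERNS.items():
        redacted = pattern.sub(f"[{label}]", redacted)
    return redacted

if __name__ == "__main__":
    raw = ("Summarize the complaint from jane.doe@example.com about "
           "account 123456789 and card 4111 1111 1111 1111.")
    print(redact_prompt(raw))
    # Summarize the complaint from [EMAIL] about account [ACCOUNT_NUMBER]
    # and card [CARD_NUMBER].
```

In practice, a bank would pair this kind of outbound filtering with vendor-side contractual controls and the emerging AI security tooling mentioned above.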

Banking CIOs must also enforce principles of explainable AI to make AI trustworthy, practical and legal in the banking industry. While explainability has macro implications for regulators and customer trust as it relates to preventing or addressing AI model bias, it also plays a role in the day-to-day work of employees for many of the same reasons.

For instance, frontline employees need to be able to explain to customers who are turned down for a loan why their application was denied by the bank’s AI-based credit decisioning model, or how the technology influenced the terms of their loan. Explainability also gives employees the opportunity to push back against the model or to identify and report bias patterns that can negatively impact certain customers. Without explainability, banking jobs can become less fulfilling and even demoralizing; employees can feel as if they are mere appendages to the AI juggernaut, and inequality and discrimination can creep into the system more easily.

CIOs can foster explainability, whether during in-house design or as a requirement for vendors, by working with data scientists and domain experts to vet the approaches used (e.g., so-called model-agnostic or model-specific explainability frameworks). Ultimately, decision model explainability must be understandable by human beings, and results must be translated into human-digestible terms and domain-specific context. CIOs should also ensure that solutions for bias mitigation are in place, such as fairness-aware algorithms that assess models for disparities and propose viable alternatives.
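To make the idea concrete, here is a minimal, hypothetical sketch of both pieces: “reason codes” derived from a simple linear credit model, so a frontline employee can see which factors drove a decline, and a basic approval-rate disparity check of the kind a fairness-aware review might start from. The feature names and data are invented for illustration; real decisioning models and bias audits are far more rigorous.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features and synthetic data for illustration only.
FEATURES = ["income", "debt_to_income", "missed_payments", "account_age_years"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
# Invented ground truth: 1 = decline the application, 0 = approve.
y = (X[:, 1] + X[:, 2] - X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def reason_codes(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """Translate one decision into human-digestible 'reason codes': the
    features that pushed this applicant most strongly toward a decline."""
    contributions = model.coef_[0] * applicant      # per-feature push toward decline
    order = np.argsort(contributions)[::-1]         # most adverse contribution first
    return [FEATURES[i] for i in order[:top_n]]

def approval_rate_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Simple demographic-parity check: the gap in approval rates between two
    groups. A large gap flags the model for human review, not auto-rejection."""
    approved = decisions == 0
    return abs(approved[group == 0].mean() - approved[group == 1].mean())

applicant = X[0]
decision = model.predict(applicant.reshape(1, -1))[0]
print("Decision:", "decline" if decision else "approve")
print("Main factors:", reason_codes(applicant))

group = rng.integers(0, 2, size=500)  # hypothetical protected attribute
print("Approval-rate gap:", round(approval_rate_gap(model.predict(X), group), 3))
```

In practice, model-agnostic frameworks such as SHAP or LIME play the role of the hand-rolled contribution calculation above, and fairness toolkits go well beyond a single parity metric, but the end goal is the same: decisions an employee can explain and disparities someone is accountable for investigating.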

In conclusion, while AI holds immense promise for enhancing productivity and customer experiences, its potential can only be fully realized if employees adopt it and believe in its capacity to have a positive impact on people’s lives. Current ethical concerns pose significant challenges to this adoption. Therefore, CIOs must collaborate across the bank to influence the prioritization, development, deployment, and governance of AI tools, addressing these concerns to boost adoption. As they lead their function and engage with the rest of the bank to promote AI adoption, CIOs should focus on privacy, explainability, bias mitigation and elimination, use-case boundaries, and robust governance structures.

Gartner analysts will discuss top trends and best practices for CIOs and IT professionals at the Gartner IT Symposium/Xpo conference, taking place November 11-13, in Kochi, India. Media registration can be booked via [email protected].


(The author is Derek Frost, VP Analyst at Gartner, and the views expressed in this article are his own)