Navigating the AI-ML landscape: Risk management in the age of Artificial Intelligence

Oct 16, 2024
ai

Driven by advances in AI research, along with faster and cheaper cloud computing that brings that research to life, the supply of and demand for AI & ML are exploding.

Advances in AI-ML have been rapid in recent years and continue to accelerate, challenging legal and regulatory frameworks to keep pace. The use of these AI tools is not without risk, most notably around bias and fairness: numerous reported examples illustrate how AI-ML solutions have had unanticipated impacts on people.

This article outlines how practitioners, marketers and users of AI-ML should be thinking about the risks of this technology and how to manage these risks.

Regulatory framework

My specific domain expertise is in financial services, a sector that has been navigating model risk management since the financial crisis of 2008/2009, and the guidelines developed there offer useful insights.

Outside of financial institutions, we see emerging regulatory frameworks in Canada with the proposed Artificial Intelligence and Data Act (AIDA) and in the EU with the EU Artificial Intelligence Act, the first comprehensive AI law to be passed.

Key areas of consideration

Whether you are leading a mature existing team or a brand-new one, let's start with a focus on the following three areas.

Data maturity and risk

The quality of any AI-ML solution depends on your data. This calls for a robust data strategy, from which the required policies and procedures flow. These policies and procedures would cover areas such as:

  • Data inventory of all sources/tables/fields. 
  • Risk ratings for all the data. 
  • Assessments of data and metadata quality. 
  • Data retention.  
  • Metrics to track progress and issues.
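To make the checklist concrete, here is a minimal sketch of what a data-inventory entry with a risk rating might look like. The field names, the rating thresholds, and the assets themselves are illustrative assumptions, not anything prescribed by a specific policy:

```python
from dataclasses import dataclass
from enum import Enum

class RiskRating(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class DataAsset:
    """One inventory entry: a source/table/field plus its risk metadata."""
    source: str
    table: str
    field: str
    contains_pii: bool
    retention_days: int       # supports the data-retention policy
    quality_score: float      # e.g. fraction of non-null, in-range values

    def risk_rating(self) -> RiskRating:
        # Illustrative policy: PII or poor quality pushes the rating up.
        if self.contains_pii:
            return RiskRating.HIGH
        if self.quality_score < 0.95:
            return RiskRating.MEDIUM
        return RiskRating.LOW

inventory = [
    DataAsset("crm", "customers", "email", contains_pii=True,
              retention_days=2555, quality_score=0.99),
    DataAsset("web", "clickstream", "page_id", contains_pii=False,
              retention_days=365, quality_score=0.97),
]

# Metric to track progress and issues: count of assets by risk rating.
by_rating: dict[RiskRating, int] = {}
for asset in inventory:
    by_rating[asset.risk_rating()] = by_rating.get(asset.risk_rating(), 0) + 1
```

In practice the inventory would live in a data catalog rather than code, but even a simple structured record like this makes the risk-rating and retention policies auditable.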


Bias/fairness

It’s important to test AI-ML solutions rigorously for bias. A few questions to consider:

  • Do you have the right controls in place to ensure the proper usage of the AI-ML solution?  
  • Can you monitor the ongoing bias outcome?

AIDA, for example, focuses on human impact, with consideration for:

  • Effects on the health, safety, and human rights of individuals.
  • Severity and the scale of the impact.
  • Opt-out difficulty.

While some biases may have business justifications, it is critical to have robust processes in place to assess and mitigate bias generated by AI-ML models.
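Monitoring the ongoing bias outcome usually reduces to tracking a fairness metric over time. As one common example (my choice for illustration, not a metric the article mandates), a minimal sketch of demographic parity difference, the gap in favourable-outcome rates between groups, with made-up data:

```python
def demographic_parity_difference(outcomes, groups):
    """Gap in positive-outcome rates across groups.

    outcomes: list of 0/1 model decisions (1 = favourable, e.g. approved)
    groups:   parallel list of group labels for a protected attribute
    A value near 0 suggests similar approval rates across groups.
    """
    rates = {}
    for g in set(groups):
        decisions = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(decisions) / len(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical monitoring check on a batch of recent decisions:
gap = demographic_parity_difference(
    outcomes=[1, 1, 0, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
# Here group A is approved 75% of the time and group B 25%, so gap = 0.5.
```

A production control would compute this on a schedule and alert when the gap exceeds a tolerance the business has justified and documented.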

Explainability

Can you explain why your model produced a certain output, and explain the decision or action taken as a result? Legacy models were easier to explain because of their limited inputs; newer AI-ML models are less transparent and use significantly more inputs, making explanation much more difficult. Developers can leverage various tools and techniques to address this size and dimensionality problem.
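One widely used technique for this (my example, not one the article specifies) is permutation importance: shuffle one input at a time and measure how much the model's score drops, revealing which inputs actually drive the output. A self-contained sketch with a toy model:

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Score drop per feature when that feature's column is shuffled.

    model:  callable mapping a list of feature rows to predictions
    metric: callable(y_true, y_pred) -> score, higher is better
    Returns one importance per feature (baseline score minus shuffled score).
    """
    rng = random.Random(seed)
    baseline = metric(y, model(X))
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [col[i]] + row[j + 1:]
                      for i, row in enumerate(X)]
            drops.append(baseline - metric(y, model(X_perm)))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy model that only looks at feature 0, so feature 1 should score ~0.
model = lambda X: [1 if row[0] > 0.5 else 0 for row in X]
accuracy = lambda y_true, y_pred: sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
imps = permutation_importance(model, X, y, accuracy)
```

The same idea scales to high-dimensional models, and library implementations (e.g. in scikit-learn) apply it to fitted estimators directly; attribution methods such as SHAP offer finer, per-prediction explanations.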

Looking forward

These areas are a starting point for building AI-ML teams and deploying solutions in your businesses. Future blogs will delve deeper into these topics and explore additional considerations.


AUTHORED BY

Richard Nestor

Practice Lead – TECE AI/ML Practice Area TD Bank Group



