TIMES OF TECH


Organizational Processes for Machine Learning Risk Management – Open Data Science

1. Forecasting Failure Modes

While it is crucial to identify and address possible problems in ML systems, turning this idea into action takes deliberate time and effort. Fortunately, in recent years the resources available to help ML system designers predict issues systematically have grown considerably. By carefully cataloging potential problems in advance, it becomes easier to make ML systems more robust and safer in real-world settings. In this context, the following strategies can be explored:

Learning from past failures

Much like how transportation professionals investigate and catalog incidents to prevent future occurrences, ML researchers and organizations have started collecting and analyzing A.I. incidents. The A.I. Incident Database, which we also brought up in the last article, is a prominent repository that allows users to search for incidents and glean valuable insights. When developing an ML system, consulting this resource is crucial.

If a similar approach has caused an incident in the past, it serves as a strong warning sign that the new system may also pose risks, necessitating careful consideration.
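This consultation step can be made routine rather than ad hoc. As a minimal sketch, the function below scans a local export of incident records for keywords describing the planned system (e.g., its domain or technique). The CSV schema (`incident_id`, `title`, `description`) and the export itself are assumptions for illustration, not the database's actual API.

```python
import csv


def find_related_incidents(csv_path, keywords):
    """Return IDs of incident records whose title or description
    mentions any of the given keywords (case-insensitive).

    Assumes a hypothetical CSV export with columns:
    incident_id, title, description.
    """
    matches = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = (row.get("title", "") + " " + row.get("description", "")).lower()
            if any(kw.lower() in text for kw in keywords):
                matches.append(row["incident_id"])
    return matches
```

A non-empty result for keywords describing the new system is exactly the warning sign discussed above: a similar approach has already caused a documented incident and warrants careful review before release.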

Preventing Repeated Real-World A.I. Failures by Cataloging Incidents: The A.I. Incident Database | Source: https://arxiv.org/pdf/2011.08512.pdf

Addressing failures of imagination

Overcoming Failures of Imagination in AI-Infused System Development and Deployment | Source: https://arxiv.org/pdf/2011.13416.pdf

Often, A.I. incidents stem from unforeseen or poorly understood contexts and details of ML systems' operations. The structured approaches outlined in the paper "Overcoming Failures of Imagination in AI-Infused System Development and Deployment" offer ways to hypothesize about these hard-to-anticipate future risks. They prompt teams to consider the "who" (including investors, customers, and vulnerable nonusers), "what" (covering well-being, opportunities, and dignity), "when" (including immediate, frequent, and long-term scenarios), and "how" (involving actions and belief alterations) of potential A.I. incidents.

While A.I. incidents can be embarrassing, costly, or even illegal for organizations, foresight can mitigate many known failure modes and often leads to system improvements. In such cases, a temporary delay in deployment is far less costly than the harm a flawed system release can inflict on the organization and the public.


