In response to growing enterprise use of artificial intelligence (AI) systems – and a corresponding lack of guidance on how to manage their risks – the U.S. National Institute of Standards and Technology (NIST) introduced the AI Risk Management Framework (AI RMF) in January 2023. According to NIST, the goal of the AI RMF is to "offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems." The AI RMF is voluntary and can be applied in any company, industry, or geography.
The RMF is divided into two parts. Part 1 provides an overview of AI risks and the characteristics of what NIST refers to as "trustworthy AI systems." Part 2 describes four functions to help organizations address the risks of AI systems: Govern, Map, Measure, and Manage. The illustration below reviews the four functions.
The functions in the AI risk management framework. Courtesy: NIST
It is important for organizations to apply risk management principles to minimize the potential negative impacts of AI systems, such as hallucinations, data privacy violations, and threats to civil rights. This consideration also extends to the use of third-party AI systems and to third parties' use of AI, which carries its own risks of misuse. According to NIST, the RMF will help organizations overcome these potential risks.
The NIST AI RMF breaks down its four core functions into 19 categories and 72 subcategories that define specific actions and outcomes. NIST offers a handy playbook that further explains the actions.
The table below reviews the four functions and select categories in the framework and suggests considerations to address potential third-party AI risks.
NOTE: This is a summary table. For a full examination of the NIST AI Risk Management Framework, download the full version and engage your organization’s internal audit, legal, IT, security and vendor management teams.
| NIST AI RMF Function and Select Categories | TPRM Considerations |
|---|---|
| **Govern** is the foundational function in the RMF: it establishes a culture of risk management, defines processes, and provides structure to the program. GOVERN 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively. GOVERN 2: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks. GOVERN 3: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle. GOVERN 4: Organizational teams are committed to a culture that considers and communicates AI risk. GOVERN 5: Processes are in place for robust engagement with relevant AI actors. GOVERN 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues. | Build AI policies and procedures into your comprehensive third-party risk management (TPRM) program, in line with your broader information security and governance, risk, and compliance frameworks. Seek out experts to collaborate with your team on defining and implementing AI and TPRM processes and solutions; selecting risk assessment questionnaires and frameworks; and optimizing your program to address AI risks throughout the entire third-party lifecycle, from sourcing and due diligence to termination and offboarding, according to your organization's risk appetite. |
| **Map** establishes the context to frame risks related to an AI system. MAP 1: Context is established and understood. MAP 2: Categorization of the AI system is performed. MAP 3: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood. MAP 4: Risks and benefits are mapped for all components of the AI system, including third-party software and data. MAP 5: Impacts to individuals, groups, communities, organizations, and society are characterized. | Developing a sound risk management process and understanding the context of AI usage begins with profiling and tiering third parties, which involves quantifying [inherent risks](/use-cases/vendor-inherent-risk-scoring/) for all third parties (in this case, their inherent AI risks). Rule-based tiering logic enables vendor classification and categorization using a range of data interaction and regulatory criteria. |
| **Measure** analyzes, assesses, benchmarks, and monitors AI risk and related impacts. MEASURE 1: Appropriate methods and metrics are identified and applied. MEASURE 2: AI systems are evaluated for trustworthy characteristics. MEASURE 3: Mechanisms for tracking identified AI risks over time are in place. MEASURE 4: Feedback about efficacy of measurement is gathered and assessed. | Look for solutions that feature a large library of pre-built templates for third-party risk assessments. Evaluate third-party vendors' AI practices at onboarding, at contract renewal, and at any required frequency (e.g., quarterly or annually), or whenever material changes occur. Manage assessments centrally, backed by workflow, task management, and automated evidence review capabilities, so that your team has visibility into third-party risks throughout the relationship lifecycle. A TPRM solution should also include built-in remediation recommendations based on assessment results to ensure that third parties address risks in a timely and satisfactory manner, while providing the appropriate evidence to auditors. To complement vendor AI evaluations, continuously track and analyze external threats to third parties by monitoring the Internet and dark web for cyber threats and vulnerabilities. Correlate all monitoring data with assessment results and centralize it in a unified risk register for each vendor to streamline risk review, reporting, remediation, and response. Finally, continuously measure third-party KPIs and KRIs against your requirements to uncover risk trends, determine third-party risk status, and identify exceptions to common behavior that warrant further investigation. |
| **Manage** entails allocating resources to mapped and measured risks on a regular basis, as defined by the Govern function, including plans to respond to, recover from, and communicate about incidents or events. MANAGE 1: AI risks based on assessments and other analytical output from the Map and Measure functions are prioritized, responded to, and managed. MANAGE 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors. MANAGE 3: AI risks and benefits from third-party entities are managed. MANAGE 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly. | As part of your broader incident management strategy, ensure that your third-party incident response program enables your team to rapidly identify, respond to, report on, and mitigate the impact of third-party AI security incidents. Armed with the insights such a program provides, your team can better manage and triage affected third parties, understand the scope and impact of an incident, determine what data was involved and whether the third party's operations were impacted, and confirm when remediations are completed. |
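To make the Map-stage tiering concrete, here is a minimal sketch of rule-based inherent AI risk scoring and vendor tiering. The criteria names, weights, and thresholds below are illustrative assumptions for demonstration only; they are not defined by NIST or the AI RMF and should be calibrated to your organization's risk appetite.

```python
# Illustrative sketch of rule-based vendor tiering for inherent AI risk.
# All criteria, weights, and thresholds are hypothetical examples.

def inherent_ai_risk_score(vendor: dict) -> int:
    """Score a vendor 0-100 from data interaction and regulatory criteria."""
    score = 0
    if vendor.get("processes_sensitive_data"):   # data interaction criterion
        score += 40
    if vendor.get("ai_in_critical_operations"):  # operational dependency
        score += 30
    if vendor.get("subject_to_ai_regulation"):   # regulatory consideration
        score += 20
    if vendor.get("uses_fourth_party_ai"):       # supply chain exposure
        score += 10
    return score

def tier(score: int) -> str:
    """Map an inherent risk score to an assessment tier."""
    if score >= 70:
        return "Tier 1: full AI assessment, continuous monitoring"
    if score >= 40:
        return "Tier 2: standard AI assessment at onboarding and renewal"
    return "Tier 3: self-attestation questionnaire"

vendor = {"processes_sensitive_data": True, "subject_to_ai_regulation": True}
print(tier(inherent_ai_risk_score(vendor)))  # score 60 -> Tier 2
```

In practice, a TPRM platform would evaluate rules like these against intake questionnaire responses so that each vendor's assessment depth and frequency follow from its inherent risk classification.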
Prevalent can help your organization improve not only its own AI governance, but also how it governs third-party AI risks. Leveraging the NIST AI Risk Management Framework in your TPRM program will help your organization establish controls and accountability over third-party AI usage. For more on how Prevalent can help simplify this process, request a demo today.