The NIST AI Risk Management Framework and Third-Party Risk Management

Leverage this guidance to align your TPRM program with the NIST AI RMF to better govern third-party AI risk at your organization.
By: Scott Lang, VP, Product Marketing
August 30, 2023

What Is the NIST AI Risk Management Framework?

In response to growing enterprise usage of artificial intelligence (AI) systems – and a corresponding lack of guidance on how to manage their risks – the U.S. National Institute of Standards and Technology (NIST) introduced the AI Risk Management Framework (AI RMF) in January 2023. According to NIST, the goal of the AI RMF is to “offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.” The AI RMF is a voluntary framework and can be applied by any company, in any industry or geography.

The RMF is divided into two parts. Part 1 includes an overview of risks and characteristics of what NIST refers to as “trustworthy AI systems.” Part 2 describes four functions to help organizations address the risks of AI systems: Govern, Map, Measure and Manage. The illustration below reviews the four functions.


[Figure: The NIST AI Risk Management Framework – the four functions. Courtesy: NIST]

How Does the NIST AI Risk Management Framework Apply to Third-Party Risk Management?

It is important for organizations to apply risk management principles to minimize the potential negative impacts of AI systems, such as hallucinations, data privacy violations, and threats to civil rights. This consideration also extends to the use of third-party AI systems and to third parties’ use of AI systems. Potential risks of third-party misuse of AI include:

  • Security vulnerabilities in the AI application itself. Without the proper governance and safeguards in place, your organization could be exposed to system or data compromise.
  • Lack of transparency in methodologies or measurements of AI risk. Deficiencies in measurement and reporting could result in underestimating the impact of potential AI risks.
  • AI security policies that are inconsistent with other existing risk management procedures. Inconsistency results in complicated, time-intensive audits and could expose the organization to negative legal or compliance outcomes.

According to NIST, the RMF will help organizations overcome these potential risks.

Key Third-Party Risk Management Considerations in the NIST AI Risk Management Framework

The NIST AI RMF breaks down its four core functions into 19 categories and 72 subcategories that define specific actions and outcomes. NIST offers a handy playbook that further explains the actions.

The sections below review the four functions and select categories in the framework, and suggest considerations for addressing potential third-party AI risks.

NOTE: This is a summary table. For a full examination of the NIST AI Risk Management Framework, download the full version and engage your organization’s internal audit, legal, IT, security and vendor management teams.


Govern is the foundational function in the RMF: it establishes a culture of risk management, defines processes, and provides structure to the program.

GOVERN 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.

GOVERN 2: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.

GOVERN 3: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.

GOVERN 4: Organizational teams are committed to a culture that considers and communicates AI risk.

GOVERN 5: Processes are in place for robust engagement with relevant AI actors.

GOVERN 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.

TPRM considerations: Build AI policies and procedures as part of your comprehensive third-party risk management (TPRM) program, in line with your broader information security and governance, risk, and compliance frameworks.

Seek out experts to collaborate with your team on defining and implementing AI and TPRM processes and solutions; selecting risk assessment questionnaires and frameworks; and optimizing your program to address AI risks throughout the entire third-party lifecycle – from sourcing and due diligence, to termination and offboarding – according to your organization’s risk appetite.

As part of this process, you should define:

  • Governing policies, standards, systems, and processes to protect data from AI risks
  • Legal and regulatory requirements, ensuring that third parties are assessed accordingly
  • Clear roles and responsibilities (e.g., RACI) for team accountability
  • Risk scoring and thresholds based on your organization’s risk tolerance (see the sketch after this list)
  • Assessment and monitoring methodologies that are based on third-party criticality and continually reviewed
  • Third-party AI inventories
  • Fourth-party mapping to understand exposure to AI usage-based risks in your extended ecosystem
  • Key performance indicators (KPIs) and key risk indicators (KRIs) for internal stakeholders
  • Contractual requirements and right to audit
  • Incident response requirements
  • Risk and internal stakeholder reporting
  • Risk mitigation and remediation strategies
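
To make the risk scoring and thresholds item above concrete, here is a minimal sketch in Python of how score thresholds might map to required program actions. The factor names, weights, and cutoffs are hypothetical assumptions for illustration only; they are not values prescribed by NIST or by any particular TPRM product.

```python
# Hypothetical sketch: map a weighted third-party AI risk score to an action
# threshold defined by the organization's risk tolerance. All names, weights,
# and cutoffs below are illustrative assumptions.

# Weighted risk factors (0-10 scale), e.g., from an intake questionnaire.
FACTOR_WEIGHTS = {
    "data_sensitivity": 0.35,      # does the vendor's AI touch protected data?
    "business_criticality": 0.25,
    "regulatory_exposure": 0.20,
    "fourth_party_reliance": 0.20,
}

# Risk-tolerance thresholds: score ranges mapped to required program actions,
# checked from highest to lowest.
THRESHOLDS = [
    (7.5, "critical: full AI assessment, contract clauses, continuous monitoring"),
    (5.0, "high: standard AI assessment plus annual reassessment"),
    (2.5, "moderate: lightweight questionnaire at onboarding and renewal"),
    (0.0, "low: self-attestation only"),
]

def inherent_risk_score(factors: dict[str, float]) -> float:
    """Weighted average of 0-10 factor scores."""
    return sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())

def required_action(score: float) -> str:
    for cutoff, action in THRESHOLDS:
        if score >= cutoff:
            return action
    return THRESHOLDS[-1][1]

vendor = {"data_sensitivity": 9, "business_criticality": 6,
          "regulatory_exposure": 7, "fourth_party_reliance": 4}
score = inherent_risk_score(vendor)
print(f"score={score:.1f} -> {required_action(score)}")  # score=6.9 -> high
```

Keeping the weights and thresholds in a single, reviewable structure makes it easier to audit them against your documented risk appetite.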

Map is the function that establishes the context to frame risks related to an AI system.

MAP 1: Context is established and understood.

MAP 2: Categorization of the AI system is performed.

MAP 3: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood.

MAP 4: Risks and benefits are mapped for all components of the AI system including third-party software and data.

MAP 5: Impacts to individuals, groups, communities, organizations, and society are characterized.

TPRM considerations: Developing a sound risk management process and understanding the context of AI usage begins with profiling and tiering third parties, which involves quantifying [inherent risks](/use-cases/vendor-inherent-risk-scoring/) for all third parties – in this case, inherent AI risks. Criteria used to calculate inherent risk for third-party classification and categorization include:

  • Type of content required to validate controls
  • Criticality to business performance and operations
  • Location(s) and related legal or regulatory considerations
  • Level of reliance on fourth parties (to avoid concentration risk)
  • Exposure to operational or client-facing processes
  • Interaction with protected data

From this inherent risk assessment, your team can automatically tier suppliers according to AI risk exposure, set appropriate levels of further diligence, and determine the scope of ongoing assessments.

Rule-based tiering logic enables vendor categorization using a range of data interaction and regulatory considerations.
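
As a minimal sketch, assuming a simple intake profile with data-interaction and regulatory attributes (the attribute names and rules below are hypothetical, for illustration only), such tiering logic might look like this in Python:

```python
from dataclasses import dataclass

# Hypothetical vendor profile attributes gathered during intake.
@dataclass
class VendorProfile:
    uses_ai: bool
    processes_protected_data: bool   # PII, PHI, etc.
    business_critical: bool
    regulated_jurisdiction: bool     # e.g., subject to GDPR or sector rules

def tier(profile: VendorProfile) -> int:
    """Rule-based tiering: 1 = highest scrutiny, 3 = lowest.
    Rules are evaluated top-down; the first match wins."""
    if profile.uses_ai and profile.processes_protected_data:
        return 1  # AI touching protected data: deepest due diligence
    if profile.uses_ai and (profile.business_critical or profile.regulated_jurisdiction):
        return 2  # AI in critical or regulated contexts: standard assessment
    return 3      # everything else: lightweight review

print(tier(VendorProfile(uses_ai=True, processes_protected_data=True,
                         business_critical=False, regulated_jurisdiction=True)))  # -> 1
```

First-match-wins rules keep the logic predictable and easy to audit; adding a new categorization rule is a small, reviewable change rather than a rescore of the whole portfolio.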

Measure is the function that analyzes, assesses, benchmarks, and monitors AI risk and related impacts.

MEASURE 1: Appropriate methods and metrics are identified and applied.

MEASURE 2: AI systems are evaluated for trustworthy characteristics.

MEASURE 3: Mechanisms for tracking identified AI risks over time are in place.

MEASURE 4: Feedback about efficacy of measurement is gathered and assessed.

TPRM considerations: Look for solutions that feature a large library of pre-built templates for third-party risk assessments. Third-party vendors should be evaluated for their AI practices at onboarding, at contract renewal, at a set frequency (e.g., quarterly or annually), and whenever material changes occur.

Assessments should be managed centrally and be backed by workflow, task management, and automated evidence review capabilities to ensure that your team has visibility into third-party risks throughout the relationship lifecycle.

Importantly, a TPRM solution should include built-in remediation recommendations based on risk assessment results to ensure that third parties address risks in a timely and satisfactory manner, while providing the appropriate evidence to auditors.

To complement vendor AI evaluations, continuously track and analyze external threats to third parties. As part of this, monitor the Internet and dark web for cyber threats and vulnerabilities. All monitoring data should be correlated with assessment results and centralized in a unified risk register for each vendor, streamlining risk review, reporting, remediation, and response initiatives.

Monitoring sources typically include:

  • 1,500+ criminal forums; thousands of onion pages; 80+ dark web special access forums; 65+ threat feeds; and 50+ paste sites for leaked credentials — as well as several security communities, code repositories, and vulnerability databases covering 550,000 companies
  • Databases containing 10+ years of data breach history for thousands of companies around the world
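
To sketch the correlation step described above: a unified risk register can be as simple as findings from assessments and external monitoring normalized into one record per vendor. The structure below is a hypothetical illustration, not a specific product schema.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Finding:
    vendor: str
    source: str    # "assessment" or "monitoring" (illustrative labels)
    category: str  # e.g., "ai_governance", "leaked_credentials"
    severity: int  # 1 (low) .. 5 (critical)

def build_risk_register(findings: list[Finding]) -> dict[str, list[Finding]]:
    """Group all findings, regardless of source, under one vendor record."""
    register: dict[str, list[Finding]] = defaultdict(list)
    for f in findings:
        register[f.vendor].append(f)
    return register

findings = [
    Finding("AcmeAI", "assessment", "ai_governance", 3),
    Finding("AcmeAI", "monitoring", "leaked_credentials", 5),
]
register = build_risk_register(findings)
# A single vendor view now combines questionnaire results with external threats.
worst = max(register["AcmeAI"], key=lambda f: f.severity)
print(f"AcmeAI top risk: {worst.category} (severity {worst.severity})")
```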

Finally, continuously measure third-party KPIs and KRIs against your requirements to help your team uncover risk trends, determine third-party risk status, and identify exceptions to common behavior that could warrant further investigation.
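
One simple way to surface "exceptions to common behavior" is to baseline each KRI and flag readings that deviate beyond a tolerance band. A minimal sketch, using a hypothetical metric and an arbitrary two-standard-deviation rule:

```python
from statistics import mean, stdev

def flag_exception(history: list[float], current: float, k: float = 2.0) -> bool:
    """Flag the current KRI reading if it deviates more than k standard
    deviations from the historical mean (requires >= 2 history points)."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(current - mu) > k * sigma

# e.g., monthly count of open high-severity findings for one vendor
history = [2, 3, 2, 4, 3, 2]
print(flag_exception(history, current=9))  # True: warrants further investigation
```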

Manage is the function that allocates risk resources to mapped and measured risks on a regular basis, as defined by the GOVERN function. This includes plans to respond to, recover from, and communicate about incidents or events.

MANAGE 1: AI risks based on assessments and other analytical output from the MAP and MEASURE functions are prioritized, responded to, and managed.

MANAGE 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors.

MANAGE 3: AI risks and benefits from third-party entities are managed.

MANAGE 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly.

TPRM considerations: As part of your broader incident management strategy, ensure that your third-party incident response program enables your team to rapidly identify, respond to, report on, and mitigate the impact of third-party AI security incidents.

Key capabilities in a third-party incident response service include:

  • Continuously updated and customizable event and incident management assessments
  • Real-time questionnaire completion progress tracking
  • Defined risk owners with automated chasing reminders to keep surveys on schedule
  • Proactive vendor reporting
  • Consolidated views of risk ratings, counts, scores and flagged responses for each vendor
  • Workflow rules to trigger automated playbooks that act on risks according to their potential impact on the business (see the sketch after this list)
  • Built-in reporting templates for internal and external stakeholders
  • Guidance from built-in remediation recommendations to reduce risk
  • Data and relationship mapping to identify relationships between your organization and third, fourth and Nth parties to visualize information paths and reveal at-risk data

Armed with these insights, your team can better manage and triage third-party incidents: understand the scope and impact of an incident, determine what data was involved, assess whether the third party’s operations were impacted, and confirm when remediations are complete.

Next Steps: Align Third-Party AI Controls with Your TPRM Program

Prevalent can help your organization improve not only its own AI governance, but also how it governs third-party AI risks. Specifically, we can help you:

  • Establish governing policies, standards, systems and processes to protect data and systems from AI risks as part of your overall TPRM program. (Aligns with category GOVERN 6.)
  • Profile and tier third parties, while quantifying inherent risks associated with third-party AI usage to ensure that all risks are mapped. (Aligns with category MAP 4.)
  • Conduct comprehensive third-party risk assessments and continuously monitor and measure AI-specific risks in the context of your TPRM program. (Aligns with the MEASURE function.)
  • Ensure comprehensive incident response to AI-specific risks from third-party entities. (Aligns with MANAGE 3.)

Leveraging the NIST AI Risk Management Framework in your TPRM program will help your organization establish controls and accountability over third-party AI usage. For more on how Prevalent can help simplify this process, request a demo today.

Scott Lang
VP, Product Marketing

Scott Lang has 25 years of experience in security, currently guiding the product marketing strategy for Prevalent’s third-party risk management solutions where he is responsible for product content, launches, messaging and enablement. Prior to joining Prevalent, Scott was senior director of product marketing at privileged access management leader BeyondTrust, and before that director of security solution marketing at Dell, formerly Quest Software.
