Regulation of artificial intelligence technology is underway around the world, including in the United States, the United Kingdom, the European Union, and Canada. Regulatory bodies in those jurisdictions have proposed or finalized documents emphasizing outright restrictions, conscientious development, and approval processes. These differing emphases create a patchwork of regulatory complexity for third-party risk managers and cybersecurity professionals seeking to balance efficiency gains with responsible development.
In this post, we will examine the AI regulatory developments and statements of intent from leading political figures and standards bodies in the U.S., the U.K., the European Union, and Canada. Additionally, we will look at each regulation’s impact on the third-party risk management landscape.
The EU made news in early December when the European Parliament reached a deal on the EU AI Act, making European regulators the first to have comprehensive legislation governing generative AI and future AI developments. It’s not unusual for Europe to be at the forefront of regulation: in 2016, the EU was the first to adopt comprehensive data privacy rules with the General Data Protection Regulation (GDPR). With the AI Act, the bloc continues its tradition of setting the regulatory example for the rest of the world.
The European Union’s Artificial Intelligence Act, officially the “Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,” was originally proposed in 2021.
The rules in this law are designed to:
The European Parliament took a risk-based approach to the regulation, defining four categories of risk, which it outlined using a pyramid model as shown below.
Source: European Commission
These four levels are unacceptable risk, high risk, limited risk, and minimal or no risk, with obligations scaling up as the level of risk increases.
High-risk AI systems have specific, strict rules that they must comply with before they’re able to go on the market. According to the EU’s write-up on the AI Act, high-risk systems follow these rules:
All remote biometric identification systems, for example, are considered high-risk. The use of remote biometric identification (e.g., facial recognition) in publicly accessible spaces for law enforcement purposes is, in principle, prohibited under the act.
The AI Act establishes a legal framework for reviewing and approving high-risk AI applications, aiming to protect citizens' rights, minimize bias in algorithms, and control negative AI impacts.
Companies located in the EU, as well as those that do business with EU organizations, need to be aware of the law and comply with it once the applicable notice periods expire. Given the expansive definition of “high risk” in the law, it may make sense to ask vendors and suppliers more concrete questions about how they’re using AI and how they’re following other relevant regulations.
Organizations should also thoroughly examine their own AI implementation practices. Other European technology laws still apply, so organizations needing GDPR compliance should explore ways to integrate AI Act compliance into their workflow as well.
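To make those vendor questions concrete, the sketch below (purely illustrative, in Python) shows one way a TPRM team might track a combined GDPR and AI Act question set for each vendor and flag unanswered items. The question wording, the VendorAIAssessment structure, and the example vendor are assumptions made for illustration, not language drawn from either regulation.

```python
from dataclasses import dataclass, field

@dataclass
class VendorAIAssessment:
    """Tracks a single vendor's answers to an AI due-diligence question set."""
    vendor_name: str
    # Maps question text to the vendor's answer; a missing/None value means unanswered.
    answers: dict = field(default_factory=dict)

# Hypothetical question set blending GDPR-style and EU AI Act-style topics.
QUESTIONS = [
    "Does your product embed or call any AI/ML models? If so, which ones?",
    "Could any of those uses fall into the EU AI Act's high-risk categories?",
    "What personal data, if any, do the AI components process?",
    "How do you document training data sources and model limitations?",
    "What human oversight exists over automated decisions affecting individuals?",
]

def open_items(assessment: VendorAIAssessment) -> list[str]:
    """Return the questions this vendor has not answered yet."""
    return [q for q in QUESTIONS if assessment.answers.get(q) is None]

if __name__ == "__main__":
    acme = VendorAIAssessment("Acme Analytics")  # hypothetical vendor
    acme.answers[QUESTIONS[0]] = "Yes: a hosted LLM for support-ticket triage."
    for question in open_items(acme):
        print("OPEN:", question)
```

In practice, a question library like this would live in a TPRM platform rather than in code, but the structure shows how AI Act topics can sit alongside existing GDPR due diligence.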
While there are no official AI regulations in the United States today, major political figures and standards bodies have published extensive guidance in the form of governance frameworks and statements of intent. This includes the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) introduced in January 2023, President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and Senator Chuck Schumer’s SAFE Innovation Framework that’s designed to inform future congressional legislation.
In the U.S., existing AI guidance centers around promoting responsible development and addressing business risk. The NIST framework exemplifies this approach, providing a methodology for crafting an AI governance strategy within your organization. President Biden’s Executive Order and Senator Schumer’s SAFE Innovation Framework are policy guidance documents for the executive branch and Congress as they debate laws designed to govern AI development.
The NIST AI RMF is divided into two parts. Part 1 provides an overview of risks and the characteristics of what NIST refers to as “trustworthy AI systems.” Part 2 describes four functions to help organizations address the risks of AI systems: Govern, Map, Measure, and Manage. The illustration below reviews the four functions.
The functions in the AI risk management framework. Courtesy: NIST
Organizations should apply risk management principles to minimize the potential negative impacts of AI systems, such as hallucination, data privacy violations, and threats to civil rights. This consideration also extends to the use of third-party AI systems and to third parties’ use of AI systems. Potential risks of third-party misuse of AI include:
According to NIST, the RMF will help organizations overcome these potential risks.
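As a rough illustration of how the four functions might be operationalized internally, the Python sketch below organizes an AI risk register by RMF function and flags functions with no documented activities. The function names (Govern, Map, Measure, Manage) come from the RMF itself; the example activities, the register structure, and the coverage_gaps helper are assumptions made for illustration, not NIST guidance.

```python
# The four function names come from the NIST AI RMF; everything else here
# (activities, structure, helper) is an illustrative assumption.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

ai_risk_register = {
    "Govern":  ["Publish an acceptable-use policy for generative AI tools"],
    "Map":     ["Inventory internal and third-party systems that use AI"],
    "Measure": [],  # e.g., accuracy/hallucination metrics would be tracked here
    "Manage":  ["Define an escalation path for AI-related privacy incidents"],
}

def coverage_gaps(register: dict) -> list[str]:
    """Return RMF functions that have no documented activities yet."""
    return [fn for fn in RMF_FUNCTIONS if not register.get(fn)]

if __name__ == "__main__":
    for fn in coverage_gaps(ai_risk_register):
        print(f"No activities documented yet for the '{fn}' function.")
```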
President Biden released his executive order on AI development at the end of October 2023. The goal of the EO is to define guidance around what Biden calls the responsible development and use of AI, and to outline the specific principles that the executive branch, and ideally the entire U.S. federal government, will follow to ensure that development is responsible.
The executive order outlines eight guiding principles and priorities intended to guide AI development. Below, we describe each guiding principle, what the EO says about it, and what it could mean for your TPRM program.
Guiding Principle: Artificial Intelligence must be safe and secure.
What the EO Says About It: President Biden wants to implement guardrails around AI development, ensuring that products developed using this technology are resilient against attack, can be readily evaluated, and are as safe as possible to use.
What It Means for You: Expect more guidance from the federal government on how to use AI in your products and what to look for with regard to any AI usage in your suppliers’ work. In the interim, consider the NIST AI RMF as guidance.

Guiding Principle: Promoting responsible innovation, competition, and collaboration will enable the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges.
What the EO Says About It: Biden intends to invest in AI-related education, training, and research to give the United States a leg up in the global AI arms race. The intent is also to encourage competition so that large enterprises with deep pockets don’t capture the market.
What It Means for You: There could be many smaller software vendors and suppliers leveraging AI toolkits in the future. Be aware of which companies in your supply chain include AI capabilities and be prepared to assess them accordingly.

Guiding Principle: The responsible development and use of AI require a commitment to supporting American workers.
What the EO Says About It: The Biden administration wants to ensure that AI tools don’t cause widespread unemployment or challenges in the labor market.
What It Means for You: Start to expand your examination of AI risks beyond cybersecurity and data privacy. This includes looking at how AI is used in hiring practices and other day-to-day operations such as customer support and inventory management. AI will likely have a significant societal impact, so as part of your ESG monitoring, you need to understand how suppliers use the technology today and how they intend to use it in the future.

Guiding Principle: Artificial Intelligence policies must be consistent with the Biden Administration’s dedication to advancing equity and civil rights.
What the EO Says About It: This guiding principle is about preventing bias in AI algorithms and ensuring that organizations don’t use AI to further disadvantage historically underrepresented groups.
What It Means for You: AI will soon become an even bigger ESG concern. Expect to see questions about AI usage focused on the non-technical side of third-party risk management in the next year or so. Be sure your TPRM program includes a library of updated assessment content to capture this important information from vendors and suppliers.

Guiding Principle: The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
What the EO Says About It: The federal government plans to enforce consumer protections against harmful uses of AI technology and to examine potential new regulations for the technology.
What It Means for You: Take a hard look at how your suppliers intend to use AI in their business. Current federal regulations still apply to this growing technology sector, and you should ask your suppliers about their plans for potential future compliance concerns, including data privacy.

Guiding Principle: Americans’ privacy and civil liberties must be protected as AI continues advancing.
What the EO Says About It: The Biden administration wants to emphasize data privacy considerations, which is especially pertinent given how powerful AI can be at extracting personal data.
What It Means for You: Pay strict attention to how your suppliers comply with data privacy laws. Large language models and other AI tools could become privacy risks if not properly governed.

Guiding Principle: It is important to manage the risks from the federal government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans.
What the EO Says About It: This principle is about making sure the federal government has the right professionals with AI skills in its ranks. Biden notes that he intends to focus on AI training for the federal workforce.
What It Means for You: If you work with federal suppliers or are a federal supplier yourself, be aware that the government is going to focus on upskilling its people with respect to AI.

Guiding Principle: The federal government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change.
What the EO Says About It: The Biden administration plans to work with industry and international allies to develop a “framework to manage AI’s risks, unlock AI’s potential for good, and promote common approaches to shared challenges.”
What It Means for You: Expect more documentation about AI risk management to come out of the federal government. As noted above, consider the NIST AI RMF as guidance.
The EO matters most to federal contractors and companies that supply federal contractors. More broadly, though, it shows how President Biden is approaching the challenges of AI use, and it may be indicative of how future technology regulation will be focused.
Not to be outdone, Senator Chuck Schumer (D-NY) unveiled what he called the “SAFE Innovation Framework” for artificial intelligence in June 2023. The framework aims to establish a policy response to the inevitable legislation and regulatory guidance around AI technology. SAFE stands for Security, Accountability, Foundations, and Explainability.
What’s clear from this framework is the U.S. Senate’s intention to take a more concrete look at regulating AI technology at the federal level. Congressional legislation would go beyond the scope of President Biden’s Executive Order, and the framework may influence those future laws.
In the United Kingdom, Lord Holmes of Richmond introduced an AI Regulation Bill in the House of Lords. This is the second bill introduced in Parliament designed to regulate the use of artificial intelligence in the UK. The initial bill, which addressed both AI and workers’ rights, was presented in the House of Commons near the end of the 2022-2023 legislative session but lapsed in May 2023 when that session ended.
This new AI bill, introduced in November 2023, is broader in focus. Lord Holmes introduced it to put guardrails around AI development and to establish who would be responsible for setting future legislative restrictions on AI in the United Kingdom.
There are a few key features of the bill, which had its first reading in the House of Lords on November 22, 2023. These include:
This proposed regulation is still in the early phases of negotiations. It could take a very different form after the second reading in the House of Lords, followed by a subsequent reading in the House of Commons.
Depending on how much of the bill survives the legislative process, it could have a substantial impact on how AI is used in the UK, how models are trained, and the transparency of the broader data-gathering process. Each of these areas has a direct impact on third-party vendor or supplier usage of AI technologies.
In June 2022, the government of Canada began consideration of the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022. The larger C-27 bill is designed to modernize existing privacy and digital law and includes three different sub-acts: the Consumer Privacy Protection Act, the Artificial Intelligence and Data Act, and the Personal Information and Data Protection Tribunal Act.
The AIDA’s main goal is to add consistency to AI regulations throughout Canada. There are a few regulatory gaps identified in the companion document of the Act, such as:
The act is currently under discussion, and the Canadian government anticipates it will take approximately two years for the law to pass and be implemented. The companion document to AIDA identifies six core principles, outlined below:
Guiding Principle: Human Oversight & Monitoring
How AIDA Describes It: Human oversight means that high-impact AI systems must be designed and developed to enable the people managing the operations of the system to exercise meaningful oversight, including a level of interpretability appropriate to the context. Monitoring, through measurement and assessment of high-impact AI systems and their output, is critical in supporting effective human oversight.
What It Could Mean for TPRM: Vendors and suppliers must establish easily measurable methods to monitor AI usage in their products and workflows. If AIDA passes, organizations will need to understand how their third parties monitor AI usage and incorporate AI into their broader governance and oversight policies. Another way to ensure more thorough human oversight and monitoring is to build human reviews into reporting workflows to check for accuracy and bias.

Guiding Principle: Transparency
How AIDA Describes It: Transparency means providing the public with appropriate information about how high-impact AI systems are being used. The information provided should be sufficient to allow the public to understand the capabilities, limitations, and potential impacts of the systems.
What It Could Mean for TPRM: Organizations should ask their vendors and suppliers how they’re using AI and what sort of data is included in their models, and should understand how that AI is integrated into the vendors’ products and services.

Guiding Principle: Fairness and Equity
How AIDA Describes It: Fairness and equity mean building high-impact AI systems with an awareness of the potential for discriminatory outcomes. Appropriate actions must be taken to mitigate discriminatory outcomes for individuals and groups.
What It Could Mean for TPRM: Organizations should ask how their third parties are controlling for potential bias in their AI usage. There may be an additional impact here in the form of net new ESG regulations.

Guiding Principle: Safety
How AIDA Describes It: Safety means that high-impact AI systems must be proactively assessed to identify harms that could result from use of the system, including through reasonably foreseeable misuse. Measures must be taken to mitigate the risk of harm.
What It Could Mean for TPRM: AIDA may introduce new regulations regarding data usage in the context of AI. Expect new security requirements in AI tooling, and make sure that current and prospective vendors answer questions about the security of their AI usage, including basic controls such as data security, asset management, and identity and access management.

Guiding Principle: Accountability
How AIDA Describes It: Accountability means that organizations must put in place the governance mechanisms needed to ensure compliance with all legal obligations of high-impact AI systems in the context in which they will be used. This includes the proactive documentation of policies, processes, and measures implemented.
What It Could Mean for TPRM: New regulations are likely, so companies should ask their third parties about compliance with any emerging reporting requirements and mandates.

Guiding Principle: Validity & Robustness
How AIDA Describes It: Validity means a high-impact AI system performs consistently with its intended objectives. Robustness means a high-impact AI system is stable and resilient in a variety of circumstances.
What It Could Mean for TPRM: Organizations should ask their third parties about any validity issues with AI models in their operations, a concern that is most relevant to technology vendors but could extend to the physical supply chain as well.
Ultimately, the Canadian government is taking a hard look at how to regulate AI usage nationwide. There are going to be new mandates and new laws to comply with no matter what. So, it makes sense for companies doing business in Canada or working with Canadian companies to understand any upcoming requirements as AIDA comes closer to passage.
Governments around the world are actively debating how to regulate artificial intelligence technology and its development. Regulatory discussions have so far focused on specific use cases identified as potentially the most impactful on a societal level, suggesting that AI laws will focus on a combination of privacy, security, and ESG concerns.
The next 12 to 18 months should offer more clarity on how organizations worldwide need to adapt their third-party risk management programs to AI technology. Companies are rapidly integrating AI into their operations, and governments will respond in kind. For now, adopting a cautious, considered approach to AI in operations and asking questions of vendors and suppliers is the right choice for third-party risk managers.
For more on how Prevalent incorporates AI technologies into our Third-Party Risk Management Platform to ensure transparency, governance, and security, download the white paper, How to Harness the Power of AI in Third-Party Risk Management, or request a demonstration today.