Today, the European Parliament passed one of the world’s first regulations governing artificial intelligence (AI) technology and its applications. The European Union’s “Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts” made headlines when it was originally proposed in 2021, and not just for its long name.
Now, three years later, it has been approved by the European Parliament. The Act is expected to enter into force at the end of the legislative session in May 2024 and will become fully applicable 24 months after its publication in the Official Journal of the European Union. This new regulation is likely to have a transformative impact on your third-party risk management (TPRM) program, especially for companies based outside the EU that want to do business in Europe.
This blog takes a deep dive into the AI Act and provides some context for what it means for your TPRM program going forward.
The EU AI Act is designed to provide a governance and compliance framework for AI within the European Union. Ultimately, the goal is to place guardrails around how AI can be used in the EU and to define the responsibilities of companies doing business in Europe that want to build AI tools or apply AI to their existing technology.
The rules in the AI Act are designed to:
The Act defines specific use cases that are banned or highly regulated in Europe once the regulation takes effect. These include:
There are carve-outs for law enforcement use of real-time biometric identification, but these are strictly time-bound and geographically limited in scope. Law enforcement must obtain judicial authorization before using these systems in real time, and also when using biometric identification after the fact.
The European Union took a risk-based approach to the legislation, defining four distinct categories of risk, as shown in the pyramid below.
Source: European Commission
These risk categories are defined as follows:
AI applications classified as Unacceptable Risk are banned throughout the European Union. These systems include:
By contrast, High-Risk AI systems must comply with specific, strict rules before they can go on the market. These rules require high-risk systems to include:
The AI Act establishes a legal framework for reviewing and approving high-risk AI applications. The goal is to apply governance to AI development, protect the rights of EU citizens, minimize bias in algorithms, and control negative AI impacts, while still enabling development to continue.
The EU also calls out general-purpose generative AI, such as ChatGPT, which will need to comply with transparency requirements:
High-impact general-purpose AI models, such as GPT-4, must undergo thorough evaluation, and any serious incidents must be reported.
Contrast this with Limited Risk systems, which must comply with transparency requirements that enable users to make informed decisions. One example is a website’s AI chatbot: users must be made aware that they are interacting with an AI system and given the opportunity to opt out.
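For TPRM teams that want to operationalize these categories, it can help to record each vendor AI use case against the Act’s four tiers. Below is a minimal, purely illustrative Python sketch: the tier names follow the Commission’s pyramid (the fourth, minimal-risk tier carries no new obligations), the obligation summaries are paraphrased from the descriptions above rather than legal text, and the function name is a made-up example.

```python
# Illustrative sketch only: a simple lookup of the AI Act's four risk tiers
# with paraphrased obligation summaries (not legal text), for internal vendor triage.
AI_ACT_RISK_TIERS = {
    "unacceptable": "Banned from use in the EU.",
    "high": "Strict requirements must be satisfied before the system goes to market.",
    "limited": "Transparency obligations, e.g., disclose that users are interacting with AI.",
    "minimal": "No new obligations under the Act.",
}

def triage_vendor_ai_use(tier: str) -> str:
    """Return the paraphrased obligation summary for a vendor AI use case tier."""
    return AI_ACT_RISK_TIERS.get(tier.lower(), "Unknown tier; review manually.")

# Example: a vendor's customer-facing chatbot would typically fall under 'limited'.
print(triage_vendor_ai_use("Limited"))
```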
In the context of third-party risk management, the passage of the EU AI Act means that companies whose third-party vendors and suppliers are located in the EU, or that do business in Europe themselves, need to be aware of the restrictions it imposes. Much as multinational and U.S.-based companies have had to comply with GDPR data privacy requirements since that law took effect, companies that want to do business within the borders of the European Union will need to comply with the transparency requirements of the AI Act.
Given the expansive definition of “high risk” in the law, it makes sense to ask vendors and suppliers more concrete questions about how they’re using AI and how they’re complying with other relevant regulations. Fines for noncompliance can reach 7% of global annual revenue or 35 million euros (about $38 million), whichever is higher, so it behooves organizations to pay attention. Questionnaires like SIG and SIG Lite already include AI content, so make sure that content is part of the questions you ask vendors. Also consider how standards bodies (such as NIST in the United States) approach AI risk.
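To put the penalty figures in context, the “whichever is higher” rule means the 35-million-euro floor only applies to companies with less than 500 million euros in global annual revenue; above that, the 7% figure dominates. Here is a minimal sketch of that calculation, assuming the single fine tier quoted above; the revenue figure in the example is made up.

```python
# Minimal sketch of the maximum fine rule quoted above:
# 7% of global annual revenue or EUR 35 million, whichever is higher.
def max_ai_act_fine(global_revenue_eur: float) -> float:
    return max(0.07 * global_revenue_eur, 35_000_000)

# Example with a made-up revenue figure: a company with EUR 1 billion in revenue
# faces a ceiling of EUR 70 million, since 7% of revenue exceeds the EUR 35M floor.
print(f"EUR {max_ai_act_fine(1_000_000_000):,.0f}")  # EUR 70,000,000
```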
Organizations should also thoroughly examine their own AI implementation practices. Other European technology laws still apply, so organizations that must comply with GDPR should explore ways to integrate AI Act compliance into the same workflow. This is especially important as more software vendors integrate AI into their offerings.
Keep in mind that there are several key risks related to AI usage regardless of what regulators say, including:
TPRM teams will have their work cut out for them in the run-up to the AI Act becoming fully applicable in 2026. Ensuring that vendors comply with transparency requirements around the inclusion of AI in their offerings is a good first step, but more guidance on compliance is sure to emerge in the coming months.
Companies are rapidly integrating AI into their operations, and governments are responding. Taking a more cautious, considered approach to AI and asking pointed questions of vendors and suppliers is the smart choice for third-party risk managers.
For more on how Prevalent incorporates AI technologies into our Third-Party Risk Management Platform to ensure transparency, governance, and security, download the white paper, How to Harness the Power of AI in Third-Party Risk Management, or request a demonstration today.