The European Union Artificial Intelligence Act and Its TPRM Impact

The European Union today approved sweeping AI regulations, set to go into effect in 2026. Here we take a deep dive into how this will impact your TPRM program.
By: Matthew Delman, Product Marketing Manager
March 13, 2024

Today, the European Parliament passed one of the world’s first regulations governing artificial intelligence (AI) technology and its applications. The European Union’s “Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts” made headlines in 2021 when it was originally proposed – and not just for its long name.

Now, three years later, it has been approved by the European Parliament. The Act is expected to enter into force at the end of the legislative session in May 2024 and will become fully applicable 24 months after its publication in the Official Journal. This new regulation is likely to have a transformative impact on your third-party risk management program, especially for companies based in other countries that want to do business in Europe.

This blog takes a deep dive into the AI Act and provides some context for what it means for your TPRM program going forward.

What Is the EU AI Act?

The EU AI Act is designed to offer a governance and compliance framework for AI within the European Union. Ultimately, the goal is to place guardrails around how AI can be used in the EU and to define the responsibilities of companies doing business in Europe that want to build an AI tool or apply AI to their existing technology.

The rules in the AI Act are designed to:

  • Address risks specifically created by AI applications;
  • Propose a list of high-risk applications;
  • Set clear requirements for AI systems for high-risk applications;
  • Define specific obligations for AI users and providers of high-risk applications;
  • Propose a conformity assessment before the AI system is put into service or placed on the market;
  • Propose enforcement after such an AI system is placed in the market;
  • Propose a governance structure at the European and national levels.

The Act defines specific use cases that are banned and/or highly regulated in Europe once the regulation goes into effect. These include:

  • Biometric categorization systems based on sensitive characteristics
  • Untargeted scraping of facial images from the internet or CCTV footage for facial recognition databases
  • Emotion recognition in workplaces and schools
  • Social scoring
  • Predictive policing based solely on profiling a person or assessing their characteristics
  • AI that manipulates human behavior or exploits people’s vulnerabilities

There are carve-outs for law enforcement use of real-time biometric identification, but these are strictly time-bound and geographically limited in scope. Law enforcement must obtain judicial authorization before using these systems in real time, and also when using such biometric systems after the fact.

How Does the EU AI Act Regulate Artificial Intelligence?

The European Union adopted a risk-based approach to building the legislation, outlining four distinct categories of risk, as shown in the pyramid below.

[Figure: the AI Act’s risk pyramid. Source: European Commission]

These risk categories break down as follows (a simple illustration follows the list):

  • Unacceptable Risk – This level refers to any AI systems that the EU considers a clear threat to the safety, livelihoods, and rights of EU citizens.
  • High Risk – AI systems marked as high risk are those that operate within use cases crucial to society. This can include AI used in education access, employment practices, law enforcement, border control and immigration, critical infrastructure, and other situations where someone’s rights might be infringed. Many of these systems will have to be registered in an EU database.
  • Limited Risk – This category refers to AI applications with limited overall impact. Think of an AI chatbot on a website.
  • Minimal Risk – Also called “no risk,” these are AI systems used in media like video games or AI-enabled email spam filters. This appears to be the bulk of AI in use within the EU today.
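To make these tiers concrete for a vendor AI inventory, here is a minimal Python sketch of how a TPRM team might record which tier a vendor-reported AI use case falls into. The tier names come from the Act; the example use cases, the mapping, and the function name are illustrative assumptions, not an official classification.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk tiers described above."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict requirements, EU database registration
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Illustrative, non-exhaustive mapping of vendor AI use cases to tiers.
# A real classification requires legal review against the Act's annexes.
EXAMPLE_USE_CASE_TIERS = {
    "social_scoring": AIActRiskTier.UNACCEPTABLE,
    "emotion_recognition_in_workplace": AIActRiskTier.UNACCEPTABLE,
    "resume_screening": AIActRiskTier.HIGH,
    "border_control_screening": AIActRiskTier.HIGH,
    "website_support_chatbot": AIActRiskTier.LIMITED,
    "email_spam_filter": AIActRiskTier.MINIMAL,
}

def classify_use_case(use_case: str):
    """Return the example tier for a vendor-reported use case, or None if unknown."""
    return EXAMPLE_USE_CASE_TIERS.get(use_case)

print(classify_use_case("resume_screening"))   # AIActRiskTier.HIGH
print(classify_use_case("email_spam_filter"))  # AIActRiskTier.MINIMAL
```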

Unacceptable Risk

AI applications defined as Unacceptable Risk are banned from use throughout the European Union. These types of systems include:

  • Manipulation of people or specific vulnerable groups, such as voice-activated toys that encourage dangerous behavior in children.
  • Social scoring: classifying people based on behavior, socio-economic status, or personal characteristics.
  • Biometric identification and categorization of people.
  • Real-time and remote biometric identification systems, such as facial recognition.

High Risk

By contrast, High Risk AI systems must comply with specific, strict rules before they can go on the market. These rules require high-risk systems to include the following (a sample checklist sketch follows the list):

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimize risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimize risk;
  • High level of robustness, security, and accuracy.
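For TPRM teams, these requirement areas can double as a due-diligence checklist for vendors selling high-risk AI systems. Below is a minimal sketch, assuming a hypothetical VendorAIAssessment record; the class, field names, and vendor are invented for illustration and do not reflect any official questionnaire format.

```python
from dataclasses import dataclass, field

# The seven requirement areas for high-risk systems, paraphrased from the list above
HIGH_RISK_REQUIREMENTS = [
    "Risk assessment and mitigation system",
    "High-quality datasets to minimize discriminatory outcomes",
    "Activity logging for traceability of results",
    "Detailed documentation for compliance assessment by authorities",
    "Clear and adequate information provided to the user",
    "Appropriate human oversight measures",
    "Robustness, security, and accuracy",
]

@dataclass
class VendorAIAssessment:
    """Hypothetical record of a vendor's evidence against each requirement area."""
    vendor_name: str
    system_name: str
    evidence: dict = field(default_factory=dict)  # requirement -> evidence notes

    def gaps(self):
        """Return the requirement areas with no evidence collected yet."""
        return [req for req in HIGH_RISK_REQUIREMENTS if not self.evidence.get(req)]

# Usage: record one piece of evidence and list what is still outstanding
assessment = VendorAIAssessment("Acme HR Tech", "Resume-ranking model")
assessment.evidence["Activity logging for traceability of results"] = "Audit log export reviewed"
print(assessment.gaps())
```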

The AI Act establishes a legal framework for reviewing and approving high-risk AI applications, aiming to protect citizens' rights, minimize algorithmic bias, and control negative AI impacts. The goal is to apply governance to AI development while still enabling that development to continue.

The EU also calls out general-purpose generative AI, such as ChatGPT, which would need to comply with transparency requirements:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

High-impact general-purpose AI models, such as GPT-4, must undergo thorough evaluations, and any serious incidents must be reported.

Limited Risk

Contrast this with Limited Risk systems, which need to comply with transparency requirements that enable users to make informed decisions. One example is a website’s AI chatbot: users need to be made aware that they’re interacting with an AI system and given the opportunity to opt out.

What Does the EU AI Act Mean for Third-Party Risk Management?

In the context of third-party risk management, the passage of the EU AI Act means that companies with third-party vendors and suppliers located in the EU, or that do business in Europe themselves, need to be aware of the restrictions placed upon them. Much as multinational and U.S.-based companies have had to comply with GDPR data privacy rules since that law took effect, companies that want to do business within the borders of the European Union will need to comply with the transparency requirements in the AI Act.

Given the expansive definition of “high risk” in the law, it makes sense to ask vendors and suppliers more concrete questions about how they’re using AI and whether they’re following other relevant regulations. The fines for noncompliance run up to 7% of global revenue or 35 million euros (about $38 million), whichever is higher, so it behooves organizations to pay attention. Industry-standard questionnaires like the SIG and SIG Lite already include AI-related content, so it’s worth making sure that content is part of what you ask vendors. Also consider how standards bodies (such as NIST in the United States) approach AI risk.
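To put the “whichever is higher” rule in concrete terms, here is a quick sketch of the worst-case fine exposure. The 7% and 35 million euro figures come from the paragraph above; the revenue numbers are hypothetical.

```python
def max_ai_act_fine_eur(global_annual_revenue_eur: float) -> float:
    """Maximum fine for the most serious violations: the greater of 7% of
    global annual revenue or 35 million euros, per the figures cited above."""
    return max(0.07 * global_annual_revenue_eur, 35_000_000)

# Hypothetical revenue figures, purely for illustration
print(max_ai_act_fine_eur(200_000_000))    # 35,000,000.0  -> the 35M EUR floor applies
print(max_ai_act_fine_eur(2_000_000_000))  # 140,000,000.0 -> 7% of revenue applies
```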

Organizations should also thoroughly examine their own AI implementation practices. Other European technology laws still apply, so organizations needing GDPR compliance should explore ways to integrate AI Act compliance into their workflow. This is especially key as more software vendors integrate AI into their offerings.

Keep in mind that there are several key risks related to AI usage regardless of what regulators say, including:

  • Data quality and bias – AI algorithms are only as good as the data they ingest and learn from. Poor data quality can lead to erroneous risk assessments, while biased data can perpetuate unfair treatment of suppliers or third parties.
  • Lack of transparency and comprehension – Limited insight into how AI models arrive at their decisions and what data they use for their results makes these algorithms a “black box” in many cases. Be wary of an AI model that doesn’t offer explanations for how it reached a decision.
  • Cybersecurity and data privacy risks – AI systems that handle sensitive risk and supplier data become attractive targets for cyber-attacks and data breaches. Ensuring these systems are secure and follow all applicable privacy laws is paramount.
  • Shortfalls in human-AI collaboration and oversight – Overreliance on AI without human oversight can lead to errors or unintended consequences that may go unnoticed – especially as the model is being trained.
  • AI talent scarcity and skills gaps – Few people have extensive experience in AI and machine learning models. As AI becomes more prominent and integrated into your or your vendors’ operations, this skills gap will become increasingly pronounced.

TPRM teams will have their work cut out for them in the run-up to the AI Act going into effect in 2026. Ensuring that vendors comply with transparency laws around the inclusion of AI in their offerings is a good first step, but more guidance around compliance is sure to come out in the next few months.

Next Steps: Learn to Leverage and Manage AI Safely

Companies are rapidly integrating AI into their operations, and governments are responding. Adopting a more cautious and considerate approach to AI in operations and asking questions of vendors and suppliers is the smart choice for third-party risk managers.

For more on how Prevalent incorporates AI technologies into our Third-Party Risk Management Platform to ensure transparency, governance, and security, download the white paper, How to Harness the Power of AI in Third-Party Risk Management, or request a demonstration today.

Matthew Delman
Product Marketing Manager

Matthew Delman has more than 15 years of marketing experience in cybersecurity, financial technology, and data management. As product marketing manager at Prevalent, he is responsible for customer advocacy, product content, enablement, and launch support. Before joining Prevalent, Matthew held marketing leadership roles at Techstrong Group and LookingGlass Cyber, and owned product positioning for EASM and breach prevention technologies.

