Editor's Note: This article, authored by Alastair Parr, Prevalent Senior Vice President, Global Products & Services, was originally published on www.corporatecomplianceinsights.com
---
While a great deal of attention is currently being focused on internal compliance with emerging AI regulations, Prevalent’s Alastair Parr argues that companies shouldn’t overlook a major external consideration: third parties.
Artificial intelligence (AI) is rapidly reshaping the modern world, and governments are rushing to build safeguards to ensure it is deployed responsibly. That rapid growth has also led businesses in nearly every industry vertical to embrace AI for its productivity and efficiency gains, with the ultimate goal of improving the bottom line.
However, alongside these opportunities come significant responsibilities for companies to deploy AI ethically and within the bounds of the law. This responsibility should extend not only to their own practices but also to those of all third parties they engage with, including vendors and service providers.
Navigating the many moving parts that come with safe and responsible AI deployment will be particularly challenging for companies based in regions at the forefront of AI regulation, including the U.S., Canada, the EU and the UK.
These regions are developing unique frameworks to regulate this fast-moving technology. Understanding and complying with these regulations will be critical for businesses operating in these regions to avoid legal repercussions and maintain trust with stakeholders.
Regulatory bodies worldwide are deciding how to regulate artificial intelligence, and businesses should pay close attention as proposals become binding laws. Though the rules will vary from country to country, most proposals focus on privacy, security and ESG considerations governing how businesses can ethically and legally use AI.
For example, in the U.S., the NIST AI Risk Management Framework was introduced in January 2023 to “offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.” This voluntary framework offers comprehensive guidance for organizations building an AI governance strategy.
Organizations should apply risk management principles to mitigate the potential negative impacts of AI systems. The NIST framework groups these potential harms into three broad categories:

- Harm to people, such as threats to individual safety, civil liberties or economic opportunity
- Harm to organizations, such as security breaches, business disruption or reputational damage
- Harm to ecosystems, such as damage to interconnected supply chains or the global financial system
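To make these categories actionable, the sketch below shows one way an organization might codify them in a simple AI risk register. It is a minimal illustration only: the scoring scale, the review threshold and the class names are assumptions for the example, not part of the NIST framework.

```python
from dataclasses import dataclass, field

# NIST's AI RMF groups potential harms into three broad categories.
# The scoring scale and threshold below are illustrative assumptions.
HARM_CATEGORIES = ("people", "organization", "ecosystem")

@dataclass
class AIRiskEntry:
    system: str      # the AI system or vendor service being assessed
    category: str    # one of HARM_CATEGORIES
    likelihood: int  # 1 (rare) to 5 (almost certain), assumed scale
    impact: int      # 1 (negligible) to 5 (severe), assumed scale

    def score(self) -> int:
        # A common likelihood-times-impact heuristic; NIST does not mandate one.
        return self.likelihood * self.impact

@dataclass
class AIRiskRegister:
    entries: list = field(default_factory=list)

    def add(self, entry: AIRiskEntry) -> None:
        if entry.category not in HARM_CATEGORIES:
            raise ValueError(f"Unknown harm category: {entry.category}")
        self.entries.append(entry)

    def top_risks(self, threshold: int = 15) -> list:
        # Surface entries whose score meets a review threshold.
        return [e for e in self.entries if e.score() >= threshold]

# Example usage with hypothetical vendor systems:
register = AIRiskRegister()
register.add(AIRiskEntry("vendor-chatbot", "people", likelihood=4, impact=4))
register.add(AIRiskEntry("fraud-scoring-model", "organization", likelihood=2, impact=5))
for risk in register.top_risks():
    print(f"{risk.system}: {risk.category} harm, score {risk.score()}")
```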
All of the above relate not only to businesses themselves but also to the partners, vendors and other third parties with whom they do business. Increasingly, companies should expect to be held liable for how their vendors, suppliers and other third-party partners use AI, especially in how those partners manage customer data.
The coming years will clarify how organizations worldwide need to adapt their AI strategies, and managing third-party risk will likely become an increasingly important part of the equation.
With the passage of new laws will come new realities for businesses in every industry. It’s time to begin preparing now, including by establishing acceptable use policies for AI and communicating those policies to third parties.
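One practical way to communicate such a policy consistently is to express it in machine-readable form, so the same rules can be shared with vendors and checked against their attestations. The sketch below is purely illustrative; the policy fields, rules and attestation format are hypothetical, not drawn from any standard.

```python
# A hypothetical AI acceptable use policy expressed as structured data,
# so the same rules can be shared with vendors and checked programmatically.
# All field names and rules here are illustrative assumptions.
ACCEPTABLE_USE_POLICY = {
    "allow_customer_data_in_training": False,
    "require_human_review_for_decisions": True,
    "approved_use_cases": {"document_summarization", "code_assistance"},
}

def check_attestation(attestation: dict) -> list:
    """Compare a vendor's self-reported AI practices against the policy."""
    violations = []
    if (attestation.get("trains_on_customer_data")
            and not ACCEPTABLE_USE_POLICY["allow_customer_data_in_training"]):
        violations.append("Vendor trains models on customer data.")
    if (ACCEPTABLE_USE_POLICY["require_human_review_for_decisions"]
            and not attestation.get("human_review")):
        violations.append("No human review of AI-driven decisions.")
    unapproved = (set(attestation.get("use_cases", []))
                  - ACCEPTABLE_USE_POLICY["approved_use_cases"])
    if unapproved:
        violations.append(f"Unapproved AI use cases: {sorted(unapproved)}")
    return violations

# Example: a hypothetical vendor response to a due diligence questionnaire.
vendor = {"trains_on_customer_data": True, "human_review": True,
          "use_cases": ["document_summarization", "lead_scoring"]}
for v in check_attestation(vendor):
    print("VIOLATION:", v)
```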
Regardless of location, a cautious approach and proactive engagement with vendors are essential strategies for managing these risks. Companies must recognize that responsible AI governance extends beyond their internal operations and encompasses the practices of all parties involved in their AI ecosystem.
Every business has unique objectives and challenges, meaning relationships with third-party partners will vary widely. But there are some fundamental steps that any company can take to proactively mitigate AI-related risks associated with third-party relationships:

- Inventory which vendors and service providers use AI, and how, in the products and services they deliver (a minimal sketch of this step follows the list)
- Incorporate AI-specific questions into due diligence and ongoing vendor assessments
- Build acceptable use requirements for AI into contracts and communicate them clearly
- Continuously monitor third parties for compliance with those requirements and with emerging regulations
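As a starting point for the inventory step, a company might record each vendor's AI use and tier it to set the depth and frequency of assessment. The attributes and tier thresholds below are illustrative assumptions, not an established methodology.

```python
from dataclasses import dataclass

# Illustrative sketch of inventorying third parties' AI use and tiering
# them so higher-risk vendors receive deeper, more frequent assessments.
@dataclass
class Vendor:
    name: str
    uses_ai: bool
    handles_customer_data: bool
    ai_makes_automated_decisions: bool

def assessment_tier(v: Vendor) -> str:
    if not v.uses_ai:
        return "standard"  # regular due diligence cadence
    if v.handles_customer_data or v.ai_makes_automated_decisions:
        return "enhanced"  # AI-specific questionnaire, annual deep review
    return "elevated"      # AI questions added to the standard assessment

# Example usage with hypothetical vendors:
vendors = [
    Vendor("Acme Analytics", uses_ai=True, handles_customer_data=True,
           ai_makes_automated_decisions=False),
    Vendor("Globex Hosting", uses_ai=False, handles_customer_data=True,
           ai_makes_automated_decisions=False),
]
for v in vendors:
    print(f"{v.name}: {assessment_tier(v)} assessment tier")
```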
As governments introduce new regulatory and legal frameworks around AI, businesses must increasingly treat their vendors and third-party partners as another source of risk to be mitigated and managed. Taking these steps requires expertise in AI governance, which is currently in high demand. Companies that lack dedicated AI risk management teams can find external assistance from organizations that specialize in navigating this complex landscape.