AI Security Policies: Questions to Ask Third-Party Vendors

Learn how to structure an AI security policy for your organization and ask your vendors and suppliers these 16 questions to assess their AI security controls.
By David Allen, Chief Technology Officer & Chief Information Security Officer
August 23, 2023

Most organizations today are exploring the use of emerging AI-powered technologies to improve their workflows and processes, analyze and summarize data, and generate content faster than ever before. You and your coworkers may already be using AI tools and frameworks for activities such as conducting research, generating content, and solving coding challenges.

However, it’s important to temper your organization’s enthusiasm for AI with appropriate guidance and restrictions. A wholesale ban on AI technologies could undermine business needs and goals, so how can you be vigilant about protecting sensitive data and ensuring that generated content is accurate? The solution starts with developing an AI security policy.

A strong AI security policy will empower your organization to reap the benefits of AI while prescribing the risk assessments and security controls necessary to protect sensitive data and ensure content accuracy. At the same time, it’s essential to ensure that your vendors, suppliers, and other third parties have security controls that can sufficiently protect your critical systems and data.

In this post, I examine AI security policies, review how they can be applied to third parties, and share key questions to assess your vendors’ and suppliers’ AI security controls.

What Is an AI Security Policy?

An AI security policy is a set of guidelines for evaluating and using artificial intelligence tools and frameworks in a way that maximizes insight, control, and data protection. It also outlines AI vulnerabilities and presents measures to mitigate their potential risks.

With a well-designed AI security policy, an organization can safeguard the security and integrity of its AI systems and any data handled by artificial intelligence. AI policies typically include provisions for:

  • Protecting sensitive data through encryption and access controls (see the sketch after this list)
  • Ensuring authorized user access via authentication and authorization mechanisms
  • Maintaining network security through firewalls, intrusion detection systems, and other tools
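
For illustration, the first two provisions above might look something like the following minimal sketch, which enforces a role check and encrypts a payload before data is handed to an AI tool. It assumes the open-source `cryptography` package; the role names and the submission function are hypothetical, not part of any specific product.

```python
# Minimal sketch: enforce a role check and encryption before data reaches an AI tool.
# Assumes the `cryptography` package; roles and function names are illustrative.
from cryptography.fernet import Fernet

ALLOWED_ROLES = {"analyst", "data-science"}  # hypothetical roles permitted to use AI tools

key = Fernet.generate_key()  # in practice, fetch the key from a managed key store
cipher = Fernet(key)

def submit_to_ai_service(user_role: str, payload: str) -> bytes:
    """Check authorization, then encrypt the payload before it leaves the organization."""
    if user_role not in ALLOWED_ROLES:
        raise PermissionError(f"Role '{user_role}' is not authorized to use AI tools")
    return cipher.encrypt(payload.encode("utf-8"))  # ciphertext handed to the transport layer

encrypted = submit_to_ai_service("analyst", "Quarterly revenue summary for internal review")
print(cipher.decrypt(encrypted).decode("utf-8"))
```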

An AI security policy is typically an extension of an organization’s general information security policy and associated controls, with some shared concepts regarding data protection, privacy and accuracy.

What Are the Components of an AI Security Policy?

Tool Evaluation Policies

These policies specify workflows and procedures for security teams who need to evaluate AI tools for use at their organizations. They outline required levels of data protection for different types of services and indicate procedures for reviewing how sensitive data may be used as AI training inputs and/or appear in AI-generated content.
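
In practice, much of a tool-evaluation policy reduces to a mapping from data classification labels to the AI services approved to process them. The minimal sketch below assumes hypothetical labels and service names:

```python
# Minimal sketch of a tool-evaluation check: map data classification labels to the
# AI services approved to process them. Labels and service names are hypothetical.
APPROVED_SERVICES = {
    "public":       {"external-llm", "internal-llm"},
    "internal":     {"internal-llm"},
    "confidential": set(),  # confidential data may not be sent to any AI service
}

def is_use_permitted(classification: str, service: str) -> bool:
    """Return True if policy allows data with this classification to reach the service."""
    return service in APPROVED_SERVICES.get(classification, set())

print(is_use_permitted("internal", "external-llm"))  # False: blocked by policy
print(is_use_permitted("public", "external-llm"))    # True
```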

Source Code Policies

Source code policies emphasize secure development practices (e.g., adhering to coding standards, conducting regular code reviews) and specify monitoring and logging mechanisms for tracking system behavior and detecting anomalies. These policies often require the organization to track the use of source code as an input to AI tools, as well as the use of any code generated by AI.
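
One lightweight way to track AI-generated code is a commit-message convention. The sketch below assumes a hypothetical "AI-Assisted: yes" trailer that developers add when a change includes AI-generated code; the trailer is an organizational convention, not a Git standard.

```python
# Minimal sketch: list commits flagged with a hypothetical "AI-Assisted: yes" trailer.
import subprocess

def list_ai_assisted_commits(repo_path: str = ".") -> list[str]:
    """Return hashes of commits whose messages carry the AI-Assisted trailer."""
    log = subprocess.run(
        ["git", "log", "--pretty=format:%H%x09%B%x00"],  # hash, tab, full message, NUL
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    flagged = []
    for entry in filter(None, log.split("\x00")):
        commit_hash, _, message = entry.partition("\t")
        if "AI-Assisted: yes" in message:
            flagged.append(commit_hash.strip())
    return flagged

if __name__ == "__main__":
    print(list_ai_assisted_commits())
```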

Incident Response Policies

Incident response policies outline protocols for handling security breaches and emphasize compliance with the industry standards and regulations that require such protocols.

Data Retention and Privacy Policies

Data retention policies ensure that input data uploaded to AI tools and services is deleted within an acceptable timeframe. Privacy considerations must also be evaluated, as some regions have temporarily banned the use of generative AI tools due to concerns around the collection and use of personal data.
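
A retention provision might be enforced with a periodic job that flags uploads older than the policy window, as in this minimal sketch. The 30-day window, record layout, and inventory are assumptions; actual deletion would go through the provider's own data-deletion API.

```python
# Minimal sketch of a retention check for data uploaded to AI tools.
# The 30-day window and record layout are assumptions, not a specific provider's API.
from datetime import datetime, timedelta, timezone

RETENTION_WINDOW = timedelta(days=30)  # hypothetical policy limit

uploads = [  # illustrative inventory of uploads to AI services
    {"id": "doc-001", "uploaded_at": datetime(2023, 6, 1, tzinfo=timezone.utc)},
    {"id": "doc-002", "uploaded_at": datetime.now(timezone.utc)},
]

def uploads_past_retention(records):
    """Return records older than the retention window that should be deleted."""
    cutoff = datetime.now(timezone.utc) - RETENTION_WINDOW
    return [r for r in records if r["uploaded_at"] < cutoff]

for record in uploads_past_retention(uploads):
    print(f"Delete {record['id']}: uploaded {record['uploaded_at'].isoformat()}")
```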

Ethics Policies

Ethical considerations and awareness training are included in AI security policies to address biases, ensure accountability, and foster a culture of security. For example, they may require users of generative AI to review and edit generated content for accuracy, bias, and offensive material.

Acknowledgement of AI Hallucination and Similar Risks

AI security policies should acknowledge that AI tools have been known to produce incorrect, biased, or offensive results. Of particular concern are “AI hallucinations,” which occur when generative AI tools create content that is unexpected, untrue, or not backed up by evidence and real-world data. AI tools also tend to have a limited knowledge of recent real-world events, which can lead to further inaccuracies and omissions in generated content.

How Do AI Security Policies Apply to Third Parties?

Your AI security policy should include standards for evaluating, mitigating, and monitoring the risks for all AI solutions that process or generate data for your organization – including those provided and/or used by your third-party vendors, suppliers, and service providers. It’s critical to ensure that third-party tools and services protect sensitive data, sanitize inputs to remove confidential information, and follow other required security controls.
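
Input sanitization is one of the controls you can verify directly. The minimal sketch below redacts a few common identifier patterns before text leaves the organization; the patterns are illustrative and far from exhaustive, and a production control would rely on a vetted data-loss-prevention tool.

```python
# Minimal sketch of sanitizing text before it is sent to a third-party AI service.
# The patterns cover only a few common identifiers and are illustrative, not exhaustive.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def sanitize(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(sanitize("Contact jane.doe@example.com, SSN 123-45-6789, before the demo."))
```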

There are three primary ways to leverage AI security policies in your third-party risk management (TPRM) program: pre-contract due diligence, vendor contracting, and vendor assessment.

Pre-Contract Due Diligence

Your AI security policy should guide the due diligence process when evaluating potential vendors and suppliers. By referencing the policy, your organization can systematically assess a vendor's security controls, data protection mechanisms, and access protocols. This minimizes potential vulnerabilities by ensuring that external parties meet the same rigorous security criteria as those applied to your internal systems.

Vendor Contracting

Contractual agreements with vendors and suppliers can be informed by AI security policy provisions. By incorporating policy guidelines into agreements, your organization sets clear expectations regarding security requirements, data handling practices, and incident response procedures. This alignment ensures that vendor-provided AI solutions or services uphold the organization's security standards, contributing to a more secure and resilient AI ecosystem.

Vendor Assessment

When used as part of vendor assessments, your AI security policy acts as a reference point for gauging each vendor's security practices against your organization's defined standards. This also ensures consistency in setting security expectations across your vendor ecosystem.

Overall, an AI security policy acts as a comprehensive framework for evaluating and aligning vendor and supplier security practices with your organization's strategic AI objectives.

AI Security Controls Assessment: 16 Questions to Ask Your Third Parties

Hidden threats can lurk within third-party AI providers, posing risks that might not be immediately evident. These threats encompass security vulnerabilities, potential data breaches, covert malicious code, data misuse, and biases in algorithms – each of which could compromise your organization's data, reputation, and operations.

To counter these risks, your organization must conduct diligent evaluations of its third-party providers. This due diligence should assess security measures, data protection practices, and algorithmic transparency. Continuous monitoring, robust contractual agreements, and contingency planning are also critical to revealing and mitigating hidden AI threats.

In December 2021, Microsoft released an AI security risk assessment framework to help organizations audit, track, and improve the security of their AI systems. Prevalent built on this framework to create a 16-question survey that you can use to assess the AI security controls employed by your vendors and suppliers.

Use this third-party AI security assessment to:

  • Gather information about the state of AI security across your vendor ecosystem.
  • Perform a gap analysis and build a roadmap for working with vendors to remediate risks (see the scoring sketch after the questionnaire).
  • Conduct repeated, periodic assessments to track remediation progress over time.
Questions and Possible Responses

1. Is data collected for the AI system(s) from trusted sources?

Select one:

a) Yes, the organization ensures that data collected is received from trusted sources only.

b) Data collected is not exclusively from trusted sources.

c) Not applicable.

2. Has a data policy been developed that includes the privacy and protection of sensitive data types?

Select all that apply:

a) A formal data policy has been established.

b) The data policy is communicated and made available to all personnel involved with the use or creation of AI systems.

c) Data is classified using data classification labels.

d) None of the above.

3. Does the organization ensure the secure storage of data based on its classification?

Select all that apply:

a) The organization utilizes secure storage, based on a defined data classification process.

b) Data used in AI systems is classified and protected based on a defined classification policy.

c) Access to data is audited, and formal user access request approval is enforced.

d) Datasets are version controlled and follow defined change control processes.

e) None of the above.

4. Are datasets appropriately tracked and verified via cryptographic hash before use?

Select all that apply:

a) The organization enforces role-based access control for datasets.

b) Access audits are performed on a regular basis.

c) Steps are implemented to ensure any third-party resource provider or external parties cannot access test data assets.

d) None of the above.

5. How does the organization ensure that data integrity is maintained throughout the AI system lifecycle?

Select all that apply:

a) Unique identification is applied to datasets.

b) A central location is used to track datasets and their cryptographic descriptions.

c) Access to datasets is audited periodically.

d) Changes made to datasets go through management approval before submission.

e) None of the above.

6. Are data processing pipelines secured?

Select one:

a) Yes, the organization takes steps to ensure the security of its data processing arrangements.

b) No, the organization does not take steps to ensure the security of its data processing arrangements.

c) Not applicable.

7. Does the organization secure subsets of data in the same manner as datasets?

Select one:

a) Yes, the organization applies the same level of security and data categorization processes for data subsets.

b) No, the organization does not apply the same level of security and data categorization processes for data subsets.

c) Not applicable.

8. Does the organization review its model training code within an appropriate environment?

Select one:

a) Yes, the organization reviews and manages model code in secure and dedicated environments away from production.

b) Model code is not formally reviewed within dedicated environments away from production.

c) Not applicable.

9. Is training for the model used in the AI system conducted under the same conditions that would be found at deployment?

Select one:

a) Yes, the organization ensures that model training is conducted under the same conditions found at deployment.

b) The organization does not conduct model training under the same conditions found at deployment.

c) Not applicable.

10. Does the model design and training algorithm include explicit or implicit model regularization?

Select one:

a) Yes, regularization is ensured for the model design and training algorithms.

b) No, regularization is not ensured for the model design and training algorithms.

c) Not applicable.

11. Are models continuously retrained as new training data flows into training pipelines?

Select one:

a) Yes, the organization has established a continual retraining program to test and validate new training data.

b) No, the organization has not established a continual retraining program to test and validate new training data.

c) Not applicable.

12. How does the organization secure its AI systems prior to deployment?

Select all that apply:

a) Formal acceptance testing criteria have been defined and documented for new AI systems, upgrades, and new versions.

b) New AI systems, upgrades or new versions are implemented with formal testing.

c) The organization leverages automated tools for testing information systems, upgrades, or new versions.

d) Test environments closely resemble the final production environment.

e) The frequency, scope, and method(s) for independent security reviews are documented.

f) None of the above.

13. How does the organization secure and manage the underlying network where the AI system resides?

Select all that apply:

a) Gateway devices are used to filter traffic between domains and block unauthorized access.

b) Secure configuration guidelines are documented and periodically reviewed.

c) Networks are segregated, in line with a defined access control policy.

d) Requirements are established to segregate and restrict use of publicly accessible systems, internal networks, and critical assets.

e) None of the above.

14. How does the organization log and monitor its AI systems and supporting infrastructure?

Select all that apply:

a) The organization has an event logging system in place (e.g., a SIEM solution) for monitoring event and security logs.

b) Event and security logs are reviewed regularly for abnormal behavior.

c) Consolidated reports and alerts are produced and reviewed for system activity.

d) Logging and monitoring include storage, pipelines, and production servers.

e) None of the above.

15. How does the organization identify and manage security incidents related to or impacting AI systems?

Select all that apply:

a) A formal process is established for reporting AI system incidents.

b) Roles are established for managing incidents.

c) Formal incident response and escalation procedures have been established.

d) Incident response procedures are tested on a periodic basis.

e) None of the above.

16. Does the organization have processes to ensure that AI systems can be remediated and recovered after an incident?

Select all that apply:

a) Critical AI assets are identified and inventoried.

b) The organization has developed a formal business continuity plan.

c) Impact assessments include planning for the impact of losing critical AI systems to attacks.

d) Business continuity testing is conducted and occurs on a repeated schedule for critical AI systems.

e) None of the above.
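
Once responses are collected, they can feed a simple gap analysis. The scoring sketch below counts affirmative controls per question and flags questions where a vendor selected only "None of the above" or "Not applicable"; the scoring scheme is an assumption made for illustration, not part of the Microsoft framework or the survey itself.

```python
# Minimal sketch: score vendor responses to the questionnaire for a gap analysis.
# The one-point-per-affirmative-control scheme is an assumption made for illustration.
def score_vendor(responses: dict[int, list[str]]) -> dict[int, int]:
    """Map each question number to the count of affirmative controls the vendor selected."""
    scores = {}
    for question, selected in responses.items():
        affirmative = [s for s in selected
                       if s.lower() not in {"none of the above", "not applicable"}]
        scores[question] = len(affirmative)
    return scores

vendor_a = {  # illustrative responses keyed by question number
    2: ["A formal data policy has been established",
        "Data is classified using data classification labels"],
    15: ["None of the above"],
}

scores = score_vendor(vendor_a)
print(scores)  # {2: 2, 15: 0}
print("Questions needing remediation follow-up:", [q for q, s in scores.items() if s == 0])
```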

Next Steps for Managing Third-Party AI Risks

Use the above questionnaire as a starting point to uncover risks in the AI systems employed by your vendors and suppliers. By proactively identifying and managing third-party AI risks, you can protect your organization’s systems and data while avoiding potential issues related to fairness, transparency and accountability. Third-party AI risk assessments not only safeguard your operations, but also aid in ethical decision-making, vendor selection, and long-term partner relations.

For more information about how Prevalent can assist your organization in assessing vendor and supplier AI security in the context of overall third-party risk, request a demo today.

David Allen
Chief Technology Officer & Chief Information Security Officer

David Allen is the Chief Technology Officer & Chief Information Security Officer for Prevalent, where he oversees software development, information technology, information security, and cloud operations. He has over 20 years of experience building and managing teams and enterprise software products, and evaluating systems and processes for efficiency and security. David’s focus is aligning business needs with technical vision and evolving strategy and process for technological resources. His passion is building efficient processes, teams, and workspaces with an emphasis on communication, morale, job satisfaction, and career growth. He strongly believes that empathy, inclusiveness, and a holistic view of team dynamics and processes are just as important as technology and strategy. Prior to Prevalent, David held technical leadership roles at Quest Software, NetPro, eEye Digital Security, and BeyondTrust, where he built high-performance software engineering teams to achieve category leadership and sales growth for enterprise software frameworks and applications. He holds a Bachelor of Computer Science degree from Monash University.
