
An Overview of Microsoft DPR, Its New AI Requirements, and ISO 42001’s (Potential) Role

Published 10/16/2024


Originally published by Schellman.


Within a few months of its latest update to the Data Protection Requirements (DPR) to address a coding incident (version 9.1), Microsoft released a draft, or “pre-read,” of its version 10 requirements, which will be utilized for its Supplier Security and Privacy Assurance (SSPA) process as of the 2025 fiscal year. Arguably the largest update to the DPR since September 2018, v10’s new mandates address artificial intelligence (AI) and include important references to ISO 42001 that suppliers may want to take advantage of during their next compliance cycle.

In this comprehensive blog post, we’ll provide a high-level overview of the SSPA program—including highlights of what’s new in v10 of the DPR—before doing a deep dive into the new AI requirements, ISO 42001’s relevance, and specifics as to how the latter maps to the former. With this breakdown in hand, you’ll be able to more easily pursue the course best for your organization in maintaining a green status with Microsoft.


An Introduction to the Microsoft SSPA

For those wholly unfamiliar, let’s lay a little groundwork first.

Anyone wanting to do business with Microsoft first has to implement its DPR, which is a set of guidelines designed to help suppliers establish a tailored data protection framework meant to ensure the security, privacy, and compliance of data across Microsoft’s cloud services. Once you’ve implemented the measures specified in the DPR, you must validate your efforts and the effectiveness of your controls through an assessment.

For more details on the SSPA program, you can delve into the following documents that Microsoft has made available:

  • SSPA Program Guide: Details how the SSPA process works and what to expect both initially and on an annual basis thereafter, from the supplier profile to the self-assessment to the independent assessment. It also covers the various “profiles” a supplier could classify as, which then determine their next SSPA steps.
  • Preferred Assessor List: Provides suppliers with a list of vetted and preferred assessors who have been working with Microsoft over the years, as well as their contact information.
  • Microsoft DPR: Walks suppliers through the requirements they are expected to meet when providing services to Microsoft. Not all of the requirements may be applicable, depending on the services a supplier provides (something that Microsoft will determine in the self-assessment phase).


Microsoft SSPA Carve-Outs (Alternatives to the Independent Assessment)

Generally, suppliers can perform an independent assessment of their specific DPR compliance to achieve that validation. However, when implementing the SSPA process, Microsoft acknowledged that adherence to other frameworks could also prove suppliers were addressing the tech giant’s data protection concerns.

That’s why they added carve-outs—i.e., in lieu of suppliers addressing specific requirements during an independent assessment against the DPR, Microsoft will accept several other reports and certifications from its vendors, including:

  • ISO 27001 for security requirements
  • ISO 27701 for privacy requirements
  • HITRUST certification for security and privacy requirements (this option is limited to covered entities under HIPAA or U.S. healthcare service providers)


What’s New in Microsoft DPR v10?

Important additions to the accepted carve-outs are part of the new updates to v10 of the Microsoft DPR, which altogether are substantial. Most of those notable changes are focused within the new AI Section K of v10, but before we get into that, we should highlight some of the other revisions, which include:

  • Consolidation of prior training requirements, including the combination of what was previously security requirement #45 with management requirement #4
  • A new requirement relating to proper records of disclosures under Section F: Data Subjects, for any recipients with whom Microsoft Personal Data has been or will be shared
  • Consolidation of requirements related to incident response and reporting of information, including the merging of security requirement #44 with monitoring and enforcement requirement #34
  • New guidance regarding the maintenance of security assessment and patch management records for at least 90 days
  • Specification that Microsoft devices accessed by suppliers via Microsoft-issued credentials should not be managed by suppliers, but rather entirely by Microsoft
  • Callouts for the use of multi-factor authentication in security requirement #40 that deals with access rights management

While these may not seem particularly impactful, you should be aware of them when preparing for your next independent assessment.


Artificial Intelligence and the Microsoft DPR

Meanwhile, what is impactful within v10 of the DPR is the inaugural incorporation of AI requirements. Such a development shouldn’t come as a surprise, as AI integration in service delivery has been increasing exponentially given how it helps to improve user experience and overall output.

At the same time—because we still have only limited insight into how AI operates and applies its algorithms and models to associated data sets—the confidentiality and privacy risks that come with the implementation of AI have also increased.


18 New Requirements

Microsoft, it seems, has recognized the need to not lose sight of AI-associated risks—v10 of its DPR contains a set of 18 new requirements within the aforementioned new “Section K.” All suppliers providing services involving AI systems must satisfy these mandates, which cover administrative and technical controls around the implementation of AI. These requirements include but are not limited to:

  • New contractual terms surrounding the use of AI that must be included in connection with the services provided to Microsoft
  • A mandatory designation of a person or group within the company as having oversight and ultimate responsibility for the AI systems utilized during and after deployment
  • New training and incident response procedures that reflect the additional risks associated with the use of AI
  • Clear lines of accountability and responsibility for risk assessment and risk management
  • Transparency disclosures for intended use and health monitoring with explainability regarding decision-making processes and any impact on certain groups of individuals
  • Specifications regarding incident and error reporting

Full compliance with these new requirements will require, at the very least, several updates to:

  • Supplier risk assessment processes;
  • General oversight and administrative updates to policies and procedures to ensure proper management of any AI involved in service delivery; and
  • Information that is shared and disclosed to Microsoft on the use of AI and any impacts it may have.

Moreover, suppliers should begin making the necessary adjustments to comply now, as Microsoft has specified that they will not issue any new purchase orders, nor allow any data processing from vendors currently leveraging AI, until those vendors have met Section K requirements and completed an independent assessment (as applicable).


ISO 42001 Certification as a New Carve-Out—and Requirement

Speaking of the requisite independent assessment, that’s the other big update in the DPR v10.

Together with the addition of the Section K AI requirements, Microsoft has specified that a supplier can submit an ISO 42001 certification in lieu of having an independent assessment performed against the AI-specific requirements.

And for those suppliers delivering an AI-related service that includes “sensitive use”—a definition detailed in their latest program guide—Microsoft now requires an ISO 42001 certification from those vendors (i.e., these organizations do not have the option for an independent assessment against the DPR).


How ISO 42001 Maps to Microsoft SSPA

Such developments are clear indications that Microsoft has embraced ISO 42001 as a method of demonstrating a supplier’s trustworthy use of AI in providing services to Microsoft.

To help those AI-service organizations who will now be weighing their options in satisfying the requirements, as well as those considering certification and/or doing business with Microsoft, here’s our analysis of how the 18 new AI requirements that comprise Section K map to ISO 42001:

Microsoft Supplier Data Protection Requirements, Section K (AI Systems), mapped to ISO 42001:2023 requirements:

Where AI Systems are included in connection with providing a service, Supplier must have the applicable AI Systems terms in place with Microsoft.

Any change to the Intended Uses must be disclosed without undue delay and reviewed at least annually for accuracy and compliance.

Maps to ISO 42001: A.9.4; A.8.5; A.10.2; A.10.4

Assign responsibility and accountability for troubleshooting, managing, operating, overseeing, and controlling the AI System during and after deployment to a designated person or group within the company.

Maps to ISO 42001: AIMS clause 5.3; A.3.2

Establish, maintain, and perform annual privacy and security training for anyone that will have access to or Process data within AI Systems by supplier in connection with Performance.

Maps to ISO 42001: AIMS clause 7.3; A.8.2

Supplier has an AI System incident response plan that requires supplier to notify Microsoft per contractual requirements, as described by applicable privacy law, or without undue delay, whichever is sooner, upon becoming aware of a Data Incident or a discovered failure that would adversely impact any of the Intended Uses and Sensitive Uses listed for an AI System.

Maps to ISO 42001: A.3.3; A.8.2; A.8.4; A.8.5

Supplier must have Red Teaming of AI Systems. Vulnerabilities must be addressed prior to AI System deployment.

Maps to ISO 42001: N/A – no mapping*

Supplier has a Responsible AI program to ensure data compliance through disclosures and documentation.

Maps to ISO 42001: AIMS clause 5.2; AIMS clause 7.4; A.2.2; A.9

Supplier has Intended Uses Transparency disclosures.

Maps to ISO 42001: A.8.2; A.8.5; A.9.4

Signed Agreement: When engaging with AI Suppliers, organizations should establish clear contractual terms in a signed agreement. These agreements should explicitly address data handling, confidentiality, intellectual property rights, liability, incident response, and any applicable Sensitive Uses.

Maps to ISO 42001: A.10.3

Accountability: Define clear lines of accountability and responsibilities for AI deployment and risk management within the organization. Organizations must identify responsible parties for the outcomes of AI Systems. This includes addressing ethical concerns, biases, and any issues that may arise over time. Regular monitoring and auditing of AI models are essential to maintain compliance with ethical guidelines.

Maps to ISO 42001: AIMS clause 5.3; A.3.2

Risk Assessment: Conduct privacy, security, and/or Responsible AI risk assessment to consider potential biases, security vulnerabilities, and unintended consequences. If any Sensitive Uses are included, guidance for required controls or mitigations must be included.

Maps to ISO 42001: AIMS clauses 6.1.2 and 8.2; AIMS clauses 6.1.3 and 8.3; AIMS clauses 6.1.4 and 8.4; A.5

Transparency and Explainability: AI Systems must be transparent and explainable. Supplier must provide insights into how decisions are made. Disclosures should encourage transparency in model architecture, training data, and decision-making processes.

Maps to ISO 42001: A.6.2.7; A.8.2

Monitoring and Adaptation: Supplier must demonstrate continuous monitoring of AI Systems and adapt and update the AI Systems as new risks emerge.

Maps to ISO 42001: AIMS clause 9.1; A.6.2.6

Supplier must provide required disclosures, reporting, or other similar documentation with all required error types, performance metrics definitions, data performance, safety, and reliability indicators for each Intended Use.

Maps to ISO 42001: A.6.2.7; A.8.2; A.8.5

Supplier will update the transparency disclosures, including Sensitive Use and Intended Uses, and notify Microsoft if:

  • New uses are added,
  • Functionality changes,
  • The product moves to a new release stage,
  • New information about reliable and safe performance that impact Intended Use are discovered or applied,
  • New information about system accuracy and performance becomes available.

Maps to ISO 42001: A.6.2.6; A.6.2.7; A.8.2

As part of the transparency disclosures, Supplier must document a standard operating procedure and system health monitoring action plan for each AI system or data model that includes:

  • Processes for reproducing system failures to support troubleshooting and prevention of future failures,
  • Which events will be monitored,
  • How events will be prioritized for review,
  • The expected frequency of those reviews,
  • How events will be prioritized for response and timing to resolution,
  • How third-party AI components, including open source software, will be kept up to date.

Maps to ISO 42001: A.6.2.6

Establish and document a detailed inventory of the system health monitoring methods to be used, which include:

  • Data and insights generated from data repositories, system analytics, and associated alerts,
  • Processes by which customers can submit information about failures and concerns, and
  • Processes by which the general public can submit feedback.

Maps to ISO 42001: A.6.2.6; A.6.2.7; A.8.3

If evidence is discovered that the AI system is not fit for the Intended Use(s) at any point before or during the system’s use, Supplier will:

  • Remove the Intended Use from customer-facing materials and make current customers aware of the issue, take action to close the identified gap, or discontinue the system,
  • Revise documentation related to the Intended Use, and
  • Publish the revised documentation to customers.

Maps to ISO 42001: A.6.2.6; A.6.2.7; A.8.2; A.9.4

Supplier must identify and disclose all known demographic groups, including marginalized groups, who may be at risk of experiencing worse or adverse quality of service based on the Intended Use(s) of the AI System, geographic areas where the AI System will be deployed, or inherent biases within the AI System.

Demographic groups include:

  • Groups defined by a single factor, and
  • Groups defined by a combination of factors.

Maps to ISO 42001: AIMS clauses 6.1.4 and 8.4; A.4.3; A.5.5; A.7.3; A.7.4

*While there is not an AIMS requirement / Annex A control in ISO 42001 specific to red teaming, an effective risk assessment and system impact assessment of an AI system (AIMS clauses 6 & 8) would benefit from the results of red teaming as it relates to potential / actual vulnerabilities of an AI system’s security which could lead to bad actors taking control and using the system for unintended purposes.

Additionally, Annex A control A.6.2.5 (AI system deployment) states that “The organization shall document a deployment plan and ensure that appropriate requirements are met prior to deployment.” Accordingly, an effective pre-deployment plan should consider technical vulnerabilities in the system, which could be derived from red teaming efforts.
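For teams running a gap analysis, the mapping above can also be captured in a machine-readable form to track which Section K requirements an ISO 42001 certification would (and would not) evidence. Below is a minimal sketch in Python; the dictionary keys are shortened paraphrases of a few Section K requirements, and the structure and function name are illustrative assumptions, not part of any Microsoft or ISO tooling:

```python
# Hypothetical gap-analysis helper: a subset of the Section K -> ISO 42001
# mapping from the table above, with requirement names paraphrased.
SECTION_K_TO_ISO_42001 = {
    "AI Systems terms and Intended Use disclosures": ["A.9.4", "A.8.5", "A.10.2", "A.10.4"],
    "Designated accountability for AI Systems": ["AIMS clause 5.3", "A.3.2"],
    "Annual privacy and security training": ["AIMS clause 7.3", "A.8.2"],
    "AI System incident response plan": ["A.3.3", "A.8.2", "A.8.4", "A.8.5"],
    "Red teaming of AI Systems": [],  # no direct ISO 42001 mapping (see note above)
}


def unmapped_requirements(mapping):
    """Return Section K requirements with no ISO 42001 counterpart,
    i.e., items a certification alone would not evidence."""
    return [req for req, controls in mapping.items() if not controls]


print(unmapped_requirements(SECTION_K_TO_ISO_42001))
# -> ['Red teaming of AI Systems']
```

A structure like this makes it easy to confirm that any requirement without an ISO 42001 counterpart (here, red teaming) still gets covered by the supplier's own controls before the independent assessment.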


Moving Forward with the Latest Version of Microsoft’s DPR

V10 of the DPR has been in effect as of September 23, 2024, so suppliers to Microsoft must quickly determine how they’ll proceed in satisfying the new AI governance requirements as applicable—be it through an independent assessment or successful ISO 42001 certification—so that they can maintain their organization’s green status in Microsoft’s supplier portal.
