Key Takeaway
A unified compliance matrix reduces duplicate engineering effort by identifying shared controls across regulations. By mapping twenty-four controls against six regulatory frameworks, teams can implement a common baseline that satisfies the majority of requirements across all jurisdictions simultaneously, then layer on regulation-specific additions where needed.
Prerequisites
- Familiarity with your organization's AI system inventory and risk classifications
- Understanding of your data processing activities and data flow diagrams
- Access to legal counsel with AI regulation expertise (EU AI Act, CCPA/CPRA, HIPAA)
- Working knowledge of ISO 27001 or SOC 2 control frameworks
- An existing or planned AI governance structure with defined roles (see: AI Governance Framework)
- Basic understanding of ML model lifecycle: training, validation, deployment, monitoring
The Compliance Landscape
AI compliance is not a single regulation you can read and implement. It is a web of overlapping, sometimes contradictory requirements that span jurisdictions, industries, and system types. An AI system that processes health data for EU residents must simultaneously satisfy the EU AI Act's risk-based requirements, GDPR's data protection obligations, HIPAA's protected health information rules (if touching US health data), and potentially SOC 2 trust service criteria demanded by enterprise customers. Each regulation was written by a different body, with different terminology, different enforcement mechanisms, and different timelines.
The practical problem for engineering teams is that reading each regulation independently leads to redundant implementation work. A data lineage system built to satisfy GDPR Article 30's records-of-processing requirement also satisfies most of the EU AI Act's data governance obligations under Article 10, and contributes to SOC 2's processing integrity criteria. But without a cross-regulation view, teams often build three separate systems. This matrix exists to prevent that waste.
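The cross-regulation view can be made concrete as data. As an illustrative sketch (the control IDs, regulation names, and schema below are assumptions for this example, not an official mapping format), each control records which regulatory clauses a single implementation satisfies:

```typescript
// Illustrative schema: one engineered control, many regulatory clauses.
type Regulation =
  | "GDPR" | "EU-AI-Act" | "CCPA" | "HIPAA" | "SOC2" | "ISO42001" | "NIST-AI-RMF";

interface Control {
  id: string;
  name: string;
  satisfies: { regulation: Regulation; clause: string }[];
}

// The data lineage example from the text: one system, three regimes.
const dataLineage: Control = {
  id: "DG-02", // hypothetical control ID
  name: "Data lineage and records of processing",
  satisfies: [
    { regulation: "GDPR", clause: "Art. 30" },
    { regulation: "EU-AI-Act", clause: "Art. 10" },
    { regulation: "SOC2", clause: "PI1.x" },
  ],
};

// Which regulations does this single implementation contribute to?
function coveredRegulations(control: Control): Regulation[] {
  return Array.from(new Set(control.satisfies.map((s) => s.regulation)));
}
```

Maintaining this mapping as data (rather than prose) is what lets the matrix be filtered and queried later.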
This guide covers six regulatory frameworks that collectively represent the compliance surface area most AI-deploying organizations face. Not every framework applies to every organization. A US-only healthcare startup has a different profile than a multinational financial services firm. The matrix is designed to be filtered: identify which regulations apply to your organization, then focus on the controls that are required or recommended for that subset.
This matrix is a technical implementation guide, not legal advice. Regulatory interpretation varies by jurisdiction, industry, and use case. Always validate your compliance approach with qualified legal counsel before treating any control as sufficient for regulatory compliance.
Regulations Covered
Each regulation below is summarized with its AI-specific implications, in enough detail to understand why each control in the matrix is classified as required, recommended, or not applicable for that regulation.
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive AI-specific regulation. It establishes a risk-based classification system with four tiers: unacceptable risk (banned), high-risk (heavy obligations), limited risk (transparency obligations), and minimal risk (voluntary codes of conduct). The Act entered into force on August 1, 2024, with a phased compliance timeline: prohibited practices apply from February 2025, obligations for general-purpose AI models from August 2025, and high-risk system requirements from August 2026.
For engineering teams, the high-risk tier creates the most implementation work. Article 9 requires a documented risk management system that is iterative and updated throughout the AI system lifecycle. Article 10 mandates data governance practices including examination of training data for biases, relevance, and representativeness. Article 11 requires technical documentation sufficient for authorities to assess compliance. Article 13 demands transparency measures so deployers understand the system's capabilities and limitations. Article 14 requires human oversight mechanisms that allow human operators to understand, monitor, and override the system. Article 15 mandates accuracy, robustness, and cybersecurity requirements appropriate to the system's intended purpose.
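Of these, Article 14's human oversight requirement translates most directly into code. A minimal sketch, assuming a score-threshold routing design; the interface names and threshold semantics are illustrative choices, not mandated by the Act:

```typescript
// Sketch of an oversight gate: decisions above a deployer-set risk
// threshold are queued for human review instead of auto-applied.
interface ModelDecision {
  subjectId: string;
  score: number;            // model output in [0, 1]
  autoApproveBelow: number; // threshold set by the deployer
}

type Outcome =
  | { kind: "auto"; approved: boolean }
  | { kind: "needs-human-review"; reason: string };

function applyOversight(d: ModelDecision): Outcome {
  if (d.score < d.autoApproveBelow) {
    return { kind: "auto", approved: true };
  }
  // Above the threshold, a human operator must confirm or override.
  return {
    kind: "needs-human-review",
    reason: `score ${d.score} >= threshold ${d.autoApproveBelow}`,
  };
}
```

The key design point is that the override path exists in the system's control flow, not only in policy documents.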
General-purpose AI model providers face obligations under Article 53: maintaining technical documentation, providing information to downstream providers, establishing a copyright compliance policy, and publishing a training content summary. Models with systemic risk (above 10^25 FLOP training threshold) face additional obligations under Article 55 including adversarial testing, incident tracking, and energy consumption reporting.
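The systemic-risk threshold is a simple numeric comparison once training compute is recorded; how training compute is to be measured is specified by the Act itself, not by this snippet:

```typescript
// The 10^25 FLOP systemic-risk threshold for general-purpose AI models.
const SYSTEMIC_RISK_FLOP = 1e25;

function hasSystemicRisk(trainingFlop: number): boolean {
  return trainingFlop >= SYSTEMIC_RISK_FLOP;
}
```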
The California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), grants consumers rights over their personal information and imposes obligations on businesses processing it. While not AI-specific, several provisions directly affect AI systems. Section 1798.100 establishes the right to know what personal information is collected and how it is used, which extends to AI training data and inference inputs. Section 1798.105 creates deletion rights that complicate model retraining when training data must be erasable.
Most critically for AI, Section 1798.185(a)(16) directed the California Privacy Protection Agency to issue regulations governing automated decision-making technology (ADMT). These ADMT regulations, currently in rulemaking, would require businesses to provide consumers with access to information about the logic of automated decisions, the right to opt out of ADMT in certain contexts, and pre-use notices for profiling decisions with significant effects. Engineering teams should design AI systems with opt-out mechanisms and decision explanation capabilities now, even before final ADMT rules are published.
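One way to build that in early is to route every automated decision through an opt-out check and attach a plain-language logic summary to each decision record. A hedged sketch; the field names are assumptions, not taken from the draft CPPA regulations:

```typescript
// Illustrative ADMT-readiness shape: each decision carries its logic
// summary, and opted-out consumers fall back to a manual process.
interface AdmtDecision {
  consumerId: string;
  decision: string;
  logicSummary: string;       // human-readable "logic of the decision"
  significantEffect: boolean; // would trigger pre-use notice obligations
}

function routeDecision(
  d: AdmtDecision,
  optedOut: Set<string>
): "manual-process" | "automated" {
  // Honor the opt-out before the automated decision is applied.
  return optedOut.has(d.consumerId) ? "manual-process" : "automated";
}
```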
The Health Insurance Portability and Accountability Act (HIPAA) predates modern AI but its requirements for protected health information (PHI) create significant constraints on AI systems in healthcare. The Privacy Rule (45 CFR Part 164, Subpart E) requires minimum necessary use of PHI, which means AI systems should only receive the PHI fields actually needed for the task. The Security Rule (45 CFR Part 164, Subpart C) mandates technical safeguards including access controls, audit controls, integrity controls, and transmission security for electronic PHI (ePHI) processed by AI systems.
AI-specific HIPAA concerns include: model memorization of PHI in training data, which can lead to PHI exposure through inference-time attacks; the use of PHI for model training without proper authorization or de-identification under the Safe Harbor or Expert Determination methods (45 CFR 164.514); business associate agreement (BAA) requirements when third-party AI services process PHI; and the breach notification obligations under the Breach Notification Rule (45 CFR Part 164, Subpart D) when AI system vulnerabilities lead to unauthorized PHI disclosure. The HHS Office for Civil Rights has signaled increased scrutiny of AI systems handling PHI.
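The minimum necessary principle maps naturally to a per-task field allowlist applied before PHI reaches a model. A sketch, assuming a task registry maintained by your own data-use review; the task and field names are illustrative:

```typescript
// Strip a PHI record down to the fields the task actually needs.
type PhiRecord = Record<string, unknown>;

// Hypothetical registry: task -> allowed PHI fields.
const TASK_ALLOWLIST: Record<string, string[]> = {
  "readmission-risk": ["age", "diagnosisCodes", "priorAdmissions"],
};

function minimumNecessary(task: string, record: PhiRecord): PhiRecord {
  const allowed = TASK_ALLOWLIST[task] ?? []; // unknown task -> nothing passes
  return Object.fromEntries(
    Object.entries(record).filter(([field]) => allowed.includes(field))
  );
}
```

Applying the filter at the pipeline boundary, rather than trusting each caller, also produces a single choke point to audit.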
SOC 2 (System and Organization Controls 2) is an audit framework based on the AICPA Trust Service Criteria. While not a regulation, SOC 2 compliance is effectively required for B2B AI service providers because enterprise customers demand it. The five trust service categories — Security, Availability, Processing Integrity, Confidentiality, and Privacy — each have AI-specific implications that extend beyond traditional software controls.
Processing Integrity (PI1.1 through PI1.5) is particularly relevant for AI: you must demonstrate that system processing is complete, valid, accurate, timely, and authorized. For AI systems, this means documenting model accuracy metrics, establishing validation procedures for model outputs, and maintaining evidence that the system performs as described. Security (CC6.1 through CC6.8) requires logical and physical access controls that extend to model artifacts, training data, and inference endpoints. Confidentiality (C1.1, C1.2) requires protecting confidential information throughout the AI pipeline, including training data, model weights, and inference inputs/outputs.
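For Processing Integrity, the evidence artifact can be a per-release validation record with a mechanical release gate. A sketch; the schema is an assumption for illustration, not an AICPA-defined format:

```typescript
// Per-release evidence: accuracy metrics plus the run that produced them.
interface ModelValidationEvidence {
  modelId: string;
  modelVersion: string;
  metrics: { name: string; value: number; threshold: number }[];
  validationDatasetId: string;
  runAt: string;      // ISO 8601 timestamp
  approvedBy: string; // human sign-off, supporting the "authorized" criterion
}

// Release gate: every declared metric must meet its threshold.
function passesValidation(e: ModelValidationEvidence): boolean {
  return e.metrics.every((m) => m.value >= m.threshold);
}
```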
ISO/IEC 42001:2023 is the first international management system standard for artificial intelligence. It follows the Annex SL high-level structure shared by ISO 27001 and ISO 9001, making it integrable with existing management systems. The standard requires organizations to establish an AI management system (AIMS) covering the planning, development, deployment, and monitoring of AI systems.
Key clauses include: Clause 4 (Context) requiring organizations to identify interested parties and their AI-specific requirements; Clause 6 (Planning) mandating AI risk assessment and treatment processes; Clause 7 (Support) requiring AI-specific competence, awareness, and communication; Clause 8 (Operation) covering AI system lifecycle processes including data management, model development, and deployment; and Clause 9 (Performance Evaluation) requiring monitoring, measurement, analysis, and internal audit of AI systems. Annex A provides a reference set of AI-specific controls covering responsible AI, data management, system development, and third-party relationships. ISO 42001 certification is becoming a market differentiator for AI service providers.
The NIST AI Risk Management Framework (AI RMF 1.0, published January 2023) provides a voluntary framework for managing AI risks. Unlike the EU AI Act, it is not legally binding, but it is increasingly referenced in US federal procurement requirements and serves as a de facto standard for AI risk management practices. The framework is organized around four core functions: Govern, Map, Measure, and Manage.
Govern (GV) establishes the organizational context, policies, and processes for AI risk management. Map (MP) identifies and contextualizes AI risks, including risks from third-party components. Measure (MS) employs quantitative and qualitative methods to analyze, assess, and track AI risks. Manage (MN) prioritizes and acts on risks through mitigation, transfer, or acceptance. Each function contains categories and subcategories (e.g., GV-1.1, MP-2.3) that map to specific practices. The companion NIST AI RMF Playbook provides implementation suggestions for each subcategory. While voluntary, implementing the NIST AI RMF demonstrates due diligence and can support compliance arguments for other regulations.
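The four functions can be operationalized as a simple risk register: Map populates entries, Measure scores them, and Manage treats them in priority order. A sketch with illustrative field names and scoring:

```typescript
// Minimal risk register reflecting the Map -> Measure -> Manage loop.
type Treatment = "mitigate" | "transfer" | "accept";

interface RiskEntry {
  id: string;
  description: string;
  mappedBy: string;      // e.g. the Map subcategory that surfaced it
  severity: number;      // Measure output; higher means worse
  treatment?: Treatment; // set during Manage
}

// Manage: act on the highest-severity untreated risk first.
function nextToManage(register: RiskEntry[]): RiskEntry | undefined {
  return register
    .filter((r) => r.treatment === undefined)
    .sort((a, b) => b.severity - a.severity)[0];
}
```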
The Master Compliance Matrix
The matrix below maps twenty-four controls across six categories to each of the six regulatory frameworks. Each cell indicates whether the control is required (the regulation explicitly mandates it), recommended (the regulation supports or implies it, or it constitutes best practice for compliance), or not applicable (the regulation does not address this area). Use this matrix to identify your baseline: controls that are required across all your applicable regulations should be implemented first.
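The filtering step is mechanical once the matrix is represented as data. A sketch using a hypothetical two-control excerpt (the cell values shown are illustrative, not the full twenty-four-control matrix):

```typescript
// Matrix as data: control -> regulation -> status.
type Status = "required" | "recommended" | "n/a";
type Matrix = Record<string, Record<string, Status>>;

const matrix: Matrix = {
  "DG-01": { "EU-AI-Act": "required", "SOC2": "recommended", "HIPAA": "required" },
  "DG-02": { "EU-AI-Act": "required", "SOC2": "required", "HIPAA": "required" },
};

// Baseline: controls required by every regulation that applies to you.
function baseline(m: Matrix, applicable: string[]): string[] {
  return Object.entries(m)
    .filter(([, row]) => applicable.every((reg) => row[reg] === "required"))
    .map(([control]) => control);
}
```

Controls in the baseline are implemented once; everything else is layered on per regulation.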
Control Implementation Guide
The following sections provide implementation details for ten high-priority controls. Each includes what the control requires, how to implement it technically, and what evidence artifacts you need for audit purposes.
DG-01: Training Data Inventory
A training data inventory is the foundation of AI compliance. Without knowing what data your models were trained on, you cannot answer questions about consent, bias, or data rights. The inventory must be machine-readable, versioned, and linked to your model registry so that for any deployed model you can trace back to the exact datasets used in training. EU AI Act Article 10(2) requires documentation of data provenance, preparation design choices, and data collection processes.
interface DatasetRecord {
  id: string;
  name: string;
  version: string;
  source: string;                // provenance: vendor, public corpus, internal system
  collectionDate: string;        // ISO 8601 date
  consentBasis: "explicit" | "legitimate-interest" | "contract" | "legal-obligation" | "public-interest";
  containsPII: boolean;
  piiCategories?: string[];      // populated when containsPII is true
  demographicCoverage: Record<string, number>; // group name -> share of records
  knownLimitations: string[];
  dataSubjectCount: number;
  retentionPolicy: string;
  lastAuditDate: string;         // ISO 8601 date
}

interface TrainingDataInventory {
  modelId: string;               // links the inventory to the model registry
  modelVersion: string;
  datasets: DatasetRecord[];
  preprocessingSteps: {
    step: string;
    description: string;
    dataImpact: string;          // what the step removed, transformed, or added
  }[];
  dataQualityScore: number;
  lastUpdated: string;
}
// Evidence artifacts: inventory JSON per model version,
// data source agreements, consent records, preprocessing logs