Introduction
Artificial Intelligence has rapidly transitioned from academic curiosity to critical corporate asset. Organizations across industries deploy AI systems to automate processes, generate insights, and enhance customer experiences. Yet this rapid adoption outpaces governance maturity. Many corporations deploy AI systems with incomplete understanding of risks, minimal oversight mechanisms, and inadequate controls. This governance vacuum exposes organizations to substantial legal, financial, and reputational jeopardy.
The risks AI systems pose are multifaceted and often subtle. AI models can perpetuate or amplify historical biases encoded in training data, leading to discriminatory outcomes in hiring, lending, or criminal justice decisions. Large Language Models hallucinate—generating plausible-sounding but factually incorrect information—with potential consequences ranging from embarrassing to catastrophic depending on deployment context. Data security vulnerabilities in AI systems create pathways for proprietary information leakage. Uncontrolled AI experimentation generates regulatory exposure as compliance officers discover AI systems processing sensitive data without appropriate safeguards.
Simultaneously, regulatory pressure intensifies. The European Union's AI Act imposes strict requirements on high-risk AI applications. In the United States, executive orders and Office of Management and Budget memoranda require federal agencies to manage AI risks. State-level privacy regulations increasingly intersect with AI practices. Industry-specific regulators—banking, insurance, healthcare—are developing AI-specific guidance. Organizations ignoring this regulatory trajectory face substantial compliance exposure.
This article addresses AI governance comprehensively, examining the risks AI systems introduce, the regulatory landscape organizations navigate, and the governance frameworks that enable responsible AI adoption. The article targets legal counsel and risk officers responsible for ensuring corporate AI initiatives align with risk tolerance and regulatory requirements.
The Risks of Corporate AI Deployment
Bias and Algorithmic Discrimination
Perhaps the most recognized AI risk is bias. Machine learning models trained on historical data learn patterns present in that data, including discriminatory patterns reflecting historical prejudices. A hiring model trained on historical hiring decisions may learn to discriminate against protected classes if historical hiring demonstrated bias. A lending model trained on historical loan performance may deny credit to applicants sharing characteristics with historically underperforming groups, even if those characteristics don't causally influence loan performance.
The mechanisms generating bias are subtle and pernicious. Direct discrimination—explicitly including protected class information in models—is relatively easy to prevent through feature engineering. Proxy discrimination—where seemingly neutral features correlate with protected classes—proves more challenging. A model using zip code as a predictor might indirectly encode race-based discrimination if zip code correlates with racial demographics. Intersectional discrimination—where bias emerges from combinations of features—adds complexity.
The consequences of algorithmic discrimination extend beyond the immediate individuals affected. Discrimination claims expose organizations to litigation, regulatory enforcement, and substantial damages. In high-stakes domains like hiring or lending, discriminatory AI creates legal exposure under civil rights statutes. Even in lower-stakes domains, discriminatory AI damages brand reputation and customer trust.
Mitigating bias requires multifaceted approaches: careful attention to training data and potential representation biases; explicit fairness constraints in model optimization; testing for disparate impact across protected classes; external audits by third parties; and ongoing monitoring as model performance evolves.
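As a concrete illustration of disparate-impact screening, the sketch below computes per-group selection rates and the ratio used in the EEOC's four-fifths rule. The data and group labels are illustrative, and the 0.8 threshold is a screening heuristic, not a legal conclusion.

```python
def selection_rates(outcomes, groups):
    """Favorable-outcome rate per group.

    outcomes: list of 0/1 decisions (1 = favorable, e.g. hired/approved)
    groups:   list of group labels, parallel to outcomes
    """
    rates = {}
    for g in set(groups):
        decisions = [o for o, gr in zip(outcomes, groups) if gr == g]
        rates[g] = sum(decisions) / len(decisions)
    return rates

def disparate_impact_ratio(outcomes, groups):
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 is a common red flag (the "four-fifths rule"),
    used as a screening heuristic rather than a legal test.
    """
    rates = selection_rates(outcomes, groups)
    return min(rates.values()) / max(rates.values())

# Illustrative data: 8/10 group-A applicants approved vs. 5/10 group-B
outcomes = [1] * 8 + [0] * 2 + [1] * 5 + [0] * 5
groups = ["A"] * 10 + ["B"] * 10
print(round(disparate_impact_ratio(outcomes, groups), 3))  # 0.625, below 0.8
```

A check like this is cheap to run on every deployed decision model; the harder work is choosing the right comparison groups and outcome definitions.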
Hallucinations and Factual Unreliability
Large Language Models, while remarkably capable at generating human-like text, suffer from a concerning limitation: they hallucinate. Models generate plausible-sounding text with no built-in mechanism for verifying truth. When asked questions they cannot answer, models fabricate plausible responses rather than acknowledging uncertainty. In conversational contexts, this creates awkward moments. In corporate decision-making, hallucinations create substantial risk.
An executive relying on an AI system to summarize contract terms might receive summaries containing invented clauses or mischaracterized obligations. A customer service representative providing information from an AI-generated knowledge base might confidently relay incorrect information to customers. A financial analyst leveraging AI for market analysis might base investment decisions on hallucinated market data. In each case, hallucinations cause decisions based on false premises.
The root causes of hallucination are partially understood. Models attempt to produce text matching expected patterns even when actual knowledge is insufficient. Techniques like retrieval-augmented generation (RAG)—providing models with relevant source documents before generating responses—improve reliability by grounding responses in actual data. Confidence calibration techniques enable models to express appropriate uncertainty. Nevertheless, hallucinations remain a significant concern.
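The RAG idea can be sketched in a few lines. The term-overlap retriever below is a hypothetical stand-in for the embedding-based search a production system would use, and the prompt assembly shows how responses get grounded in retrieved sources; no actual model call is made.

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive term overlap with the query.

    A toy stand-in for the vector-similarity search a real RAG
    pipeline would use; returns the k best-matching documents.
    """
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, documents):
    """Assemble a prompt instructing the model to answer only from
    retrieved sources, the core grounding idea behind RAG."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )
```

The "say so if the sources don't contain the answer" instruction is the grounding lever: it converts an open-ended generation task into one constrained by verifiable source material.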
Managing hallucination risk requires constraining AI use to domains where hallucinations are tolerable or can be mitigated. AI outputs on critical matters should not be relied upon without verification mechanisms. For high-stakes applications, human review of AI outputs becomes essential. Organizations must establish policies distinguishing between AI-as-advisor (where human judgment can catch inaccuracies) and AI-as-decider (where reliability requirements are far higher).
Data Security and Privacy Risks
AI systems create novel data security and privacy risks. Large Language Models trained on vast text corpora memorize substantial portions of training data. Researchers have demonstrated that sensitive information—credit card numbers, email addresses, passwords—inadvertently included in training data can be extracted through careful prompting. When organizations fine-tune models on proprietary data, they risk that model outputs will reveal aspects of that data.
AI systems require substantial data to function effectively. This creates pressure to centralize data, expanding the potential blast radius if security is compromised. Integration between AI systems and operational systems may create attack vectors—compromising AI systems could provide pathways to compromise operational systems or data stores.
Generative AI systems deployed in customer-facing contexts create particular exposure. Customers might input sensitive information expecting confidentiality. Organizations using third-party AI services must trust those vendors with sensitive data. The complexity of AI supply chains—models from one vendor, computing infrastructure from another, data storage from a third—creates numerous potential failure points.
Privacy regulations increasingly scrutinize AI data use. GDPR's rights to explanation and data subject access create obligations around AI systems. The California Consumer Privacy Act, the Colorado Privacy Act, and similar state regulations impose requirements on data collection and use. Using data for AI model development may violate terms of use or user consent originally given for different purposes.
Mitigating data risks requires robust data governance: inventorying what data is used where; establishing clear data classification and handling procedures; implementing access controls and monitoring; using privacy-preserving techniques like differential privacy or federated learning; and carefully evaluating third-party AI services before integration.
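Among the privacy-preserving techniques mentioned, differential privacy is the simplest to sketch. The snippet below implements the Laplace mechanism for a counting query (sensitivity 1, so noise scale 1/epsilon); the epsilon values and data are illustrative, and production systems would use a hardened library rather than hand-rolled sampling.

```python
import math
import random

def dp_count(records, predicate, epsilon, rng=random):
    """Differentially private count via the Laplace mechanism.

    Returns the true count plus Laplace(0, 1/epsilon) noise. A counting
    query has sensitivity 1 (adding or removing one record changes the
    count by at most 1), so scale = 1/epsilon yields epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    # Inverse-CDF sampling of Laplace noise with scale 1/epsilon
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Smaller epsilon means more noise and stronger privacy; releasing many such noisy statistics consumes a cumulative "privacy budget" that governance processes must track.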
Regulatory and Compliance Violations
As regulatory frameworks around AI develop, organizations face growing compliance exposure. The European Union's AI Act categorizes AI applications into risk tiers, imposing progressively stricter requirements on high-risk applications. High-risk systems require documentation, risk assessments, quality management systems, and human oversight. Prohibited applications—like social scoring for general purposes—are banned entirely.
U.S. regulators are developing sector-specific approaches. The Federal Reserve and bank regulators have issued guidance on responsible AI in banking. The Federal Trade Commission has sent letters to AI companies warning about bias and privacy violations. Healthcare regulators are developing AI-specific frameworks. Each sector faces distinct regulatory requirements.
The challenge for corporations is that AI governance requirements are rapidly evolving and sometimes contradictory across jurisdictions. A compliance posture adequate in the U.S. might violate EU requirements. Practices compliant with GDPR might not satisfy regulatory expectations in other regions. Organizations must track evolving requirements and adjust practices accordingly.
Non-compliance carries substantial consequences. Regulatory enforcement can result in substantial fines (the EU AI Act authorizes fines of up to 7% of global annual turnover for the most serious violations), operational constraints (restrictions on using certain AI approaches), or reputational damage when enforcement actions become public.
Model Transparency and Explainability Failures
Many high-stakes decisions require explanation. When an AI system denies a loan application, the applicant has rights to explanation. When an AI system is used in employment decisions, employment law may require explanation. When a healthcare AI system recommends treatment, providers need to understand the reasoning.
Yet many powerful AI systems—particularly deep learning models and ensemble methods—are notoriously opaque. Neural networks learn representations in hidden layers that don't correspond to human-interpretable concepts. Ensemble models combine predictions from hundreds of base models in ways that resist interpretation. Even simpler models can be difficult to explain when they interact with complex data preprocessing and feature engineering.
Organizations cannot simply accept model opacity. Explainability is increasingly a regulatory requirement. It's often an ethical requirement—decisions affecting people deserve explanation. It enables identifying whether problematic decisions result from bias, inadequate training data, or legitimate business factors.
Improving explainability requires approaches like: using inherently interpretable model classes; applying post-hoc explanation techniques like LIME or SHAP; maintaining documentation of model development and data sources; establishing human review processes for consequential decisions; and testing whether explanations actually clarify decision-making.
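Post-hoc techniques such as LIME and SHAP require their own libraries, but a dependency-free relative, permutation importance, conveys the same idea: shuffle one feature's values and measure how much accuracy drops. The sketch below assumes a `predict` function mapping rows to labels; that interface and the example data are illustrative.

```python
import random

def permutation_importance(predict, X, y, n_features, seed=0):
    """Model-agnostic post-hoc explanation.

    For each feature, shuffle its column across rows and measure the
    resulting drop in accuracy. Large drops mark features the model
    relies on. `predict` maps a list of rows to a list of labels.
    """
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(p == t for p, t in zip(predict(rows), y)) / len(y)

    base = accuracy(X)
    importances = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)  # break the feature/label association
        permuted = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
        importances.append(base - accuracy(permuted))
    return importances
```

Importance scores of zero for a feature suggest the model ignores it; a large score on a suspected proxy variable (a zip code, say) is exactly the kind of signal bias reviews look for.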
Intellectual Property and Security Concerns
AI systems created by organizations constitute valuable intellectual property. Training data, model architectures, learned weights, and the knowledge embedded in models represent significant asset value. Yet sharing model access with third parties creates risks. External parties might reverse-engineer models, extract embedded data, or use models in ways organizations never intended.
Using open-source models or third-party AI services creates supply chain risks. The organization depends on third parties for model updates, security patches, and operational stability. If the external model is compromised, the organization's systems become compromised. If the third party faces regulatory action, the organization might face collateral consequences.
Organizations must carefully evaluate third-party AI services, understand data handling practices, establish contractual protections, and maintain independence where strategically important. For critical capabilities, organizations may choose to develop proprietary models rather than depend on external services, despite higher development costs.
The Evolving Regulatory Landscape
Global Regulatory Development
The regulatory landscape for AI is fragmenting rapidly, with different jurisdictions pursuing distinct approaches:
European Union AI Act: The most comprehensive regulatory framework to date, the EU AI Act imposes risk-based requirements. Prohibited applications include social scoring and real-time biometric identification in public places. High-risk systems (used in employment, credit, criminal justice, and other critical domains) face substantial compliance requirements including documentation, risk assessment, and quality management. Lower-risk AI follows less stringent requirements.
United States Approach: The U.S. relies more heavily on existing sectoral regulation supplemented by executive orders and guidance. The Biden Administration issued an executive order requiring federal agencies to manage AI risks and promote responsible AI development. Various regulatory agencies—FTC, SEC, banking regulators, healthcare regulators—are developing sector-specific guidance.
United Kingdom Post-Brexit Approach: The UK is pursuing a lighter-touch regulatory approach, focusing on principles and sector-specific regulation rather than comprehensive AI legislation. This creates regulatory arbitrage opportunities but also uncertainty.
Other Jurisdictions: China, Singapore, Canada, and other countries are developing AI governance frameworks. China emphasizes content governance and state control. Singapore stresses innovation alongside responsibility. These varying approaches complicate governance for multinational corporations.
Regulatory Themes Across Jurisdictions
Despite different approaches, consistent themes emerge across regulatory frameworks:
- Risk-based regulation targeting high-stakes applications
- Requirements for transparency and explainability
- Data protection and privacy safeguards
- Human oversight requirements
- Bias and fairness requirements
- Documentation and audit requirements
Corporations navigating this landscape must develop governance sufficient to satisfy the most stringent requirements applicable to their operations, enabling compliance across multiple jurisdictions.
Emerging Best Practices
As regulation develops, patterns of best practice are emerging:
- AI Impact Assessments: Systematic analysis of potential harms before deployment
- Bias Testing and Mitigation: Comprehensive testing for discriminatory impacts and continuous monitoring
- Explainability and Documentation: Maintaining clear documentation of model development, training data, and decision logic
- Human Oversight: Establishing human review for high-stakes decisions
- Incident Reporting: Tracking and reporting failures and adverse outcomes
- External Audit: Third-party review of AI governance and practices
- Supplier Due Diligence: Careful vetting of third-party AI providers
Establishing an AI Governance Framework
Governance Principles and Values
Effective AI governance begins with explicit principles guiding AI development and deployment. These principles should reflect organizational values and regulatory requirements:
Accountability: Clear assignment of responsibility for AI system performance and outcomes. Organizations must know who is responsible when problems occur and have mechanisms to ensure accountability.
Transparency: Clear documentation of how AI systems work, what data they use, and how they make decisions. While complete transparency isn't always achievable, striving toward transparency builds trust.
Fairness: Commitment to identifying and mitigating bias and discrimination. Organizations must actively test for and address fairness concerns rather than assuming algorithms are neutral.
Security: Protecting AI systems and the data they operate on from unauthorized access, modification, or misuse.
Privacy: Respecting individual privacy and complying with privacy regulations in how AI systems collect, process, and use personal data.
Contestability: Enabling individuals affected by AI decisions to contest those decisions and seek remedies if appropriate.
Human Agency: Maintaining human oversight of critical decisions, preserving human autonomy and dignity.
Governance Structure and Accountability
Organizations need clear governance structures assigning accountability for AI governance. A typical structure includes:
Board-Level Oversight: The board or board committee receives regular briefings on AI strategies, risks, and governance. Board-level engagement ensures that AI governance receives appropriate executive attention.
Executive Leadership: A Chief AI Officer, Chief Information Officer, or similar executive leads AI strategy and governance. This role should have sufficient authority to enforce governance requirements despite pressure to move quickly.
Cross-Functional AI Governance Committee: Representatives from legal, compliance, risk management, business units, and technical functions. This committee develops governance policies, reviews proposed AI initiatives, and monitors governance compliance.
Functional Governance Roles:
- Data governance teams manage data quality, access, and privacy
- Model governance teams manage model development, testing, and deployment
- Compliance teams ensure regulatory alignment
- Security teams protect AI systems and data
- Business unit owners are accountable for AI initiatives within their domains
Technical AI Teams: Data scientists, machine learning engineers, and software engineers build AI systems within governance constraints.
Risk Assessment and Categorization
Organizations should classify AI use cases by risk level, applying governance intensity proportionate to risk:
High-Risk Uses: AI systems making consequential decisions affecting individuals—hiring, lending, criminal justice, medical treatment, insurance underwriting. These systems require:
- Comprehensive impact assessments
- Testing for bias and fairness
- Explainability and human review
- Regular external audit
- Careful documentation
Medium-Risk Uses: AI systems providing advice or insights that inform decisions but don't directly make consequential decisions—market analysis, customer segmentation, content recommendations. These require:
- Assessment for potential harms
- Testing for material biases
- Documentation of assumptions and limitations
- Monitoring for adverse outcomes
Low-Risk Uses: AI systems used for routine tasks or internal analysis with minimal potential for harm—email filtering, basic content moderation, internal reporting. These require:
- Basic documentation of function and data sources
- Minimal formal governance overhead
This risk-based approach enables organizations to apply governance proportionate to actual risk rather than treating all AI systems identically.
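A risk-tier policy like this can be encoded directly, which makes governance reviewable and testable alongside the systems it governs. The tiers, domains, and control names below are illustrative examples drawn from the categories above, not a normative standard.

```python
# Illustrative policy-as-code: map an AI use case to a risk tier and the
# control checklist that tier requires. Tier definitions and control
# names are examples, not a normative taxonomy.
GOVERNANCE_CONTROLS = {
    "high": ["impact_assessment", "bias_testing", "human_review",
             "external_audit", "full_documentation"],
    "medium": ["harm_assessment", "bias_testing",
               "documented_limitations", "outcome_monitoring"],
    "low": ["basic_documentation"],
}

CONSEQUENTIAL_DOMAINS = {"hiring", "lending", "criminal_justice",
                         "medical_treatment", "insurance_underwriting"}

def required_controls(domain, informs_decisions_only=False):
    """Return (tier, control checklist) for a proposed AI use case."""
    if domain in CONSEQUENTIAL_DOMAINS:
        tier = "high"
    elif informs_decisions_only:
        tier = "medium"
    else:
        tier = "low"
    return tier, GOVERNANCE_CONTROLS[tier]
```

Encoding the policy this way lets intake workflows (and auditors) query the required controls for any proposed use case mechanically, rather than relying on ad hoc judgment at each review.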
Data Governance and Privacy Integration
AI systems depend fundamentally on data quality and responsible data stewardship. AI governance must integrate with data governance:
Data Inventory: Comprehensive cataloguing of data assets, their sensitivity levels, and regulatory constraints. This enables understanding what data is available for AI development and what constraints apply.
Data Quality Standards: Clear standards for data completeness, accuracy, and recency. Poor data quality compromises AI reliability, so data quality standards should be established before AI systems use data.
Data Access Controls: Clear policies governing who can access what data. Access should be restricted to individuals with legitimate business need and appropriate training.
Privacy Impact Assessments: Before using data for AI development, assess privacy impacts. Does AI development align with the purposes for which data was collected? Have individuals consented to AI use of their data?
Retention Policies: Clear policies on how long data is retained and when it's deleted. Retention should be no longer than necessary for legitimate business purposes, and data should be deleted once those purposes are satisfied.
Sensitive Data Protections: Enhanced protections for particularly sensitive data—health information, financial information, biometric data, precise location information. Sensitive data requires explicit justification for AI use and enhanced security.
Model Development and Testing Standards
Organizations should establish standards for responsible AI model development:
Development Documentation: Comprehensive documentation of model development including:
- Problem definition and business justification
- Data sources, including assessment of data quality and potential biases
- Feature engineering approaches
- Model architecture and training procedures
- Hyperparameter selection and rationale
- Performance metrics beyond standard accuracy (fairness metrics, explainability measures)
Bias Testing: Before deployment, systematic testing for bias:
- Testing for disparate impact across demographic groups
- Testing for performance variation across important subpopulations
- Testing for specific proxy discrimination mechanisms
- Human review of detected biases and mitigation strategies
Fairness Constraints: Where appropriate, incorporating fairness constraints directly into model optimization:
- Demographic parity approaches ensuring equal outcomes across groups
- Equalized odds approaches ensuring equal error rates
- Individual fairness approaches ensuring similar individuals receive similar decisions
- Trade-off analysis between fairness and accuracy objectives
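The first two fairness notions above can be computed directly from predictions and group labels. The sketch below returns the demographic-parity difference and the equalized-odds gaps (both zero under perfect parity); it assumes binary labels and predictions, and the example data is illustrative.

```python
def demographic_parity_difference(y_pred, groups):
    """Gap in positive-prediction rates between groups (0 = parity)."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gr in zip(y_pred, groups) if gr == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

def equalized_odds_gaps(y_true, y_pred, groups):
    """Per-group true-positive-rate and false-positive-rate gaps.

    Equalized odds asks that both gaps be near zero: groups should
    experience equal error rates, not merely equal outcomes.
    """
    def gap(cond_true):
        rates = {}
        for g in set(groups):
            idx = [i for i, gr in enumerate(groups)
                   if gr == g and y_true[i] == cond_true]
            rates[g] = sum(y_pred[i] for i in idx) / len(idx)
        return max(rates.values()) - min(rates.values())
    return {"tpr_gap": gap(1), "fpr_gap": gap(0)}
```

Note that a model can satisfy demographic parity while badly violating equalized odds (and vice versa); that tension is the trade-off analysis the last bullet refers to.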
Explainability Analysis: Analysis of model decision-making:
- Inherent interpretability where possible
- Post-hoc explanation generation and validation
- Feature importance analysis
- Sensitivity analysis examining how decisions change with input variations
Validation on Held-Out Test Data: Testing on data not used during model development, with particular attention to:
- Representative coverage of important subpopulations
- Performance across operationally relevant scenarios
- Robustness to data perturbations or adversarial inputs
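Robustness to perturbations, the last item above, can be probed with a crude stability check: re-predict under small random input noise and count unchanged predictions. The noise scale and trial count below are illustrative, and this is a screening probe, not an adversarial-robustness guarantee.

```python
import random

def stability_rate(predict, X, noise=0.01, trials=5, seed=0):
    """Fraction of inputs whose prediction survives small perturbations.

    A crude robustness probe: each numeric row is jittered uniformly in
    [-noise, +noise] `trials` times; a row counts as stable only if its
    predicted label never changes. `predict` maps rows to labels.
    """
    rng = random.Random(seed)
    base = predict(X)
    stable = 0
    for row, label in zip(X, base):
        ok = True
        for _ in range(trials):
            perturbed = [v + rng.uniform(-noise, noise) for v in row]
            if predict([perturbed])[0] != label:
                ok = False
                break
        stable += ok
    return stable / len(X)
```

Low stability rates flag inputs sitting near decision boundaries, where a model's decisions are most fragile and most deserving of human review.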
External Validation: For high-stakes applications, validation by independent parties without conflicts of interest.
Deployment Governance
Even well-developed models require governance in deployment:
Deployment Authorization: Formal approval processes before deploying AI systems to production, with explicit authorization from relevant governance bodies.
Staging and Monitoring: Deploying models initially to limited populations and monitoring performance before full rollout.
Human Oversight: For high-stakes decisions, maintaining human review of model recommendations. Humans should review a portion of model decisions, have authority to override model recommendations, and be trained to exercise appropriate judgment.
Performance Monitoring: Continuous monitoring of model performance in production:
- Tracking performance metrics over time
- Detecting performance degradation (model drift, data drift)
- Monitoring for unexpected failure patterns
- Tracking fairness metrics to detect whether model develops biases in production
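Drift detection is often operationalized with the population stability index (PSI), which compares the binned distribution of a feature or model score between a baseline sample and live traffic. The bin count and the commonly cited 0.1/0.25 thresholds below are practitioner conventions, not formal standards.

```python
import math

def population_stability_index(expected, actual, bins=5):
    """PSI between a baseline sample and live traffic.

    Common rule of thumb (from credit-risk practice): < 0.1 stable,
    0.1 to 0.25 worth investigating, > 0.25 significant drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against zero-width bins

    def bin_fractions(data):
        counts = [0] * bins
        for v in data:
            i = min(int((v - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # small floor avoids log(0) for empty bins
        return [max(c / len(data), 1e-6) for c in counts]

    e = bin_fractions(expected)
    a = bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Running this per feature and per score on a schedule, and alerting when the index crosses a threshold, turns the monitoring bullet points above into an automatable control.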
Incident Response: Clear procedures for responding to model failures or adverse outcomes:
- Incident documentation and analysis
- Determination of root causes
- Implementation of remediation
- Communication to affected parties
Regular Model Audits: Periodic comprehensive reviews of model performance and governance compliance.
Third-Party AI Supplier Management
Organizations using third-party AI services or models face governance challenges inherent in supply chain management:
Vendor Due Diligence: Before adopting third-party AI services:
- Assess vendor security practices
- Understand vendor data handling practices
- Clarify data governance and privacy protections
- Assess vendor governance and compliance practices
- Evaluate vendor financial stability and viability
Contractual Protections: Contracts with AI vendors should address:
- Data handling and privacy protections
- Security obligations
- Indemnification for vendor negligence or wrongdoing
- Audit rights enabling verification of vendor compliance
- Termination rights if vendor fails governance standards
- Liability limitations and allocation
Ongoing Monitoring: After adoption:
- Regular communication with vendor about security, performance, and governance
- Monitoring for vendor communications about security issues or service changes
- Assessment of whether vendor continues meeting governance standards
Alternative Sources: Maintaining strategic independence by:
- Avoiding excessive dependence on single vendors
- Maintaining capability to transition to alternative vendors if necessary
- For strategically important capabilities, developing proprietary alternatives
Specific Governance Areas
Algorithmic Transparency and Explainability
Organizations should establish policies clarifying when explainability is required:
Legal Requirements: Certain jurisdictions and applications legally require explainability—GDPR's right to explanation, lending law requirements, employment law requirements.
Ethical Requirements: Even without legal requirements, decisions affecting individuals often warrant explanation as an ethical matter.
Business Requirements: In many contexts, customers or stakeholders expect explanation regardless of legal requirements.
Practical Limitations: Some applications don't feasibly provide meaningful explanation—real-time video analysis, large language model output generation.
Organizations should establish standards for what "sufficient" explanation entails, balancing explainability requirements against practical limitations.
Bias Monitoring and Mitigation
Bias mitigation is not a one-time activity but ongoing responsibility:
Baseline Assessment: Before deployment, comprehensive testing establishes baseline understanding of potential bias sources.
Ongoing Monitoring: In production, continuous monitoring for bias:
- Performance across demographic groups
- Decision distribution across groups
- Outcome disparities
- Evidence of feedback loops where model decisions influence future training data
Mitigation Strategies: When bias is detected:
- Root cause analysis understanding why bias is occurring
- Corrective action—retraining models, adjusting decision thresholds, collecting additional training data
- Communication to affected parties if significant bias was discovered
Feedback Mechanisms: Mechanisms enabling individuals to report perceived bias, enabling continuous improvement.
Incident Reporting and Response
Clear procedures for handling AI failures:
Incident Definition: A clear definition of what constitutes an AI incident requiring response—significant performance degradation, fairness violations, security breaches, unexpected failures.
Reporting Procedures: Internal procedures for incident reporting, ensuring incidents are surfaced to governance bodies.
Investigation: Systematic investigation of incidents:
- Root cause analysis
- Assessment of scope (how many individuals/decisions affected?)
- Assessment of impact
Remediation: Response proportionate to incident severity:
- Immediate operational response (pause systems, manual override, communication to affected parties)
- Short-term fixes (modify model parameters, adjust decision thresholds)
- Long-term improvements (retrain models, redesign systems)
External Reporting: Where required by regulation, reporting incidents to regulators and affected individuals.
Documentation and Audit
Comprehensive documentation and regular audits enable governance verification:
Model Documentation: "Model cards" and similar documentation capturing:
- Model intended use
- Performance characteristics including fairness metrics
- Known limitations and biases
- Dataset descriptions
- Training procedures
- Appropriate and inappropriate use cases
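A model card can be captured as structured data so it is versioned, diffed, and machine-checked alongside the model itself. The schema below is a minimal illustrative sketch mirroring the fields listed above, not a standard format; published model-card templates carry considerably more detail.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelCard:
    """Minimal model-card record (illustrative schema, not a standard)."""
    name: str
    intended_use: str
    performance: dict       # metrics, including per-group fairness metrics
    limitations: list       # known limitations and biases
    training_data: str      # dataset description or pointer to a datasheet
    out_of_scope_uses: list = field(default_factory=list)

    def to_json(self):
        """Serialize for storage next to the model artifact."""
        return json.dumps(asdict(self), indent=2)
```

Storing the card as JSON beside the model artifact lets deployment pipelines refuse to ship a model whose card is missing required fields, turning documentation from a policy into an enforced gate.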
Data Documentation: "Data sheets" capturing:
- Dataset composition and characteristics
- Data collection procedures
- Known biases and limitations
- Appropriate and inappropriate uses
System Documentation: Documentation of how AI systems are deployed and used:
- Who is accountable?
- How are decisions made?
- What human oversight exists?
- How are failures detected?
Audit Procedures: Regular audits verifying:
- Governance policies are followed
- Documentation is maintained
- Systems operate as designed
- No significant risks are unmanaged
Emerging Challenges and Future Directions
Generative AI Governance
Large Language Models and generative AI introduce novel governance challenges:
Content and Output Governance: Generative AI outputs cannot be pre-screened. Organizations must manage risks of:
- Hallucinations generating false information
- Biased outputs reflecting training data biases
- Toxic or inappropriate content generation
- Inadvertent private information disclosure
Supply Chain Transparency: Many organizations using genAI have limited visibility into training data, model development, or security practices. This opacity creates governance challenges.
Continuous Learning: Models that learn from user interactions present ongoing governance challenges—can feedback loops amplify biases? Can adversarial users manipulate models?
Intellectual Property: Questions about whether genAI tools can be used for certain purposes without violating training data rights-holders' interests.
Distributed AI Systems
As AI systems become more sophisticated, distributed approaches involving multiple models and data sources create governance complexity:
Model Chains: Systems where one model's output feeds into another model's input create compounding error and bias risks.
Data Lineage: Tracing how data flows through complex systems becomes increasingly difficult.
Explainability at Scale: Explaining decisions made by systems involving dozens of models becomes practically challenging.
AI in High-Risk Domains
Particular governance focus should address:
Criminal Justice: AI increasingly used in policing, bail decisions, and sentencing. Failures create enormous human cost and social justice implications.
Healthcare: AI increasingly used for diagnosis and treatment recommendations. Failures create patient safety risks.
Employment: AI increasingly used in hiring and evaluation. Discrimination carries legal, ethical, and business consequences.
Financial Services: AI used in lending, investment, and fraud detection. Failures create financial and economic justice concerns.
These domains warrant enhanced governance intensity given potential consequences of failures.
Conclusion
AI governance is no longer optional for corporations serious about responsible AI deployment. The risks—bias causing discrimination, hallucinations feeding false information into decisions, data leakage compromising privacy, regulatory violations creating enforcement exposure—are material and substantial. Regulatory frameworks are rapidly developing, and organizations ignoring these developments face compliance exposure.
Effective AI governance requires multifaceted approaches: establishing principles guiding AI development; creating governance structures assigning accountability; implementing processes for risk assessment and management; integrating data and model governance; maintaining documentation and audit capability; and monitoring systems in deployment.
The governance frameworks outlined in this article provide a foundation that legal counsel and risk officers can adapt to their organizational contexts. However, governance is not a static achievement but ongoing responsibility as AI capabilities evolve and regulatory requirements develop.
Organizations that establish robust AI governance early gain competitive advantages by enabling responsible innovation. They avoid the reputational damage, regulatory exposure, and direct costs associated with AI failures. They position themselves to navigate evolving regulatory requirements effectively. They build stakeholder trust in their AI systems.
Conversely, organizations that fail to establish governance face mounting risks. As regulatory enforcement accelerates and high-profile AI failures mount, the pressure on unprepared organizations will intensify. The time to establish AI governance is now, before failures force reactive governance responses.
The most successful organizations will integrate AI governance into their strategic planning and risk management, treating it as core to responsible business operations rather than a peripheral compliance exercise. This integration requires commitment from boards and executive leadership, but the investment is essential for long-term organizational success in the AI-driven era.