
AI’s Goldilocks Problem: Get AI Risk Management “Just Right” or Face Uninsured Exposure

August 30, 2024

From optimizing revenue to refining strategy and decision-making to safeguarding data, AI’s potential benefits for businesses and their management are enormous. However, businesses and their corporate fiduciaries face an immense Goldilocks problem: too little reliance on AI can leave a company behind its competitors and may breach standards of care, while too much reliance can produce unpredictable results, damaging a company’s operations and reputation, creating significant risks and legal challenges, and threatening insurance coverage. Understanding these risks is essential to capitalizing fully on the power of AI while getting risk management just right. This article outlines critical AI risk areas and the vital steps your company can take to mitigate them.

Do you understand what AI is and what it does?

Generative AI is a form of machine learning (“ML”) and broadly refers to a group of technologies that use an algorithm, trained on a finite data set, to process data, draw new inferences, and make predictions beyond those contained in the original data. Developers can implement this technology in a variety of contexts and for a multitude of purposes. For example, ChatGPT is built on a large language model (“LLM”): an AI model that can generate, process, and classify natural language for general-purpose tasks.
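
For readers unfamiliar with how such models “learn,” the minimal Python sketch below illustrates the idea of inference beyond a finite training set. The data, feature names, and use of the scikit-learn library are illustrative assumptions, not anything drawn from this article or its sources.

```python
# Minimal, illustrative sketch: a model trained on a small, finite data set
# makes a prediction about an input it never saw. All data and feature names
# here are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [annual_spend_k, support_tickets] -> churned (1) or retained (0)
X_train = [[12, 1], [40, 0], [8, 5], [35, 1], [5, 7], [50, 0]]
y_train = [0, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X_train, y_train)

# The model now infers an outcome for a customer outside the original data.
new_customer = [[20, 4]]
print(model.predict(new_customer))        # predicted class for the unseen input
print(model.predict_proba(new_customer))  # the probabilities behind that inference
```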

But state, federal, and international regulations are unsettled and disjointed on the definition of “AI.” Many federal agencies use the definition of AI in the National Artificial Intelligence Initiative Act: “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”[1] The White House has pointed out that AI, defined as such, can “exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”[2] Utah, the first state in the US to enact a specific AI-use statute, defines AI as “an artificial system that: (i) is trained on data; (ii) interacts with a person using text, audio, or visual communication; and (iii) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight.”[3]

The EU defines AI more specifically as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that . . . infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions[.]”[4] This definition does not cover simpler traditional software systems or programming approaches. The EU AI Act prohibits certain uses it deems an unacceptable risk and places most of its obligations on providers and developers. American companies may need to account for all three definitions, given the EU AI Act’s extra-territorial effects.

AI is also difficult to define because it constantly evolves, with modern AI capabilities exploding over the past few years.[5] AI is developing exponentially and has already been incorporated into robo-advisors, trading systems, legal and compliance functions, customer service, and more. AI has already outpaced legal frameworks for intellectual property,[6] legal ethics,[7] and data security,[8] to name just a few. This rapid evolution demands constant vigilance and adaptation from businesses and insurers. Without tested legal frameworks to allocate liability, companies may find it difficult to make decisions with any certainty about their exposure. Further, AI relies on the vast amounts of data humans continually produce, and thus AI changes as rapidly as data is fed into it. This reliance also subjects AI to many of the same biases and shortcomings that plague its creators.

How is AI use regulated?

The regulatory landscape of AI is constantly evolving, which makes it difficult for insureds to use or develop AI with certainty about their legal exposure. In the United States, the FTC has ramped up efforts to crack down on unfair and deceptive uses of AI.[9] The National Association of Insurance Commissioners is actively exploring AI’s implications for the insurance sector, focusing on consumer protection, regulatory frameworks, and technological advancements.[10] Meanwhile, the EU finalized the world’s first comprehensive AI law earlier this year.[11] The very same day, Utah became the first state in the US to enact an AI-focused consumer protection law,[12] and more states have followed suit.

Thereafter, Colorado enacted a law similar to the EU AI Act. Colorado’s comprehensive AI Act adopts a risk-based approach to AI regulation, imposing specific requirements on developers and deployers of AI systems concerning notice, documentation, disclosure, and impact assessments. The Colorado AI Act focuses primarily on consumer protection and requires companies to use reasonable care to protect consumers from the risks of AI discrimination.[13] Both the Colorado AI Act and the EU AI Act place emphasis on regulating certain high-risk industries, including education, employment, finance or lending services, healthcare, housing, insurance, and legal services.

Additionally, the White House issued an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Among the EO’s mandates was a directive to the National Institute of Standards and Technology (NIST) to update its AI Risk Management Framework to improve the safety, security, and trustworthiness of AI systems. NIST’s AI framework has substantial legal significance because it is one of the most frequently cited risk management frameworks in the US.[14] The NIST AI Risk Management Framework (AI RMF) helps organizations identify AI risks and propose responsive actions, providing more than 200 actions across 12 different risk categories.[15] Notable elements of the NIST AI RMF include:

  • Trustworthiness: NIST’s AI RMF draws on standards from the International Organization for Standardization (ISO), including validity, reliability, resilience, security, and fairness, to provide a means to assess whether an AI system is trustworthy.
  • Monitoring: The AI RMF calls for continuous monitoring and testing of AI systems to ensure they function as designed and meet performance expectations. Organizations are also encouraged to regularly improve systems based on findings from monitoring.
  • Controls, training, and evidence: The AI RMF contains its own set of controls, employee training information, and suggested evidence to be collected.
  • Ethical Uses: The framework helps organizations use AI in an ethical manner while promoting accountability and transparency among developers and deployers.

What risks does AI create for businesses, officers, and directors?

AI is used in applications for products ranging from automobiles to healthcare to Internet-of-things (“IoT”) devices.[16] The speed at which data is generated and processed has already greatly benefitted businesses and insurers. “Traditional statistical models cannot handle the large quantity of data [that AI can process]. As AI can execute complex analyses and computations at a speed impossible for humans, it generates faster insights.”[17]

While AI minimizes some risks, it also introduces new exposures, in part because AI depends on the massive amounts of data humans feed it. For example, a business training AI with sensitive personal information may face financial damage if that data is improperly used or disclosed.[18] The effectiveness of merger due diligence can be eviscerated if a company blindly relies on AI that cannot filter out inaccurate information.[19] AI also introduces new public-facing service endpoints, creating more entrances for cyberattacks; failing to take appropriate precautions against these attacks can lead to a breach of fiduciary duties and civil liability. Moreover, faster data processing means a quicker rate of errors, more complex vulnerabilities, and more potential legal violations. AI thus creates questions about legal exposure, risk management, adequate insurance coverage, and the discharge of fiduciary duties.

An organization’s reputation is a cornerstone of its business model, and misuse of AI can harm that reputation. For example, there is ample evidence that bias is “baked into the outcomes AI is asked to predict.”[20] AI trained using data from primarily Caucasian or male subjects may mistreat racial minorities and women. This tendency has already led to discrimination in housing, financial lending, hiring, and other corporate contexts.[21]

Do your company and management have a “Black Box” Problem?

AI systems can experience malfunctions and failures resulting from improper maintenance, design defects, or human error. These defects can lead to financial loss, property damage, or bodily injury.[22] For example, generative AI has a “well-documented tendency to provide plausible-sounding answers that are factually incorrect or so incomplete as to be misleading[.] AI might generate a description of a product with non-existent features or provide product instructions that are dangerous when implemented.”[23] Such lies by omission may make companies liable for deceptive marketing or for injuries caused by defects in the AI components of their products.

Critically, AI developers keep their algorithms under lock and key. This lack of transparency makes it hard to determine the cause of errors. Insureds, in turn, may not fully understand risks when purchasing AI products. Meanwhile, insurers cannot differentiate between covered unintended errors and intentional acts that would be excluded from coverage. Furthermore, this confusion hinders risk assessment and accurate pricing.

How does using or not using AI affect my fiduciary duties and insurance coverage?

Boards and management owe fiduciary duties to their company and stakeholders. The law generally recognizes that corporate directors and officers often make hard choices and avoids second-guessing decisions made with reasonable information. This core legal principle is called the business judgment rule (BJR). The BJR fundamentally protects good-faith decisions that, in retrospect, prove erroneous; this encourages innovation and promotes risk-taking that can yield high returns.[24] Decisions that fall outside the BJR’s protection are those made without proper diligence or good faith, such as fraud, self-dealing, and indecision.[25]

If a board over-relies on flawed AI tools to make business decisions, it may fall outside BJR protection and breach its fiduciary duties.[26] This is especially likely when the goals and values of the AI, the corporate fiduciary, the shareholders, and the data subjects are not aligned.[27] When a fiduciary uses a human consultant, “the fiduciary can at least do reasonable due diligence about that human’s training background, experience, and level of performance for other corporate clients. An algorithm, however, requires a new level of expertise input that might make due diligence difficult or impossible to conduct. . . . [A] corporate fiduciary relying on an AI tool is unlikely to know the expertise of the designer, let alone what, if any, experts were involved in the design process, or what these experts’ credentials are.”[28] Moreover, the black box problem makes it difficult to explain why an AI system reaches its conclusions.[29] Deep learning AI systems may recommend a course of action that leads to negative consequences and be unable to explain the rationale. For example, in 2010 and 2015, AI-managed ETFs caused crashes in the stock market, resulting in trillions of dollars in losses for investors, but human fiduciaries were left with the blame.[30]

Finally, AI may adversely impact corporate fiduciaries’ other duties. The recently established duty of supervision is a derivative of the duty of care and requires corporate leaders to “assure that a corporate information and reporting system, which the board concludes is adequate, exists[.]”[31] A sustained or systematic failure of the Board to exercise oversight may constitute bad faith for which fiduciaries may be personally liable. This duty to monitor and ensure adequate information and reporting systems certainly extends to AI systems. Thus, boards should implement reporting, information systems, and controls that govern the company’s use of AI technology.[32] Furthermore, passively deferring to an algorithmic decision may violate the duty of loyalty because such abdication constitutes bad faith.[33] Legal experts have even suggested imposing a new fiduciary duty of loyalty, an affirmative duty to protect sensitive information.[34]

At the same time, failing to incorporate AI into decision-making could also constitute a breach of fiduciary duty. Hence, the Goldilocks problem. “Given the inevitable pervasiveness of AI tools for corporate fiduciaries, those who fail to educate themselves about the existence of these tools and how to use them properly may soon find that such failures constitute a breach of their fiduciary duties.”[35] “Any director or executive officer (CEO, CFO, CLO/GC, CTO, and others) of a publicly held company who ignores the risks, and fails to capitalize on the benefits, of Generative AI does so at his or her individual peril because of the risk of personal liability for failing to properly manage a prudent GenAI strategy.”[36]

AI’s potential to create value will become an avenue for differentiation as management teams continue incorporating AI ever deeper into their operations.[37] But with this opportunity comes increased regulatory scrutiny of AI practices from regulatory agencies, Congress, and international authorities. The SEC, in particular, has pursued companies that make false or misleading statements regarding the use, value, and risks of their AI systems.[38] And the FTC recently signaled its intent to increase consumer protections for AI-related cybersecurity incidents.[39] These regulatory changes will likely increase demand for AI insurance while simultaneously shaping what acts are excluded from coverage.

The novel risks AI presents are compounded because experts have not yet determined how to remedy and apportion liability for AI-caused harm. At least one scholar has argued that AI is a judgment-proof agent and that the human principal who exercises control over the AI is responsible for the damages its AI agent causes.[40] The question of whether the AI itself or a human—and, if so, which human—is liable for harm can have a massive impact on insurance coverage. Further, regulatory change in the United States and abroad could demand policy exclusions or affect what constitutes a regulatory violation that leads to a denial of coverage.

How can corporate fiduciaries mitigate AI risk and avoid the Goldilocks problem?

Some scholars argue that these risks are not dissimilar to how human corporate executives make decisions, and thus, AI does not present insurmountable risks to fiduciary duty.[41] Others have argued that, under some state statutes, AI is a “person” whose opinion executives may be entitled to rely on when making informed decisions.[42] Critically, nearly half of IT executives expect AI devices to be on corporate boards by 2025.[43]

To avoid the risks AI presents and to maintain compliance with their fiduciary duties, Boards must supervise all AI-facilitated processes, ensure adequate data security controls, and review AI vulnerabilities at least annually. Board oversight of AI risks and risk mitigation is vital, so Boards should actively engage in overseeing AI strategy and implementation. “Any publicly held company that does not establish policies and procedures regarding its GenAI use is setting itself up for potential litigation by stockholders and vendors, customers, regulatory agencies, and other third parties.”[44]

A comprehensive AI governance policy should incorporate the following risk management principles:

  • Requirement for regular risk assessments for AI systems to identify potential vulnerabilities;
  • Development of mitigation strategies and implementation of controls to safeguard against identified risks;
  • Continuous monitoring of AI systems for emerging risks (a minimal illustration of such a check appears after this list); and
  • Establishment of incident response plans to handle AI-related failures and breaches effectively.
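
As one concrete illustration of the continuous-monitoring principle above, the Python sketch below flags possible drift in a model input by comparing its distribution in production against its distribution in the training data. The statistical test, threshold, and data are assumptions chosen for illustration, not controls prescribed by NIST, the EU AI Act, or any other authority discussed in this article.

```python
# Illustrative sketch of one continuous-monitoring check (not a prescribed control):
# compare the distribution of a model input in recent production traffic against its
# distribution in the training data and flag possible drift.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # baseline from training data
production_feature = rng.normal(loc=0.6, scale=1.0, size=1_000)  # hypothetical recent live inputs

statistic, p_value = ks_2samp(training_feature, production_feature)

DRIFT_ALPHA = 0.01  # hypothetical alert threshold
if p_value < DRIFT_ALPHA:
    # In practice, this alert would feed the incident response plan described above.
    print(f"Possible input drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```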

Furthermore, AI governance policies should clearly define roles and responsibilities for oversight of AI use. Assigned roles should include an AI ethics officer, a data protection officer, and AI project managers to ensure accountability and effective management of AI. Moreover, a company’s AI governance policy should establish an AI governance committee responsible for overseeing AI strategy, policy enforcement, and risk management. The governance committee, oversight officers, and AI project managers should provide regular updates and reviews to the Board on AI initiatives’ progress, challenges, and compliance status. Upon receiving any such updates, Boards should ask critical questions about AI’s strategic potential, risks, and ethical implications. When overseeing AI use and establishing governance policies, Boards should keep in mind the following ethical and legal considerations:

  • Bias mitigation: Boards must implement measures to identify and mitigate biases in AI systems to reduce the potential for erroneous, harmful, and discriminatory outputs (a minimal illustration of one such check appears after this list).
  • Transparency: It is important for companies and their Boards to maintain transparency and accountability in AI decision-making processes. AI transparency helps consumers understand and trust how AI systems work.
  • Compliance: Boards must also ensure compliance with data protection regulations and intellectual property rights. This includes establishing liability frameworks for AI-related errors and harms.
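
As an illustration of the bias-mitigation point above, the short sketch below computes a simple demographic parity check, comparing the rate of favorable outcomes a hypothetical model produces for two groups. The metric, the invented decisions and group labels, and the idea that a large gap warrants investigation are illustrative assumptions, not a legal or regulatory standard.

```python
# Illustrative bias check (not a legal standard): compare the rate of favorable
# model outcomes across two demographic groups ("demographic parity").
import numpy as np

# Hypothetical model decisions (1 = approved) and the group each applicant belongs to.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

rate_a = decisions[groups == "A"].mean()
rate_b = decisions[groups == "B"].mean()
parity_gap = abs(rate_a - rate_b)

print(f"Approval rate, group A: {rate_a:.2f}")
print(f"Approval rate, group B: {rate_b:.2f}")
print(f"Demographic parity gap: {parity_gap:.2f}")  # a large gap warrants further review
```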

In addition to establishing a comprehensive AI governance policy, companies should explore coverage products to further mitigate AI risks.

How can companies use insurance to mitigate AI risks?

AI governance and oversight steps are also crucial to ensuring that a company’s other risk management tools remain effective. Particular attention should be paid to Errors and Omissions (“E&O”) and other Professional Liability policies, which cover claims arising from negligence, errors, and omissions in providing professional services. Similarly, Directors and Officers (“D&O”) policies cover corporate executives’ assets in the event of claims alleging mismanagement or breach of fiduciary duty, making them critical to corporate decision-making and governance. Finally, Employment Practices Liability (“EPL”) policies provide coverage for claims related to discrimination, harassment, or wrongful termination. In these critical areas, businesses must avoid blind overreliance on AI. Yielding too much discretion to “the black box” is likely to lead to denial of E&O, D&O, and EPL insurance coverage and to claims of breaches of fiduciary duty. To avoid this pitfall, companies should prioritize AI models that are transparent and explainable to limit the “black box” problem and aid in risk analysis and liability determination.
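
One common way to make a model less of a “black box” is to measure which inputs actually drive its predictions. As a minimal sketch, the example below uses permutation importance from the scikit-learn library on a public demonstration dataset; the choice of model, data, and technique is an illustrative assumption, not a method required by any insurer or regulator cited in this article.

```python
# Illustrative explainability sketch: rank which inputs most influence a model's
# predictions by shuffling each feature and measuring how much accuracy drops.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)  # public demo dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Features whose shuffling hurts accuracy the most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```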

Traditional insurance products may soon become outdated if companies do not review and update them regularly, and in the age of AI, substantial gaps in current coverage exist. For example, human error remains a driving cause of cyber risk, and many insurance policies do not cover incidents, like fraudulent fund transfers, that improperly trained AI may worsen.[45] Insurers may exclude from cybersecurity coverage the use of unlawfully collected data to train AI.[46] Or intellectual property insurance that protects against infringement claims “arising from software or computer hardware” may contemplate infringement from fixed code but not from an AI algorithm.[47] To improve their posture vis-à-vis AI threats, businesses need insurance policies that integrate machine learning to more accurately reflect patterns in business, make data-driven decisions, and account for unknowns. Businesses may also want to consider specialized AI insurance. Some insurers have already begun offering new products tailored to the unique risks posed by AI, which potentially provide more comprehensive coverage than traditional policies and can help cover LLM hallucinations, algorithmic bias, regulatory investigations, IP infringement claims, and other class action lawsuits. In the changing landscape of AI, it is better to be safe than sorry. When obtaining coverage for AI use, companies should:

  • Evaluate and obtain AI-specific insurance coverage to address unique risks posed by AI;
  • Consider specialized AI insurance products for comprehensive coverage, including LLM hallucinations, algorithmic bias, and regulatory investigations;
  • Regularly review and update insurance policies to reflect the evolving AI landscape; and
  • Prepare for renewal discussions by articulating AI strategies, current uses, and compliance status.

On the other hand, these new products are not without their limits. As with cyber insurance before it, AI insurance premiums may be high and coverage limits may be low until insurers have more certainty about the risks AI presents. For this reason, insureds need to understand the technology that underlies AI. Furthermore, new AI-related insurance exclusions are sure to come. Insurers may soon seek to exclude from coverage:

  • Losses stemming from intentional misuses of AI;
  • Standard software failures;
  • Cybersecurity breaches caused by vulnerabilities not accounted for by existing cyber insurance;
  • Non-compliance with data privacy and other regulations, particularly as the regulatory landscape of AI evolves; and
  • Other unforeseen events caused by the unpredictability of AI behavior.

These strategies are essential in the context of insurance renewal. Companies should prepare for renewal discussions by articulating their AI strategy, the types of AI they currently use, the credentials of those overseeing and monitoring AI use, and regulatory compliance status.

How can your company get the most out of AI?

To make accurate and responsible AI decisions, directors and management must also be adequately trained regarding how AI works and, by extension, its potential for failure. When corporate fiduciaries understand AI's risks, they can select AI structures that best fit their business needs and vulnerabilities and minimize the possibility of breach of fiduciary duty. In particular, they must “do more than simply shop for good AI products, since their fiduciary duty will not end with the purchase process for such products.”[48]

To show that their reliance on AI products fulfills their fiduciary duties, at minimum, the board of any company should incorporate the following best practices into their AI framework:

  • Every member of the board and executive teams must understand how AI works, its different forms, and how the company uses AI. This understanding must include what data are being used to train any AI the company will use, reinforced by regular reports from the IT department.
  • Risk-mitigation strategies should be institutionalized by creating an AI-specialized team across the organization. The business, legal, technology, and public relations departments should all take responsibility for evaluating and mitigating AI risk.
  • The board should include well-rounded perspectives on AI and even establish an AI committee that oversees AI risks and opportunities.
  • Executives should understand what regulatory requirements apply to their AI infrastructure. Most importantly, disclosure protocols should account for how the Securities and Exchange Commission’s, the FTC’s, and various states’ data breach disclosure laws will affect the company’s AI usage.
  • Technological and HR mechanisms should be deployed to monitor the performance of any AI controls regularly, assess their impact on business indicators, and address any weaknesses at least annually and following any AI-related incident.
  • Clear and consistent communication should be maintained with legal counsel, data security vendors, and internal IT and public relations teams to ensure compliance, minimize AI risk, standardize AI usage, and proactively manage incident response.
  • All AI protocols and procedures should be written, accessible, and aligned with ethical standards, and should be included in disclosures to regulators as appropriate.[49]
  • Corporate leaders should understand how the spectrum of AI risks affects their business profile and insurance needs, ensure the reliability of data used in AI, and implement a data security policy and other governance and oversight mechanisms that account for the threats AI poses.
  • AI initiatives should be integrated into the organization’s strategic planning processes. Regular reviews should be conducted to ensure AI projects strategically align with business goals and objectives.

Conclusion

AI is growing not just rapidly but exponentially. Daily innovations in AI tooling drive faster progress in generative AI, and vice versa. AI tools and devices will only proliferate, embed themselves in existing solutions, and become the new normal. LLMs may develop into even more complex multimodal models, deepfake technology will become more convincing, and more manual functions will be automated. The implication for legal and insurance frameworks is an increased rate of obsolescence. As previously discussed, regulatory change is inevitable, whether that means permitting AI to sit on corporate boards, more jurisdictions adopting comprehensive AI regulations, or allocating liability for AI harms to their human controllers.

Specialized AI risk insurance is thus vital to promoting growth in the AI field by giving businesses a financial safety net when they adopt AI technologies and facilitating a responsible approach to the risks presented by AI. Insurers should not seek to stifle the development of new and better AI but rather should enable careful and thoughtful adoption through risk management and comprehensive insurance products. By proactively addressing AI-specific insurance complexities with brokers and counsel, businesses can ensure adequate protection and customized risk mitigation strategies as they navigate this rapidly evolving technological landscape. When in doubt, companies should contact a legal services firm with a comprehensive knowledge of how AI technology works and the risks it presents.


[1] See 15 U.S.C. § 9401(3).

[2] Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023).

[3] Utah S.B. 149 “Artificial Intelligence Policy Act” (2024).

[4] Artificial Intelligence Act, 5662/24, Article 3(1) (March 13, 2024).

[5] Charlie Giattino et al., Artificial Intelligence, Our World in Data (https://ourworldindata.org/artificial-intelligence#).

[6] Brittany Jones, Navigating the Intersection of AI and Intellectual Property Law, JD Supra (March 5, 2024) (https://www.jdsupra.com/legalnews/navigating-the-intersection-of-ai-and-9469978/).

[7] Molly Bohannon, Lawyer Used ChatGPT In Court—And Cited Fake Cases. A Judge Is Considering Sanctions, Forbes (June 8, 2023) (https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/?sh=11a68d0b7c7f).

[8] Commissioner Lina Khan, Remarks at the Federal Trade Commission PrivacyCon, March 6, 2024.

[9] See, e.g., Federal Trade Commission, FTC Launches Inquiry into Generative AI Investments and Partnerships (Jan. 25, 2024).

[10] Jim Charron, Brad John & Matt King, Artificial Intelligence: Insurance Considerations and Use Cases, The Hartford Financial Services Group, Inc. (Nov. 3, 2021) (https://www.thehartford.com/insights/technology/artificial-intelligence-insurance-considerations).

[11] European Union Parliament, EU AI Act: first regulation on artificial intelligence (Aug. 6, 2023) (https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence).

[12] Utah S.B. 149.

[15] NIST AI Risk Management Framework (AI RMF) (https://www.techpolicy.press/unpacking-new-nist-guidance-on-artificial-intelligence/; https://www.nist.gov/itl/ai-risk-management-framework).

[16] Charron, John & King, Artificial Intelligence: Insurance Considerations and Use Cases.

[17] National Association of Insurance Commissioners Center for Insurance Policy and Research, Artificial Intelligence (Jan. 31, 2024) (https://content.naic.org/cipr-topics/artificial-intelligence).

[18] The Mahoney Group, Navigating the Risks and Exposures of Artificial Intelligence: Essential Insurance Coverage for AI-Related Claims (May 8, 2023) (https://www.mahoneygroup.com/artificial-intelligence-insurance-claims/).

[19] Thomas Belcastro, Getting on Board with Robots: How the Business Judgment Rule Should Apply to Artificial Intelligence Devices Serving as Members of a Corporate Board, 4 Geo. L. Tech. Rev. 263, 275-76 (2019).

[20] Olga Akselrod, How Artificial Intelligence Can Deepen Racial and Economic Inequities, American Civil Liberties Union (July 13, 2023).

[21] Id.

[22] The Mahoney Group, Navigating the Risks and Exposures of Artificial Intelligence: Essential Insurance Coverage for AI-Related Claims.

[23] John Buchanan, Stuart Irvin & Megan Mumford Myers, Generative AI Loss Adds New Risk Area to Insurance Policies, Bloomberg Law (May 9, 2023).

[24] Lori McMillan, The Business Judgment Rule as an Immunity Doctrine, 4 William & Mary Bus. L. Rev. 521, 525 (2013).

[25] Id.

[26] Ambuj Sonal & Tanay Jha, The Fiduciary Duty Dilemma: Exploring the Legality of AI-Assisted Decision Making by Directors, Chambers & Partners (June 13, 2023) (https://chambers.com/articles/the-fiduciary-duty-dilemma-exploring-the-legality-of-ai-assisted-decision-making-by-directors).

[27] Claire Boine, Fiduciary Principles in AI: Utilizing the Duty of Loyalty to Align Artificial Intelligence Systems with Human Goals, paper presentation, We Robot 2023, 13 (forthcoming).

[28] Alfred R. Cowger, Jr., Corporate Fiduciary Duty in the Age of Algorithms, 14 J. of Law, Tech. & The Internet 138, 162 (2023).

[29] Belcastro, Getting on Board with Robots: How the Business Judgment Rule Should Apply to Artificial Intelligence Devices Serving as Members of a Corporate Board, 4 Geo. L. Tech. Rev. at 273.

[30] Id. at 272.

[31] In re Caremark Int’l Inc. Deriv. Litig., 698 A.2d 959, 971 (Del. Ch. 1996).

[32] Joseph R. Tiano Jr., et al., The Duty of Supervision in the Age of Generative AI: Urgent Mandates for a Public Company’s Board of Directors and Its Executive and Legal Team, American Bar Association (March 26, 2024) (https://businesslawtoday.org/2024/03/the-duty-of-supervision-in-the-age-of-generative-ai-urgent-mandates-for-a-public-companys-board-of-directors-and-its-executive-and-legal-team/).

[33] Cowger, Jr., Corporate Fiduciary Duty in the Age of Algorithms, 14 J. of Law, Tech. & The Internet at 156.

[34] See generally Neil Richards & Woodrow Hartzog, A Duty of Loyalty for Privacy Law, 99 Wash. U. L. Rev. 961 (2021).

[35] Cowger, Jr., Corporate Fiduciary Duty in the Age of Algorithms, 14 J. of Law, Tech. & The Internet at 139.

[36] Tiano Jr., et al., The Duty of Supervision in the Age of Generative AI: Urgent Mandates for a Public Company’s Board of Directors and Its Executive and Legal Team.

[37] Timothy M. Sullivan, Artificial intelligence: The new ESG?, Willis Towers Watson (March 28, 2024).

[38] Id.

[39] Commissioner Lina Khan, Remarks at the Federal Trade Commission PrivacyCon, March 6, 2024.

[40] Anat Lior, AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy, 46 Mitchell Hamline L. Rev. 1043 (2020).

[41] Belcastro, Getting on Board with Robots: How the Business Judgment Rule Should Apply to Artificial Intelligence Devices Serving as Members of a Corporate Board, 4 Geo. L. Tech. Rev. at 272.

[42] See Keith Bishop, Does The Business Judgement Rule Protect Decisions Based On AI?, JD Supra (April 25, 2023) (https://www.jdsupra.com/legalnews/does-the-business-judgement-rule-5104959/).

[43] Belcastro, Getting on Board with Robots: How the Business Judgment Rule Should Apply to Artificial Intelligence Devices Serving as Members of a Corporate Board, 4 Geo. L. Tech. Rev. at 272.

[44] Tiano Jr., et al., The Duty of Supervision in the Age of Generative AI: Urgent Mandates for a Public Company’s Board of Directors and Its Executive and Legal Team.

[45] Mike Pangilinan, How is AI impacting the cyber insurance landscape?, InsuranceBusiness (Dec. 19, 2023) (https://www.insurancebusinessmag.com/ca/news/cyber/how-is-ai-impacting-the-cyber-insurance-landscape-470720.aspx).

[46] See Rachel Curry, Companies want to spend more on AI to defeat hackers, but there’s a catch (“Policy around things like the use of generative AI can mitigate [the threat of uncovered intentional internal data misuses], but cyber blockades are also key”).

[47] Buchanan, Irvin & Myers, Generative AI Loss Adds New Risk Area to Insurance Policies.

[48] Cowger, Jr., Corporate Fiduciary Duty in the Age of Algorithms, 14 J. of Law, Tech. & The Internet at 145.

[49] Tiano Jr., et al., The Duty of Supervision in the Age of Generative AI: Urgent Mandates for a Public Company’s Board of Directors and Its Executive and Legal Team.