Your Board Will Ask About AI Risk in 2026. Here's How to Have a Good Answer.

The FTC fined Cleo AI $17 million. Insurers are adding AI exclusions to liability policies. Here's how to build a defensible AI governance program before your next board meeting.

By Meetesh Patel

The FTC just fined Cleo AI $17 million for misleading claims about its AI-powered cash advance service. Insurers are quietly adding AI exclusions to professional liability policies. And if your company does any business in Europe, the EU AI Act's penalty regime went live in August, with fines reaching 7% of global revenue.

For most founders and executives, AI liability still feels abstract. That's about to change. Your next board meeting, investor call, or enterprise sales cycle will surface questions about AI governance that generic policies won't answer. The good news: building a defensible AI program isn't about stopping innovation. It's about documenting the decisions you're already making.

The Enforcement Picture Is Sharpening

The Federal Trade Commission's "Operation AI Comply" survived the administration change. In 2025 alone, the FTC brought enforcement actions against Cleo AI ($17 million settlement for deceptive AI claims), DoNotPay (false "robot lawyer" marketing), and IntelliVision (barred from making misleading claims about its facial recognition technology). The common thread: companies overpromising what their AI can do and underdelivering on consumer outcomes.

The SEC has joined the conversation too. In August 2025, the agency launched an AI Task Force and created a Chief AI Officer role to oversee AI-related disclosures. If you're making claims about AI capabilities in investor materials or public filings, expect scrutiny. The SEC's 2025 Compliance Plan specifically flags AI governance and disclosure accuracy as examination priorities.

Meanwhile, the EU AI Act's penalty regime is no longer theoretical. Since August 2, 2025, violations can trigger fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher, for prohibited AI practices, and up to EUR 15 million or 3% for other violations. If your AI tools touch European customers or data subjects, you're in scope. Where you're headquartered doesn't matter.

The Board Oversight Gap

Here's the gap that should concern you: according to a McKinsey analysis, 88% of organizations report using AI in at least one business function. But only 39% of Fortune 100 companies have disclosed any form of board oversight of AI. Fewer than 25% have board-approved AI policies.

That gap matters. A 2025 MIT CISR study found that organizations with digitally and AI-savvy boards outperform their peers by 10.9 percentage points in return on equity. Those without trail their industry average by 3.8%.

The implication for founders: sophisticated investors and acquirers will ask about AI governance. They'll want to see documented policies, risk assessments, and clear lines of accountability. If you're planning a financing round or exit in the next 18 months, building this infrastructure now gives you a cleaner story.

For larger companies, the fiduciary angle is sharpening. Directors who fail to ensure reasonable AI oversight may face derivative claims if an AI system causes significant harm. The legal theories are still developing, but the trajectory is clear: AI risk is becoming a board-level responsibility.

What "Good" Looks Like: A Practical Framework

The NIST AI Risk Management Framework provides a solid foundation for US organizations. It's voluntary, flexible, and increasingly recognized as a baseline for demonstrating reasonable care. The framework organizes around four core functions: Govern, Map, Measure, and Manage.

But frameworks alone don't reduce liability. Doing the work does. Here's what that looks like in practice:

Governance Structure. You need someone accountable for AI risk. In larger organizations, this typically means a cross-functional AI governance committee with representatives from legal, privacy, security, product, and compliance. At startups, it might be your General Counsel or a designated AI lead who reports to the CEO. The key is documented accountability. According to Deloitte's Board Governance Roadmap, about 40% of companies now assign AI oversight to a board-level committee. That's nearly four times the rate from 2024.

Use Case Inventory. The average enterprise runs 66 different GenAI applications, with 10% classified as high-risk. You can't govern what you don't know about. Maintaining a registry of AI systems, their purposes, risk classifications, and responsible owners is table stakes. This inventory becomes essential when responding to regulatory inquiries or conducting due diligence.
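
To make this concrete, here's a minimal sketch in Python of what one entry in such a registry might capture. The field names, risk tiers, and example values are illustrative assumptions, not a prescribed schema; a spreadsheet with the same columns works just as well.

  from dataclasses import dataclass
  from datetime import date
  from enum import Enum

  class RiskTier(Enum):
      LOW = "low"        # e.g., scheduling assistants, document summarization
      MEDIUM = "medium"  # e.g., customer-facing chatbots, internal drafting tools
      HIGH = "high"      # e.g., hiring, credit, or healthcare decisions

  @dataclass
  class AIUseCase:
      name: str                   # internal name for the system
      vendor: str                 # third-party provider, or "in-house"
      purpose: str                # business function the system supports
      risk_tier: RiskTier         # output of your risk tiering exercise
      owner: str                  # accountable individual or team
      data_categories: list[str]  # kinds of data the system touches
      last_reviewed: date         # when governance last assessed it

  registry = [
      AIUseCase(
          name="resume-screening-assistant",       # hypothetical example
          vendor="ExampleVendor (hypothetical)",
          purpose="rank inbound applicants for recruiter review",
          risk_tier=RiskTier.HIGH,                 # consequential employment decision
          owner="Head of People Operations",
          data_categories=["applicant PII", "employment history"],
          last_reviewed=date(2025, 11, 1),
      ),
  ]

Whatever the format, the test is simple: for every deployed system, someone should be able to say what it does, who owns it, and how risky it is.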

Tiered Controls. Not every AI application needs the same oversight. Low-risk uses like scheduling assistants or document summarization need basic acceptable use policies. High-risk uses like automated hiring decisions, credit underwriting, or healthcare recommendations need human-in-the-loop review, bias testing, and audit trails. Match your controls to your risk.
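
One way to operationalize tiering is to map each tier to a minimum control set and check deployments against it. The sketch below continues the Python illustration above; the control names and tier assignments are assumptions, so substitute your own taxonomy and regulatory obligations.

  REQUIRED_CONTROLS = {
      "low": {"acceptable_use_policy"},
      "medium": {"acceptable_use_policy", "vendor_contract_review",
                 "output_monitoring"},
      "high": {"acceptable_use_policy", "vendor_contract_review",
               "output_monitoring", "human_in_the_loop_review",
               "bias_testing", "audit_trail"},
  }

  def missing_controls(risk_tier: str, implemented: set[str]) -> set[str]:
      """Return the controls required for a tier that are not yet in place."""
      return REQUIRED_CONTROLS[risk_tier] - implemented

  # Example: a high-risk hiring tool that so far has only an acceptable use policy
  gaps = missing_controls("high", {"acceptable_use_policy"})
  print(gaps)  # the five controls still missing (set ordering will vary)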

Documentation Discipline. When things go wrong, regulators and plaintiffs ask the same question: what did you know and when did you know it? Maintaining technical documentation, training data provenance records, and decision audit trails demonstrates due diligence. It also makes compliance with new regulations far more manageable.
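
As a concrete illustration of an audit trail, a deployer might append a short record for every consequential decision an AI system makes. The sketch below writes JSON lines to a local file; the field names are assumptions, and a production version would also need retention rules, access controls, and care about what sensitive data gets logged.

  import json
  from datetime import datetime, timezone

  def log_decision(path, use_case, model_version, input_summary,
                   output_summary, human_reviewer=None):
      """Append one decision record to a JSON-lines audit log."""
      record = {
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "use_case": use_case,              # ties back to the registry entry
          "model_version": model_version,    # provenance for later review
          "input_summary": input_summary,    # summarize; avoid logging raw PII
          "output_summary": output_summary,
          "human_reviewer": human_reviewer,  # None if no human was in the loop
      }
      with open(path, "a", encoding="utf-8") as f:
          f.write(json.dumps(record) + "\n")

  # Hypothetical usage tied to the registry example above
  log_decision(
      "ai_decisions.jsonl",
      use_case="resume-screening-assistant",
      model_version="vendor-model-2025-10",
      input_summary="application #4821 (hypothetical)",
      output_summary="ranked in top decile; flagged for recruiter review",
      human_reviewer="recruiting lead",
  )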

The Insurance Problem

Here's a risk that's flying under most companies' radar: your professional liability insurance may not cover AI-related claims. At all.

Carriers including Berkley and Hamilton Insurance Group have introduced AI exclusions to D&O and E&O policies. Berkley's "absolute" AI exclusion eliminates coverage for any claim "based upon, arising out of, or attributable to" AI use, sweeping in everything from AI-generated content failures to inadequate AI governance to chatbot communications.

The exclusions are broad. They're often buried in endorsement language. And many policyholders don't know they exist until they file a claim.

A few specialized carriers are moving the other direction. Armilla Insurance Services launched an AI liability policy in April 2025 that explicitly covers AI-specific perils like hallucinations, model performance degradation, and algorithmic failures. AIUC offers coverage up to $50 million for losses caused by AI agents. Munich Re has been offering AI coverage since 2018.

Your immediate action: pull your D&O and E&O policies and search for AI-related language. If you find exclusions, or if the policies are silent on AI, talk to your broker about coverage gaps. According to the American Bar Association, this is one of the most common blind spots in corporate risk management right now.

Vendor Contracts: Where Liability Gets Allocated

If you're using third-party AI tools (and nearly every company is), the vendor contract determines who bears the risk when something goes wrong. The current market is not favorable to buyers.

According to recent industry data, only 33% of AI vendors provide indemnification for third-party intellectual property claims. Many contracts disclaim indirect and consequential damages, cap liability at annual fees, and offer no indemnity for AI-generated content that turns out to be infringing, biased, or simply wrong.

Here's what to negotiate:

IP Indemnification. Require the vendor to indemnify you for IP infringement claims arising from their model or training data. Watch for carve-outs that swallow the coverage, like exceptions for "customer inputs" or "modified outputs."

Regulatory Compliance Warranties. New AI regulations, including the EU AI Act and state laws like the Colorado AI Act, impose obligations on both AI developers and deployers. Your contract should clarify who's responsible for which obligations. Get an explicit warranty that the vendor's service complies with applicable law.

Liability Caps That Aren't Illusory. A broad indemnity with a $50,000 liability cap provides little real protection. Push for "super-caps" on critical risks: IP infringement, data breaches, regulatory violations. If the vendor won't negotiate, that tells you something about their confidence in their product.

Data Use Restrictions. Restrict the vendor from using your data or outputs for model training without explicit consent. This protects trade secrets and reduces the risk that your proprietary information ends up in the vendor's foundation model.

State Law Is Fragmenting (But Not Blocking)

States introduced over 1,080 AI-related bills in 2025. Only 118 became law, an 11% passage rate. The landscape is messy but manageable.

Colorado delayed its AI Act until June 30, 2026. When it takes effect, it will impose requirements on "high-risk" AI systems and prohibit algorithmic discrimination. If you're deploying AI for consequential decisions affecting Colorado consumers, start your compliance assessment now.

California enacted SB 53, targeting "large frontier developers" with annual revenue exceeding $500 million. Most companies won't be directly covered, but the law signals where regulation is heading.

Texas passed TRAIGA, effective January 1, 2026. It's narrower than Colorado's law, focused primarily on prohibiting AI uses that encourage harm, enable illegal discrimination, or facilitate crimes.

Federal preemption efforts failed in 2025. A provision in an early version of budget legislation would have imposed a 10-year moratorium on state AI regulation, but it was stripped from the final bill. States will continue legislating. The practical path forward is building governance infrastructure flexible enough to adapt.

MD/DC Considerations

Neither Maryland nor the District of Columbia has enacted broad AI legislation. But organizations operating in the federal contracting space should monitor agency-specific requirements. The Trump administration's Executive Order 14179 directs federal agencies to develop AI Action Plans and update procurement policies. If you're selling to the government, AI governance documentation is increasingly part of the procurement conversation.

Practical Takeaways

This week:

  • Pull your D&O and E&O insurance policies and search for AI-related exclusions or endorsements
  • Designate a single point of accountability for AI risk, even if informal
  • Create a basic inventory of AI tools currently deployed across your organization

This quarter:

  • Form a cross-functional AI governance committee or working group with legal, security, and product representation
  • Review and renegotiate key AI vendor contracts for indemnification, liability caps, and data use restrictions
  • Conduct a risk tiering exercise: classify each AI use case as low, medium, or high risk
  • Develop an AI acceptable use policy for employees, including guidance on approved tools and data handling

Before your next board meeting:

  • Prepare a board-ready summary of AI governance activities, risk posture, and insurance coverage
  • Document your NIST AI RMF alignment or equivalent governance framework
  • Brief directors on material AI risks and mitigation measures

Watchlist

August 2, 2026: EU AI Act requirements for high-risk AI systems listed in Annex III begin to apply. If you're developing or deploying AI systems in healthcare, critical infrastructure, HR, or financial services that touch European markets, compliance deadlines are real.

June 30, 2026: Colorado AI Act takes effect. First broad state AI law covering private sector high-risk AI systems.

Q1 2026: California Privacy Protection Agency expected to finalize regulations on automated decision-making technology and risk assessments.

Ongoing: FTC "Operation AI Comply" enforcement actions continue. The agency has signaled it will require evidence of consumer harm but remains focused on deceptive AI marketing claims.

2026 Legislative Sessions: Multiple states have AI bills pending. Watch Connecticut, Illinois, and New York for movement on broad AI governance requirements.

Looking Ahead

The organizations that will operate with the most freedom in this environment are those that can demonstrate reasonable AI governance. Not because the law requires it everywhere yet, but because customers, investors, insurers, and partners increasingly expect it. Building that infrastructure now, before the next enforcement action or regulatory deadline, creates optionality. Waiting means playing catch-up.

Disclaimer: This article is provided for informational purposes only and does not constitute legal advice. The information contained herein should not be relied upon as legal advice and readers are encouraged to seek the advice of legal counsel. The views expressed in this article are solely those of the author and do not necessarily reflect the views of Consilium Law LLC.