
S&P 500 AI Risk Disclosure Hit 83% in Two Years. Board AI Expertise? Still 2.7%.

Don Ho · 5 min read

Last updated: April 27, 2026

The Conference Board dropped a report on April 22 that should land on every general counsel’s desk this week. From 2023 to 2025, the share of S&P 500 companies disclosing AI as a material risk climbed from 12% to 83%. In two years, AI went from a footnote to a near-universal 10-K disclosure. Meanwhile, the percentage of S&P 500 directors with disclosed AI expertise sits at 2.7%, up from 1.5% four years earlier.

That is the entire story. Risk acknowledgment has gone vertical. Board competence to oversee that risk has barely moved. Plaintiffs’ lawyers, regulators, and proxy advisors will read this report. So should you.

The numbers in plain English

The Conference Board surveyed 130 executives, mostly from US public companies, and pulled disclosure data through December 2025. Five findings matter:

83% of S&P 500 companies now disclose AI as a risk factor. Up from 12% in 2023. There is no remaining cover for the company that says nothing about AI in its risk factors. The disclosure norm is now universal.

2.7% of S&P 500 directors have disclosed AI expertise. For comparison, 27% have cybersecurity expertise and 51% have technology expertise. AI has become a top-five enterprise risk while the boards overseeing it have less AI fluency than they had cybersecurity fluency a decade ago.

Only 23% of executives say their board is highly fluent in AI. Half say moderate. A quarter say low or none.

9% say their company is “very prepared” for AI regulation. 58% say “somewhat prepared.” 28% say “early stages.” 5% say not prepared at all. The Colorado AI Act lights up June 30. The EU AI Act logging deadline hits in August. Most public companies are not ready.

Only 21% of companies plan to add directors with AI or technology expertise. Most are betting on training existing directors instead. Whether that is realistic depends entirely on how the directors approach it.

Why this is a litigation roadmap, not a governance update

When 83% of your peers disclose a risk and your board has no member with disclosed expertise to oversee it, you have built a Caremark claim into your proxy.

A Caremark claim is the Delaware doctrine that holds directors liable for failing to monitor known risks. Marchand v. Barnhill in 2019 made these claims viable again. Recent cases like McDonald’s and Boeing have widened the lane. The pattern is straightforward: a known material risk plus weak board-level oversight equals a fiduciary duty claim that survives a motion to dismiss.

The Conference Board report is the kind of document plaintiffs’ lawyers screenshot. “By 2025, 83% of S&P 500 companies disclosed AI as a material risk. Defendant Company filed a 10-K acknowledging this risk while assigning oversight to an audit committee with no AI-fluent members and conducting no documented director education on AI risk.” That is a complaint, not a hypothetical.

Add to that the SEC’s 2024 cybersecurity disclosure rules, which gave plaintiffs a template for arguing that disclosure quality matters. The same logic will apply to AI. If you disclose AI as a material risk and your board cannot answer basic questions about how the company governs it, the disclosure itself becomes evidence of inadequate oversight.

What the smart GCs are doing this quarter

Three moves separate companies that will weather the next two years from companies that will be sued by their own shareholders.

Map AI oversight to a specific committee with named accountability. Audit, technology, or risk. Pick one. Update the committee charter. Document the assignment in the proxy. “The board oversees AI risk holistically” is the answer that gets you sued. “The Risk Committee oversees AI governance and reports quarterly to the full board” is the answer that survives a Caremark complaint.

Build a director education curriculum and document it. Not a one-hour vendor pitch. A real curriculum: AI capabilities and limits, model risk management, third-party AI vendor diligence, regulatory landscape, incident response. Bring in outside counsel and outside technical experts. Take attendance. The 26% of executives who say their company will focus on board AI education are doing the right thing. The 74% who said something else are exposed.

Inventory your AI deployments and create a reporting line to the board. You cannot oversee what you cannot see. Most companies have shadow AI. Sales teams running ChatGPT. Engineering using Claude. Marketing using a half-dozen specialized tools. The board needs a quarterly report that shows what AI is in production, where the model risk lives, who owns each deployment, and what controls are in place. If you cannot produce that report, you have a governance problem before you have a regulatory problem.
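That inventory is easier to keep current, and easier to hand to the board, if it lives as structured data rather than a slide deck. Here is a minimal sketch in Python; the schema, field names, and risk tiers are illustrative assumptions, not a regulatory standard or anything prescribed by the Conference Board report:

```python
from dataclasses import dataclass

# Hypothetical schema for an AI deployment inventory.
# Field names and risk tiers are illustrative only.
@dataclass
class AIDeployment:
    name: str            # e.g. "Claims triage model"
    owner: str           # named accountable executive
    vendor: str          # "internal" or a third-party provider
    risk_tier: str       # "high", "medium", or "low"
    controls: list[str]  # documented controls in place

def board_report(inventory: list[AIDeployment]) -> str:
    """Summarize the inventory for a quarterly board packet."""
    lines = [f"AI deployments in production: {len(inventory)}"]
    for tier in ("high", "medium", "low"):
        items = [d for d in inventory if d.risk_tier == tier]
        lines.append(f"  {tier}-risk: {len(items)}")
        for d in items:
            gap = "" if d.controls else "  [NO DOCUMENTED CONTROLS]"
            lines.append(f"    - {d.name} (owner: {d.owner}, "
                         f"vendor: {d.vendor}){gap}")
    return "\n".join(lines)

inventory = [
    AIDeployment("Claims triage model", "VP Ops", "internal",
                 "high", ["human review of denials"]),
    AIDeployment("Marketing copy assistant", "CMO", "OpenAI",
                 "low", []),
]
print(board_report(inventory))
```

The point of the `[NO DOCUMENTED CONTROLS]` flag is that the report should surface governance gaps automatically, so the first person to notice them is the risk committee, not a plaintiff's expert.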

The disclosure trap

The 83% number cuts both ways. Disclose AI as a material risk and you have just told the SEC, your auditors, and every plaintiff’s firm in the country that you consider this issue significant enough to require shareholder notice. That triggers expectations.

Companies that disclose AI as material and then conduct no board-level oversight have created a contradiction in their own SEC filings. The risk factor language says this matters enough to disclose. The proxy and committee charters say nobody is responsible for it. That gap is where derivative complaints get filed.

The reverse is also a trap. Companies that do not disclose AI as a risk factor while their competitors are at 83% will face a different question: why is your peer group treating this as material when you are not? In a securities class action, that question gets answered by experts.

What to do now

Pull your most recent 10-K. Read your AI risk factor language. If it is generic, fix it before the next filing. Specific risks invite specific oversight, which is the point.

Pull your committee charters. Find the words “artificial intelligence” or “AI.” If they are not there, add them, with a named committee owner.

Pull your board minutes for the past 12 months. Search for AI discussion. If you find fewer than four substantive discussions, schedule them now. Document them. The documented discussion is the audit trail.
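If the minutes live as plain-text exports, the first pass of that search can be scripted. A minimal sketch, assuming one file per meeting; keyword hits are a screen, not proof of substantive discussion, so a human still reads every flagged set of minutes:

```python
import re
import tempfile
from pathlib import Path

# Terms to flag; this list is an illustrative starting point,
# not an exhaustive taxonomy.
AI_TERMS = re.compile(
    r"\b(artificial intelligence|AI|machine learning|LLM)\b",
    re.IGNORECASE,
)

def meetings_discussing_ai(minutes_dir: str) -> list[str]:
    """Return the minutes files that mention AI at least once."""
    hits = []
    for path in sorted(Path(minutes_dir).glob("*.txt")):
        if AI_TERMS.search(path.read_text(encoding="utf-8")):
            hits.append(path.name)
    return hits

# Tiny demo with fabricated minutes, for illustration only:
with tempfile.TemporaryDirectory() as d:
    Path(d, "2026-01.txt").write_text(
        "Discussed AI vendor diligence and model risk.")
    Path(d, "2026-02.txt").write_text(
        "Approved the audit plan. No other business.")
    print(meetings_discussing_ai(d))  # → ['2026-01.txt']
```

Counting the hits against the "four substantive discussions" benchmark is then a one-line follow-up, and the script itself becomes part of the documented process.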

Schedule a director education session before the next quarterly meeting. Pay for outside experts. Take attendance. Distribute materials. This is cheap insurance against a $50 million Caremark suit.

The Conference Board report did not create new risk. It documented risk that was already there. The window between “83% of companies disclose this” and “the first board gets sued for failing to oversee it” is shorter than most boards think. Use it.
