Most board-level risks are absorbed by the company. This one attaches to the individual directors who knew about it and didn't act.
Most risks that cross your board table are company risks. Cyber incident, the company pays. Product liability claim, the company defends. An AI governance failure is different. The claim reaches the individual directors who knew the gap existed and chose not to close it. And "knew" has a lower threshold than most directors assume.
What “Knew It Existed” Means in Practice
If you have read your own product roadmap and it references machine learning or adaptive features, you know. If your CTO has briefed the board on AI capabilities, you know. If your investor deck positions AI as a differentiator, you know. If someone on the board answered AI questions on a renewal form, you know.
The legal standard under section 174 of the Companies Act 2006 is the care, skill, and diligence of a reasonably diligent person with both the general knowledge, skill, and experience reasonably expected of someone carrying out that director's functions and the knowledge, skill, and experience that director actually has. The St Andrews Law Review, analysing corporate liability and AI, stated it directly: directors cannot assume that a system which performed adequately at one point in time will continue to do so under changing conditions or datasets. Automation does not dilute legal responsibility. It intensifies it.
The question a director will need to answer is: "Did you know your AI was unreasonably risky?" You want the answer to that question, and the ones it triggers, long before you have to answer the harder one: "Did you know your insurance didn't cover how your product actually behaves, and what did you do about it?"
The first is the governance question. It tests whether the board understood the risk. The second is the insurance question. It tests whether they acted on that understanding. A director who cannot answer the first has already failed the duty of care test before anyone examines the policy.
What Governance Actually Looks Like (It’s Less Than You Think)
Life sciences companies already live with this principle. When a biologic reaches the market, the manufacturer has a legal duty to monitor what happens next, including adverse events from off-label use they never intended or authorised. The MHRA’s Yellow Card scheme exists because the industry understood decades ago that a product’s risk profile changes after deployment. The manufacturer cannot say “we didn’t know” if signals were available and they were not monitoring.
An AI product that learns and adapts after deployment is, in effect, going off-label. It is behaving in ways the manufacturer did not explicitly programme. The governance obligation is the same. This is not a new principle. It is an established principle being applied to a new product type.
Insurers don't need a two-hundred-page AI ethics policy. They need evidence of three things, and those three things map directly onto what post-market surveillance already delivers.
Ownership
Someone senior is named as accountable for how AI is used in the business. Not implied. Named. Minuted. In pharma, this is the Qualified Person for Pharmacovigilance. In an AI company, it is whoever can answer the governance question.
Visibility
The board can articulate where AI touches products, services, and decisions. Not in technical detail. In enough detail to disclose honestly at renewal. As Norton Rose Fulbright noted in its English-law analysis of automating the board (https://www.insidetechlaw.com/blog/2018/01/automating-the-board-an-english-law-perspective), the dividing line is between using AI as a tool in decision-making and using AI in a way that constitutes an abdication of responsibility.
Failure planning
The company has a clear answer to the question: what is the worst that could happen when the AI gets it wrong, and what would we do? In pharma, this is the risk management plan. In an AI company, it is the considered answer that satisfies both the underwriter and the duty of care.
These three things do double duty. They satisfy the underwriter writing your product liability cover. And they evidence the board’s duty of care if a D&O claim is ever brought. One governance action, two policies protected.
One Incident. Two Claims. Two Investigations.
An AI product causes harm. The client claims against the company under product liability. A shareholder or creditor then claims against the directors for failing to manage a known risk. Two insurers. Two investigations. Both running simultaneously.
This is not a future scenario. D&O shareholder class actions related to AI more than doubled in 2024, and the pace accelerated into 2025.
The companies where both policies respond are the ones that had governance in place before the incident. The rest discover the gap when the personal liability question is no longer theoretical.
Governance Closes Two Gaps at Once
AI governance is not bureaucracy. It is the single action that protects both the company’s product liability cover and the directors’ personal position. The companies that understand this act before the question is asked. The rest answer it under oath.
For the full consequence map including D&O exposure, read our complete guide: What Happens to Your Business When the First AI Product Liability Case Lands.
For how adaptive AI breaks product liability wordings, read Your Smart Product Just Outsmarted Your Insurance Policy.
For how insurance protects founder equity against dilutive investor behaviour, read You Pitched Your AI Strategy to Investors. They Went Straight to Your Insurance.