When software learns after deployment, liability shifts in ways your current wording wasn’t built to handle. The law is already moving. Your business insurance isn’t.
You shipped a product. It worked. Clients loved it. Six months later, it has learned from usage data, adjusted its recommendations, refined its outputs. It is better now than when you sold it. Everybody is happy.
Except your insurer, who underwrote a product that no longer exists.
The product they priced cover for was static. The product you are now selling is dynamic. And between those two versions sits a liability question your AI product liability insurance was never designed to answer.
The Law Already Moved. Your Policy Didn’t.
No UK court has yet decided a product liability case involving an AI system that learned harmful behaviour after deployment. This article is upfront about that. But the legal direction is unmistakable, and the founder who waits for a decided case is the founder who discovers their cover doesn’t respond when their own claim lands.
The EU Revised Product Liability Directive came into force in December 2024. It explicitly includes software and AI systems as products. More critically, the concept of defect now encompasses a product’s ability to continue learning after it reaches the market. Where that continuous learning amounts to a substantial modification, the product is treated as newly placed on the market. The manufacturer remains liable for behaviours that emerge from self-learning, even if they were unforeseen at deployment.
The UK Jurisdiction Taskforce published its legal statement on AI liability in January 2026. It defines AI as technology that is “autonomous” and acknowledges that this autonomy and opacity create genuine uncertainty at claims stage. English courts will apply existing negligence principles, but the autonomous nature of AI complicates the causation analysis that sits at the heart of any product liability claim.
DLA Piper has identified machine learning as the single most legally troubling feature of AI products, precisely because the product gathers data and makes decisions it was never explicitly programmed to make.
The insurance market is already drawing a line that founders need to understand. Underwriters increasingly separate “versioned” updates, meaning deliberate changes that are tested and deployed by the company, from “learned” adaptations, meaning autonomous changes driven by data. The first fits within existing frameworks. The second is where product liability wordings break down.
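To make that distinction concrete, here is a minimal sketch of what a change register that respects it might look like. The class names, fields, and example entries are hypothetical illustrations, not an underwriting standard.

```python
# A sketch of a model-change register recording the distinction underwriters
# are drawing. Everything here (ChangeRecord, ChangeType, the example entries)
# is a hypothetical illustration, not an industry schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ChangeType(Enum):
    VERSIONED = "versioned"  # deliberate: tested and deployed by the company
    LEARNED = "learned"      # autonomous: adaptation driven by usage data


@dataclass
class ChangeRecord:
    change_type: ChangeType
    description: str
    approved_by: str | None = None   # human sign-off exists only for versioned changes
    detected_by: str | None = None   # learned changes are detected, never approved
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


register: list[ChangeRecord] = [
    ChangeRecord(
        ChangeType.VERSIONED,
        "v2.3.0: retrained ranking model on Q3 data, passed regression suite",
        approved_by="CTO",
    ),
    ChangeRecord(
        ChangeType.LEARNED,
        "Recommendation weights drifted on price-sensitivity features",
        detected_by="weekly drift monitor",
    ),
]

# The question a wording review asks: which of these entries does the
# policy's product definition still describe?
for record in register:
    print(f"{record.timestamp:%Y-%m-%d} [{record.change_type.value}] {record.description}")
```

The asymmetry is the point: a versioned change carries a human sign-off you can show an insurer; a learned change carries, at best, a detection timestamp. Your product definition clause was written for the first kind.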
Where Your Policy Wording Breaks
Three specific points of failure. Each one is a clause you can check in your wording today.
The product definition clause.
Your policy defines the covered product by reference to what was described at placement. If your product has autonomously adapted since then, the insurer may argue the product that caused the loss is not the product they agreed to cover. Norton Rose Fulbright stated it directly: the ability of AI systems to self-learn means the concept of defect now extends beyond the point at which the product was placed on the market.
The defect trigger.
Traditional wordings tie liability to a defect in design, manufacture, or instruction. When an AI product causes harm because it learned an unintended behaviour from data, that is an emergent property, not a design defect in the traditional sense. Silence in a policy wording favours the insurer at claims stage.
The recall question.
If your AI product needs correcting after it has autonomously changed, what does “recall” mean for software in the cloud? Product liability recall provisions were written for physical products. The cost and mechanism of correcting a learned behaviour in deployed AI is fundamentally different, and most wordings simply don’t contemplate it.
One Scenario Your Board Should Run Now
If your insurer asked you tomorrow to explain exactly how your product’s behaviour has changed since deployment, what governance you have over those changes, and what failure modes are possible, could you answer? Could your CTO? Could your board?
If the answer is no, your duty of fair presentation under the Insurance Act 2015 is already exposed. And once case law confirms that adaptive AI creates liability beyond the point of sale, that question will not be hypothetical. It will be on the renewal form.
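What would a credible answer look like? One minimal sketch, assuming you froze an evaluation set on the day of deployment and kept the model's original outputs. The function names, stand-in data, and disagreement measure are illustrative assumptions, not a legal or actuarial standard.

```python
# Behavioural drift evidence: compare the model's current outputs against a
# snapshot taken at deployment, on a frozen evaluation set. The data and the
# "disagreement" measure below are illustrative assumptions only.

def behaviour_drift(frozen_inputs, predict_current, baseline_outputs):
    """Fraction of frozen evaluation cases where today's model disagrees
    with the model as it behaved at deployment."""
    current = [predict_current(x) for x in frozen_inputs]
    changed = sum(1 for now, then in zip(current, baseline_outputs) if now != then)
    return changed / len(frozen_inputs)


# Stand-in data: the deployed model's answers on day one...
frozen_inputs = ["case-001", "case-002", "case-003", "case-004"]
baseline_outputs = ["approve", "refer", "approve", "decline"]

# ...versus the same product after six months of learning from usage data,
# where one case now resolves differently.
predict_current = lambda x: (
    "approve" if x == "case-002" else baseline_outputs[frozen_inputs.index(x)]
)

drift = behaviour_drift(frozen_inputs, predict_current, baseline_outputs)
print(f"Behaviour changed on {drift:.0%} of the frozen evaluation set")

# A dated register of these snapshots is an answer to the insurer's question.
# Silence is not.
```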
The consequences of reaching that moment unprepared cascade across insurance, commercial relationships, fundraising, and talent. We map those consequences in detail in our full guide.
Your Product Evolved. Your Policy Didn’t.
Adaptive AI products are a competitive advantage. But AI product liability insurance that doesn’t reflect how the product actually behaves after deployment isn’t cover. It is a certificate with a gap behind it. The founders who close that gap before the first case is decided are the ones whose innovation stays insured.
For the full consequence map of what happens to your business when the first AI product liability case lands, read our complete guide: What Happens to Your Business When the First AI Product Liability Case Lands.