AI Systems – who is liable?

Jun 02, 2023
by Andrew@Reliabilityoxford.co.uk
AI Systems, liability

Insurers of all kinds will at some point need to risk-assess AI systems where these influence their commercial policy-holders. AI systems automatically generate selected information outcomes intended as advice or for direct control of processes. Who is liable for any attributable damage? The question applies now, or will soon apply, wherever information processing is routine.

The answer: liability is vicarious. Whoever authorises the use of the AI system is strictly liable. This is a direct consequence of the way in which AI systems work.

Understanding AI Systems

An AI system is software that takes the place once occupied solely by meticulous analytical logic programming. It provides an automated decision-making process between the prompt for an information response and the output of that response.

However, unlike conventional programming, AI systems are probabilistic, not analytical. Decisions are weighted according to the probability with which they simulate successful outcomes in data. Unlike analytical programming, a 100% probability match is very unlikely with an AI system.

What is an analytical relationship? Analytical relationships are exact and subject to reason or law. For example,

  • if a planet has a known mass, its gravitational pull on its moon can be calculated exactly; any uncertainty comes from uncertainty in the facts, not in the relationship,
  • a wild animal must belong to a species,
  • driving at greater than 70 mph is illegal.

In optimised AI systems, information outcome probabilities differ every time, but in accordance with a statistical distribution. The resulting uncertainties of outcome are a combined effect of data uncertainties and decision uncertainties. Even if the data uncertainties remain the same, the statistical distribution changes each time the system is re-optimised. Each AI system is unique.
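To make the contrast concrete, the following toy sketch (purely illustrative, in Python; it is not any real AI system) shows an analytical rule that always returns the same answer for the same input, alongside a probabilistic stand-in whose "re-optimisation" learns a slightly different decision boundary each run, so identically prompted systems can disagree near the boundary.

    import random

    # Analytical rule: the same input always gives the same output.
    def over_speed_limit(speed_mph: float) -> bool:
        return speed_mph > 70.0

    # Toy stand-in for an optimised AI system (an assumption for illustration):
    # each "re-optimisation" learns a slightly different decision boundary.
    def optimise_toy_system(seed: int):
        rng = random.Random(seed)
        learned_threshold = 70.0 + rng.gauss(0.0, 1.5)
        return lambda speed_mph: speed_mph > learned_threshold

    print(over_speed_limit(71.0))          # always True
    for seed in range(5):                  # five independently optimised systems
        system = optimise_toy_system(seed)
        print(seed, system(71.0))          # may be True or False per system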

And this non-analytical programming is the main problem.

The law is analytical in nature. Facts are decided, and these facts have foreseeable effects via analytical relationships; analytical relationships can be understood rationally. Built from these rules are legal causation, foreseeability, defect, breach, malfunction and more. Without these rules, truth is subjective and law acts in an arbitrary way. Reliable law must be analytical.

The actual workings of an AI system are, from a legal point of view, impenetrable. Was the system negligently designed? You cannot tell by examining the system itself. Was there a defect in the training data? Perhaps, but was it significant? You cannot tell by examining the system itself. Did this error cause the wrong information outcome to be generated? You cannot tell by examining the system itself. With analytical software the answer is always determinable. With AI systems you cannot know.

Extensive testing would establish a set of error statistics, but you would not know the reason for any one error. Error statistics are useful for risk management, but third-party liability claims concern the single error that damaged someone at a given time, not the overall average performance.

Instead of focusing on the impossible, consider what you can prove analytically:

  • that someone authorised the AI system to be used in a situation where it could do damage, and
  • whether or not the given information outcome was the proximate cause of damage.

You need to know exactly what the prompt was, exactly what the information outcome was, and what the situation was. For example, at speed x and distance y from an object in the road, the information outcome was to apply the brakes fully. Was the information provided at the right distance?
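The check itself is analytical. As a purely illustrative sketch (the deceleration, speed and distance figures below are assumptions, not from this article), simple kinematics decides whether the "apply brakes fully" outcome left enough room to stop:

    # Hypothetical check: was "apply brakes fully" issued at the right distance?
    def min_braking_distance_m(speed_mps: float, decel_mps2: float = 8.0) -> float:
        # Distance needed to stop from speed_mps at an assumed constant deceleration.
        return speed_mps ** 2 / (2.0 * decel_mps2)

    def issued_in_time(speed_mps: float, distance_to_object_m: float) -> bool:
        # True if the outcome was produced with enough room left to stop.
        return distance_to_object_m >= min_braking_distance_m(speed_mps)

    print(issued_in_time(20.0, 30.0))   # needs 25 m, has 30 m: in time
    print(issued_in_time(20.0, 20.0))   # needs 25 m, has 20 m: too late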

The logical result

Where a person, in furtherance of their own aims, places, permits or promotes an autonomous mechanism into a damage-causing situation and that mechanism does damage, the common law points to strict vicarious liability. The autonomous mechanism is regarded as an extension of the scope of influence of that legal person, acting in furtherance of their reasoned purposes and lawful amenity. Autonomous mechanisms include human beings, pets, invasive plants and control software, for example. Each operates according to a natural or analytical programme.

It is often said of vicarious liability that the owner is liable. This is a rule of thumb only. The "owner is liable" rule applies only when the behaviour of the autonomous mechanism is, or should be, foreseeable by that owner. It acts in furtherance of his reasoned purposes and amenity because he knows what it will do. In that way, it is fair and just for the owner to be liable for his disposition and use of that autonomous mechanism.

However, the AI system is neither natural nor analytical. The owner relies on it performing as sold and follows the instruction manual, but cannot foresee its behaviour in the usual way. Therefore, with an AI system, it is the person who approved its programme and intended uses who is vicariously liable. If the owner uses the AI system in an innovative way, he becomes the authoriser. Otherwise, more often than not, the creator of the AI system is the authoriser, and must be liable.

The non-analytical workings of optimised AI systems and the uniqueness of each lead to the conclusion that vicarious liability applies. The authoriser is vicariously liable.

But this result may be problematic. In many situations, even if the AI system is the more accurate, the rate of indemnification can be higher. The following example illustrates this (the arithmetic is also sketched in code after the list):

  • When an expert person makes a judgement call which turns out to be damaging, he indemnifies the victim, but only if that judgement is found to be negligent. So, if the error rate is 5% and, of these, the negligence rate is 15%, then 0.75% of judgements will lead to indemnification.
  • The potential problem is that if an AI system is employed instead of the expert person, liability would be strict, so the AI system would need to have a 99.25% success rate if the rate of indemnification is to be the same. Such a high rate of accuracy may not be achievable.
  • However, if the AI system can be relied upon in 98% of assessments and each assessment is signed off when reviewed by an expert person, the legal system returns to one of negligence and the indemnification rate might now be 0.3% (i.e. 15% of 2%) instead of 0.75%. The relationship between expert and AI system provider would be specified in contract and may also be insurable.
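The rates in the example above can be checked with a few lines of arithmetic; the sketch below simply restates the figures already given:

    # The example figures from the bullets above.
    expert_error_rate = 0.05   # 5% of expert judgements are wrong
    negligence_rate = 0.15     # 15% of those errors are found negligent

    # Negligence regime: indemnification only for negligent errors.
    expert_indemnification = expert_error_rate * negligence_rate
    print(f"{expert_indemnification:.2%}")         # 0.75% of judgements

    # Stand-alone AI under strict liability: every error is indemnified,
    # so matching the expert's exposure requires this success rate.
    print(f"{1.0 - expert_indemnification:.2%}")   # 99.25%

    # 98%-reliable AI with expert sign-off: back to a negligence regime.
    ai_error_rate = 0.02
    print(f"{ai_error_rate * negligence_rate:.2%}")  # 0.30% instead of 0.75%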

In general, if used as stand-alone decision-makers, AI systems need to be right far more often than human ones if the liability exposure is to be the same. EU legislators seem to have been persuaded that such exacting standards would suppress innovation, and so prefer instead to insist on concepts such as negligence and defect, neither of which can actually be proved analytically. To make this work they propose schedules of defect and negligence, with selective causal presumption, as the way forward.

Principles-based law would usually be the better option. Vicarious liability is a mature, reliable and accurate way ahead.

Dr Andrew Auty
