Who's liable for decisions AI and robotics make?


Reuters news agency reported on February 16, 2017 that "European lawmakers called [...] for EU-wide legislation to regulate the rise of robots, including an ethical framework for their development and deployment and the establishment of liability for the actions of robots including self-driving cars."

The question of who is liable for decisions made by robots or artificial intelligence is an important one, and it grows more pressing as the technology spreads through industry and starts to affect our day-to-day lives more directly.

Indeed, as the application of artificial intelligence and machine learning grows, we are likely to see it change the nature of work, businesses, industries and society. Yet, although it has the power to disrupt and drive greater efficiencies, AI has its obstacles, and the issue of who is liable when something goes awry is one of them.

Common standards

Like many players in industry, Members of the European Parliament (MEPs) are trying to tackle this liability question. Many of them are calling for new laws on artificial intelligence and robotics to address the legal and insurance liability issues. They also want researchers to adopt common ethical standards in order to "respect human dignity."

Therese Comodini Cachia MEP, of the Maltese centre-right Nationalist Party and Parliament's rapporteur for robotics, believes that "for the purposes of the liability for damages caused by robots, the various legal possibilities need to be explored [...] How will any legal solution affect the development of robotics, those who own them and victims of the damage?"

To answer these questions she has invited the European Commission to consider the impact of the different solutions, to ensure that any unintentional harm caused by robots can be properly addressed.

Route to adoption

Creating legislation that brings more transparency to the liability issue is a much-needed step. In essence, the European Parliament's report does a good job, and I agree that the liability issues need to be tackled. The report also does not want robots and artificial intelligence to replace humans, but to complement them.

This is something we agree with, and an approach we are already starting to see across sectors. One example comes from the legal sector, where law firms Linklaters and Kemp Little are putting the technology to work. Andrew Joint, a partner in the commercial technology team at Kemp Little, explains: "I see artificial intelligence being used more and more in the people-heavy, junior lawyer tasks where a huge amount of the fees are being spent on each transaction. I don't see it replacing the lawyer as the trusted advisor because I have the ability to think around problems and solutions."

Through our own work with a global law firm, we have seen how technology can streamline work that would otherwise consume many man-hours, while also briefing lawyers on project-specific details ahead of discussions with clients.

A significant challenge with the adoption of AI techniques, and with the question of liability, is transparency in the decisions these systems make. Many data-driven machine learning technologies are essentially a black box, unable to provide a justification for the answers they give. However, a new breed of technologies is now available with the ability to give a rationale for a decision, which increases the scope for AI to integrate into the workplace.
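To illustrate the difference, here is a minimal sketch in Python (a hypothetical example, not drawn from any specific product) of a rule-based check that returns a rationale alongside its decision, in contrast to a black-box model that returns only a score.

    # Hypothetical sketch: a rule-based decision that records the reasons behind it.

    def assess_claim(claim):
        """Decide whether an insurance claim can be auto-approved, and explain why."""
        rationale = []

        if claim["amount"] > 10_000:
            rationale.append("Amount exceeds the 10,000 auto-approval limit")
        if claim["previous_claims"] > 3:
            rationale.append("Claimant has more than three previous claims")
        if not claim["policy_active"]:
            rationale.append("Policy was not active at the time of the incident")

        decision = "refer to human expert" if rationale else "auto-approve"
        if not rationale:
            rationale.append("All automated checks passed")

        return decision, rationale

    decision, reasons = assess_claim(
        {"amount": 15_000, "previous_claims": 1, "policy_active": True}
    )
    print(decision)            # refer to human expert
    print("; ".join(reasons))  # Amount exceeds the 10,000 auto-approval limit

The list of reasons returned here is exactly the kind of audit trail that makes liability questions easier to unpick, because every decision can be traced back to the rules that produced it.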

The cognitive reasoning capabilities of emerging AI technology can be used to support workforces while enabling them to service and support clients more efficiently. The ability of these technologies to provide a justification for the decisions they make will ease adoption in the face of complex liability questions.

Crash test liability

If we take one of the more talked-about scenarios, self-driving cars, as an example, we can start to understand why this topic is reaching the upper levels of governance in our society.

In 2016, the UK opened an industry consultation on connected and autonomous vehicles, with the Department for Transport examining the sector and looking to answer some of the biggest liability questions associated with it. The result was a document titled "Pathway to Driverless Cars: Proposals to support advanced driver assistance systems and automated vehicle technologies."

Journalist Graham Jarvis used an interesting example in his recent article for TU Automotive magazine: "A case in point would be the two drivers who crashed their Tesla vehicles while it was on Autopilot: In the first accident the driver died but, fortunately, the second driver wasn’t injured in the second crash. [...] Tesla blamed the drivers for not using the system properly but the incidents have led German regulators to question the use of 'Autopilot' as a term to describe it, arguing that it gives the impression that the vehicle is more capable of autonomous driving than it really is."

In fact, Germany’s transport minister, Alexander Dobrindt, has gone so far as to ask Tesla to ditch the term Autopilot. Tesla responded by claiming that it warns drivers of the system’s limits, and defended the term. In a press statement, Tesla wrote: "This is how the term has been used for decades in aerospace: to denote a support system that operates under the direct supervision of a human pilot."

Human vs robot?

It’s natural that questions will arise wherever a human is being partially or completely replaced by artificial intelligence -- by a "robot." Drivers, for example, are currently responsible for their actions in that they can be held liable if they are found to have been the cause of an accident.

In the case of autonomous vehicles, liability is expected to shift from the individual human driver to the product or service: if a machine has taken control of a vehicle, then the driver should no longer be responsible for what the vehicle does. The blame could therefore even lie with a software developer, if a bug in their software causes a crash.

Draft legislation to clarify liability in accidents involving self-driving cars is consequently high on the European Parliament’s agenda. Vehicle manufacturers Volvo, Google, and Mercedes have already said they will accept full liability if their autonomous vehicles cause a collision. However, this is not currently a legal requirement.

Essentially, the view is that common standards are needed as soon as possible, and that the EU needs to take a lead to avoid having legislation imposed on it by countries and institutions outside the European Union.

MEPs are therefore seeking the creation of "a mandatory insurance scheme and a supplementary fund to ensure that victims of accidents involving driverless cars are fully compensated." Alongside this would come a voluntary ethical code of conduct for robotics researchers and designers. There are also calls for a basic income and for a discussion of how artificial intelligence and robots will affect jobs.

Not a new issue

Indeed, issues of liability for autonomous systems are not new. We’ve lived with systems that include some degree of software automation for many years.

Over the last 30 years there have been a number of high-profile failures of autonomous, software-driven systems that have forced us to consider where liability lies.

Between 1985 and 1987, a software failure in the Therac-25 radiation therapy machine caused at least six accidents, several of which were fatal. The software, developed as part of the machine by Atomic Energy of Canada Limited (AECL), was intended to replace the complex human judgement required when setting up and delivering a dose of radiation.

The question of who is liable for the Therac-25 accidents is not straightforward. Some hospitals, not trusting AECL's safety statements, implemented additional safety measures of their own and suffered no accidents. Does that make the hospitals where accidents did occur liable?

AECL repeatedly affirmed the safety of its system despite significant failures in its software testing process, so is the company liable? It can be argued that the software developers themselves had a duty of care to patients when building the system, so are they liable?

The question of liability in the Therac-25 case is still debated today. The law has yet to catch up with the nuances of cases like this, despite the problem being thirty years old.

Overcoming resistance

The current ambiguity surrounding liability has hampered both the further development of artificial intelligence and the establishment of common standards, creating an environment in which some sectors have been more reluctant than others to roll out AI systems, in spite of their revolutionary capabilities.

But as artificially intelligent systems increasingly influence the decisions made about us, the questions raised by the self-driving car example apply just as much to other uses of the technology. To determine liability, the ability to provide a rationale for an AI-generated decision could become crucial.

Liability, however, is not the only hurdle -- nor is it the greatest. The main challenge to the development of artificial intelligence, machine learning and common standards is us, as humans.

We have a fear of new technology and, in the case of AI, science fiction has not done us any favors in explaining it to people. Encouraging successful adoption of artificial intelligence requires training and support to help us understand how it will support a business and its workers, rather than relying on the preconceived assumption that it will disrupt and displace them. It's this challenge of human perception that's likely to be the real struggle -- and not just with regard to autonomous vehicles.

Laying down some guidelines will undoubtedly encourage more organizations to adopt the technology.

Clarification in law would lead to more decision-making transparency, opening up options and opportunities to improve efficiency and support staff. The insurance sector, for example, could use AI to reduce repetitive day-to-day tasks, while enabling insurance experts to analyze claim and liability information faster than any traditional approach to assessing a claim.

Ultimately, AI technology should help businesses make decisions more quickly and transparently while supporting workers rather than taking their jobs. It's important for businesses to consider liability in their planning, but they should also look at what they are trying to achieve and how the technology will affect all of the business's stakeholders, from the worker to the end customer. Without thinking ahead and understanding the impact of AI, it won't be possible to truly benefit from it.

Ben Taylor, CEO, Rainbird

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.

