Who should be held legally responsible when a self-driving car hits a pedestrian? Should the finger be pointed at the car owner, the manufacturer, or the developers of the artificial intelligence (AI) software that drives the car?
Struggles to assign liability for accidents involving bleeding-edge technologies like AI have dominated global conversations, even as trials of such technologies are under way in places including Singapore, South Korea and Europe.
During the third annual edition of the TechLaw.Fest forum last Wednesday, panelists said that Singapore's laws currently cannot effectively assign liability for losses or harm suffered in accidents involving AI or robotics technology.
“The unique ability of autonomous robots and AI systems to operate independently without any human involvement muddies the waters of liability,” said Mr Charles Lim, parliamentary counsel and chief knowledge officer of the Attorney-General’s Chambers.
He was speaking in a webinar titled “Why Robot? Liability for AI system failures”.
Mr Lim is co-chair of the Singapore Academy of Law’s 11-member Robotics and Artificial Intelligence Sub-committee, which last month published a report on what can be done to establish civil liability in such cases.
“There are multiple factors (in play) such as the AI system’s underlying software code, the data it was trained on, and the external environment the system is deployed in,” he said.
For example, an accident caused by a self-driving car’s AI system could be due to a bug in the system’s software, or to an unusual situation the system has not been trained to recognize, such as a monitor lizard crossing the road.
A human’s decisions could also influence the events leading up to an accident, as existing self-driving cars have a manual mode that allows a human driver to take control.
A software update applied by the AI system’s developers could also introduce new, unforeseen bugs.