It's hard to escape the fact that AI is an ever-increasing part of our daily lives – even aside from the more obvious daily interactions with digital assistants, Netflix content recommendations, chatbots and robot vacuum cleaners, AI is also busily working away in the background to help diagnose our ailments, manage household energy demands and process ACC claims.
AI may be prevalent, but it's certainly not perfect. Well-reported AI blunders have resulted in everything from general amusement to loss of human life. This raises the question: who is responsible when AI goes wrong? A ground-breaking, yet-to-be-heard UK court case may soon provide us with some answers…
The case involves a wealthy Hong Kong-based real-estate investor, Samathur Li Kin-kan, who is suing his former hedge fund manager, Tyndaris Investments, to the tune of US$23 million ($34.9m) over allegations of misrepresentation.
While a wealthy real-estate tycoon losing part of his fortune may not seem particularly newsworthy, this is not the end of the story. Li is not alleging that the loss was caused by human trader error, but rather by Tyndaris' AI-powered supercomputer money manager, K1. Tyndaris denies Li's allegations and is counter-suing Li for US$3m in unpaid fees.
So who caused the loss of Li's lamented millions?
The hedge fund manager
At first blush, Tyndaris Investments is the obvious candidate. Tyndaris may not have understood what K1 was doing, but it was certainly Tyndaris that made K1 available to Li for a fee.
Raffaele Costa, Tyndaris' CEO, personally marketed to Li both K1's capabilities and the skills and experience of the developers who created it. Li alleges that Costa grossly misrepresented those capabilities. It's also worth mentioning the role that may have been played by other Tyndaris employees involved in the day-to-day operation of K1 - surely someone at Tyndaris had responsibility for monitoring K1's investment decisions and setting appropriate "stop-loss" restrictions?
The software developer or other vendors
Was it an error in K1's code that caused K1 to make the trades that led to the loss? If so, should K1's developer, the Austria-based AI company 42.cx, shoulder at least some of the responsibility? Or perhaps there are other software vendors or service providers in the fray? For example, who was responsible for maintaining K1's code and ensuring that known issues were rectified?
The investor
Or should Li himself bear the loss? Tyndaris claims that Li was never guaranteed that the AI strategy would make money. Should Li, as a sophisticated and wealthy investor, have been aware of the risks involved in this sort of financial investment?
The answer is not straightforward, and the facts of the case highlight the complexities involved in using traditional concepts of legal responsibility to hold people and organisations accountable for issues caused by AI-powered machines. K1 alone may have made the investment decisions that led to Li's loss, but at law K1 is neither capable of bearing responsibility for those decisions nor of compensating Li for his lost millions.
Instead, the law will look to attribute responsibility for K1's decisions to a person or organisation on K1's behalf, even though none of the candidates may be able to explain why K1 made the trades it did.
Regardless of who is ultimately held responsible, the circumstances demonstrate how important it is to fully understand AI technology and its capabilities and limitations prior to selling, adopting or using it.
Relying on any technology that makes decisions in a 'black box', and which you cannot control, understand or explain, is inherently risky.
Irrespective of whether you are the developer, vendor or user, if you do not understand the decisions that an AI-powered tool is making on your behalf, liability for those decisions may attach in unexpected ways when things turn to custard.
The K1 trial is reportedly due to be heard in early 2020, when the intricacies of attributing liability for decisions made by machines will be considered by the courts for the first time and legal precedent in this area will begin to develop. Until then, we might all want to think twice before handing responsibility for our investments over to a robot.
As first published by the NZ Herald.
This article is intended only to provide a summary of the subject covered. It does not purport to be comprehensive or to provide legal advice. No person should act in reliance on any statement contained in this publication without first obtaining specific professional advice. If you require any advice or further information on the subject matter of this newsletter, please contact the partner/solicitor in the firm who normally advises you, or alternatively contact one of the partners listed below.