Humans versus robots: the YSIAC debate held yesterday as part of YSIAC Conference 2019 sought to address the deep question of what it means for us to be human and the timely question of whether technology can and will, one day, surpass us.
The moderator for the debate was Professor Nadja Alexander (Director, SIDRA). Arguing for the motion were Professor Dr. Maxi Scherer (Special Counsel, Wilmer Cutler Pickering Hale and Dorr LLP) and Mr Adrian Tan (Partner, TSMP Law Corporation) – Human 1 and Human 2 respectively (“Humans”). Arguing against the motion were Mr Todd Wetmore (Partner, Three Crowns LLP) and Mr Rimsky Yuen, SC (Barrister, Arbitrator and Mediator, Temple Chambers) – Robot 1 and Robot 2 respectively (“Robots”).
The starting vote? 85% in favour of the motion and 15% against.
First Round of Arguments
Human 1 opened confidently, proclaiming that the votes of the 130-strong audience demonstrated the “absolute superiority of human intelligence”, which was met with congratulatory smiles all around. She argued that the motion does not mean that emotional intelligence (or human intelligence) will trump artificial intelligence in every instance, conceding that there are obvious situations, such as Google Translate being able to translate over 100 languages, where artificial intelligence is superior to human intelligence. The ultimate question, she said, is therefore whether artificial intelligence is superior to human intelligence as a whole, to which she argued “no”.
Citing statistical studies, Human 1 argued that while artificial intelligence can assist in routine situations, it is deficient in unprecedented or emergency situations. Put simply, artificial intelligence calculates the likelihood of an event based on existing input data. Human 1 then brought the issue closer to home with an illustration of an arbitral tribunal making decisions based on probabilities predicted by a machine. She reasoned that parties would not accept such decisions, as they would argue that every case turns on its own unique facts. Artificial intelligence also cannot provide reasons as to why it reached a certain outcome, she said.
Human 1 concluded by reading, to the audience’s glee, a nonsensical chapter of a Harry Potter novel written by a program with artificial intelligence, which was met with raucous laughter. Spoiler alert: Ron eats Hermione’s family.
In response, Robot 1 first declared that the Robots took the motion to mean whether emotional intelligence will always trump artificial intelligence in the course of decision-making in arbitration, before quickly firing back that “the motion is embarrassed by its own ambition”. Seizing on Human 1’s concession that emotional intelligence will not always trump artificial intelligence, he argued that human intelligence is fundamentally limited. Artificial intelligence, on the other hand, being in its infancy, can only develop and improve, potentially incorporating elements of emotional intelligence as well. Citing an article, he argued that artificial intelligence is capable of self-learning.
Robot 1 canvassed how artificial intelligence can help humans avoid cognitive bias, the focusing effect, and omission bias in arbitral decision-making, and how artificial intelligence can be deployed 24 hours a day (much like associates in law firms!). He closed by stating that emotions are not a substitute for the collection and analysis of data, and that any motion positing the primacy of emotional intelligence over artificial intelligence is an attempt to place intuition above reason.
Second Round of Arguments
Human 2 stepped forward, eloquently rebutting the Robots’ interpretation that the motion includes the words “decision-making in arbitration”. He quipped, “I don’t see it, but that is the beauty of machine language”. Human 2 argued that in the context of arbitration and court disputes, a dispute is, at its core, people disagreeing with other people, and that a key part of dispute resolution is getting people to understand and accept the resolution of the dispute in question.
Human 2 moved to his argument that the value of humans lies in our imperfections. Illustrating his point with the examples of the Japanese robot priest, Mindar, the Chinese robot monk, Xian’er, the Indian robotic hand in a Hindu temple and the German robot priest, BlessU-2, he argued that when one is in trouble, one does not want someone who is perfect. He said one would want someone who is flawed, and who struggles with the human experience of life. As it is with priests, so it is with lawyers, judges and arbitrators.
Robot 2 countered that if one needs to find the right answer, one first has to ask the right question. Citing relatable examples, he argued that the reason people utilise machines with artificial intelligence is that machines can do a better job than humans.
Robot 2 questioned the yardstick by which we measure the superiority of emotional and artificial intelligence, arguing that it is untrue that artificial intelligence cannot do “good” things such as creating art and music – fields which were thought to be quintessentially in the realm of human intelligence. He addressed Human 1’s illustration on Harry Potter, questioning whether that example truly represents the failings of artificial intelligence or the failings of the human responsible for that particular program. If someone could write a better code and create a better program, perhaps it is only a matter of time before artificial intelligence is able to create the next Hollywood blockbuster. Robot 2 asked whether having reasons is necessarily “good”, for sometimes people do not require reasons but an outcome.
Lastly, Robot 2 invited the audience to question the premise of the motion. He said, “It puts people in the position that A is always against B. I ask why they should be exclusive rather than inclusive. Why should we think in the way the motion represents?”
In rebuttal, Human 1 argued that it is a fiction that artificial intelligence is not prone to human biases, citing an article which reported that an artificial intelligence program was found to exhibit a gender bias in favour of men. She rebutted Robot 1’s argument about limitless learning by stating that machine learning does mean improvement, but only improvement based on past data at any given moment in time. Hence, machine learning cannot go beyond what has been done in the past.
Robot 1 swiftly responded that the motion must fail by virtue of the Humans’ concession that emotional intelligence cannot, in perpetuity, trump artificial intelligence. “You know things are getting desperate when someone says vote for me because we are flawed”, he said, closing by proposing that the audience let the machines work at night so that they would not have to.
Finally, the Humans and Robots took questions from the audience, responding with another round of clever sparring between Human 2 and Robots 1 and 2.
At the end of the spirited debate, which traversed humans, robots, and my favourite quote from Human 2 (“Do you love me?”), the audience was invited to cast its votes once more.
The result? 68% for and 32% against.
The Robots moved the most minds that day, but it is heartening to know that many still have faith in the relevance of human (or emotional) intelligence. As Robot 2 asked, why should artificial intelligence and emotional intelligence be mutually exclusive in the first place? Artificial intelligence is improving by leaps and bounds. It will no doubt become an even more advanced and widely used tool for lawyers and arbitrators in the near future, not just in legal research but also in document disclosure and expert evidence. With the advancement of artificial intelligence programs, it is hoped that lawyers and arbitrators can spend more time on the human (and some would say most important) aspects of each case: the client’s needs and the effective resolution of the dispute before them.
More coverage from YSIAC Conference is available here.