Wolters Kluwer teamed up with the global law firm Clifford Chance to discuss the advances in artificial intelligence (AI), its limitations, and its various applications in an interactive webinar titled Artificial Intelligence and Arbitration: Should We Keep It Real? The lively discussion covered AI fundamentals as well as recent developments in the field. The panel also explored the reliability of AI, issues around bias, and how, by whom, and when it could be used in connection with international arbitration.
The webinar featured panelists Dean Sonderegger, Head of Wolters Kluwer Legal & Regulatory U.S., Bertrand Rondepierre, a Research Program Manager at Google AI, and Simon Greenberg, a Partner with Clifford Chance. Alexis Foucard and Karolina Rozycka, both senior associates out of Clifford Chance’s Paris office, served as webinar co-moderators. Jason Fry QC, Global Co-Head of the International Arbitration Group at Clifford Chance, offered closing remarks.
AI fundamentals and developments. At the outset, Bertrand Rondepierre provided some of the basics in terms of what AI is and what it is not. He also noted some recently emerging developments in the field. Rondepierre’s observations included:
- AI is the science of making things smart. Machine learning, a subset of AI, is the science of getting computers to do something without being programmed with rules. At its core, machine learning is a new way of creating problem-solving systems.
- In a machine learning system, over time and with exposure to training datasets, the system will become smarter.
- We are experiencing a deep learning revolution enabled by increased computational power combined with new algorithms for scalable training of large models. The key benefits include the ability to learn from raw, noisy, heterogeneous data, with no feature engineering required.
- In 2011, machine learning approaches resulted in error rates of approximately 26 percent, compared to human error rates of approximately five percent. As a result of advances in computational power, by 2016 machine learning error rates had declined to less than three percent.
In Rondepierre’s view, the “intelligence” in AI lies in the minds of the people who design it so that it answers the question being asked of it.
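Rondepierre’s point that machine learning systems learn from examples rather than hand-coded rules can be illustrated with a minimal sketch. The snippet below is a hypothetical one-nearest-neighbour classifier on invented data, not anything shown in the webinar: no classification rule is written by the programmer, and the system’s answers come entirely from the training examples it has been exposed to.

```python
# A minimal sketch of "learning from data instead of rules":
# a one-nearest-neighbour classifier. The data and labels are
# illustrative only.

def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], point))[1]

# Training set: (features, label) pairs -- the "exposure" that makes
# the system smarter over time; no if/else rule is hand-coded.
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((5.0, 5.0), "large"), ((4.8, 5.2), "large")]

print(nearest_neighbour(train, (1.1, 0.9)))  # -> small
print(nearest_neighbour(train, (5.1, 4.9)))  # -> large
```

Adding more labeled examples changes the system’s behaviour without changing its code, which is the core idea Rondepierre describes.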
AI for lawyers—focusing on the use case. In his remarks, Wolters Kluwer’s Dean Sonderegger pointed to a common misconception: AI in the legal business is often viewed as a grand solution that can be accessed much like turning on a light switch.
Sonderegger noted that this is not the case; rather, AI is subtly infused in many areas of legal practice, including analyzing unstructured text, running judicial analytics, reviewing regulatory risk factors based on past outcomes, and mimicking certain tasks of an attorney.
Sonderegger noted that AI-powered tools can assist in reviewing briefs and contracts, and can automate certain aspects of due diligence review in merger and acquisition transactions. Sonderegger also stated that Wolters Kluwer uses AI in the acquisition and curation of content, in understanding how regulations evolve, and in building matching algorithms.
While conceding that lawyers may have been slower to adopt AI technologies than other professionals, Sonderegger credited the legal community for its carefulness and skepticism, noting that technology must be fit for a specific purpose. In Sonderegger’s view, AI is most useful in a legal setting when its success is tied to a particular use case with a clear outcome. He observed that while this is a simple notion, it is one that technologists often misunderstand.
Limitations for using AI in decision making. Clifford Chance’s Simon Greenberg expressed skepticism as to whether AI could replace human beings in decision making, particularly in an international arbitration context. Greenberg pointed to five factors required for good decision making. In his view, AI hits the mark for only three of those five factors.
According to Greenberg, the decision-making factors where AI is effective include (1) considering relevant information relating to the facts and law, (2) analyzing that information, and (3) not bringing biases (or adjusting for those biases) into the process. The two factors where Greenberg believes AI falls short are the ability to exercise judgment based on relevant experience and the capacity to communicate effectively, including the ability to provide the specific reasoning underlying a decision.
Greenberg also pointed to other barriers to using AI in international arbitration matters. These include the confidentiality of arbitration decisions; the differing laws and facts at play across arbitrations; the highly fact-specific nature of cases; and the various constituents (arbitrators, lawyers, and parties), who vary from arbitration to arbitration.
Dealing with bias. In response to an audience question raising concerns about bias in AI results, Bertrand Rondepierre acknowledged that biased data can produce biased results, but asserted that bias can be controlled and adjustments can be made. He noted that where results are biased, human judgment can be applied to mitigate them. Sonderegger echoed these sentiments, noting that AI could be used to augment rather than replace a human being’s role in the decision-making process.
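One concrete form the adjustment Rondepierre alludes to can take is reweighting a skewed training set so that an over-represented outcome does not dominate what the system learns. The sketch below is purely illustrative, with invented labels, and shows the simplest version of that idea: weighting each example inversely to its class frequency.

```python
# A hedged illustration of one simple bias-control step: reweighting
# a skewed dataset so each class contributes equal total weight.
# The labels are invented for illustration.
from collections import Counter

labels = ["grant", "grant", "grant", "grant", "deny"]  # 4:1 skew
counts = Counter(labels)
weights = [1.0 / counts[y] for y in labels]  # rarer class weighs more

# Each class's total weight is now equal:
totals = {c: sum(w for y, w in zip(labels, weights) if y == c)
          for c in counts}
print(totals)  # -> {'grant': 1.0, 'deny': 1.0}
```

Reweighting does not remove bias on its own; as the panelists note, human judgment is still needed to decide which imbalances in the data are artifacts to correct and which reflect reality.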
AI will not be replacing lawyers. Counter to an assertion made by an industry observer some years ago, AI will not be replacing lawyers, according to Sonderegger. He observed that AI will make lawyers and the legal industry more efficient, much as it has done in other industries such as medicine and tax. While there may be fewer attorneys in the future, those who remain will be involved in higher value-added activities. In Sonderegger’s view, international arbitration is one such activity. It is Greenberg’s belief that if AI helps eliminate a particular activity, the person who previously performed that activity will be free to pursue another activity of higher value.
More information about Kluwer Arbitration is available here.