London International Disputes Week (“LIDW”) once again revisited the much-debated role of artificial intelligence (“AI”) in arbitration with renewed vigour. More than seven LIDW events touched on arbitration and AI. This post offers a brief overview of three key AI-related events held on the third day of LIDW, 5 June. Together, these events illustrate the current application of AI in arbitration and legal practice, and the challenges its use may present.

 

Arbitration Alchemy: AI, Psychology and Decision-Making Dynamics

This session, moderated by Stephen Dowling SC (TrialView), brought together Dr Ula Cartwright-Finch (Cortext Capital), Toby Landau KC (Duxton Hill), David Blayney KC (Serle Court) and Myfanwy Wood (Ashurst). The discussion fell into two parts: the psychology of arbitral decision-making, and the role AI can play in refining that process, particularly by addressing human cognitive limitations.

Dr Cartwright-Finch highlighted several prevalent biases that affect human decision-making, including unconscious bias, overconfidence, anchoring, and memory biases. She emphasised how these biases are exacerbated by the information overload commonly experienced in arbitration. To cope with that overload, the brain takes shortcuts, compromising the decision-making process.

Mr Landau pointed out that adversarial advocacy in arbitration tends to result in over-lawyering. Arbitrators, under the pressure of extensive documentation, are forced to focus on key points and organise the case into narrative structures, potentially overlooking content that falls outside those structures. He recognised that these dynamics “might be poisonous and devalue the system”.

In turn, Ms Wood spoke about the geographical and background biases that affect how arbitrators perceive a case. She described affinity bias, whereby people tend to favour individuals they find more convincing, or arguments they can better relate to because of their own background.

Mr Blayney addressed the human processes that exist to deal with these biases, which he called “decision-making hygiene”: a methodology that mitigates bias through structured decision-making processes, ensuring independence, recognising biases, and drawing on collective wisdom.

The panel then explored how AI can help address these challenges by managing large volumes of information. Beyond summarising documents and evidence, AI can create elaborate maps that track the different parts of a case in an objective manner, allowing arbitrators to make more thorough decisions that take a broader range of elements into account.

The panel also discussed the recent US case in which a judge used AI in his decision to interpret the term ‘landscaping’. Mr Blayney pointed out that using multiple AI tools to identify the ordinary meaning of a text is not so different from consulting several dictionaries. Ms Wood, however, noted the dangers of relying on AI tools that are not trained on local data, as ‘landscaping’ may mean different things in different parts of the US.

The panel noted that, while AI has the potential to play a crucial role in easing the decision-making process, caution is advised because of AI’s own internal biases. Mr Blayney contrasted the arbitrator’s deterministic (rule-based) decision-making with the predictive models underlying AI; this, in his view, is the ‘Achilles heel’ of AI. He noted, however, that predictive decision-making is not itself a bad thing, as it can assist human decision-making.

 

AI in International Arbitration: What Is It, Where Are the Issues, What Is the Future?

This session discussed the challenges of using AI in arbitration, particularly data privacy and confidentiality. Moderated by Katherine Apps KC (39 Essex Chambers), the panel consisted of Jonathan Bellamy, Jennifer Thelen (39 Essex Chambers), Nick Ellison (Kroll) and Eimear McCann (TrialView), a mix of technical experts and arbitration practitioners.

Mr Ellison first clarified that AI extends beyond mere automation or the traditional use of algorithms: it involves replicating human behaviour through machine learning and adapting over time. AI, and generative AI in particular, uses neural networks and vast datasets to evolve its capabilities. Despite these advances, Ms Apps KC emphasised that “we are years and years away from AI being a silicon-based human brain.”

From an arbitrator’s perspective, Mr Bellamy highlighted the potential roles AI could play, from streamlining lengthy procedural tasks such as document review to automating transcription and drafting initial procedural documents. However, the use of AI carries several risks, including confidentiality concerns and due process issues. When using AI in arbitrations, parties and arbitrators should address any concerns early on during case management to avoid potential challenges to enforcement.

Ms McCann raised significant confidentiality issues associated with AI, particularly for lawyers handling sensitive client information. She also emphasised the serious privacy issues arising from the methods used to source data for training AI. While AI is an extremely useful tool, the panellists noted that its use in cases could result in data breaches. Currently available AI is mostly trained on publicly available court data, as law firms are unlikely to provide their own data (and, given client confidentiality, should not). As a result, the potential of AI in commercial arbitration still lags behind.

Ms Thelen raised a critical concern regarding UK data protection law (the UK GDPR), particularly its extra-territorial reach and its applicability to AI. She explained the judicial reasoning in the Clearview case on the extra-territorial application of data protection legislation. The key point is that, under the GDPR, even if the data used to train an AI system was not obtained from UK data subjects, data protection law will most likely apply if the AI is operated in the UK. AI systems operated within the UK (or EU) must therefore be regulatory compliant.

The panellists also discussed other challenges arising from AI, such as deepfakes and the increased potential for fraud. With the rise of AI, arbitrators and counsel will have to be particularly cautious when assessing the legitimacy of evidence and witnesses.

 

Embracing AI’s Role in Arbitration and the Legal Landscape

The event, moderated by Alexandre Vagenheim (Jus Mundi), featured Paul Kinninmont (Freeths LLP), Mihaela Apostol (Arbtech) and Nick Pryor (Freeths LLP). The session focused on the actual and potential applications of AI in arbitration and in the legal field more generally.

In what has become customary for sessions on AI, the panel revisited the definition of AI, and the panellists shared insights from their own experience with it. Mr Vagenheim spoke about Jus Mundi’s potential uses of AI to summarise cases and provide insights into them. The panellists also acknowledged the rapidly evolving nature of AI in the legal field.

Ms Apostol highlighted several interesting applications of AI in arbitration and litigation. First, AI can be applied, through partial delegation of tasks, to small-scale damage claims. She referred, for example, to the pilot project on air passenger rights involving IBM and the Frankfurt court system: given the volume of air passenger claims, AI has the potential to ease the burden on the courts, particularly where cases involve similar kinds of facts. Second, AI could be useful where swift case assessments are needed, such as in emergency arbitrations. One concern Ms Apostol noted, however, was the public perception of AI, referring to the Estonian ‘AI judge’ project, which faced criticism from the public.

AI’s application in legal contexts is not without challenges. One issue is who should own and control AI technologies, given the risk that AI could be manipulated for political gain or other purposes. Another concern discussed was bias in the data used to train AI: in the Loomis v. Wisconsin case, the COMPAS system employed by US courts in sentencing was challenged for its racial and economic biases.

Mr Vagenheim pointed to the limitations of AI in terms of international applicability, noting that its current use is mostly confined to languages with extensive training data, such as English and French. The panel also debated whether the use of AI in arbitrations should be disclosed. Mr Pryor argued that the use of technology in general does not require disclosure, suggesting that AI should be no exception. Conversely, Mr Kinninmont highlighted the potential for challenges to arbitral awards on public policy grounds under the New York Convention, especially where the use of AI is not recognised at the seat or in the country of enforcement.

A recurring theme across the sessions was the recognition of AI’s undeniable advantages in reducing costs and enhancing efficiency. AI’s potential to ease decision-making and case management for arbitrators and counsel continues to grow. Nevertheless, all the panellists agreed that the human element remains indispensable in arbitration: arbitrators and judges bring emotional intelligence and societal understanding to their decisions, qualities that AI cannot yet replicate.


________________________

To make sure you do not miss out on regular updates from the Kluwer Arbitration Blog, please subscribe here. To submit a proposal for a blog post, please consult our Editorial Guidelines.

