Paris Arbitration Week (“PAW”) featured multiple discussions on the growing importance of Artificial Intelligence (“AI”) in arbitration proceedings. Hot topics included the slow uptake of AI by legal professionals, the ethical and legal risks involved in its use, and how to balance the benefits of AI against the necessary avoidance of those risks. This post discusses the key points of two of those discussions: “AI in Arbitration: Less Talk, More Tech,” hosted by TrialView and 39 Essex Chambers, and “Mastering AI: Bold Horizons to Real-World Impact,” hosted by Ashurst LLP and TrialView.

 

AI in the Real World

Stephen Dowling SC (TrialView) hosted a panel discussion, “Mastering AI: Bold Horizons to Real-World Impact,” on the real-world impact of AI in arbitration proceedings. Guest speakers were Nick Ellison (Kroll), Myfanwy Wood (Ashurst LLP), Louise Reilly SC (Kellerhals Carrard), Philippa Charles (Twenty Essex), and Jennifer Kirby, an international arbitrator. The panel, composed of legal professionals serving in various roles, offered an interesting, useful, and at times controversial discussion on the use of AI in their respective capacities, including as arbitrators, barristers, expert advisors, and solicitors.

As a point of departure, the panel noted AI’s increasing ability to digest and understand large volumes of documents. Large language models are becoming ever more capable of making sense of thousands of pages of documents in a manner befitting the needs of a practitioner. The panel also highlighted AI’s future use in drafting witness statements, its potential to clear up inconsistencies in those statements, and its capacity to perform legal research on a global scale. Information security through means of tokenization was also discussed. Currently, document review and keyword searches are among the general uses of AI; concept searching is a more sophisticated use.

The benefits of AI, particularly the potential for cost savings, will not go unnoticed by clients, and it was noted that practitioners will have to learn to use AI to facilitate cost reduction sooner rather than later.

Currently, arbitrators use AI to create schedules of documents and chronologies. While it could be used to explain difficult concepts in simple terms or to aid in preparing a first draft of a chronology, an arbitrator’s use of AI to understand concepts, or reliance on it for an accurate account of facts, was challenged. These issues, it was said, ought to be raised by party representatives. It was suggested that an arbitrator ought to be transparent not only about which AI is used, but also about the extent to which AI is used for a particular purpose. Because of the risk of bias involved in AI, it seems reasonable that parties will want to consider whether an arbitrator can, or cannot, use a particular AI. As such, it was suggested that the use of AI ought to be discussed in a transparent manner at the outset of proceedings. The use of AI platforms that do not provide measures to protect confidentiality was also raised as a concern.

The discussion then moved on to consider the publication of recent guidelines and legislative provisions. The EU AI Act classifies as high-risk those AI systems intended to be used by alternative dispute resolution bodies for purposes of research, interpreting facts and the law, and applying the law to a concrete set of facts. The SVAMC Guidelines do not impose any per se obligation to disclose the use of AI, but acknowledge that disclosure of such use may be appropriate in certain circumstances. The CIArb Guidelines state that disclosure of the use of an AI tool may be required to the extent that its use may have an impact on the evidence and the outcome of the arbitration, or otherwise involve a delegation of an express duty towards the arbitrators or any other party. Thus, the obligation to disclose the use of an AI tool is far from a settled issue. Transparency relating to its use, particularly on the part of the arbitrator, appears to be paramount.

 

Practical Use of AI

The session “AI in Arbitration: Less Talk, More Tech” included guest speakers Lindy Patterson KC (39 Essex Chambers), Hannah Fry (39 Essex Chambers), and Gillian Forsyth (Eversheds Sutherland). The discussion centered on AI’s utility in case strategy. Topics covered the ability to effectively use AI in arbitration proceedings to streamline data organisation, enhance evidence preparation, and strengthen advocacy.

Discussions about the use of AI in arbitration proceedings generally start with a consideration of the risk involved. This discussion was no different. However, a key point in this discussion was that, at least for present purposes, AI ought to be used as a facilitative tool and not an answer in itself.

Becoming AI literate requires an understanding of how AI systems work. The building blocks behind Large Language Models (“LLMs”)—used in most of the top 50 AI web products—are data, computational resources, and algorithms. Ethical and legal risks attach to each of these aspects. For example, who owns the data? Does the data consist of confidential or sensitive information? Are there copyright concerns relating to the use of the data? Other concerns relate to, for example, data leakage or data bias. In the context of algorithms, the issue most often referred to is that of AI hallucinations. The list of ethical and legal risks goes on. What resources, then, can arbitrators and party representatives use to become AI literate?

DeepLex, a free resource, is one of several to consider. It aims to track current AI legislation, litigation, and emerging issues to provide legal professionals with a better understanding of the AI landscape.

With that in mind, the conversation moved to a demonstration of TrialView. TrialView aims to facilitate evidence management and analysis in arbitration proceedings. There is no doubt that arbitration professionals, with the advent of technology, often find themselves drowning in evidence. Few disputed cases nowadays seem to be confined to a single lever-arch file of evidence.

TrialView’s platform allows the user to upload a vast array of data and structure it in a methodical manner, including scope for uploading pleadings, disclosure, and transcripts, among others. All of this is arranged chronologically. The platform can then provide a summarised chronology of events, a detailed chronology of events, and summaries of a particular witness’s evidence. It can further analyse inconsistencies in witness evidence, search documentation thematically, and provide a summary of a particular issue in a matter, for example, differing witnesses’ accounts of a particular event, and much more. This is, of course, very useful when dealing with thousands of pages of evidence. Tools like TrialView will be most useful if used from the inception of proceedings, particularly in arbitration proceedings where all the evidence is frontloaded. The ability to search documents in multiple languages was also discussed.

The practical application of such programs will have to be tested over time. Concerns arising from the use of AI that can deliver these kinds of results include, among others, the relevance of the results and any bias on the part of the AI involved. Risks such as bias can eventually be reduced by, for example, using multiple LLMs to cross-check the results of one AI against another. Concerns relating to the confidentiality of information stored on AI systems can be reduced by isolating cases, making storage available in a secured cloud, and ensuring that the content of a case is not used to train the AI. Of course, publicly available AI systems which do not have these attributes should be avoided.

 

Final Remarks

Incidentally, there was a session offering preliminary insights into the 2025 international arbitration survey of Queen Mary University of London in partnership with White & Case. In that session, covered by another post, several figures underscored the relevance of AI in arbitration. The findings showed that saving time is the biggest driver for AI use. In parallel, the main obstacle to the use of AI appears to be the risk of AI errors and bias. An overwhelming majority is not in favour of the use of AI by arbitrators to draft awards and decisions. A staggering 90% of respondents expect to use AI for research, data analytics, and document review.

Clearly, while many involved in arbitration are alive to the benefits of AI in arbitration proceedings, some, perhaps more circumspect than others, remain very slow in its uptake. For those who do use it, AI ought not to be construed as a replacement for the user’s skill and judgment; it can only facilitate efficient and cost-effective case management. The user ultimately remains responsible for their own AI literacy and for the product of their work, and confirming the accuracy and relevance of results is paramount. Understanding the limits of AI can therefore lead to an appreciation of both its risks and its benefits.

 

This post is part of Kluwer Arbitration Blog’s coverage of Paris Arbitration Week 2025.

