On 19 October 2023, the ICC Young Arbitration & ADR Forum (YAAF) hosted a workshop titled “Navigating the Frontiers of Artificial Intelligence in Arbitration” in Zurich.

The evolution of Artificial Intelligence (AI) from a mere buzzword to a technological force has been remarkably swift. A recent study by Goldman Sachs reveals that, on average, 25% of all work-related tasks could be automated through AI. In the legal domain, this figure rises to a striking 44%, signaling a paradigm shift.

Despite these signals, the integration of AI into the daily workflow of arbitration professionals remains relatively modest. A 2021 study by White & Case and Queen Mary University of London indicated that 49% of arbitration practitioners never or rarely employ AI tools such as data analytics or technology-assisted document review. According to another study, the figures remained at a similar level in 2023.

Against this backdrop, the workshop aimed to discern the main areas where AI could significantly impact international arbitration proceedings.


Document Review, Fact-Finding, and Document Production

Yves Barben (Senior Manager at EY in Zurich) and Andreas Wehowsky (Senior Associate at Schellenberg Wittmer in Zurich) kicked off the workshop by exploring the role of AI in document review, fact-finding, and document production – three critical stages of arbitration.

Electronic data review has become commonplace, offering greater efficiency than manual methods, especially for large volumes of data. Before the review, data must be collected and processed (e.g., conversion of documents into a readable format, de-duplication) and then uploaded onto a review platform such as Relativity. The review itself takes various forms, namely manual review, technology-assisted review (TAR), or generative AI-enabled review.
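The de-duplication step of this pre-review processing can be illustrated with a minimal sketch. The Python snippet below is a hypothetical illustration, not the workflow of Relativity or any actual review platform: it removes exact duplicates by hashing each document's normalized text, so that byte-identical copies collected from several custodians are reviewed only once.

```python
import hashlib

def deduplicate(documents):
    """Return the documents with exact duplicates removed.

    Each document is reduced to a SHA-256 hash of its normalized
    (trimmed, lower-cased) text; documents whose hash has already
    been seen are dropped from the review set.
    """
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

# Toy batch: the two invoice copies differ only in case and whitespace.
batch = ["Invoice 2023-001", "invoice 2023-001  ", "Board minutes, 3 May"]
print(len(deduplicate(batch)))  # 2
```

Real e-discovery pipelines go considerably further (near-duplicate detection, email threading), but the principle of collapsing identical content before human review is the same.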

AI extends beyond mere document review. For example, it can facilitate document production by identifying relevant documents, redacting sensitive information, and filtering out privileged or confidential documents unfit for production.
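As a simple illustration of the redaction step, the snippet below masks email addresses and IBAN-style account numbers before documents leave the review set. It is a sketch with toy patterns invented for this post, not a production-grade tool or the method of any named platform.

```python
import re

# Illustrative patterns only — a real production exercise would rely on
# reviewed, jurisdiction-specific rules, not these toy regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text):
    """Replace each sensitive token with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Contact j.doe@example.com, account CH9300762011623852957."))
# Contact [REDACTED EMAIL], account [REDACTED IBAN].
```

In practice, rule-based masking of this kind is combined with human review, since over- or under-redaction carries its own procedural risks.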

However, the use of AI necessitates ethical and procedural considerations. Potential concerns arise with regard to (deep-)fakes in videos, photos, and audio recordings. The prevailing institutional rules and lex arbitri lack explicit frameworks for the application of AI in document review and for the consequences of potential misuse of AI in arbitration proceedings. The Silicon Valley Arbitration & Mediation Center addressed this lacuna with the release of draft Guidelines on Artificial Intelligence in Arbitration (SVAMC AI Guidelines) on 31 August 2023.

Establishing a mutual understanding of the use of AI among different stakeholders in arbitration proceedings is essential to level the playing field. At the outset, the arbitral tribunal and parties are advised to establish parameters governing the application of AI in the proceedings, including an obligation to disclose AI use, as set forth in SVAMC AI Guideline 3.

SVAMC AI Guideline 1 underscores the users’ obligation to make “reasonable efforts” to comprehend and mitigate AI tool limitations, biases, and risks. This entails a thorough review process to ensure the accuracy of AI-generated submissions. To this end, SVAMC AI Guideline 4 attributes accountability for “uncorrected errors or inaccuracies in any output produced by an AI tool” to the parties and their representatives. This is reinforced by SVAMC AI Guideline 5, which prohibits the use of AI to falsify evidence, compromise the authenticity of evidence, or otherwise mislead the arbitral tribunal and/or opposing parties.

Nonetheless, several procedural risks remain unaddressed, such as potential omission or intentional concealment of documents. These issues may affect fundamental principles such as the right to be heard and, in extreme cases, violate procedural rights.


Selection and Appointment of Arbitrators

Following the initial panel, Sneha Vijayan (Co-founder of resolution and DLA in India) and Juliette Asso (Counsel at Lalive in Geneva) discussed whether AI could facilitate the selection and appointment of arbitrators.

While AI use in jury selection has been widely discussed, its application to the selection and appointment of arbitrators poses challenges. One such challenge arises from divergent objectives among stakeholders: institutional appointments prioritize fair, impartial, and robust arbitrations, whereas parties typically seek arbitrators sympathetic to their perspectives and inclined to rule in their favor.

Another obstacle lies in the considerable costs of training AI models, which could run into millions of US dollars; the panelists were not aware of any such development at the time of the event. To reduce the overall costs, institutions and/or frequent users of arbitration could join forces to develop such a tool. It remains to be seen, however, whether institutions or parties would be willing to invest substantial sums in tools facilitating arbitrator selection and appointment, and, in particular, whether they would cooperate to train a shared AI model.

Financial hurdles aside, the availability of relevant data for arbitrator selection, such as prior decisions, personal opinions, and biases, is limited, as this information is typically not publicly accessible. Moreover, AI models rely on statistical likelihood derived from available data, in particular the prevalent composition of past arbitral tribunals, and may therefore inadvertently perpetuate stereotypes, such as those characterizing arbitrators as “male, pale, and stale”.

SVAMC AI Guidelines, in principle, permit the use of AI for researching potential arbitrators. However, they caution against the sole reliance on AI for arbitrator selection “without human input or without assessing the AI tool’s selection critically and independently or controlling for biases and other limitations” (see SVAMC AI Guideline 1).



Decision-Making

The final panel, featuring Yannic Kilcher (Chief Technology Officer at DeepJudge in Zurich) and Nino Sievi (Partner at Nater Dallafior in Zurich), explored the implications of AI on decision-making in international arbitration.

Decision-making is front and center of every dispute, and AI can support arbitrators in this key step. However, it is essential to understand that AI lacks cognitive thinking; its output is based on statistical likelihood derived from training data. Consequently, AI is most suitable when extensive case law is available, and factual and legal scenarios are comparable and repetitive (e.g., product liability, false advertising or insider trading with many injured parties).

The panel drew attention to the use of AI tools in German courts. Notably, the Regional Court of Frankfurt tested FraUKe (Frankfurter Urteils-Konfigurator Elektronisch), a tool designed to assist in decision-making in mass proceedings concerning air passenger rights claims. Similarly, the Higher Regional Court of Stuttgart implemented OLGA (Oberlandesgerichtsassistent) to classify appeal submissions and adapt template decisions to the specific facts in mass proceedings related to the diesel emissions scandal. While the introduction of these tools was generally met with approval, a position paper resulting from a 2022 conference of several presidents of higher courts in Germany implies that AI is not poised to substitute the human decision-making process in the immediate future. One concern is that judges may become less discerning over time owing to their reliance on such tools. Moreover, scholars call for disclosure of the underlying algorithm to enable a thorough examination of the pertinent factors where automated decision-making processes are applied.

The panel identified case prediction or assessment as another application for AI in arbitration. This differs from actual decision-making because it mainly assists the parties and counsel before the award is rendered. Existing AI tools, such as Lex Machina and Solomonic, offer insights into extensive judgment databases to refine counsel's case assessments. For example, Lex Machina claims to support the analysis of courts, judges, opposing counsel, and parties, to generate complaint summaries, and to anticipate the behaviours and outcomes that different legal strategies might produce. Solomonic, a UK-based litigation analytics platform, promises to monitor recent developments through personalized alerts. It highlights crucial relationships, compiles insights into smart summaries, and enables swift, confidential access to court documents. Additionally, it is said to assist with trial preparation and to predict outcomes based on past behaviour, a significant indicator of future actions.
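The statistical character of such predictions can be demonstrated with a deliberately simplified sketch. The Python snippet below is hypothetical, with invented claim types and outcomes, and does not represent the method of Lex Machina, Solomonic, or any other platform: the "model" merely returns the most frequent outcome among comparable past cases.

```python
from collections import Counter

# Hypothetical case history: (claim type, outcome) pairs invented
# purely for illustration.
history = [
    ("delay claim", "claimant wins"),
    ("delay claim", "claimant wins"),
    ("delay claim", "respondent wins"),
    ("defect claim", "respondent wins"),
]

def predict(claim_type):
    """Return the most common historical outcome for this claim type."""
    outcomes = Counter(o for c, o in history if c == claim_type)
    if not outcomes:
        return "no comparable cases"
    return outcomes.most_common(1)[0][0]

print(predict("delay claim"))  # claimant wins (2 of 3 comparable cases)
```

A predictor of this kind can only reproduce past patterns; it has no way to weigh a novel legal argument on its merits, which is precisely the limitation such tools face when pushed from case assessment toward actual decision-making.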

In any case, the challenges of using AI for decision-making in arbitration merit careful consideration. An inherent limitation stems from the fact that AI tools rely on statistical data and thus lack the capacity for legal reasoning and for rendering a decision tailored to the individual circumstances. The decision is not based on reasonableness but on statistical likelihood; in other words, a decision rendered by AI is simply the most likely outcome. This calls into question the extent to which AI can accommodate the specific circumstances of an individual case. The reliance on historical data may also restrict the evolution of the law, as novel legal arguments will rarely succeed.

Further concerns pertain to confidentiality, one of the main reasons parties choose arbitration over litigation. Parties or arbitrators using AI tools must ensure the confidentiality of the proceedings, while also scrutinizing the origin and quality of the historical training data to guard against bias and the replication of past patterns.



Conclusion

While the integration of AI has the potential to streamline arbitrations, saving time and money, it cannot replace lawyers and arbitrators for the time being. This limitation stems from several factors, including the expertise and human judgement essential for addressing unique circumstances and legal questions, as well as ethical considerations and the costs associated with developing an AI model.

The use of AI in arbitration requires a careful assessment of both its benefits and challenges. Parties and arbitral tribunals should consider the potential application of AI early in the proceedings and establish a shared approach within the procedural framework. This underscores the importance of general principles, such as those outlined in the SVAMC AI Guidelines.


