Artificial intelligence (AI) has been used in international arbitration for some time. Before the launch of ChatGPT in November 2022, there seemed to be an implicit consensus that neither soft law nor regulation on the use of AI in arbitration was required. AI tools appeared to be regarded as just one category of LegalTech tools that parties and arbitrators are free to use without having to disclose such use to the other parties or the arbitral tribunal. Discussions and arbitration literature on AI tended to revolve around efficiency gains and the futuristic idea of robot arbitrators rather than the implications and risks that AI may pose to the integrity of the arbitral process. The benefits of AI tools in terms of capacity and efficiency are significant; AI is expected to have a transformational effect on the legal profession, and international arbitration is no exception. Current applications of AI in international arbitration include tools used to identify relevant documents, generate and edit text, and translate, interpret and transcribe. Future applications of AI in arbitration are virtually limitless, with many use cases in the works.

Since the launch of ChatGPT, arbitration practitioners have engaged in discussions and shared their experiences of using AI. The arbitration community is now assessing the implications of the use of AI in arbitration while also identifying any risks that may need to be addressed.

BCLP are currently canvassing views on these and other issues in their Annual International Arbitration Survey 2023: AI in IA – The Rise of Machine Learning. To take the survey, click here.


Identifying potential risks

The implications of using AI tools in arbitration largely depend on the capacity in which an individual or firm acts in the proceedings and the context in which a particular tool is used. For example, a party may be free to use an AI tool to gather relevant documents it might want to rely on in the arbitration without disclosing such use to its counterparty. On the other hand, the same party might have to disclose that it is using the same tool when complying with a document request or an order issued by the arbitral tribunal. Similarly, parties may find it acceptable for an arbitrator to use a generative AI tool to draft the procedural history of an arbitral award but refuse to consent to the arbitrator using such a tool to prepare the reasons for her decision on the merits of the dispute.

Whether a party may use an AI tool in an arbitration will also depend on whether the other parties in the proceedings have been made aware of the intention to use it and, if consent is withheld, whether the arbitral tribunal has permitted the use. Further, the use of an AI tool may be conditioned upon the disclosure of certain parameters for transparency purposes. For example, another party in the proceedings may wish to obtain information about the data on which the algorithm in question was or will be trained. Such information may be necessary to assess how the algorithm may affect the evidence and, possibly, the outcome of the case.

The use of AI in arbitration has multiple potential implications. The following sections provide a high-level overview of some of them.


Impact on the arbitration procedure

The use by one or more parties of AI in an arbitration may give rise to procedural issues. For example, an inadequate translation by a machine translation tool may affect the evidence on which the dispute is decided and, with it, the integrity of the proceedings. Similarly, generative AI tools may create procedural issues if used by fact or expert witnesses in the preparation of statements or reports. Predictive coding is another example of how the use of AI may affect the integrity of the arbitral process. Predictive coding – also known as Technology Assisted Review (TAR) – is a machine learning tool in which an algorithm identifies documents likely to be relevant. Predictive coding is used regularly in the English courts under strict conditions, including as to transparency. Transparency on parameters is important from a procedural standpoint, since other parties to the proceedings may need access to sufficient information to satisfy themselves that the technology is being used appropriately and securely. (For more information on predictive coding and its application in international arbitration, see here.)
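To make the transparency point concrete, the sketch below shows the basic mechanics of predictive coding: a classifier is trained on a seed set of lawyer-coded documents and then ranks the unreviewed corpus by predicted relevance. It is a minimal illustration only, written in Python and assuming the scikit-learn library; the documents and labels are invented, and real e-discovery platforms add active learning, sampling-based validation and richer features. Choices such as the composition of the seed set and the relevance cut-off are precisely the kinds of parameters that other parties may wish to scrutinise.

```python
# Illustrative sketch of a predictive coding (TAR) workflow.
# Assumes scikit-learn; documents and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: documents a lawyer has already reviewed and coded.
seed_docs = [
    "Board minutes approving the disputed share transfer",
    "Weekly cafeteria menu for head office",
    "Email chain negotiating the price adjustment clause",
    "Staff newsletter announcing the summer party",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant, 0 = not relevant

# Unreviewed corpus that the algorithm will rank for human review.
unreviewed = [
    "Draft amendment to the share purchase agreement",
    "Invitation to the annual charity bake sale",
]

# Train a simple text classifier on the lawyer-coded seed set.
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Score the unreviewed documents; likely-relevant ones surface first.
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```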

To the extent that the use of AI in arbitration may affect due process and fairness, it may also give rise to enforcement issues. As the regulatory landscape evolves, lawmakers may prohibit the use of AI in the context of the administration of justice. Any AI regulation would add another layer of rules that arbitrators and counsel would need to consider to ensure that proceedings are conducted in accordance with the law applicable at the place(s) of enforcement (in addition to the law of the seat) and that any ensuing award remains enforceable.


Use of AI by arbitrators

Another risk associated with the use of AI in arbitration relates to its use by arbitrators and arbitral tribunals. The use of AI by adjudicators raises questions as to its impact on the administration of justice and the rule of law; this is an issue that the current draft of the EU AI regulation seeks to address. The draft regulation classifies certain AI systems intended to be used in the context of the administration of justice as “high-risk”. Though the text does not refer directly to arbitration, it does refer to tools intended to be used by a judicial authority to research, interpret, and apply the law to a concrete set of facts, and to tools intended to be used “in a similar way in alternative dispute resolution”. The references to a judicial authority and to alternative dispute resolution appear to suggest that certain AI tools intended to be used by arbitrators as part of their mandate would fall within the definition of “high-risk” AI systems and therefore be covered by the regulation.

A further consideration relates to the nature of an arbitrator’s mandate. Arbitrators are appointed by virtue of the parties’ agreement to arbitrate rather than pursuant to state powers. They are appointed by name, for their specific characteristics, to conduct the proceedings and determine the dispute, either alone or collegially as part of a three-member tribunal. An arbitrator’s mandate is therefore a personal one (intuitu personae), which means that it must be fulfilled personally, without being delegated in whole or in part to someone else, and that any delegation that does take place must be disclosed to the parties. An example of such delegation by an arbitral tribunal is the use of tribunal secretaries, a practice that is regulated in arbitration. The intuitu personae nature of an arbitrator’s mandate implies that the arbitrator ought not to delegate parts of their mission to an AI system or software. Arbitrators will therefore need to determine which AI tools cannot be reconciled with their mandate.


AI and ethics

AI also gives rise to ethical issues for counsel in terms of their duties toward both clients and the arbitral tribunal. With respect to their duties vis-à-vis clients, lawyers may be required to disclose the use of AI when giving legal advice and representing clients in arbitral proceedings. Further, in some circumstances, the use of AI may conflict with other duties towards clients; it may, for instance, constitute a breach of the duty of confidentiality.

AI use is also likely to affect the duties of counsel towards the tribunal. One commonly accepted ethical duty in arbitration is that counsel may not misrepresent the law or the facts of the case. Therefore, the use of AI tools to perform legal research and answer factual enquiries will require qualified lawyers to verify any material produced by such tools to ensure accountability and compliance with ethical duties.

Regulators such as bar and lawyers’ associations around the world are considering the need for new guidance on AI use. As counsel in international arbitration are typically bound by their ethical rules irrespective of the place of arbitration or the substantive law of the dispute, any AI-related guidance or rules issued by their regulatory authority will apply to them irrespective of whether opposing counsel is bound by the same rules.

Following recent developments in generative AI, some courts have already issued practice notes requiring counsel to disclose the use of AI in the preparation of materials filed with the court (see, e.g., the practice note from the Court of King’s Bench of Manitoba in Canada). Parties who wish to see more transparency over the use of AI in arbitration may want to encourage tribunals to issue similar directions.


Conclusion

The potential implications and risks associated with the use of AI in arbitration give rise to the question of how such risks should be managed to preserve the integrity of the arbitral process while embracing the benefits of AI, particularly in terms of efficiency and competitiveness. Depending on their nature, some AI risks may need to be addressed on an ad hoc basis, while others may require a systemic solution, either at the institutional level or through guidance or soft law.

Please share your views on these and other issues by taking part in BCLP’s Annual International Arbitration Survey 2023: AI in IA – The Rise of Machine Learning. The survey can be accessed here.

The author is very grateful to Rhea Patel, Trainee at BCLP, for her assistance with this article.

