There has been a lot of talk about artificial intelligence (“AI”) in international arbitration in recent years.  I vividly remember giving the keynote speech on “International Arbitration 3.0 – How Artificial Intelligence Will Change Dispute Resolution” at the Vienna Arbitration Days 2018.  At the time, people were quite skeptical about the topic, but apparently intrigued enough to select it as the best lecture of the year at the GAR Awards.  Since then, the international arbitration community has evolved, and it is now undisputed that AI systems have a significant and increasing impact on international arbitration (see e.g., Maxi Scherer, Chapter 39: Artificial Intelligence in Arbitral Decision-Making: The New Enlightenment?, in Cavinder Bull, Loretta Malintoppi, et al. (eds), ICCA Congress Series, Volume 2, pp. 683-694 (2023)).  For instance, counsel frequently employ AI tools for document review and research, and there is rising demand for these systems for transcription and translation tasks.

As AI systems continue to develop, it is also important to create a harmonized ecosystem in which AI “collaborates” effectively with arbitration practitioners – be they counsel or arbitrators.  Among the most burning questions is whether there is a need to regulate AI, either broadly or in international arbitration more specifically.  Recently, I gave the 6th Sciences Po Mayer Brown arbitration lecture on the question “Do We Need to Regulate the Use of Artificial Intelligence in International Arbitration?”  While there is burgeoning regulation in court proceedings (such as by the UK Courts and Tribunals Judiciary and the Dubai International Financial Centre (DIFC)), very little exists that applies to international arbitration.  In April 2024, the Silicon Valley Arbitration and Mediation Center published the “Guidelines on the Use of Artificial Intelligence (AI) in International Arbitration” as an attempt to propose some form of optional regulation.

On a broader level, the European Union Artificial Intelligence Act (the “Act”), a landmark piece of legislation laying down harmonised rules on artificial intelligence, was adopted by the European Parliament on 13 March 2024 and will enter into force after its publication in the EU Official Journal.  Although the Act has been described as the most comprehensive piece of legislation in the AI field, the international arbitration community has paid little, if any, attention to it, and few practitioners are aware that the Act has the potential to apply to international arbitration proceedings (but see here), and in particular to arbitrators.  This blog post discusses how the activities of arbitrators may fall within the material, personal, territorial and temporal scope of the Act.

 

Material Scope

The Act takes a risk-based approach: it classifies AI systems according to the level of risk of harm they pose, and the regulatory duties vary according to that level of risk (Recital 26).

For instance, there is a general duty of AI literacy, which means that providers and deployers of AI systems must take appropriate measures to gain the knowledge and skills to “make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause” (Recital 56).

The activities of arbitrators may be classified as “high-risk”.  Annex III, point 8(a) provides that “AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts or used in a similar way in alternative dispute resolution” (emphasis added) are to be classified as high-risk AI systems.  The reference to “alternative dispute resolution” is likely to include international arbitration.  This is confirmed by Recital 61, which provides that “AI systems intended to be used by alternative dispute resolution bodies for [the purposes of the administration of justice and democratic processes] should also be considered to be high-risk when the outcomes of the alternative dispute resolution proceedings produce legal effects for the parties.” (emphasis added).

Article 6(3) contains exceptions to the high-risk classification, namely where otherwise high-risk AI systems are used in a way that does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.  This applies to situations in which:

“(a) the AI system is intended to perform a narrow procedural task;
(b) the AI system is intended to improve the result of a previously completed human activity;
(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
(d) the AI system is intended to perform a preparatory task to an assessment.”

The circumstances in which these exceptions apply are not immediately clear from the Act.  Nor is it clear whether one can conclude from Article 6(3) that the use of AI in international arbitration falls within the high-risk category only where natural persons are concerned.

 

Personal Scope

The Act distinguishes between different regulated entities.  Providers, importers and manufacturers of AI systems bear the most stringent obligations under the Act (Articles 16, 25).  However, “deployers” of AI systems also fall within the scope of the Act.  A “deployer” is defined in Article 3(4) as “any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”  Arbitrators, as natural persons using AI systems in the course of a professional activity, thus fall within the personal scope of the Act.

Deployers of high-risk AI systems must comply with a number of regulatory obligations, such as the obligations to (i) take appropriate technical and organizational measures to ensure that the AI systems are used in accordance with their instructions for use (Article 26(1)), (ii) monitor their operation (Article 26(5)), (iii) assign human oversight to natural persons who have the necessary competence, training, authority and support (Article 26(2)), (iv) ensure that the input data is relevant and sufficiently representative (Article 26(4)), and (v) keep the logs automatically generated by the system for a period of at least six months (Article 26(6)).  In certain situations, deployers have additional duties to carry out data protection impact assessments (Article 26(9)) and to cooperate with the relevant competent authorities (Article 26(12)).  In case of non-compliance, financial and non-financial sanctions are foreseen (Article 99).

 

Territorial Scope

The Act outlines its territorial scope in Article 2.  The Act applies if the deployer of an AI system either (i) has its place of establishment or is located within the EU (Article 2(1)(b)); or (ii) has its place of establishment or is located outside the EU, but “the output produced by the AI system is used in the Union” (Article 2(1)(c)).

The application of this provision to international arbitration is not straightforward.

Concerning Article 2(b), one could argue that the place of habitual residence of an arbitrator is where she is established or located.  However, this means that in a three-member tribunal, one or two arbitrators might be covered by the Act, while the other one or two might not.  An interpretation that favours a more uniform application amongst tribunal members would be to consider the place of establishment of the tribunal (as opposed to its individual members), which would likely be determined by the seat of the arbitration.

It is even more complicated to assess in which circumstances the Act could apply under Article 2(1)(c).  The interpretative difficulty turns on the requirement that the output produced by the AI system must be “used” in the EU.  Arguably, if the arbitral tribunal has used AI systems, their output has impacted the award, which in turn has legal effects on an EU-based party.  Is the location of one of the parties in the EU thus sufficient to conclude that the “output produced by the AI system is used in the Union”?  Or is it sufficient that an award could ultimately be enforced against assets located in the EU?  If one were to answer in the affirmative, the Act could have potentially significant extraterritorial consequences: it could apply even if the seat of the arbitration is outside the EU, the arbitrators are based outside the EU, and only one of the parties is located in the EU.

 

Temporal Scope

The Act will be implemented in stages.  Most provisions related to high-risk AI systems will apply 24 months after the Act has entered into force (Article 113).

Fortunately, this means that the international arbitration community still has time to consider the extent to which the use of AI by arbitrators in international arbitration falls under the Act.  What is certain, however, is that we need to engage in the debate!

 

I wish to thank Russell Childree, Dr. Ole Jensen, Andra Ioana Curutiu, Alice Dupouy, and Alexey Schitikov, colleagues at Wilmer Cutler Pickering Hale and Dorr LLP, for their research and assistance.

