The prospect of integrating generative artificial intelligence (AI) into the adjudicatory decision-making process is not as distant as one might think. In February 2023, it was reported that a Colombian judge used ChatGPT in deciding a health insurance dispute: he asked it both decisional and research questions and integrated the responses into his judgment. Another example comes from a court in Pakistan. One company has also explored the use of ChatGPT in mediation; ChatGPT appeared as a party in a recent mock arbitration hearing based on the Vis Moot case; and Jus Mundi just announced its Jus AI. If judges and mediators can rely on ChatGPT, why not arbitrators?

 

Shadow Arbitrator: The Use of ChatGPT in the Decision-Making Process

Most of the major arbitration rules expressly recognise the parties’ prerogative to decide upon the procedure and limits of the tribunal’s mandate. For example, the UNCITRAL Model Law on International Commercial Arbitration provides that the “parties are free to agree on the procedure to be followed by the arbitral tribunal in conducting the proceedings”. Hypothetically, if the parties agree, the tribunal could use generative AI to fulfil its mandate. However, depending on the lex arbitri, such agreement might be against public policy and therefore unenforceable.

Notably, some arbitral rules underline that arbitrators have a duty to ensure the efficiency1) of the procedure (LCIA Rules, art. 14.1) and recognise the tribunal’s power to “emplo[y] technology to enhance the efficiency and expeditious conduct of the arbitration” (LCIA Rules, art. 14.6(iii)), and to “adopt suitable procedures for the conduct of the arbitration in order to avoid unnecessary delay or expense, having regard to (…) the effective use of technology” (HKIAC Rules, art. 13.1).

Given that the tribunal’s discretion to tailor the proceedings is subject to the lex arbitri, the use of AI in arbitration proceedings requires an assessment of the applicable national laws. Some may prohibit the use of AI in the decision-making process or subject it to specific requirements. One example is the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment (2019), which guides policy makers in the use of AI in the judiciary based on the following principles: respect for fundamental rights, non-discrimination, quality and security, transparency, and “under user control”. Conversely, some countries explicitly encourage the use of technology in the judicial process, like Colombia with its 2022 law, which provides that public lawyers should use “technology” (in the broad sense) where it will enhance efficiency.

One fruitful use case for the delegation of decisional powers to AI in arbitration could be the apportionment of damages, where the parties’ expert reports are often miles apart. AI could be used to build an evaluation mechanism that reconciles the competing figures, along the lines sketched below.
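To make this concrete, here is a minimal, hypothetical sketch of such an “evaluation mechanism”: the tribunal feeds both experts’ figures per head of damages to a language model and asks for a reasoned reconciliation. The model name, prompt wording, and figures are all illustrative assumptions, not a tested methodology, and any output would be a draft for the tribunal to scrutinise, never to adopt wholesale.

```python
# Hypothetical sketch: LLM-assisted comparison of diverging expert quantum figures.
# Figures and model name are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative heads of damages with each expert's estimate (USD).
heads_of_damages = {
    "lost profits":       {"claimant_expert": 4_200_000, "respondent_expert": 1_100_000},
    "wasted expenditure": {"claimant_expert":   850_000, "respondent_expert":   600_000},
}

prompt = (
    "For each head of damages below, the parties' experts diverge. "
    "Propose a reasoned apportionment, with two sentences of reasoning per head.\n"
)
for head, figures in heads_of_damages.items():
    prompt += (
        f"- {head}: claimant's expert USD {figures['claimant_expert']:,}; "
        f"respondent's expert USD {figures['respondent_expert']:,}\n"
    )

response = client.chat.completions.create(
    model="gpt-4",  # assumption: any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # a draft for the tribunal to test, not adopt
```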

 

Can ChatGPT Replace a Tribunal Secretary? 

While using ChatGPT for decisional purposes might seem unrealistic now, using ChatGPT as a tribunal secretary might be more practicable. ChatGPT can undertake a good part of the tasks of the tribunal secretary, such as those set out in ICC Note to the Parties, paragraph 224:

“preparing for the arbitral tribunal’s review drafts of procedural orders as well as factual portions of an award, such as the summary of the proceedings, the chronology of facts, and the summary of the parties’ positions, provided that such procedural orders and portions of the award are subsequently reviewed by the arbitral tribunal itself.”

ChatGPT’s potential in this respect has been explored on the Blog here and on the ArbTech platform here. Such use could significantly reduce both the time and costs involved in preparing the non-decisional parts of an award. A minimal sketch of this workflow follows.
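As a hedged illustration of the “tribunal secretary” use case, the sketch below turns a bare chronology into a draft summary of the proceedings. The model name, prompt wording, and chronology entries are assumptions for illustration; per the ICC Note quoted above, the output would still need line-by-line review by the arbitrators themselves.

```python
# Hypothetical sketch: drafting the procedural-history section of an award
# from a chronology. All entries and the model name are illustrative.
from openai import OpenAI

client = OpenAI()

chronology = """
2023-01-10  Claimant filed its Request for Arbitration.
2023-02-14  Respondent submitted its Answer, raising a jurisdictional objection.
2023-03-01  The Tribunal issued Procedural Order No. 1 fixing the timetable.
"""  # illustrative entries only

response = client.chat.completions.create(
    model="gpt-4",  # assumption
    messages=[
        {"role": "system",
         "content": "You draft neutral procedural histories for arbitral awards."},
        {"role": "user",
         "content": f"Draft a 'Summary of the Proceedings' section in formal prose, "
                    f"strictly from these entries and nothing else:\n{chronology}"},
    ],
)
print(response.choices[0].message.content)  # subject to the tribunal's own review
```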

 

What Are the Pitfalls?

Bias

One frequently discussed issue in connection with AI is bias. There are two main opinions on this matter: i) bias is inherent in AI, since the datasets fed into AI systems are assembled by humans, who transfer their own biases into the system; or ii) AI can evolve to eliminate bias, since it will be more objective than humans.

While different studies come to different conclusions, in this post we wanted to showcase what ChatGPT had to say on the matter:

“Artificial intelligence (AI) can be biased, just like humans can be biased. AI systems are designed and trained by humans, which means that any biases or prejudices that the humans have can be inadvertently incorporated into the AI system.

Biases in AI can arise from a number of sources, including biased training data, biased algorithms, and biased decision-making processes. For example, if an AI system is trained on data that only represents a certain group of people, it may not perform well for people from other groups. (…)

It is important to note that AI bias is not always intentional, and often arises from a lack of diversity and representation in the development and training of AI systems. To mitigate bias in AI, it is important to ensure that diverse perspectives are included in the development process, and that data sets used to train AI systems are carefully chosen to be representative of the population as a whole. (…)”

Bias undoubtedly exists in human decision-making. However, we must distinguish between conscious and unconscious bias, and agree on which (human) biases are “admissible” in arbitration and which are not. The same exercise should apply to AI used in arbitration. ChatGPT is certainly not bias-free yet, and it is unclear whether any AI model ever will be.

 

Hallucination

At the moment, ChatGPT even has trouble differentiating between the use of technology and the use of AI stricto sensu. When asked whether there were any AI guidelines for arbitration, it answered that “in 2018, the HKIAC introduced a set of guidelines for the use of AI in arbitration”. In fact, the 2018 HKIAC Rules introduce the use of technology in HKIAC proceedings merely in the form of an online repository system, recognised in HKIAC Rules, art. 3.1(e), and a general permission to use technology for the purposes mentioned in HKIAC Rules, art. 13.1. This example illustrates what has been observed in other answers as well: apart from showing bias, ChatGPT often hallucinates.2) Analysts consider frequent hallucinations to be a major problem of large language model (LLM) technology. It is therefore important to verify the AI’s answers, as attorneys in New York recently learned when they were sanctioned for citing fake ChatGPT authorities in their legal brief. One possible mitigation is sketched below.
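One simple safeguard is never to accept the model’s citations at face value, but to check each claimed instrument against a curated list before relying on it. The list and extraction logic below are simplified assumptions; in practice one would verify against the institutions’ own published texts.

```python
# Hypothetical sketch: flagging model-cited instruments that cannot be verified
# against a curated list of known arbitration texts. List is illustrative only.
KNOWN_INSTRUMENTS = {
    "HKIAC Administered Arbitration Rules (2018)",
    "LCIA Arbitration Rules (2020)",
    "UNCITRAL Model Law on International Commercial Arbitration",
}

def flag_unverified(claimed_sources: list[str]) -> list[str]:
    """Return every source the model cited that is not on the curated list."""
    return [s for s in claimed_sources if s not in KNOWN_INSTRUMENTS]

# The hallucinated answer discussed above would be caught immediately:
suspect = flag_unverified(["HKIAC Guidelines on the Use of AI in Arbitration (2018)"])
print(suspect)  # -> flagged for manual verification before any reliance
```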

 

Risks to Privacy

It is fair to argue that the use of AI in arbitral proceedings comes with certain privacy concerns. In arbitration – unlike in national court proceedings – confidentiality is usually the default rule. This extra “layer of protection” makes the potential use of AI even more delicate, especially in the decision-making process. A “glitch” that leaks confidential information while using AI – as in this instance here – might result in a breach of the arbitrator’s duty of confidentiality.

Regarding data collection, the latest version of OpenAI’s Privacy Policy (as of 23 June 2023) states that OpenAI collects account information, user content, including the “inputs” used to converse with ChatGPT, communication information such as contact details, and social media information when one interacts with OpenAI’s social media. The company also collects technical information, such as log files, IP addresses, location, dates of user log-ins, device information, and more.

When using the browser plugin, ChatGPT automatically generates responses to whatever you type into a search engine, and those queries could also be collected since they could qualify as “inputs”. This means that even searches made in a separate browser, outside the ChatGPT environment (while logged into a ChatGPT profile), would not be exempt from collection. Admittedly, Google collects the same data, if not more. The difference may be that, because the user can converse with a language model, they tend to share more information, and more specific information, which, once stored and analysed, can lead to the identification of parties, cases and arbitrators. One local safeguard is sketched below.
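One partial safeguard is to pseudonymise party names and other identifiers locally before any text is sent to an external AI service. The names and patterns below are illustrative assumptions; real matter data would need a far more careful redaction pass (entity recognition, review by counsel).

```python
# Hypothetical sketch: replacing party names with neutral aliases before text
# leaves the machine. Party names here are invented for illustration.
import re

ALIASES = {
    "Acme Holdings BV": "Claimant",
    "Globex Industries Ltd": "Respondent",
}  # hypothetical parties

def pseudonymise(text: str, aliases: dict[str, str]) -> str:
    """Replace each identifier with its alias before sending text to any AI service."""
    for name, alias in aliases.items():
        text = re.sub(re.escape(name), alias, text)
    return text

safe_input = pseudonymise(
    "Acme Holdings BV claims USD 4.2m from Globex Industries Ltd.", ALIASES
)
print(safe_input)  # "Claimant claims USD 4.2m from Respondent."
```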

 

Conclusion

In conclusion, even though no one blames lawyers for using search engines for their research, major concerns emerge when it comes to AI. Nonetheless, Google plans to deploy Bard, Baidu plans to deploy Ernie Bot, and Microsoft plans to integrate Copilot into its Microsoft 365 products – all built on LLMs like ChatGPT. Will the use of all these tools raise the same reservations and concerns? If so, should their use by arbitrators be disclosed to the parties? Should there be an understanding between the parties and the tribunal from the case management conference, if not earlier? And do we need to treat ChatGPT, and perhaps every language or research AI model, as a type of technology that needs specific rules when used in arbitration? Topical questions, with difficult answers.

 

Further posts in our Arbitration Tech Toolbox series can be found here.

The content of this post is intended for educational and general information. It is not intended for any promotional purposes. Kluwer Arbitration Blog, the Editorial Board, and this post’s authors make no representation or warranty of any kind, express or implied, regarding the accuracy or completeness of any information in this post.





References

1) Using ChatGPT to perform arbitral tasks can be time-efficient, especially in investment arbitration cases, where the average time for rendering an award is 425 days. It can also be cost-efficient: there is a free tier, and as of publication a premium subscription costs US $20/month.

2) The term “AI hallucination” refers to instances where an AI model, especially a large language model like ChatGPT, generates results that are untrue and not backed by real-world events or data.

Comments

  1. Here are a few issues I see.

    If a chatbot is taking part in arbitral decision-making, how does this comport with the notion of the “composition of the arbitral authority” referred to in the New York Convention, Art. V(1)(d)?

    Can a bot ever be an “authority” within the ordinary meaning of this word?

    What degree of knowledge of the bot’s programming is required for the composition of an arbitral authority including a bot to be within the (informed consensual) agreement of the parties?

    *********** separate question

    Entering the procedural history of the arbitration into a worksheet in order to generate most of that section of the award could have been done using the computer technology of the 1970s, and can be done today using any spreadsheet. In light of that, what marginal added value does the latest AI technology contribute? For example, does it include a formula for detecting what sequence of written, verbal and non-verbal exchanges amount to a procedural ‘incident’ that should be noted separately against the routine background of [On date] [Claimant/Respondent/the Parties / the Tribunal/ the Secretary-General] [submitted / filed / exchanged / ordered / decided / issued] [its / their / a / Procedural Order No.] [attaching / requesting / determining / requiring / noting] […etc]? If not, should we invest in developing such a complex formula?
