International arbitration is a prime example of the power and complexity of combined human minds. It is a marvel of human cooperation and ingenuity that strangers forgo barbarism in favour of peaceful resolution – even more so when they do it across borders, on the unlikely belief that their interests will be guarded by yet another group of strangers acting as neutrals, the arbitrators, whose decision-making depends on entirely biological cognitive processes tainted by all sorts of biases, prejudices, blind spots, and limitations (discussed here, here and here).

But it is high time that we ask ourselves: how will ChatGPT (a chatbot built on the GPT-3.5 model) impact the future of international arbitration as we know it? Will it prove to be a powerful tool for improving our navigation of legal systems, or could it lead to unintended consequences?


The Technology behind ChatGPT

ChatGPT is an Artificial Intelligence (AI) tool capable of understanding and generating human language. You ask, it answers; you command, it obeys.

At its core, ChatGPT is a neural network, a type of computer system loosely modelled on the human brain. This network is based on deep learning, a technique that allows the system to process and analyse large amounts of data and to assimilate and generate language. The more data the system is exposed to, the better it becomes at understanding and generating text.

Moreover, ChatGPT works through a mechanism called “transformer-based language modelling”, which involves analysing a large number of text inputs and using that information to probabilistically predict the next word in a sentence. As the system analyses more text, it develops a more sophisticated model of the patterns and structures of language.
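The principle can be illustrated with a deliberately simple sketch. The snippet below is not how ChatGPT works internally – the real model learns probabilities across billions of parameters rather than raw word counts – but it shows the core idea of predicting the next word from the words that came before. The corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which in a small
# corpus, then pick the most likely continuation. A transformer learns
# these probabilities from context rather than from raw counts, but the
# underlying task -- predict the next token -- is the same.
corpus = (
    "the tribunal issued a procedural order "
    "the tribunal issued an award "
    "the tribunal dismissed the claim"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("tribunal"))  # -> "issued" (seen twice, vs "dismissed" once)
```

Scaling this idea up – from counting word pairs to learning statistical patterns over enormous datasets – is what gives ChatGPT its fluency.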


Potential Uses of ChatGPT in International Arbitration

Arbitration practitioners may use ChatGPT for almost every activity involving information processing, be it researching, writing, or predicting outcomes (as discussed here) – not all of them fully ethical or desirable.

  1. Legal research and analysis: ChatGPT’s ability to understand and process vast amounts of information can be used to assist arbitrators or parties in their legal research and analysis. The system can quickly surface relevant case law, statutes, and other legal materials, allowing arbitrators to reach decisions faster.
  2. Document drafting: ChatGPT’s natural language understanding capabilities could automate the drafting of legal documents and agreements, reducing the time and resources required for these tasks. This can streamline the arbitration process and make it more efficient: procedural orders by the tribunal, summaries of the facts, and logistical communications from the parties and the institution can all be easily produced by AI. In a fraction of a second, it can organise a Redfern schedule, as we have tested ourselves.
  3. Party collaboration: ChatGPT is capable of improving the collaboration between the parties and the arbitral tribunal. For example, it can assist in the translation of legal documents, thus making it easier for parties from different jurisdictions to participate in the case. It is also capable of scanning memorials to identify the disputed and undisputed facts pleaded by the parties, organising them in tables and even rating them according to their legal strength, taking into account existing case law or witness statements.
  4. Predicting the aftermath of an award: As already mentioned above, ChatGPT can be used for predictive analysis. This function may enable arbitrators to predict the possible factual outcomes of their awards, based on historical cases and jurisprudential data. Simply put, ChatGPT can predict the chances of an award being annulled, denied enforcement, or voluntarily complied with, which, in turn, can allow arbitrators to draft their awards in such a way as to ensure their longevity.
  5. Dialectic argumentation: A party can use ChatGPT to identify weaknesses in the legal strategy of an opponent or even generate counter-arguments to an opponent’s reasoning. All that party has to do is input transcripts or the written documents submitted by its counterparty into the system and ask it to identify any inconsistencies, contradictions, or gaps.


Risks of the Use of ChatGPT in International Arbitration

  1. Text Limitations: The text may not always be legally accurate or appropriate, combining common statements found on the internet with apparently random ideas. For example, when asked to explain the concept of jurisdiction in investment arbitration, ChatGPT provided information on the formation of settlement agreements under domestic law, obviously irrelevant to the topic. In many cases, the text can be verbose and lacking in nuance or even coherence. Furthermore, using ChatGPT in other languages, as I have tested with Portuguese, results in even less reliable outputs.
  2. Bias: AI systems like ChatGPT are trained on data which may contain biases. If the data used to train the system is biased, the system itself may make decisions that are biased as well. This could have a particularly negative impact on marginalized groups.
  3. Manipulating the arbitral tribunal: A lawyer may use ChatGPT to manipulate how arbitrators decide the case. ChatGPT, being a language generation model, can analyse past decisions, articles and opinions written by the arbitrators and identify patterns in their reasoning or decision-making. ChatGPT can then be trained on the laws, legal precedents, and facts of the case in question to generate arguments to which the arbitrator might be more vulnerable – not only in their content, but also in their style – by taking advantage of any unconscious biases the model identifies. This mechanism can be used at any stage of the procedure, from the appointment and challenge of arbitrators to the choice of experts, and lawyers can adjust their pleadings ever so slightly to favour their position before the tribunal.
  4. Unethical legal tactics: ChatGPT can foster unethical legal tactics by, inter alia, generating misleading or false statements or evidence in the arbitration or at the stage of negotiations; creating fraudulent documents or contracts that could be used to deceive or defraud parties; generating persuasive or misleading arguments that could be used to influence an arbitrator; or generating legal documents or contracts that are designed to take advantage of the other party’s lack of legal knowledge.
  5. Ghost Writer: If an arbitrator uses ChatGPT to write decisions (in whole or in part), these decisions may not truly reflect the arbitrator’s own independent and impartial decision-making. Instead, they would be influenced by the training data and the biases of the language model. Moreover, if the arbitrator uses ChatGPT to write their awards, the arbitrator may not be demonstrating the necessary level of knowledge, understanding, and expertise. This could compromise the integrity of the arbitration process and call into question the validity of the award.
  6. Stylistic imitation: ChatGPT allows users to change the style of a text to that of a specific person by using a technique called “style transfer”. This technique involves training the model on a large dataset of text written by a specific person so that it can learn the unique writing style and patterns of that individual. Once the model is trained, users can input their own text and the model will “transfer” the style of the specific person to the user’s text, resulting in an output that closely mimics the writing style of the person in question. Although this is still a complex task and the accuracy of the style transfer might not be perfect, arbitrators may still use it to transform awards written by secretaries and assistants into their own, thus circumventing problems of stylistic analysis, such as the one faced in Yukos, as explored in a previous blog post, and further expanded by Daniel Behn.



ChatGPT is a powerful tool. It can analyse vast amounts of data and generate legal documents or arguments quickly. However, there are risks. One of the biggest concerns is the potential for bias, fraud, or unethical behaviour. ChatGPT raises ethical questions about the role of technology in arbitration. Will we know if the arguments of the parties or the award were generated by ChatGPT? Are we equipped to do so? What is human? Does it matter? There are more questions than answers. Lawyers like to believe that human thought is unique, that it has a spark of creativity unmatched by unconscious data processing models. I am not so sure. For instance, ChatGPT generated every single sentence in this very post, following specific (and quite detailed) prompts from the author. This may be the first, but it is certainly not the last time you will ask whether something you just read was written by a human.

[Generated using GPT 3.5]1


Further posts in our Arbitration Tech Toolbox series can be found here.

The content of this post is intended for educational and general information. It is not intended for any promotional purposes. Kluwer Arbitration Blog, the Editorial Board, and this post’s author make no representation or warranty of any kind, express or implied, regarding the accuracy or completeness of any information in this post.




1 In light of the potential ethical issues of using ChatGPT, the author retained control over all the ideas and the structure of this post. ChatGPT was used as a writing and style assistant, with little conceptual freedom. The final text was produced after more than 50 iterations between the author and ChatGPT, feeding it with information, references, and detailed style guidance. Humans made minor edits directly to the text during the revision process.
