With a cybersecurity-themed problem, this year’s Willem C. Vis International Commercial Arbitration Moot (Vis Moot) fittingly sets new rules regarding the use of artificial intelligence (AI) tools in the competition.

Introduced by an AI-generated video of Professor Christopher Kee (one of the Vis Moot’s three directors), the Vis Moot’s new rules do not outright ban AI from the competition. Instead, mirroring the sentiment of 63% of arbitration practitioners in a recent survey by our firm, BCLP, on the use of AI in international arbitration, the new rules seek to regulate the use of AI. They allow its use for research, translation and proofreading purposes, while requiring disclosure and strictly prohibiting students’ submission of AI-generated text.

This blog examines the Vis Moot’s new rules in light of current trends and industry sentiment as reflected in the results of BCLP’s survey. As AI tools are further refined and developed, they will play an increasingly significant role in the practice of law and in international arbitration. However, the potential of AI to bring further efficiency comes with its own set of risks and issues. The upcoming 31st Vis Moot will be the first “real” test of regulated uses of AI in international arbitration, and how the Vis Moot, its arbitrators and participating students address the associated risks and issues will be instructive as the international arbitration community adopts and seeks to regulate AI in practice.

The Vis Moot’s New Rules and the Adoption of AI in Practice

The Vis Moot’s new rules regulating AI consist of three parts:

First, the rules set out an exhaustive list of permitted uses of AI, which includes in particular “[u]sing AI to generate overviews or briefings on relevant factual and legal topics […] solely used for the team’s own understanding”. Second, the rules expressly prohibit the submission of AI-generated text and the training of AI tools with the Vis Moot Problem. Third, teams are required to disclose their use of AI in a form appended to the rules.

According to the Vis Moot’s explanation, these new rules represent a compromise between the academic imperative to “ensure that students continue to develop their skills” in the competition and an acknowledgement of “the potential of AI, its practical relevance in the legal profession as well as the risks associated with this technology”. In that light, the strict prohibition against the submission of AI-generated text appears to be a red line drawn to safeguard the academic nature of the competition.

Leaving aside the question of whether such a prohibition is justified on academic grounds, the arbitration community is, in practice, divided on the adoption of AI, particularly with respect to the use of AI-generated text in legal submissions. A little over half of the respondents (53%) to BCLP’s survey thought that AI tools should not be used to generate text in legal submissions. Notably, respondents are more receptive to the use of AI tools for generating factual summaries (73% do not object), for document analysis (65%), and for detecting whether AI has been used to generate text, images, documents and evidence (80%).

This apprehension about the adoption of AI for legal submissions may be connected to the community’s current lack of confidence in its understanding of how AI tools work, in particular generative-AI tools: 69% of respondents to BCLP’s survey rated their confidence at five out of ten or lower. There are also, of course, real risks in using AI tools, as illustrated by the recent cautionary tale of a New York attorney who submitted ChatGPT-generated text and fictitious case law to a court without checking them.

However, these current issues with generative AI do not appear insurmountable. Technologically speaking, it is widely expected that issues of consistency and reliability will soon be resolved, or at the very least mitigated to a level comparable to the consistency and reliability of human-generated text. Practically, such issues can already be mitigated by human review: experienced lawyers can, and plainly should, review AI-generated text, just as they already review work prepared by junior members of their teams, in compliance with their professional and ethical duties.

The significant time and cost savings, and the competitive advantage, that will result from the responsible use of AI cannot be ignored. Taken at its highest, there is obvious potential for AI tools, as they are further refined and developed, to allow a far greater level of access to justice for the general public than ever before. Even now, with their current flaws and issues, AI tools offer a real competitive advantage and are already being used in practice by well-resourced parties and their lawyers: 28% of respondents to BCLP’s survey have used ChatGPT, and significant portions have used other AI tools in a professional context for tasks such as document production, translation and text formatting. Usage in practice will most likely increase, and it appears inevitable that AI will play a significant role in international arbitration.

Foreshadowing Issues in Practice – Regulatory Difficulties and Inequality of Arms

The Vis Moot expressly permits the use of AI tools for research, specifically for the generation of “overviews or briefings on relevant factual and legal topics […] solely used for the team’s own understanding”. This choice may reflect a realisation that well-resourced teams will have a “leg up” in access to AI resources, in particular specialised AI tools that could be trained for better performance (in other words, tools that could “read in” to a case). Such a realisation would certainly explain the prohibition against training an AI tool with the Vis Moot problem, in addition to the more mundane need to protect the Vis Moot’s intellectual property. Regardless, this prohibition appears difficult to implement in practice and, in any case, may not fully mitigate the inequality of arms between students with different levels of resources, know-how and access to the latest technology.

Regulatory Difficulties

It is unclear how a prohibition against training AI tools could realistically be adhered to or enforced. It may be difficult in practice to distinguish between training an AI tool on matter-specific issues, on the one hand, and referencing relevant or comparable factual scenarios in a query or request to an AI tool, on the other. Further, AI tools with access to the internet (such as Microsoft’s new Bing, powered by ChatGPT) already have a view of the Vis Moot problem, and such tools might account for it even if not prompted by students in their queries. Yet it does not appear that the Vis Moot is seeking to ban the use of such tools.

Given the fast-developing AI landscape, there will of course be uncertainties as to where the red lines truly lie, and the above issue of training AI tools is only the tip of the iceberg. The Vis Moot seems to recognise such uncertainties, providing an “unsure” column in its AI disclosure form. Regulators will encounter similar issues in practice, and how the Vis Moot organisers, arbitrators and students address these uncertainties in the competition will be instructive.

Inequality of Arms

In any case, even if a prohibition against training AI tools can be implemented, students with access to better AI tools specifically designed for legal practice, and the requisite know-how, will have an advantage, arguably an unfair one, over students who cannot access or use even the prevalent generative-AI tools available to the general public (e.g. students in China, Syria or Venezuela, where ChatGPT is not supported). This might further exacerbate the existing inequality of arms in the Vis Moot and, depending on the extent of AI adoption in the Vis Moot and in international arbitration practice, could become a significant element perpetuating the problem.

It is not difficult to see how this inequality of arms will translate into international arbitration practice, especially during a likely transition period in which publicly available generative-AI tools remain flawed while better, more accurate, consistent and reliable specialised AI tools are developed and used by those parties and lawyers willing to invest in such resources. Large international law firms have already been using highly specialised AI tools for document production and other purposes, and are also developing specialised generative-AI tools for legal practice. It will be interesting to see how apparent the inequality of arms between students is in the competition, and how the Vis Moot organisers, arbitrators and students tackle this issue.

Conclusion

It is apparent that the Vis Moot is taking a leading stance on what remains a largely unregulated matter in international arbitration. Viewed in light of the BCLP survey, the Vis Moot’s new rules reflect the industry’s current sentiment on the use of AI, and the 31st Vis Moot will be the first “real” test of regulated uses of AI in international arbitration.

Regardless of whether these new rules prove a success, the ongoing discussions and the lessons to be learned from the competition on the use of AI in international arbitration will likely find their way into procedural orders and other forms of “soft law” guidance in 2024.

