On March 14, 2024, as part of the 3rd Annual California International Arbitration Week, the Silicon Valley Arbitration and Mediation Center (SVAMC) sponsored a roundtable discussion about the recently proposed SVAMC Guidelines on the Use of AI in Arbitration and the SVAMC AI Task Force’s efforts to build consensus and engagement among the global ADR community on this emerging issue. Gary Benton (Founder & Director, Silicon Valley Arbitration & Mediation Center) moderated the discussion which featured Orlando Cabrera (Hogan Lovells, Senior Associate and SVAMC AI Task Force Drafting Committee Member), Sarah Reynolds (Managing Partner, Goldman Ismail), and Svetlana Gitman (Division Vice President, American Arbitration Association – International Center for Dispute Resolution, Chicago and Los Angeles).

 

Use of AI in International Arbitration

The moderator opened the discussion by introducing the efforts of California Arbitration (CalArb) to promote international arbitration in California and SVAMC's efforts to support the global technology ADR community. He emphasized SVAMC's work in drafting the first-ever guidelines of their kind, setting standards for the responsible use of AI in ADR, and the importance of these guidelines in building consensus around the regulation of AI in ADR within the broader global community.

 

Navigating the Complexities of AI in Arbitration: From Evolution to Regulation

The roundtable discussion began by tracing AI's evolution from its inception in the 1950s to its current capabilities, which include generating images and videos – a particularly popular use on social media. The speakers also highlighted AI's capabilities in the legal domain, such as drafting pleadings, summaries, speeches, and oral arguments. Despite those advancements, AI still suffers from several limitations, such as "hallucinations," where AI produces plausible-sounding but incorrect outputs, and issues related to privacy, confidentiality, and susceptibility to misuse due to its learning mechanism. Discussions also covered the issues of algorithmic bias and the "black box" problem, where the reasoning behind AI's conclusions remains unknown.

It was noted that, given the widespread use of AI across all aspects of daily life, and its potentially serious shortcomings, arbitrators began to ask parties for complete disclosure of AI usage in arbitral proceedings. However, as parties discovered, this task is complicated by the fact that many AI tools are so seamlessly integrated into daily activities that they are not immediately recognized as AI applications (e.g., search engines and online translators).

The discussion then highlighted the SVAMC's efforts to draft guidelines addressing these issues. The first guideline the SVAMC drafted places the responsibility on ADR practitioners to understand what AI is and how it works, while the second requires practitioners not to submit confidential information to any AI tool without prior authorization.

 

Advancing ADR Efficiency through AI Integration

The panel made a case for integrating AI into ADR, emphasizing its potential to make ADR processes faster, more efficient, and more cost-effective, and to enhance fairness, particularly for self-represented individuals and parties in smaller, non-complex disputes.

The discussion covered the AAA-ICDR's initiative in this regard, which encourages its employees to use ChatGPT carefully and with an emphasis on confidentiality. To mitigate the abovementioned risks while leveraging AI's benefits, the AAA-ICDR has introduced AI tools designed to improve the efficiency of ADR processes. Among these innovations, particular enthusiasm was expressed for an 'AI Generated Scheduling Product' that dramatically shortens the time to generate scheduling orders from hours or days to mere seconds. Two other tools currently offered to parties were also discussed: a 'Free AI Clause Builder' that helps in drafting dispute resolution clauses, and a 'ChatBot' that assists self-represented individuals in filing their cases online. Furthermore, an AI Lab has been set up, dedicated to developing tools and processes for the responsible application of AI in ADR.

 

Establishing Responsible Guidelines for AI Use in Arbitration

The goal of creating consensus on responsible AI use among the global ADR community was discussed. The process for drafting the Guidelines on the Use of Artificial Intelligence in Arbitration was officially announced in July 2023, with a dedicated drafting subcommittee of the AI Task Force comprising renowned arbitration experts. These guidelines will serve as a point of reference for arbitrators. They are designed to apply to all participants in arbitration proceedings, and SVAMC is currently accepting feedback and suggestions on these guidelines here.

There are seven SVAMC guidelines that provide a framework for the use of AI in arbitration. The speakers briefly mentioned the first two guidelines to set up the rest of the discussion. Below, we discuss the seven guidelines more closely.

Guideline 1 mandates that all participants using AI tools in arbitration proceedings should understand the intended uses of these tools and adjust their usage accordingly. It also requires users to make reasonable efforts to comprehend each AI tool’s limitations, biases, and associated risks and take steps to mitigate them where possible.

Guideline 2 emphasizes the necessity for participants to align their use of AI tools with the responsibility to protect confidential information, including privileged or protected data. Participants must not submit any confidential information to an AI tool without thorough vetting and proper authorization, especially when using third-party AI solutions, which requires a careful review of their data handling and retention policies. This guideline highlights the importance for parties and their representatives of being aware of the data and confidentiality risks associated with using publicly available AI tools in arbitration contexts. It closely aligns with the principles of secure use of technology emphasized in the CIArb Framework Guideline on the Use of Technology in International Arbitration.

Guideline 3 is still under development, and it addresses whether and to what extent parties and arbitrators should disclose AI usage in arbitration. While recognizing that not all AI applications necessitate disclosure, the guideline allows the participants to require disclosure or certification where using Generative AI tools significantly impacts the proceedings. The subcommittee offered two options to assess the need for disclosure. Option A considers factors such as the reliance on AI outputs instead of primary sources and the material impact of AI on proceedings. Option B proposes a more stringent approach, requiring proactive disclosure when AI-generated documents are materially relied upon or when AI has a significant effect on the outcome of the case.

Guideline 4 emphasizes the importance of diligence and competence when using AI tools in arbitration. It requires that parties and their representatives ensure AI-generated submissions are accurate and meet ethical and professional standards. The guideline highlights the responsibility of legal professionals to verify AI outputs for factual and legal correctness, cautioning that failure to do so could result in errors that compromise the arbitration process. This guideline holds parties accountable for any uncorrected errors in AI-assisted work products.

Guideline 5 stresses the need to uphold the integrity of arbitration proceedings and evidence in the face of potential misuse of AI. It prohibits parties from using AI to fabricate or manipulate evidence, emphasizing the existing duty to prevent fraudulent behavior in arbitration. The commentary highlights the risks posed by advanced AI technologies and emphasizes the need for vigilance to preserve fairness and integrity. If a violation occurs, the arbitral tribunal may take measures such as excluding tampered evidence and considering the conduct of the infringing party’s representatives when assigning costs.

Guideline 6 prevents arbitrators from using AI in a way that amounts to a delegation of their mandate, emphasizing the arbitrator's ultimate decision-making function. It requires arbitrators to carefully verify AI-generated outputs to ensure they reflect an independent and accurate analysis. This ensures that the essential human element in decision-making remains intact.

Guideline 7 advises against using AI-generated information beyond the scope of the arbitration record without disclosure to other parties and the opportunity to respond. It emphasizes the arbitrator’s responsibility to critically assess AI outputs, ensuring they do not mistakenly take them as definitive sources without independent verification of their reliability.

The panel discussion closed with the remark that these guidelines are still in draft form and open to suggestions. SVAMC is seeking feedback from the arbitration community, reflecting its commitment to refining a set of uniform guidelines that ensure AI is used responsibly and enhances the integrity of ADR processes.

 

Concluding Remarks

The integration of AI is an exciting opportunity for increased efficiency and accessibility within the ADR community. This roundtable discussion explained the current steps being taken to ensure responsible use of AI in ADR, emphasizing the need to uphold confidentiality, adequate disclosure, and competence. SVAMC guidelines create the opportunity for innovative practices while maintaining these essential responsibilities, building consensus with the global ADR community regarding the regulation and implementation of AI. They also represent the community’s proactive approach to maintaining professionalism in this growing industry. As discussions and guidelines continue to evolve, it is clear that the ADR community is committed to using AI responsibly, safeguarding the integrity of legal proceedings, and advancing dispute resolution in the age of AI.

 

Editors’ Update: On April 30, 2024, the Silicon Valley Arbitration and Mediation Center published the first edition of the Guidelines on the Use of Artificial Intelligence (AI) in International Arbitration, available here.

Rabia Batool and Scarlet Tunney are members of Young California Arbitration (Young CalArb) which assisted in preparing this article. Young CalArb believes that the future of international arbitration in California lies in the hands of our promising young professionals. Its mission is to provide a dynamic platform that nurtures their growth and strengthens their network within the arbitration community. Young CalArb is sponsored by California Arbitration and is committed to advancing the cause of California Arbitration in developing and promoting California as a hub for international arbitration.


________________________

To make sure you do not miss out on regular updates from the Kluwer Arbitration Blog, please subscribe here. To submit a proposal for a blog post, please consult our Editorial Guidelines.
