I recently had a conversation with a friend who expressed his worries about the potential implications of artificial intelligence (“AI”), such as ChatGPT, on the future of human labor. He voiced particular concerns regarding the displacement of humans’ roles in business analytics, the field in which his expertise lies.

ChatGPT is an AI-driven chatbot that leverages natural language processing to conduct human-like conversations. The language model is adept at answering queries and can produce written materials such as emails, social media posts, and essays. It is undeniable that AI has transformed many sectors, including the legal and financial fields. Within the finance sector, there exist AI-powered tools (e.g., Comparables.AI and Bizvalue.IO) that I understand can perform a business valuation in a matter of minutes. The same task often takes valuation professionals considerably longer, sometimes days or even weeks, as forecasting corporate cash flows for a discounted cash flow analysis or selecting comparable companies for a market approach generally requires careful consideration of many macro and company-specific factors.
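The mechanics of a discounted cash flow valuation are simple to state, even though the forecasting behind it is not. The sketch below illustrates only the arithmetic; the three-year cash-flow forecast and the 12% discount rate are illustrative assumptions, not figures from any real engagement or from the AI tools mentioned above.

```python
def dcf_value(cash_flows, discount_rate):
    """Present value of a series of annual cash flows, discounted back to today."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Hypothetical forecast: cash flows for years 1 through 3.
forecast = [100.0, 110.0, 121.0]
value = dcf_value(forecast, 0.12)
```

The hard part, of course, is not the formula but the inputs: each projected cash flow and the discount rate embed judgments about growth, risk, and market conditions, which is precisely where the days or weeks of professional work go.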

Others have published posts (e.g., here and here) commenting on how ChatGPT may impact dispute resolution or international arbitration. In this post, I offer my view on how it may affect quantum experts.

Quantum experts are professionals who specialize in assessing economic damages in litigation or arbitration. Our expertise is particularly useful in complex matters in which damages are not readily discernible. A good example is a case requiring the estimation of expectation losses.

I have not yet seen an AI-performed valuation used in litigation or arbitration proceedings; however, it is not difficult to imagine contexts in which AI tools could be effectively employed. For example, before a shareholder dispute is initiated, counsel may perform a preliminary valuation with the help of AI-powered tools to assess the support – or lack thereof – for a party’s view of the claim’s value. Nonetheless, for the reasons below, I remain optimistic about the enduring value of my role as a quantum expert and am not overly concerned about being displaced by AI in the foreseeable future.

 

Human Judgment

Quantum assessments often necessitate a nuanced approach, requiring experts to exercise discretion and informed judgment. This could involve creating hypothetical counterfactual scenarios, selecting from different valuation methods, evaluating alternative data sources, and so on. These decisions require not only technical expertise but also a synthesis of professional acumen and a deep understanding of the intricacies of each case.

Despite its rapid advancements, AI is not yet capable of emulating the multi-faceted nature of human judgment, especially when it comes to creative thinking. While AI can make decisions based on predefined parameters and data inputs, it does not think per se. When asked whether it can think, ChatGPT responds:

“As a machine learning model, I don’t ‘think’ in the same way humans do. Instead, I process and generate text based on patterns in the data I was trained on. My responses are the result of complex algorithms processing that data, not conscious thought or awareness. I don’t have beliefs, emotions, consciousness, or self-awareness. I can provide information, answer questions, and generate text based on the input I receive, but I don’t have independent thoughts or feelings.”

(However, I was still surprised by ChatGPT’s identification of itself as “I”, which strikes me as a form of self-awareness. This article presents a good discussion of whether ChatGPT can think.)

In addition, quantum experts’ work must comply with professional standards and guidelines, which often entails a sensitive approach to information handling and reporting. There exists an inherent risk that the use of AI-powered tools could inadvertently breach obligations pertaining to confidentiality. It was previously reported that Samsung employees accidentally leaked confidential data when using ChatGPT.

 

Context and Counterfactuals

To determine the economic loss a party may have suffered from an alleged wrong committed by another party, one must compare: (i) the financial position the allegedly wronged party is actually in, referred to as the “actual” scenario, with (ii) the financial position the allegedly wronged party would have been in but for the wrongful act complained of – this is referred to as the “but-for” or “counterfactual” scenario. Therefore, formation of hypothetical counterfactual scenarios is a common element of a quantum exercise. To create such scenarios, quantum experts operate within complex and multi-faceted settings and often need to make a series of assumptions regarding macro factors such as economic conditions, regulatory environment, industry norms, and business-specific factors such as revenue growth, cost levels, and capital structure. It is therefore critical for quantum experts to form a profound understanding of specific contexts.
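The actual-versus-but-for comparison described above can be reduced to a simple calculation once the counterfactual assumptions are made. The sketch below is a minimal illustration of that comparison; all cash flows and the 10% discount rate are hypothetical, and a real assessment would rest on case-specific assumptions about growth, costs, and capital structure.

```python
def present_value(cash_flows, discount_rate):
    """Discount a series of annual cash flows back to today."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

rate = 0.10
but_for = [50.0, 55.0, 60.5]   # hypothetical cash flows absent the alleged wrong
actual  = [30.0, 33.0, 36.3]   # hypothetical cash flows as they actually occurred

# Economic loss = value in the but-for scenario minus value in the actual scenario.
loss = present_value(but_for, rate) - present_value(actual, rate)
```

The arithmetic is trivial; the expert judgment lies entirely in constructing the but-for cash flows, which is why the counterfactual assumptions, not the formula, are usually the battleground between opposing quantum experts.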

While AI is known for its capacity to analyze vast amounts of data, it currently lacks the capability to fully grasp the intricacies and nuances of these contexts in the way that a human expert can. AI may also struggle with unstructured data or ambiguous information that does not fit neatly into pre-programmed algorithms. Unlike AI, human experts can draw on their professional experience, intuition, and understanding of broader economic and business contexts to make sense of complex or ambiguous situations. For example, I imagine it would be difficult for AI to quantify the effect of an evolving and uncertain regulatory environment on the value of a media company in China. Regulations for the media sector in China sometimes change unpredictably, and even once regulations are in place, their interpretation and enforcement can vary, leading to different impacts on different companies. This variance is difficult for AI to quantify because AI-powered tools rely on patterns in data, and such changes do not always follow a consistent pattern.

 

Communication Skills

Quantum experts do more than crunch numbers. Efficient and effective communication lies at the heart of quantum work. Quantum experts must be able to communicate complex analyses and findings in an understandable, engaging, and persuasive manner to a non-expert audience, such as clients, legal counsel, an arbitral tribunal, a judge, or a jury.

Furthermore, during hearings or trials, the role of quantum experts takes on an added dimension. They must not only articulate their perspectives, but also engage with opposing viewpoints and respond to challenging queries with both poise and precision. This represents a human element that AI cannot yet replicate.

 

Reliability and Transparency

AI tools also likely fall short of the requirements of reliability and transparency that are necessary to sustain the legitimacy of legal proceedings. Whether in litigation or arbitration, judges and arbitrators are expected to render decisions that are methodologically reliable and transparent in their reasoning. “A statement of the reasons for a judicial decision is widely regarded to be a pre-requisite for an orderly administration of justice” (Christoph H. Schreuer, The ICSID Convention: A Commentary, p. 996). It is questionable whether a judge or arbitrator who relies on “AI quantum experts” can meet this standard of reliability and transparency.

Further, it has been widely observed since the launch of ChatGPT that large language models “hallucinate” wrong answers – a tendency that some commentators believe will persist for years to come. In the legal world, it was recently reported that a lawyer representing a client in a personal injury lawsuit in Manhattan submitted a court filing drafted with ChatGPT’s assistance, only to discover that the case citations ChatGPT had included were all bogus. Interestingly, when confronted with the fake citations, ChatGPT insisted that they were real and from reputable sources. The lawyer had a sanctions hearing in June 2023, after which the judge found that he had acted in bad faith and imposed sanctions.

AI’s reliability is also hampered by the fact that it may unintentionally incorporate biases present in its training data, leading to potentially unfair or incorrect outcomes. For example, in the U.S. criminal justice system, the use of AI tools to estimate a defendant’s likelihood of committing a future crime has been criticized as reflecting factors that are inaccurate and that seem racially biased. More recently, ChatGPT’s refusal to write a poem about Trump’s positive attributes – before proceeding to write one about Biden’s – has drawn criticism of its possible political bias. Keep in mind the adage “garbage in, garbage out”: an AI-powered tool is only as good as the data it is trained on.

The lack of reliability, compounded by the opacity of AI algorithms, can be especially problematic. Google’s CEO Sundar Pichai, in a recent interview, commented on the “mysterious” “black box” nature of Bard: “[Y]ou don’t fully understand. And you can’t quite tell why it said this, or why it got wrong.” Thus, while it may be true that AI-powered tools can produce a business valuation in a matter of minutes, the “black box” nature of AI algorithms is a significant issue in legal contexts where transparency and reliability are crucial. For example, the ICSID Convention, as construed by many tribunals, requires that an award enable a reader to follow the tribunal’s reasoning “from Point A to Point B” (Tidewater v. Venezuela, ICSID Case No. ARB/10/5, Decision on Annulment, ¶64). Can a reader follow an AI’s “black box” from input to output? Arguably, even the AI’s designer cannot.

 

Conclusion

Considering the aforementioned points, I do not view AI as a threat to the profession of quantum experts. Rather, I think it is more constructive to view it as a potential collaborator. AI tools could streamline quantum work by automating routine tasks, performing data analysis, etc. By leveraging these capabilities, quantum experts can then focus on higher-value activities that require human judgment, contextual understanding, and persuasive communication skills. Indeed, in writing this article, I used ChatGPT as an editing tool and found it highly efficient. However, I emphasize that while AI is a powerful tool with vast potential, the complex and nuanced nature of damage assessment underscores the irreplaceability of human expertise.

 

The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions, position, or policy of Berkeley Research Group, LLC or its other employees and affiliates.





One comment

  1. While I appreciate the author’s views on the importance of human nuance and interaction, this post convinces me more than anything that AI is likely to rapidly displace the work of quantum experts. All of the objections are of a nature that can be overcome, which will happen quickly as generative AI continues to rapidly evolve and improve.

    There will always be those who make mistakes with technology, like the notorious lawyer in New York who relied on ChatGPT for his submissions, but this is true of all new tech. Just think of all the people who have sent “reply all” emails with confidential or privileged information to the opposing side. It does not mean we should all go back to posting letters in envelopes until everyone learns to use email correctly.

The author also assumes that, when presented with a choice between a “nuanced” human approach and machines, parties and arbitrators will prefer the former. That’s a big assumption. Once AI systems come online with reliable models, tribunals may prefer the ability to quickly narrow points of disagreement or facilitate tribunal decisions.

    In any event, in the author’s phrase, “AI is not yet capable”, the most important word is the adverb “yet”.
