Introduction
ChatGPT is short for “Chat Generative Pre-Trained Transformer”.  It is an artificially intelligent text-generation bot that can hold a conversation.  It does so using a neural network architecture known as a “Transformer”, which generates natural-language responses to user input.  There are various user platforms, for example, https://openai.com/blog/chatgpt.  Using ChatGPT is as simple as any human-to-human virtual chat: click the link, type, receive an answer, and respond.

I recently had the pleasure of discussing this new development on a soon-to-be-released podcast co-hosted by Delos Dispute Resolution and Orrick.  Adith Haridas (co-founder of the digital disruptor AirPMO) provided insights from a chatbot developer’s perspective.  Tunde Oyewole (counsel at Orrick) explored whether it could and should be used in advocacy.  In this blog post, I take a step back and ask about the ethics of the whole exercise – in particular whether a chatbot is something the legal community could and should embrace.


How can ChatGPT help the legal community?
Artificial intelligence is already being used in the law, for example, to try to forecast litigation outcomes and to compute billable hours automatically.  ChatGPT offers a new twist because it can generate content when prompted.  When I typed in “please give me an essay on recent EU pharmaceutical regulation”, it responded with a coherent eight-paragraph answer within seconds.  That is clearly a potentially useful research tool, and one less expensive than a lawyer researching the issue afresh.  Further, concerns about the accuracy of the work product can be addressed by requesting sources, as I discuss below.  If applied to a closed and cyber-secure body of evidence,1) it could also help lawyers create arguments based on that data bank, and ultimately win a dispute.

Moreover, it could be used by judges and tribunals.  They could type, for example, “Did the claimant report delays on a project?” and “What did the defendant do in response?”  That could speed up the judge or arbitrator’s review of the evidentiary material and hence allow them to deliver their decisions faster.  These are examples of AI supporting the law.  Further instances are covered in Leonardo Souza-McMurtrie’s post in this Blog.  A related question is whether ChatGPT could eventually go further and take over the role of a lawyer, judge, or tribunal.
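For readers curious what such a closed-evidence query might look like in practice, the sketch below illustrates the idea in Python.  It is a minimal illustration only: it assumes access to OpenAI’s chat completions API through the openai library, and the exhibit folder, the load_approved_exhibits helper and the model name are hypothetical placeholders rather than features of any existing legal tool.

```python
# Illustrative sketch only: querying a closed, party-approved set of exhibits
# with a chat model, so the bot answers from that record rather than from the
# open internet.  The exhibit folder, helper and model name are hypothetical.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def load_approved_exhibits(folder: str) -> str:
    """Concatenate the mutually approved exhibits into one context string."""
    parts = []
    for doc in sorted(Path(folder).glob("*.txt")):
        parts.append(f"--- {doc.name} ---\n{doc.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)


def ask_the_record(question: str, folder: str = "approved_exhibits") -> str:
    """Ask a question and instruct the model to answer only from the exhibits."""
    evidence = load_approved_exhibits(folder)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer only from the exhibits provided. "
                    "Cite the exhibit name for every statement. "
                    "If the exhibits do not answer the question, say so."
                ),
            },
            {"role": "user", "content": f"{evidence}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_the_record("Did the claimant report delays on the project?"))
```

In a real matter the agreed record would almost certainly exceed the model’s context window, so a retrieval step would be needed before anything like this could be used; the point of the sketch is simply to show the principle of confining the bot to a mutually approved data set.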


Does it matter what AI can do?
Asking what a chatbot can do for the legal community is not, however, the end of the matter.  The engineer’s approach is “if it can be developed, we develop it”.  The ethicist (and lawyer) would ask, “even if we can develop and use it, should we?”  That is an important question.

I took my sons to the London Design Museum a few days ago.  In one exhibit, they had juxtaposed on the wall a machine gun and a cast for a broken leg.  The description was that both were invented during the Second World War and have wide application given the ease of manufacturing and use, but one was designed to kill and maim, and the other to cure.  Is the designer, or the user, responsible for the end results?  The mere ability to do something doesn’t make it a good thing to do.  The atomic bomb is a prime example.

On a related note, it often strikes me that the legal community can be apprehensive about a new technological frontier.  Is that us being backward, or are we instinctively protecting something sacrosanct – a line that we should not cross?  I discuss this next.


Is a bot worse than the human mind?
When I first had a chat with ChatGPT, it offered the following disclaimer:

“Limitations:

  • May occasionally generate incorrect information.
  • May occasionally produce harmful instructions or biased content.
  • Limited knowledge of world and events after 2021.”

Do these limitations make ChatGPT unique?  I have seen many legal briefs and awards that suffer from these shortcomings: the law or the facts are incorrect, the judgment or award is biased, or the drafter has ignored crucial evidence.  Since a chatbot and the human mind suffer from the same shortcomings, these limitations may not be a reason to reject ChatGPT in the sphere of international disputes.

The word “Pre-Trained” in “Chat Generative Pre-Trained Transformer” is important.  Our brains, too, are “pre-trained”.  Law is all about pre-training the brain to apply certain thought processes.  When I moved from reading philosophy, politics and economics to reading law, it took time for me to get used to the legal way of thinking.  That is because the law deploys its own systematic methodology that differs from the more discursive approach adopted by the social sciences.  The developers of ChatGPT describe human language as “unstructured” and the bot’s language as “structured”, but I would argue that the legal way of thinking is very much structured, too.

What is more, the law involves mastery of a vast data set.  This is AI’s core area of excellence, whereas human memory can fail.  Perhaps ChatGPT would therefore be better at mastering data sets and would miss fewer things.  For example, studies show that artificial intelligence might be better at predicting which defendants can safely be granted bail and which cannot.  That is mainly because such an analysis is statistical and is based on a record of past facts.


Can a bot deliver justice?
But what happens when we do not have a meaningful precedent from which to extrapolate predictions?  For example, when the parties are in dispute for the first time, with scant evidence of past conduct, and the judge or arbitrator needs to come to a just decision?  A limited data set constrains the accuracy of AI’s work – and indeed, limited data sets are presently a problem for those seeking to predict litigation outcomes.  Could ChatGPT perform the role of a judge or arbitrator in that situation?

This gets to the core of what we do as lawyers.  Looking at the Solicitors Regulation Authority Principles, the first principle is that lawyers are to “act in a way that upholds the constitutional principle of the rule of law, and the proper administration of justice”.  The question, therefore, becomes “Can ChatGPT administer justice?”

One could take a realpolitik view.  In the Republic, Plato attributes to the sophist Thrasymachus the view that “justice is the advantage of the stronger”.  In that scenario, parties embroiled in a dispute engage lawyers, who both use ChatGPT to assist them in articulating and substantiating their case.  Since ChatGPT can be trained, the party with the better-trained and more sophisticated chatbot wins.  On this “might makes right” view, justice has been delivered because the financially stronger party has won.

That is, however, not the end of the story.  Aristotle, Plato, and Rawls (who articulated perhaps the most systematic modern theory of societal justice), among others, treat justice as something that involves moral virtue.  Even in ChatGPT’s view, “justice must be moral” because it is “based on the principles of fairness and morality, which means that justice must be rooted in some kind of moral code or set of values”.  In short, “without morality, justice cannot exist”.  At the same time, the bot takes the view that “ChatGPT does not have morals as it is a computer program and is not capable of making moral decisions”.

At first glance, that seems to be the end of the story: only humans can behave morally.  If ChatGPT is correct in stating that “without morality, justice cannot exist”, only lawyers, not bots, can compellingly make a comprehensive case for justice, and only judges and tribunals can distribute it.

But that is perhaps too hasty a conclusion.  I mentioned above that ChatGPT is “pre-trained”.  My children, too, are pre-trained in ethics.  I have spent years teaching them a fairly complex, not binary, web of ethical conduct.  As a parent, I cannot say, “Never lie”.  I say “never lie (because I always find out, but also because it is wrong to lie, according to Kant) except in limited situations, such as when a dodgy person whom you do not know wants to know your name and home address, in which case run away and say nothing, or if you must say something, make something up”.  I wonder whether, if I had spent the same number of hours training ChatGPT to think ethically, it could ultimately reach the same level of sophistication in computing ethically right ways of acting.

That is quite apart from the observation that humans probably like to think they are ethically more sophisticated than they really are.  While I rarely witness fellow lawyers applying Kant’s strict principles of moral conduct, I have seen many of them act with such unconstrained self-interest as to make Thrasymachus proud.


Is it problematic to use ChatGPT in processing evidence?
On a more concrete level, lawyers must not influence the substance of the evidence, including by generating false evidence.  That principle has been adopted, inter alia, by the Solicitors Regulation Authority in paragraph 2.2 of its Code of Conduct for Solicitors.

Recently, the International Baccalaureate program accepted ChatGPT as a source in essay writing.  Matt Glanville, the IB’s head of assessment principles and practice, commented that “[w]hen AI can essentially write an essay at the touch of a button, we need our pupils to master different skills, such as understanding if the essay is any good or if it has missed the context, has used biased data or if it is lacking in creativity.  These will be far more important skills than writing an essay, so the assessment tasks we set will need to reflect this.”

In my conversations with ChatGPT, I did not get citations for any of the statements I received.  Rigorous citation is key to satisfying ourselves that a piece of evidence is truthful and reflective of its full context, and therefore key to meeting our ethical obligations as lawyers.  The good news is that one can ask ChatGPT to provide sources.

However, the problem is more fundamental.  If we do not know the evidence as arbitrators or counsel, do we know what we are missing?  In other words, how do we know what we do not know?  Many of us are wary of the idea of a tribunal secretary drafting the award for the tribunal’s review because the process of reviewing evidence is integral to the process of deciding fairly.  When the arbitrator simply reviews a draft, they will not necessarily know what evidence was excluded from the narrative.  Similarly, if ChatGPT drafts a submission for me, even if I am provided with rigorous sources that I can verify, how do I know what evidence was not woven into the submission?  There is something valuable about the very time-consuming and meticulous exercise of sitting with the evidence and reading it in chronological order.  Perhaps that painful, unglamorous evidentiary process is a part of the law that we rightly protect as sacrosanct because it is so fundamental to delivering justice.


What lies ahead
The discussion so far assumes that the resources are available to spend hours creating sound legal arguments.  That is not always the case.  We recently discussed this issue on our podcast, specifically the challenges faced by respondents who do not have legal representation.  I have encountered this when serving as a sole arbitrator, and had to rely on fundamental principles of fairness to ensure that an unrepresented respondent (who had no funds to engage counsel) understood what I was saying, and that I did not veer into acting as advocate for the respondent and so prejudice the claimant.  I wrote about this experience in a blog post, which you can find here.  Because something is better than nothing, perhaps ChatGPT could be instrumental in helping such unrepresented respondents – on which, watch this space…


Hanna Roos is the founding partner of Aavagard LLP which represents a new era of law. #ChallengeAccepted


Further posts in our Arbitration Tech Toolbox series can be found here.

The content of this post is intended for educational and general information. It is not intended for any promotional purposes. Kluwer Arbitration Blog, the Editorial Board, and this post’s author make no representation or warranty of any kind, express or implied, regarding the accuracy or completeness of any information in this post.



References
1 A closed system of evidence would eliminate the problem of intertwining evidence with “apparently random ideas” from the internet as well as a biased data set.  The data set could be mutually approved by the parties to eliminate the risk of the evidence having been altered.