Technology continues to transform the practice of law at a blistering pace, something obvious to all of us who suddenly find ourselves holding Zoom meetings from home in professional tops – and pyjama bottoms. However, technology’s continuing integration into the daily fabric of dispute resolution is much more than endless Zoom meetings, or even e-discovery and technology-assisted review (TAR) software. Some of the innovations, discussed below, are downright transformational. At the same time, advances in technology sometimes raise ethical and privacy concerns, bringing the inevitable – but perhaps warranted – scrutiny of legislative bodies. A few newsworthy topics can give a sense of where we are, how far we’ve come, and, most importantly, where we may be headed.
A Higher Caliber of Discussion
As the pace of innovation has gone from a trickle to a steady deluge, the quality and maturity of the discussion about technology and dispute resolution have risen with it – we are well past the days of tongue-in-cheek futurist predictions of “robolawyers.” These days, the discussion is technical and sophisticated – for example, whether “on-chain” ADR built on blockchain platforms holds practical advantages over traditional ADR when integrated into smart contracts, and if so, for which types of smart contracts. (More on this below.) ArbTech is a global online forum where such discussions take place. Co-founded by Sophie Nappert (moderator of the hugely successful OGEMID arbitration listserv), ArbTech provides a space for thoughtful debate and collaboration on the application of technology to dispute resolution. (All of the topics below are distilled from recent discussions on the ArbTech forum.)
New Kids On The Block(chain): “On-Chain” ADR
The UK Jurisdiction Taskforce (part of the LawTech Panel of the Law Society) recently published draft rules for resolving disputes arising from new technologies such as cryptoassets, cryptocurrency, smart contracts, distributed ledger technology, and fintech applications. The draft rules opened for consultation with an online event on 26 February. Notably, the draft rules envisage automatic dispute resolution processes being built into digital asset systems (known as “on-chain” dispute resolution), giving an arbitrator, in certain circumstances, the ability to implement decisions directly on a blockchain or within the system (as opposed to issuing a paper award). While the draft rules are clearly cutting-edge and forward-looking, at their heart they still rely on the tried-and-true English Arbitration Act 1996.
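To make the concept concrete, here is a minimal, purely illustrative sketch of how an escrow-style digital asset system might empower an arbitrator to enforce a ruling directly in code rather than through a paper award. It is written in Python for readability (a real system would use a smart contract language), and every name in it is hypothetical:

```python
from dataclasses import dataclass, field

# Illustrative model of "on-chain" dispute resolution: an escrow whose
# funds an appointed arbitrator can move directly, so that the award is
# enforced by code rather than on paper. All names are hypothetical.

@dataclass
class EscrowContract:
    buyer: str
    seller: str
    arbitrator: str   # the address empowered to rule on disputes
    amount: int       # escrowed funds
    disputed: bool = False
    balances: dict = field(default_factory=dict)

    def raise_dispute(self, caller: str) -> None:
        if caller not in (self.buyer, self.seller):
            raise PermissionError("only a party may raise a dispute")
        self.disputed = True

    def rule(self, caller: str, award_to_seller_pct: int) -> None:
        # Only the arbitrator may rule, and only on a live dispute:
        # the on-chain analogue of rendering and enforcing an award.
        if caller != self.arbitrator:
            raise PermissionError("only the arbitrator may rule")
        if not self.disputed:
            raise RuntimeError("no dispute pending")
        seller_share = self.amount * award_to_seller_pct // 100
        self.balances[self.seller] = seller_share
        self.balances[self.buyer] = self.amount - seller_share
        self.amount = 0

escrow = EscrowContract(buyer="0xBuyer", seller="0xSeller",
                        arbitrator="0xArb", amount=1_000)
escrow.raise_dispute("0xBuyer")
escrow.rule("0xArb", award_to_seller_pct=60)
print(escrow.balances)  # {'0xSeller': 600, '0xBuyer': 400}
```

The point is structural: the decision is not a document to be enforced after the fact, but a transaction executed within the system itself.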
AI and Litigation Financing: Like Two Peas in a Pod
Artificial intelligence (AI) is increasingly used for predictive analytics – predicting the outcome of disputes. Not surprisingly, such tools are very attractive to litigation funders, and several funders are betting on the technology to give them a competitive advantage in modelling risk in their case evaluations. To name but two: Legalist, which claims to use data from millions of court records to support case assessment for litigation funding; and Arbilex, which similarly uses AI and predictive analytics to assess arbitration cases, including the likely costs of a given case as well as the likelihood of success. The increasing use of AI to predict dispute outcomes raises several interesting issues. One is that the available dataset for court cases is generally much larger than the dataset for arbitration awards. Since the accuracy of AI is directly correlated with the size and quality of the dataset, this raises an interesting question: could court litigation gain a comparative advantage in the future, given the potentially greater predictability of its outcomes? Arbitral institutions such as the ICC are sitting on veritable goldmines of raw data; however, selling predictive services to parties might create uncomfortable optics for institutions built on a foundation of neutrality and impartiality. And yet, market pressures could well push institutions to begin mining their own awards data.
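Neither Legalist nor Arbilex discloses its models, but the general shape of such predictive analytics is familiar: train a classifier on features extracted from historical cases, then act on the predicted probability of success rather than a bare win/lose label. A minimal sketch, using entirely synthetic data and invented features, might look like this:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented features: claim size, forum's historical grant rate, time
# pending, number of related prior filings. Real tools are trained on
# millions of court records, and accuracy scales with that dataset.
X = rng.random((1000, 4))
y = (X[:, 1] + 0.3 * X[:, 0] + rng.normal(0, 0.2, 1000) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# A funder would price its investment off the predicted win probability.
win_probability = model.predict_proba(X_test)[:, 1]
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The final line is the commercial crux: a probability of success, however rough, can be plugged directly into a funder’s risk and pricing models.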
ODR: Online Dispute Re(v)olution?
There has also been much innovation in online dispute resolution (ODR) platforms – no doubt boosted by the pandemic. Perhaps none has garnered as much attention and discussion as Kleros, which uses blockchain technology to create a decentralized arbitration process that relies on crowdsourced adjudicatory expertise. Very reductively speaking, the Kleros process assigns jurors to cases (jurors sign up online and are remunerated for their services) and incorporates a point system inspired by the jury selection system of ancient Greece and underpinned by game theory. Jurors are rewarded for deciding cases “coherently” (that is, in line with the final outcome), creating financial incentives for correct adjudication. (A fascinating, in-depth guide to Kleros can be found here.) The result, conceptually, is a decentralized adjudication process in which anyone can sign up to be a juror, but which nonetheless aims to arrive at correct decisions.
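The incentive mechanism at the heart of Kleros can be sketched in a few lines. The toy model below shows stakes flowing from jurors who voted against the final outcome to those who voted with it; it is deliberately simplified, omitting the real protocol’s PNK token economics, hidden votes, and appeal rounds:

```python
# Toy model of Kleros-style "coherence" incentives: jurors stake funds,
# vote, and the stakes of jurors who voted against the final outcome
# are redistributed to those who voted with it.

def settle_round(votes: dict[str, str], stakes: dict[str, float]) -> dict[str, float]:
    # The outcome is whatever a plurality of jurors voted for.
    tally: dict[str, int] = {}
    for choice in votes.values():
        tally[choice] = tally.get(choice, 0) + 1
    outcome = max(tally, key=tally.get)

    coherent = [j for j, v in votes.items() if v == outcome]
    incoherent = [j for j, v in votes.items() if v != outcome]

    # Incoherent jurors forfeit their stake, split among coherent ones.
    pot = sum(stakes[j] for j in incoherent)
    payouts = {j: 0.0 for j in votes}
    for j in coherent:
        payouts[j] = stakes[j] + pot / len(coherent)
    return payouts

# Example: carol votes against the eventual outcome and loses her stake.
print(settle_round(
    votes={"alice": "refund", "bob": "refund", "carol": "no refund"},
    stakes={"alice": 10.0, "bob": 10.0, "carol": 10.0},
))  # {'alice': 15.0, 'bob': 15.0, 'carol': 0.0}
```

Game theory does the rest: if each juror expects the others to vote honestly, voting honestly is also each juror’s best financial strategy.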
DeFi: Financial Services on the Ethereum Blockchain
Moving on to the FinTech world (but with an obvious impact on our dispute resolution world), the Economic Research Division of the St. Louis Federal Reserve recently published an article by Prof. Fabian Schär providing an in-depth analysis of decentralized finance (DeFi), its potential, and its risks. DeFi refers to an alternative financial infrastructure built on top of the Ethereum blockchain, using smart contracts to create protocols that replicate existing financial services in a more open, interoperable, and transparent manner. Potential applications include decentralized exchanges, decentralized debt markets, blockchain derivatives, and on-chain asset management protocols. The advantage of DeFi is that it does not rely on intermediaries and centralized institutions (which, depending on whom you ask, operate opaquely, are vulnerable to fraud, and require users to trust the institution). Instead, DeFi is based on open protocols and decentralized applications, where agreements are enforced by code and transactions are executed in a secure and verifiable manner. The U.S. Federal Reserve’s interest in DeFi may well augur significant disruption in the financial services space, with an equally significant impact on dispute resolution.
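To give a flavour of how little machinery some of these services need, many decentralized exchanges rest on a “constant product” automated market maker: a contract holding two token reserves whose product must stay constant across every trade. A toy version (omitting fees, slippage protection, and liquidity-provider shares) follows:

```python
# Toy constant-product market maker (x * y = k), the core mechanism of
# many decentralized exchanges. Illustrative only: real DeFi protocols
# add trading fees, slippage limits, and liquidity-provider accounting.

class ConstantProductPool:
    def __init__(self, reserve_a: float, reserve_b: float):
        self.reserve_a = reserve_a
        self.reserve_b = reserve_b

    def swap_a_for_b(self, amount_a: float) -> float:
        # Invariant: reserve_a * reserve_b is unchanged by the trade,
        # so the price moves against large orders. Enforced by code,
        # with no intermediary setting or vetting the price.
        k = self.reserve_a * self.reserve_b
        self.reserve_a += amount_a
        amount_b_out = self.reserve_b - k / self.reserve_a
        self.reserve_b -= amount_b_out
        return amount_b_out

pool = ConstantProductPool(reserve_a=1_000.0, reserve_b=1_000.0)
print(f"100 A buys {pool.swap_a_for_b(100.0):.2f} B")  # ~90.91 B
```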
Read the AI Label Carefully
Of course, the exciting innovations at the intersection of technology and the administration of justice are tempered by a number of ethical concerns, revolving broadly around privacy and bias/discrimination.
In this respect, Europe leads the way. At the end of 2020, the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe published a study on the establishment of a certification mechanism for AI tools and services used in the fields of justice and the judiciary. The study begins to implement the CEPEJ Charter on the use of AI in judicial systems and their environment, adopted in late 2018. Broadly, the CEPEJ proposes certification and labeling criteria for AI tools based on the principles outlined in the Charter: (1) the principle of respect for fundamental rights; (2) the principle of non-discrimination; (3) the principle of quality and security (with regard to the processing of judicial decisions and data, using certified sources and intangible data in a secure technological environment); (4) the principle of transparency, impartiality, and fairness; and (5) the principle of “under user control” (ensuring users are informed actors and in control of their choices). The proposed CEPEJ certification requirements will likely affect a number of “Legal Tech” areas, such as case law search engines, online dispute resolution, predictive analysis, automated legal drafting, and so on.
The Monster Lurking Within: Embedded Bias in AI
The risk of bias in AI was highlighted at the end of 2020, when Timnit Gebru, then co-lead of Google’s Ethical AI team, was fired over a research paper highlighting bias in large language models (AI trained on vast amounts of text) – a technology at the core of Google’s search business. Ms. Gebru is a pioneer in AI ethics research and co-authored a groundbreaking paper showing that facial recognition software is less accurate at identifying women and people of color, largely because the data it was trained on consisted predominantly of images of white men. Ms. Gebru’s more recent paper focused on bias in large language models, noting that they are trained on text pulled from the Internet, which contains racist, sexist, and otherwise abusive language that ends up in the training data. As an MIT Technology Review article reviewing Ms. Gebru’s paper put it, “an AI model taught to view racist language as normal is obviously bad,” and “a methodology that relies on datasets too large to document is … inherently risky… [and] perpetuates harm without recourse.” As AI makes its way into the dispute resolution realm, we must guard against the many inherent biases that can hide in large datasets.
Indeed, hidden biases may have already found their way into the administration of justice. Many readers may be aware of a controversial program in the U.S. called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). COMPAS is used by certain U.S. courts to assess a defendant’s likelihood of recidivism. In 2016, a defendant challenged the State of Wisconsin’s use of COMPAS, arguing that it violated his right to due process because it prevented him from challenging the scientific validity and accuracy of the test. The COMPAS algorithm uses a “violent recidivism risk scale” calculated from age; age at first arrest; history of violence; vocational education level; history of noncompliance; and a weight multiplier “determined by the strength of the item’s relationship to person offense recidivism observed in study data.” While the algorithm makes no reference to ethnicity or race, a highly publicized study by ProPublica analyzed COMPAS assessments and concluded that the algorithm was biased against African Americans: “Blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend… [COMPAS] makes the opposite mistake among whites: they are much more likely than blacks to be labeled lower-risk but go on to commit other crimes.” (Emphasis added.) COMPAS’s parent company strongly rebutted this claim, but one very problematic point remains: the company refuses to release its proprietary software, making it impossible for defendants and third parties to challenge the accuracy of the algorithm. The issues raised in the COMPAS saga have obvious implications for international arbitration and other forms of dispute resolution as AI is increasingly integrated into these processes.
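Because the software is closed, any concrete picture of the algorithm is necessarily hypothetical. The sketch below shows the general form of a weighted risk score built from the factor types COMPAS’s documentation lists; the weights are invented, which is precisely the problem, since without the real ones a defendant cannot test the scale used against him:

```python
# Purely illustrative weighted risk score using the factor types the
# COMPAS documentation describes. The actual model and its weights are
# proprietary, so every number below is invented.

WEIGHTS = {
    "age": -0.05,                  # hypothetical weight
    "age_at_first_arrest": -0.04,
    "history_of_violence": 0.8,
    "vocational_education": -0.3,
    "history_of_noncompliance": 0.6,
}

def risk_score(factors: dict[str, float]) -> float:
    # A linear combination of factors. Note that race is never an
    # input, yet bias can still enter through correlated variables
    # (arrest history, neighbourhood, employment, and so on).
    return sum(WEIGHTS[name] * value for name, value in factors.items())

print(risk_score({
    "age": 24,
    "age_at_first_arrest": 17,
    "history_of_violence": 2,
    "vocational_education": 0,
    "history_of_noncompliance": 3,
}))  # prints roughly 1.52
```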
Technology and the Future of Justice
These stories only scratch the surface of the vast range of innovations affecting the dispute resolution world. Our dispute resolution community must stay abreast of these developments and, hopefully, steer them towards a principled and ethical application of technology, rather than taking a reactive approach that would allow an unprincipled incursion of technology into our dispute resolution regime. Circling back to ArbTech, these goals are embedded in the forum’s DNA and form part of its Mission Statement. The ArbTech community is composed of legal practitioners, developers, engineers, and academics who discuss, debate, and collaborate on bleeding-edge tech innovation in dispute resolution and “the Future of Justice.” ArbTech is currently open to new participants.