Introduction

At the latest ODR Forum, held on 29-31 October 2019 in Williamsburg, Virginia, Dr Anyu Lee presented on China’s vision of online dispute resolution (“ODR”). He discussed how far China has progressed in developing artificial intelligence (“AI”) tools for online courts, arbitration and mediation. He also described the potential of AI in resolving disputes and, in particular, suggested that cross-border, small-value disputes, which are difficult to resolve at present, could in the near future be resolved smoothly and efficiently through AI.

Dr Lee concluded his presentation by arguing that the only way forward is to have these small-value cross-border cases decided by robojudges/roboarbitrators/robomediators, and to have their resolutions enforced through a social credit system. According to Dr Lee, in the near future the first advanced robots will be able to speak multiple languages, know the laws of different jurisdictions and analyze a large volume of court decisions, which will enable them to render correct and consistent decisions. He further suggested that judges, arbitrators and mediators should start training their own robots in order to compete with other robots in the near future.

 

A robojudge cannot be fair

While I agree with most of what Dr Lee presented, I disagree with his conclusions. Why? Because our concept of fair justice is inseparable from human ethics.

At present, and in the foreseeable future, machines will not be able to act ethically. Ethical questions are key to the success of any arbitration institution designing and implementing online processes. No matter how (cost-)efficient online arbitration can be, designers of such online processes need to focus on general principles of fair trial, on the ethical principles of ODR (such as accessibility and informed participation), increasingly on the evolving ethics of AI (e.g. transparency, non-discrimination and user control), and on the ethics of IT development (e.g. accountability and data agency).

Further, the quality of AI is determined by the quality of the algorithms behind it, and by the quality and amount of data available for mathematical analysis. Algorithms must be “trained” on data, and in this process the algorithms and models adapt substantially. It is the amount and the quality of data that determine the performance of AI modules. Individual arbitrators or arbitration courts are unlikely to have at their disposal enough data to train their robots adequately.
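To illustrate why the volume of data matters, consider the deliberately simplified, hypothetical sketch below of how a dispute-outcome model might be trained. The library (scikit-learn), the toy case summaries and the labels are my own assumptions, not a description of any system mentioned in this post; the point is only that a handful of past cases, which is all an individual arbitrator could typically contribute, cannot “train a robot” adequately.

```python
# Purely illustrative sketch: a toy "dispute outcome" classifier.
# Assumption: past cases are reduced to short text summaries with known outcomes.
# With only a few examples, the model's output reflects the quirks of those few
# cases rather than anything resembling consistent justice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of hypothetical, anonymized case summaries (far too few in practice).
case_summaries = [
    "buyer paid, goods never shipped, seller unresponsive",
    "goods arrived damaged, seller offered partial refund",
    "buyer claims non-delivery, tracking shows item delivered",
    "seller shipped wrong item, refused return",
]
outcomes = ["refund", "refund", "no_refund", "refund"]  # training labels

# Turn the text into features and fit a simple statistical model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(case_summaries, outcomes)

# The model will happily emit a prediction for any new dispute,
# but with four training cases the prediction is essentially a guess.
print(model.predict(["buyer says parcel lost, no tracking information"]))
```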

 

Centralized approach to online justice

In China, as well as in other countries, most small-value dispute resolution data are currently controlled by the largest corporations (e.g. Alibaba, Amazon, eBay and similar) and by governments. If today’s trends in China continue, current and future robojudges will most probably not be in the hands of judges, but will be controlled by the largest corporations and by states. In China, robojudges have already been deployed to supervise human judges. This, however, is not the future.

I call this approach the centralized approach to online justice. Yes, robojudges are efficient and can make decisions in a consistent manner. Yet the independence of human judges, arbitrators and mediators has always been key to our justice systems. Will a human arbitrator remain independent if he or she is directly supervised by a roboarbitrator, in the same way that judges are supervised by robojudges in China? I doubt it.

This is why I read with great interest a previous post on online arbitration in China by Chen Zhi, which discussed the plans of the Guangzhou Arbitration Commission (“GZAC”) to implement AI in its already developed online procedures, which correspond to current best ODR practices.

I understand from the blog post that GZAC has yet to implement roboarbitrators. Perhaps an alternative path of development for arbitration institutions is the following: AI can be remarkably helpful in assisting with settlement, and the future for the resolution of small-value cases may therefore lie in AI-assisted negotiation rather than in an arbitral award rendered by a roboarbitrator.

Further, arbitration institutions will probably be viewed positively if, in the future, they offer users the option to exchange anonymized judicial data with online arbitration courts all over the world, in order to keep the quality of their AI-enhanced services to the parties at the highest level. Such an option is, in my view, connected with the decentralized approach to online justice.

 

Decentralized approach to online justice

The decentralized approach to online justice is based on wide, global sharing of anonymized judicial data between all stakeholders. In the decentralized approach, judicial data will initially be controlled by their originators, i.e. the parties (people) and the judiciary, under state regulatory supervision. The data will be shared widely with developers of AI modules and other stakeholders, who will compete among themselves to provide services to the parties, courts, arbitration institutions and mediation centers.

The decentralized approach to online justice does not, however, exist today. For it to happen, we need to develop globally acceptable open ODR schemes, specifications and standards. Such open standards will serve not only to share anonymized data, but also to give people global access to different types of dispute resolution methods and to share ODR know-how widely. Public and private ODR platforms built on such standards will be able to cooperate on commercially agreed terms and complement each other’s services to people.
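Since no such open standard exists today, the sketch below is a purely hypothetical illustration of what a minimal anonymized case record for cross-platform exchange might look like. The field names, value bands and the simple hashing step are my own assumptions, not part of any existing or proposed specification.

```python
# Hypothetical illustration only: no open ODR data standard exists today,
# and these field names are assumptions, not an actual specification.
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass
class AnonymizedCaseRecord:
    """A minimal, made-up shape for an anonymized judicial data record."""
    case_type: str          # e.g. "consumer", "cross-border sale of goods"
    jurisdiction: str       # e.g. "SG", "NL", "HK"
    claim_value_band: str   # banded amounts instead of exact figures
    outcome: str            # e.g. "settled", "award_claimant", "award_respondent"
    party_token: str        # irreversible token in place of identifying data

def anonymize_party(name: str) -> str:
    # A one-way hash stands in here for real anonymization techniques,
    # which a future open standard would have to define properly.
    return hashlib.sha256(name.encode("utf-8")).hexdigest()[:12]

record = AnonymizedCaseRecord(
    case_type="cross-border sale of goods",
    jurisdiction="SG",
    claim_value_band="under_5000_usd",
    outcome="settled",
    party_token=anonymize_party("Example Trading Ltd"),
)

# Serialized for exchange between platforms, courts and AI developers.
print(json.dumps(asdict(record), indent=2))
```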

 

Existing use of AI in dispute resolution

Outside of Mainland China, AI has already been implemented effectively in the first online courts and arbitration forums. For example, the State Courts in Singapore have developed data repositories for both civil and criminal cases, a step which is a prerequisite for successful AI application. Further, in the Netherlands, AI research has, inter alia, focused on automated processing of judicial texts. In addition, the newly developed Electronic Business Related Arbitration and Mediation (eBRAM) Centre in Hong Kong was put into operation at the end of this year, reportedly using AI for automatic translation and text processing, among other tasks.

In general, AI has so far been used as a smart research assistant, organizing supporting material such as past cases or relevant literature. AI can also organize case files efficiently, or even write first drafts of decisions or awards based on a judge’s or arbitrator’s notes. Moreover, AI can assist parties in navigating the online justice system of a particular country: it can readily respond to the majority of user requests, recommend a course of action, or identify the types of documents relevant to the parties’ dispute. A simple illustration of this kind of assistance follows below.
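Purely as an illustration (and not a description of any existing system), the sketch below maps a user’s free-text request to a suggested next step using simple keyword rules; the categories, keywords and document lists are invented, and real assistants would rely on far more sophisticated language processing.

```python
# Illustrative sketch of a rule-based "navigation assistant" for an ODR platform.
# The rules, actions and document lists are invented for this example.
RULES = {
    "refund": ("Start a payment dispute", ["proof of payment", "order confirmation"]),
    "delivery": ("Open a non-delivery claim", ["tracking number", "shipping receipt"]),
    "damaged": ("Open a defective-goods claim", ["photos of the goods", "invoice"]),
}

def recommend(user_request: str) -> tuple[str, list[str]]:
    """Return a suggested action and the documents likely to be relevant."""
    text = user_request.lower()
    for keyword, (action, documents) in RULES.items():
        if keyword in text:
            return action, documents
    return "Contact a case officer", []  # fall back to human assistance

action, documents = recommend("My parcel never arrived and I want a refund")
print(action, documents)
```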

 

Future of AI in dispute resolution

I predict that in most countries, the centralized and decentralized approaches will co-exist.

Even at present, certain centralized and decentralized features interact. For example, according to Professor Ethan Katsh’s opening address at the ODR Forum 2019, eBay has been resolving over 60 million disputes per year, mostly without human intervention. Further, it would appear that the eBay-like centralized approach may be complemented in the US in the near future by decentralized open ODR environments.

In our own interest, we must always be watchful and thoughtful in dealing with AI, particularly in the justice sector. I predict that new governance will be necessary, and will indeed emerge very soon, particularly in countries where the decentralized approach is implemented. Such governance will likely take the form of self-certification and compliance controls by operators of the first private open ODR environments, voluntary and/or mandatory third-party certification of compliance with open ODR standards, and new regulations addressing AI ethical issues as we come to recognize them. In countries implementing the centralized approach, on the other hand, the need for new governance will be less pressing.

Apart from the AI-related aspects of online arbitration, there are other important issues to resolve when designing an online arbitration platform, including “evergreens” such as form requirements and the enforceability of e-arbitration awards, as mentioned in a previous post.

Other interesting features of online arbitration in the future include blockchains and smart contracts (see previous post on such features).

 

Conclusion

A robojudge cannot be fair because a robojudge is not a human being, and people are not machines. At present and in the foreseeable future, a robojudge will be an advanced statistical machine operating based on past data. AI will make a significant positive impact on achieving justice, but it may also have a dark side.

If we allow robojudges to preside over human decision-making, I am afraid that, step by step, people may begin to act like machines, a risk we need to prevent. One may ask: how so? And why is such a development negative? Some may hope that robojudges will be less biased than human judges; others, myself included, would feel vulnerable with centrally controlled robojudges in place.

I see a positive future for online arbitration. Technology as a tool to assist human decision-making may enable individuals to have an increased say in how their disputes are resolved and may help minimize issues of bias. In that case, users will have more options to settle their disputes and to exercise better control over their data, rather than relying solely on robojudges.

 

For further information, see Designing Online Courts: The Future of Justice Is Open to All

