The Role of Artificial Intelligence in Judiciaries

By Global Judicial Integrity Network, United Nations Office on Drugs and Crime

The role of Artificial Intelligence technology in contemporary jurisprudence is attracting worldwide attention. The Global Judicial Integrity Network at the United Nations Office on Drugs and Crime has begun researching this issue and will discuss it at its high-level meeting in Doha, Qatar, on 18-20 November 2019. This article provides a look at artificial intelligence through the lens of the judiciary.

Panel on the “Impact of Digitization on Integrity and Accountability”

Artificial intelligence (AI) technology has proliferated throughout many industries and is now making its way into judiciaries. For instance, the Hainan High People’s Court in China deployed an AI system that incorporates language-processing and deep-learning tools and can produce sentencing decisions based on the case law data it processes. The system helps to ensure the consistency and accuracy of judicial decisions, while reducing the time of the judgement process by over 50 per cent.[1]

It is one of many technologies employed by the Chinese judiciary to assist judges and other legal professionals in retrieving decisions, providing litigation guidance and predicting the outcome of cases.[2] In the United States, AI technology is used to predict the recidivism risk of offenders and to inform bail and sentencing decisions.[3]

AI systems’ ability to process vast amounts of data quickly and accurately makes them a valuable tool for judiciaries burdened with a high workload. They are aimed not only at increasing judicial efficiency, but also at ensuring the consistent fulfilment of judicial functions and increasing public confidence in the judiciary. Algorithms used in information and communications technology already provide benefits in terms of the quality of justice. The United Nations Development Programme reports that the reduced human interaction and the traceability allowed by the ‘e-Courts’ case management system have great potential to reduce corruption risks in the Philippines.[4] Similarly, the use of AI in the performance of more complex tasks could help to prevent manipulation of judicial decision-making, as well as monitor the consistency of case law.

As the examples of China and the Philippines demonstrate, the potential benefits of emerging technologies have increasingly encouraged judiciaries around the world to explore and use smart technology in the performance of judicial functions. Growing investment in the development of such technology, in both the private and public sectors, also suggests that we will see more smart-justice tools in the future.

However, as recent studies and policy documents show, artificial intelligence poses significant challenges for judiciaries in terms of reliability, transparency and accountability.[5] Where machine learning and predictive analysis are involved in the judicial decision-making process, there is a risk that technical tools will replace the discretionary power of judges and judicial officers, creating an accountability problem. It is therefore crucial that judges are aware of the limitations of such technology, to ensure compliance with the values of judicial integrity endorsed by the Bangalore Principles of Judicial Conduct.[6]

The Global Judicial Integrity Network addressed this issue at its launch event in April 2018, where experts and legal professionals discussed the potential impacts of AI and digitization on integrity and accountability in judiciaries.[7] The topic will also be covered at the High-Level Meeting of the Network in Doha in November 2019, to enhance public discussion and awareness of the increased adoption of new technology in the justice sector.

One challenge involved in the use of AI in judiciaries is how judges will maintain control over the judicial decision-making process as AI technology becomes increasingly involved. The Villani Report, the result of a parliamentary mission aimed at shaping France’s AI strategy, points out that judges may feel pressured to follow decisions made by AI systems for the sake of standardization, instead of applying their own discretionary powers.[8] This poses a serious risk of undermining judges’ independence, while reducing judgements to “pure statistical calculations”.[9] The concern is addressed by the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, whose guiding principle of “under user control” suggests that judicial officers should “be able to review judicial decisions and the data used to produce a result and continue not to be necessarily bound by it in the light of the specific features of that particular case.”[10]

Another challenge concerns whether the internal operations of AI systems and the data fed into them are reliable and accurate. AI produces its outcomes by processing the existing data it is given. As a 2016 policy document from the United States (U.S.) government puts it, “if the data is incomplete or biased, AI can exacerbate problems of bias.”[11] This poses a significant challenge for judicial impartiality, as judiciaries cannot render impartial decisions based upon biased AI recommendations.[12] Biased decision-making would also threaten judicial integrity and due process rights. The document therefore recommends that U.S. federal agencies conduct evidence-based verification and validation to ensure the efficacy and fairness of the technical tools that inform decisions bearing consequences for individual citizens.[13]
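To make the point concrete, the following minimal sketch is a hypothetical illustration, not a model of any actual court system or commercial risk tool: it shows how a simple risk rule built on biased historical records can flag one group far more often than another, even when the underlying behaviour of the two groups is identical. The groups, numbers and the “prior arrests” feature are all invented for the purpose of the example.

```python
import random

random.seed(0)

def make_person(group):
    # Both (hypothetical) groups have an identical true reoffending rate of 30 per cent.
    reoffends = random.random() < 0.30
    # Assumed bias: historical records contain two extra prior arrests for group "B",
    # so the feature is skewed even though behaviour is not.
    extra = 2 if group == "B" else 0
    prior_arrests = (3 if reoffends else 1) + extra + random.randint(0, 1)
    return {"group": group, "prior_arrests": prior_arrests, "reoffends": reoffends}

people = [make_person(g) for g in ("A", "B") for _ in range(5000)]

def high_risk(person):
    # A naive "risk score": flag anyone with four or more recorded prior arrests.
    return person["prior_arrests"] >= 4

for group in ("A", "B"):
    members = [p for p in people if p["group"] == group]
    flagged = [p for p in members if high_risk(p)]
    non_reoffenders = [p for p in members if not p["reoffends"]]
    false_positives = [p for p in flagged if not p["reoffends"]]
    print(
        f"group {group}: "
        f"flagged high-risk {len(flagged) / len(members):.0%}, "
        f"false-positive rate {len(false_positives) / len(non_reoffenders):.0%}"
    )
```

In this toy example the two groups reoffend at exactly the same rate, yet the group whose records carry the extra historical arrests is flagged as high-risk several times more often and suffers a much higher false-positive rate. Distortions of this kind are precisely what the evidence-based verification and validation recommended above is intended to detect.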

It should be noted that overseeing the accuracy and effectiveness of AI tools is not an easy task. As AI-based tools are often protected by the intellectual property rights of private companies, their data-processing methods are frequently not open to public evaluation. Consequently, it may be difficult to know how an AI system weighs different factors against each other to reach an assessment in an individual case. While the law in this field has yet to be developed in many jurisdictions, countries such as China, Germany, the United Kingdom and the United States have recently established AI policies and made them publicly available.[14] Although these policies do not necessarily address the risks to judiciaries specifically, they do acknowledge some of the aforementioned transparency and ethical concerns of AI and set out governments’ strategies and/or recommendations for administrative bodies and other actors on the development and use of AI.

It is also crucial to ensure that judges, lawyers and other legal professionals are informed about the potential limitations of and concerns about AI use, as they will need to use this technology and be aware of how it might come into play in the courtroom. As a positive development, some judiciaries, such as the Judiciary of England and Wales, already train their judges on the potential impacts of the use of artificial intelligence in the justice system.[15]

The Global Judicial Integrity Network will continue its efforts to promote peer learning and support activities on this topic and to facilitate judges’ access to relevant tools and resources. Increased awareness of the potential opportunities and challenges of AI will be crucial to upholding judicial integrity and accountability, as well as to the effective use of new technology in the judiciary.


[1] Yuan Shenggao, “AI-assisted sentencing speeds up cases in judicial system”, China Daily, updated 18 April 2019.

[2] Baker McKenzie, “Adoption of AI in Chinese Courts Paves the Way for Greater Efficiencies and Judicial Consistency”, 28 February 2018.

[3] Administrative Office of the United States Courts (Probation and Pretrial Services Office), An Overview of the Federal Post Conviction Risk Assessment, June 2018. Available at: https://www.uscourts.gov/; Electronic Privacy Information Center, “Algorithms in the Criminal Justice System”. Available at: https://epic.org/algorithmic-transparency/crim-justice/ (accessed on 8 August 2019).

[4] United Nations Development Programme, A Transparent and Accountable Judiciary to Deliver Justice for All (2016), p. 36.

[5] See a list of reports and policy documents in Martin Gibert, Christophe Mondin and Guillaume Chicoisne, “Montréal Declaration Responsible AI Part 2, 2018 Overview of International Recommendations for AI Ethics”, 2018. Available at: https://montrealdeclaration-responsibleai.com.

[6] United Nations, Economic and Social Council Resolution “Strengthening basic principles of judicial conduct” E/2006/INF/2/Add.1. Available at: http://bit.ly/bangalore_principles.

[7] United Nations Office on Drugs and Crime, “Session Report: Impact of digitization on integrity and accountability” (Vienna, 9-10 April 2018). Available at: http://bit.ly/impact_of_digitization.

[8] Cédric Villani, “For a Meaningful Artificial Intelligence: Towards a French and European Strategy”, a parliamentary mission from 8 September 2017 to 8 March 2018, p. 124.

[9] Council of Europe, European Commission for the Efficiency of Justice, “European ethical Charter on the use of Artificial Intelligence in judicial systems and their environment”, February 2019, p. 15.

[10] Ibid., Principle 5.

[11] United States, Executive Office of the President (Office of Science and Technology Policy), Preparing for the Future of Artificial Intelligence (Washington, D.C., 12 October 2016). Available at: http://bit.ly/future_of_AI.

[12] See fn. 6, Principle 2.

[13] See fn. 11, Recommendation 16.

[14] China, State Council, Notice of the State Council Issuing the New Generation of Artificial Intelligence Development Plan, State Council Document [2017] No. 35 (8 July 2017). Available at: http://bit.ly/new_generation_AI; Germany, Federal Ministry of Education and Research, Federal Ministry for Economic Affairs and Energy, and Federal Ministry of Labour and Social Affairs, Artificial Intelligence Strategy (November 2018). Available at: http://bit.ly/AI_strategy_germany; United Kingdom, Secretary of State for Business, Energy and Industrial Strategy by Command of Her Majesty, Government response to House of Lords Artificial Intelligence Select Committee’s Report on AI in the UK: Ready, Willing and Able?, CM 9645 (APS Group, June 2018). Available at: http://bit.ly/AI_select_committee; United States, Select Committee on Artificial Intelligence of the National Science and Technology Council, The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update (June 2019). Available at: http://bit.ly/US_AI_strategy; see also fn. 5.

[15] Courts and Tribunals Judiciary, “Lord Chief Justice sets up advisory group on Artificial Intelligence”, 4 March 2019. Available at: https://www.judiciary.uk/.
