
With Increasing AI Inclusion, Can Judges Be Replaced By Such Technology?


The Honourable Chief Justice Sundaresh Menon of Singapore spoke about this and other related topics at the inaugural Singapore-India Conference on Technology, held from 13 to 14 April 2024.

Read the Keynote Speech in full


“Judicial Responsibility in the Age of Artificial Intelligence”

Saturday, 13 April 2024

The Honourable the Chief Justice Sundaresh Menon*

Supreme Court of Singapore

The Honourable Dr Justice Dhananjaya Chandrachud, Chief Justice of India
The Honourable Attorney-General
My fellow Judges, colleagues, and distinguished guests

Ladies and gentlemen

I.          Introduction

1. Good morning. It is a real pleasure for me to be back in India. As the Chief Justice observed in his opening remarks, the relationship between our respective courts, which was strong to begin with, has strengthened even further in the last few years, and I hope it will blossom even more in the years to come. Let me begin by thanking Chief Justice Chandrachud and his team for the very generous hospitality that has been extended to us. I also want to express my particular gratitude to Justice Rajiv Shakdher and my colleague Justice Aedit Abdullah, and their respective teams, for their tremendous work in putting together this inaugural Conference on Technology; and my heartfelt gratitude to those of you who have travelled here from all over the world to be a part of this Conference. My colleagues and I are delighted to be here for what promises to be the first of many fruitful exchanges between our judiciaries on how technology is transforming our operating environment.

2. I am especially pleased that this Conference not only brings together judges, but also facilitates dialogue with experts in technology and its growing interface with justice systems. This will be critical if judges are to develop a sound grasp of the technological advancements that are taking place around us and the potential threats and challenges as well as opportunities that they will bring. And a sound grasp of these developments will, I suggest, become essential for our judges because technology is evolving at a very rapid pace, with far-reaching and profound implications. This has been particularly striking over the last year and a half, as developments in generative artificial intelligence (or “AI”) have taken the world by storm, and have radically reshaped our conversations about what the societies and systems of the future will look like.

3. Mary Shelley’s timeless novel, Frankenstein, was penned in the early 1800s, but its image of the creator and his “ungovernable creation”(1) endures in the public consciousness today. Modern-day iterations of what has been termed the “Frankenstein complex”(2) have focused on the prospect of a “technological singularity”(3) – the point at which AI will have surpassed the understanding and control of humans. Some have suggested that we may one day also reach a point of “legal singularity”, where AI and machines are as good as, if not better than, human lawyers and judges at providing solutions to legal problems, if not also in applying and even writing the law.(4) Although we are not there yet, and perhaps most of us might hope that we will not ever get there, it must be borne in mind that the capabilities of AI will only continue to grow. To borrow the words of Professor Richard Susskind, today’s AI systems are “the worst [they] will ever be”; and technology’s great significance lies not in what it already is, but in what it is likely to become.(5)

4.  I want to emphasise a point here. When we think of what technology might become, we have to recognise that the significance of those developments will not lie in the notion that technology will mimic what humans have historically done or offer a cheaper, human-like alternative; rather, the significance of the ever-growing capability of technology will lie in its ability to offer solutions to the legal problems facing the world in ways that are completely different from the ways in which we lawyers and judges have been solving those problems for hundreds of years. Just think of how Google Maps has provided a completely different solution for those who used to use a paper map to find their way; or how e-mail and smartphones have transformed the way in which we communicate and access information. These transformations had nothing to do with replicating their pre-digital counterparts. Consider also Professor Susskind's example of the autonomous vehicle, which is driven not by a robot mimicking a human being, but by the deployment of data and computing power.(6)

5. It is important to understand this because the impact of technology and technological advancements means that we stand today on the cusp of seismic shifts that will affect our justice systems in ways that we had previously not even imagined. But as we explore the vast potential of AI and adapt to the changes taking place around us, we should first reflect on what must remain constant; on what we must remain anchored to. My central thesis today is that we must be guided above all by the goal of preserving and strengthening the rule of law, because that is the bedrock on which both our societies are founded, and its maintenance and protection are the “ultimate responsibility” of the judiciary.(7) This should guide how we discharge our two distinct but complementary roles – namely, our traditional adjudicative role and our systemic role, which is emerging with rapid and growing significance(8) – if we are to ensure that the rule of law is not displaced by the “rule of technology” in this age of AI.(9)

II.          The adjudicative responsibility

6.  I begin with our adjudicative role as judges, which is our traditional role – that is, to interpret and apply the law in each case in a fair and principled manner.(10) The allure of AI and the possibility of “AI judges” should not cause us to lose sight of the aspects of judging that remain, and should remain, a fundamentally human endeavour. These are the aspects of our adjudicative responsibility that must endure. But at the same time, the potentially transformative impact of AI on our dispute resolution systems cannot and should not be ignored or reversed, and it will be crucial for human judges to be able effectively to manage the use of AI within the adjudicative process. Hence, there will be aspects of the adjudicative responsibility that will evolve.

         A.          The human endeavour of judging

7.  But I want to begin with the aspects of judging that must endure. Professor Michael Sandel has suggested that perhaps the most difficult philosophical question of the era is the role that human judgment, as opposed to “smart machines”, will play “in deciding some of the most important things in life”.(11) Although Professor Sandel was speaking of the cognitive exercise of human judgment generally, this question has particular resonance for us as judges, whose very profession revolves around the exercise of judgment on matters going to the heart of how our societies are organised. These include decisions on a person’s guilt or innocence; the appropriate punishment for offenders; and the provisions to be made for the children of a broken family. Given the weight and implications of these decisions, there are aspects of both the process and the outcomes of judging that, at least in certain fields, AI should not replace, if justice is to be both done and felt to be done.

8.  Let me suggest three aspects of the process of judging that still require a human touch:

    (a) First, a judge should have empathy in engaging with the parties at various stages of the adjudicative process so that those parties feel that they have been heard,(12) and that the judge has appreciated the impact that her decision may have on the people involved in each case. This is most noticeably felt in fields like family justice, where there is often no single correct answer, and the key judicial task is to listen to the parties, diagnose the real underlying problem, and choose and prescribe a course of treatment that will enable the parties and their loved ones to achieve a measure of healing and then move forward.(13) A non-human adjudicator might be able to mimic empathy,(14) but is unlikely to be able to exercise and convey genuine empathy and social intelligence in the way a human judge can.(15)

    (b) Second, the process of judging must reflect the values of our justice system. Whereas these values will be instilled in the human adjudicators that are selected to serve as the face of justice in our societies, AI tools are not imbued with these values. In particular, there is a compelling argument for criminal adjudication to be kept within the control of humans, who generally share our moral outlook and methods of reasoning(16) – especially as criminal law is, at its core, an expression of the values that our society is committed to promoting and defending.(17)

    (c) Third, and more broadly, judging is an exercise that is intertwined with our shared humanity. As the former Chief Justice of the Federal Court of Australia, James Allsop, has argued, society’s loyalty to our justice system is engendered not only by the accuracy and consistency of our output, but by the “indefinable mixture” of the law and the judge dutifully undertaking the task of examining the dispute and applying the law with due diligence and care, in a way that recognises the dignity and humanity of the people standing before justice. This cannot be simulated by a machine, at least as far as I know.(18) It is for this reason that some have argued that litigants may have a “right to a human decision”.(19)

9.  Beyond the process of judging, I suggest that human judges and judgment may also be indispensable in producing at least some of the outcomes expected of our justice system.

    (a) First, judging often requires significant evaluative judgment – it involves balancing competing considerations, which might be accorded different weights in different contexts, and which may even be incommensurable.(20) Related to this is that judges must dispense individualised, custom-made, bespoke solutions. This requires more than just pattern detection or rule application based on existing data; instead, it calls for the exercise of discretion in applying legal principles to the facts of each new case.(21) This may on occasion call for normative reasoning on whether a particular set of circumstances should be treated as exceptional and whether it can be materially distinguished from the existing precedents.

    (b) Second, the development of legal principles may require judges to choose between several equally plausible interpretations of the law,(22) or to decide whether established rules should be updated or departed from.(23) These situations do not call for a backward-looking mechanical application of existing knowledge, but rather a conscious, reasoned and forward-looking decision on whether and how our jurisprudence should evolve, having regard to the broader societal context which the law serves. This is especially important in common law systems, where cases are the “atomistic building blocks” of the common law.(24) These words of Oliver Wendell Holmes in 1881 still ring true today: the life of the law has not been logic but experience, as it “embodies the story of a nation’s development”, and it “cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics”.(25)

         B.          Judging in the age of AI

10. These fundamentally human aspects of judging suggest that the prospect of “AI judges” replacing human judges is a distant, even a remote one. But in this age of AI, the role of the human judge will itself need to evolve. In discharging their adjudicative responsibilities, judiciaries will increasingly need to grapple with questions of how AI should, and should not, be used within the adjudicative process.

11. If used properly, AI can be a tremendously useful assistive tool that can improve the quality of decision-making and enable judges to surpass ordinary human limitations. Indeed, AI may prove to be indispensable in dealing with the increasing technical, evidential and legal complexity that we see in many categories of disputes today.(26) AI tools can help judges manage and make sense of large volumes of documents and evidence,(27) and facilitate research across vast repositories of legal precedents and authorities.(28) AI tools could also help to identify the blind spots and subconscious biases of human judges by analysing trends in their decisions.(29)

12. But these technological advancements also bring with them new pitfalls that judges must guard against, to ensure that the use of AI in the adjudicative process does not undermine the rule of law. Let me highlight three key dangers.

        i.          Proliferation of untruths

13.  The first is perhaps the one that is best known, and that is what has been referred to as AI “hallucinations” and the proliferation of untruths. Generative AI tools are potent sources of misinformation because of how quickly they can produce apparently credible output that is completely false;(30) and unlike legal professionals, such tools are not bound by professional ethical obligations or values such as honesty and integrity.(31) Judges must therefore be especially vigilant to potential misuses, or careless uses, of AI tools by lawyers and parties. This underscores the need for judiciaries to take proactive steps to manage the use of AI in court proceedings,(32) and for individual judges to be sufficiently familiar with the subject matter and the AI tools involved to be alive and alert to the possibility that something might be amiss.(33)

14.  While the danger of AI proliferating untruths is not unique to the judicial or legal professions, it is a particular concern for us because the rule of law is rooted in the pursuit of truth to achieve justice,(34) and truth is the foundation of our adjudicative work.(35) The spectre of unchecked falsehoods tainting judicial processes and decisions is therefore particularly damaging to public trust in the courts, and ultimately the integrity of our justice systems.

        ii.          Decline in transparency and accountability

15. The second key danger stems from an over-reliance on AI tools in judging. The need for judicial decisions to be reasoned and intelligible requires that judges themselves understand the AI tools that they may use, and that they are able to explain the role of those tools in the decision-making process. Otherwise, there is a real risk that aspects of judicial decision-making may become a “black box”,(36) lacking transparency and, perhaps more importantly, undermining accountability.

16. But the use of AI tools is often accompanied by an “opacity problem”, which arises because their algorithms and underlying data sets are often undisclosed or unintelligible to laypersons, such that the output they produce cannot be meaningfully challenged. Crucially, when I speak of “laypersons” here, this includes the great majority of judges. We may be well equipped to explain our legal reasoning, but our general lack of technical training and the opacity of the tools involved may impede our ability to properly interrogate these tools, and to understand, much less to precisely explain, why and how they have contributed to certain findings or conclusions.(37) As we navigate the age of AI, judges will also need to be armed with sufficient technical and domain knowledge to ensure that we understand the AI tools used and, more importantly, are conscious of their limitations.

17.  The opacity of AI tools may not only affect whether justice is seen to be done; it may even cast doubt on whether justice is in fact done. AI tools are trained on data that may carry systemic racial, ethnic or other biases, which would undermine their reliability and often lead to unjust outcomes.(38) Judges should therefore be vigilant in guarding against “automation bias”, or the propensity to treat algorithmic output as authoritative without independent verification simply because it appears to be produced by an objective or scientific process.(39)

        iii.         Abrogation of the human aspects of judging

18.  The third and broader danger posed by the use of AI is the risk that judges might, in using these tools, abrogate the fundamentally human aspects of judging that I spoke about earlier. As James Allsop has put it, “[t]he danger is not the machine becoming human; it is the human becoming the machine”.(40) AI tools may enable us to complete the tasks of locating, organising and synthesising material in a fraction of the time they would previously have taken, but that means we can and therefore should apply more thought and attention to the aspects of judging that require a human touch, even – and perhaps especially – when it might seem tempting to delegate these, too, to AI. So, beyond cultivating technological expertise in relation to the use of AI tools, judges must also renew their commitment to their professional duties and their ethical responsibilities to exercise judgment, fully and fairly, in managing both the process and the outcomes of judging in each case.

         C.          The need for guidance and governance

19. Tackling the dangers I have outlined will require sustained effort and unwavering commitment from each individual judge navigating the age of AI. But these efforts should be complemented by systemic initiatives undertaken by judiciaries to anticipate and address these dangers. In particular, there is an urgent need to develop robust AI governance frameworks and guidelines to regulate the use of AI in litigation and adjudication. Several judiciaries around the world have already published such official guidance,(41) and a particularly noteworthy example is the framework issued by the New Zealand judiciary, in the development of which Justice David Goddard played a critical role. These will help to guide both judges and lawyers through the quagmire of legal and ethical issues that may arise.

20. Platforms like this Conference, and the Meetings of Chief Justices and Judges in Charge of Technology, also provide invaluable opportunities for international judicial exchange. Many of the legal, ethical, and technical issues raised by AI are novel and difficult, and transcend jurisdictional boundaries. There is an enormous amount that we can learn from each other, especially as we are all driven by the same mission – maintaining the integrity of the judicial process, and safeguarding public trust in our institutions.

III.          The systemic responsibility

21. I have thus far spoken about the impact of AI on the discharge of our adjudicative responsibility as judges. But I want to turn to what I have referred to as our systemic role, as institutions responsible for developing and operating a justice system that delivers justice fairly, effectively and efficiently to those it exists to serve.(42) We tend to understate, if not overlook, this aspect of the judicial role, but I suggest that this systemic role is an integral component of a broader vision of the rule of law, because it is society’s shared commitment to the belief that the law should rule that brings the rule of law to life. And for this commitment to be secured and maintained, we must not only strive for excellence in our justice systems, but also ensure that these systems can be reached by those who need to avail themselves of them.(43)

22.  This is an especially critical issue because of the scale of the global access to justice deficit. I believe Professor Dirk Hartung will touch on this in his presentation tomorrow, but let me paint some broad strokes to convey a sense of its scale. In 2019, the World Justice Project estimated that 5.1 billion people around the world, or two-thirds of the world’s population, had “unmet justice needs”, mainly because they faced obstacles to resolving basic civil and criminal justice problems, or lacked the tools to protect their rights.(44) In 2021, two years later, the World Justice Project said that the pandemic had “intensif[ied] the access to justice crisis”, and that for the first time in many years, extreme poverty had risen while human development had declined.(45) And two years later, in December 2023, the World Justice Project studied global access to justice trends and found, among other things, these four points that I want to highlight. First, that wealth-based inequality in the prevalence of legal problems was widespread, and that people living in poverty experienced more legal problems than the rest of the population in 70% of surveyed countries. Second, people living in poverty were more likely to experience problems of a legal nature outside the formal processes and institutions of the law, such as threats from debt collectors or homelessness, and so were further away from the protection of the law. Third, there was a wealth-based disparity in access to justice in nearly 90% of surveyed countries. Fourth, overall, the supply of justice services is not keeping pace with the growing demand for justice solutions.(46)

23. This is a grim picture of the state of health of the global justice system, and none of us whose lives centre on the work of administering justice should be able to rest easily in this light. And the worldwide decrease in access to justice is matched by a decline in the extent to which public institutions, including judiciaries, are trusted by their populations. Significantly, the 2023 Edelman Trust Barometer suggests that those in the top quartile of income “live in a different trust reality” than those in the bottom quartile.(47) If and when a widespread belief takes hold that justice systems are the preserve of elites, then the rule of law will come under immense pressure and may even disintegrate. I suggest that the global access to justice problem is rapidly becoming a major existential crisis for the wellbeing of our societies, and it is crying out for solutions.

24. This is where I suggest AI is poised to play a transformative role. By transforming how the justice system currently operates, both within and beyond the courthouse, AI can make a tremendous contribution to advancing access to justice, and I believe it is poised to be a real game-changer in helping judiciaries to discharge their systemic responsibility. Think again of the difference between paper maps and Google Maps. I recognise that there are those who fear that AI may widen the gap, but I suggest that the global justice gap has already become a full-blown crisis that demands a new paradigm; and I think that AI can play a role in this regard, both inside and outside the courthouse.

     A.          AI within the courthouse 

25.  Let me first touch on the potential of AI within the courthouse. The experience of court users can be greatly improved even by modest efforts to harness AI to build and operate the “courts of the future”. Let me mention two types of such uses of AI.

26.  First, AI has already significantly improved the accessibility of court proceedings and court documents. Let me give two examples.

    (a) The first is the use of AI tools to provide translation services, to overcome language barriers. An example is the Supreme Court of India’s adoption of the Supreme Court Vidhik Anuvaad Software (or “SUVAS”), a machine-assisted translation tool trained by AI to translate judicial documents, orders, and judgments from English into 11 vernacular languages and vice versa.(48) Court users who are not comfortable using English may face a daunting “literacy gap” as they try to understand and navigate justice systems designed primarily for English speakers; and while translation services are already available, AI tools can provide these services much more quickly and with a much wider reach.

    (b) Another example is the use of AI to assist in the live transcription of court proceedings. Tools like the Technology Enabled RESolution service (or “TERES”) are already on the market, and are being used by the Supreme Court of India.(49) The Singapore Courts too will be implementing an AI-driven real-time transcription system.(50) Such tools can enable the parties to obtain transcripts of their proceedings to assist them in preparing their cases, without the considerable time and costs that human-operated transcription services would require. And by making records of court proceedings more readily available, these tools may also promote transparency and accountability in relation to the conduct of proceedings, which in turn can help strengthen confidence and trust in the courts.

27.  The second type looks beyond this, to improving the time- and cost-efficiency of the judicial process by revolutionising how our courts process and manage cases that are poised to become ever more complex. For example, AI tools could review and synthesise information across voluminous documents to surface relevant material,(51) flag inconsistencies in a party’s evidence and generate hyperlinked chronological timelines of the events relevant to the dispute.(52) Innovations like these could expedite the process of resolving a dispute in court by minimising the time spent on tedious or repetitive tasks, and allowing the court and the parties to direct their time and energy to the substance of the dispute at hand. This has the added benefit of lowering the costs of dispute resolution.

     B.          AI beyond the courthouse 

28. AI can have an even wider positive impact beyond the courthouse. It must be remembered that the experience of those who are already court users is just one part – albeit a very important part – of the broader system for the administration of justice. Every member of society is, in some sense, a user, a beneficiary and a stakeholder of the justice system. I suggest that there are at least three groups that AI can assist in accessing the benefits of this broader justice system. These benefits will come about by thinking about new and innovative ways in which we can provide sufficiently workable legal solutions that are not designed to mimic the traditional work of lawyers and judges, and that are powered by the raw increase in processing and computing power.

29.  First, AI tools can assist would-be litigants in their efforts to vindicate their legal rights, in cases where they might otherwise be deterred or prevented from seeking recourse through the courts due to asymmetries in information and resources, and the cost of seeking legal representation. AI tools can mitigate this problem by helping self-represented persons navigate court systems and processes that were traditionally designed with lawyers and judges in mind. An example of this is the Singapore Courts’ collaboration with the legal technology company Harvey,(53) which we will hear more about in Panel Discussion 4 tomorrow. This is a collaboration that had its roots in a cup of coffee that my colleague, Justice Abdullah, and I had with the founder of Harvey, Mr Winston Weinberg, on 23 May 2023 in New York; and three months later to the day, we signed a memorandum of understanding with Winston and his team. I am delighted that Winston is here to join us today and to speak of his knowledge in this field.

30.  Second, AI tools can empower ordinary individuals with potential legal problems to avoid or resolve these problems on their own, without the need for recourse to lawyers or even the courts at all.(54) This will be critical because the global access to justice deficit has reached such a scale that there will not be enough lawyers and judges to effectively fill this gap. Furthermore, many day-to-day legal problems that people face are quite straightforward, and yet significantly affect their lives. And the statistics show us that the justice gap is widest for the poor. Take, for instance, disagreements between neighbours over incidents of communal living like noise.(55) By making information about individuals’ legal rights and obligations more readily available and comprehensible, AI tools can help them understand their legal position. This can assist in securing adherence to their obligations, and can also empower them to assert their rights, before these potential problems become full-blown disputes. Such knowledge and awareness can also encourage the amicable resolution of disagreements, by providing a shared knowledge base for the parties to work from. This is in line with a broader conception of justice as requiring not only an accurate adjudication of rights, but also the preservation of harmony and reconciliation in our societies.(56)

31. Third, beyond dealing with problems and disputes, AI can facilitate what Professor Susskind has described as “legal health promotion”.(57) The law should be viewed as a force for good in our communities, and AI tools that promote legal literacy can improve societal wellbeing by educating the public on how to avail themselves of the many benefits and privileges that the law offers – for example, in relation to the making of a will or a lasting power of attorney. Drawing inspiration from the example of Google Maps, imagine a legal platform that provides users with basic information on these things, and then guides them to help themselves using a drop-down menu with a series of prompts. This kind of solution would help the users and beneficiaries of our broader justice system to more fully enjoy the advantages of a society founded upon the rule of law in their everyday lives. While these broader uses of AI may go beyond the adjudicative work of judges and the institutional purview of the courts, I suggest that judges need to be alive to developments in this space so that we can actively consider and explore how existing systems for the administration of justice can be improved, and how the rule of law can be strengthened, as we look ahead to an uncertain future.(58) And finally, these are all measures targeted at what are, in relative terms, some of the smallest-value legal issues – because this, ironically, is where the justice gap is at its widest.

IV.          Conclusion

32. Let me conclude by returning to where I began. In 1818, the year Frankenstein was published, Caspar David Friedrich painted the iconic “Wanderer above the Sea of Fog”, depicting a lone figure looking out into the vast unknown. In some ways, this might capture how unmoored we all might feel amid the rapid and ever-evolving technological advancements that have characterised this age of AI. But just as the explorers of yesteryear kept their bearings by relying on the sun and the stars, we too have a lodestar, and that is the unyielding mission of preserving and strengthening the rule of law, in its fullest sense. That should guide us as we navigate these uncertain times of change and challenge, but also of unprecedented power and potential. These are very important issues and that is why I am so pleased to share this time with all of you and to think about them collectively.

33. I look forward to our discussions over the next two days, and I wish you all a fruitful Conference. Thank you very much.

* I am deeply grateful to my law clerk, Stanley Woo, and my colleagues, Assistant Registrars Tan Ee Kuan, Wee Yen Jean and Bryan Ching, for all their assistance in the research for and preparation of this address.

(1)       See Gorman Beauchamp, “The Frankenstein Complex and Asimov’s Robots”, Mosaic: An Interdisciplinary Critical Journal (1980) 13(3/4) 83 (“Beauchamp”) at 83.
(2)       A term attributed to science-fiction writer Isaac Asimov: see, for example, Beauchamp at 84.
(3)       See, for example, Stuart Armstrong, “Introduction to the Technological Singularity” in The Technological Singularity: Managing the Journey (Victor Callaghan, James Miller, Roman Yampolskiy and Stuart Armstrong eds) (Springer, 2017) at p 1.
(4)       See, for example, Benjamin Alarie, “The Path of the Law: Towards Legal Singularity” (2016) 66(4) University of Toronto Law Journal 443 at 445–446; see also Jennifer Cobbe, “Legal Singularity and the Reflexivity of Law” in Is Law Computable?: Critical Perspectives on Law and Artificial Intelligence (Simon Deakin and Christopher Markou eds) (Hart Publishing, 2020) at p 107.
(5)       See Richard Susskind, “How artificial intelligence will shape the future for lawyers”, The Times (2 March 2023): https://thetimes.co.uk/article/how-artificial-intelligence-will-shape-the-future-for-lawyers-b06jhvd9l.
(6)       See Professor Richard Susskind, Lionel Cohen Lecture 2023 (3 May 2023).
(7)       See Tan Seet Eng v Attorney-General and another matter [2016] 1 SLR 779 at [1]. See also Sundaresh Menon CJ, opening address at the Singapore Academy of Law Annual Lecture 2023 (8 September 2023) at para 9.
(8)       See Sundaresh Menon CJ, “The Role of the Courts in Our Society – Safeguarding Society”, opening address at the Singapore Courts’ Conversations with the Community (21 September 2023) (“Role of the Courts”) at para 3.
(9)       See Roger Brownsword and Karen Yeung, “Regulating Technologies: Tools, Targets and Thematics” and Roger Brownsword, “So What Does the World Need Now? Reflections on Regulating Technologies” in Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes (Roger Brownsword and Karen Yeung eds) (Hart, 2008) at pp 5 and 25–26.
(10)       See Role of the Courts at paras 3 and 4.
(11)       See Christina Pazzanese, “Ethical concerns mount as AI takes bigger decision-making role in more industries”, The Harvard Gazette (26 October 2020): https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/.
(12)       On empathy as a virtue in judging, see Gary Low, “Emphatic Plea for the Empathic Judge” (2018) 30 SAcLJ 97 (“Low”), especially at para 36.
(13)       See Sundaresh Menon CJ, address at the opening of the Family Justice Courts (1 October 2014) at para 24.
(14)       See Sundaresh Menon CJ, “Legal Systems in a Digital Age: Pursuing the Next Frontier”, address at the 3rd Annual France-Singapore Symposium on Law and Business (11 May 2023) (“Legal Systems in a Digital Age”) at para 19.
(15)       See, for example, Eiichiro Watamura, Tomohiro Ioku, Tomoya Mukai and Michio Yamamoto, “Empathetic Robot Judge, We Trust You” (2023) International Journal of Human-Computer Interaction 1–10 (reporting on an experiment where participants were shown one of four short clips of a fictitious criminal trial – one with a human judge showing empathy and compassion towards the defendant, one with a human judge showing no such compassion and simply offering objective opinions, and modified versions of each of these two clips where the human judge was replaced with a humanoid robot judge and his voice was mechanically processed to sound robotic. The participants perceived the human judge as significantly more empathetic than the robot judge. The authors also noted that, because AI lacks the genuine empathy of humans, the creation of “empathetic AI” may be either impossible or unethical.).
(16)       See Legal Systems in a Digital Age at para 47, citing Adrian Zuckerman, “Artificial intelligence – implications for the legal profession, adversarial process and rule of law” (2020) 136 LQR 426 at 438–439 and 449–451 and Low.
(17)       See Public Prosecutor v Kwong Kok Hing [2008] 2 SLR(R) 684 at [17].
(18)       See The Honourable James Allsop AC, “The Legal System and the Administration of Justice in a Time of Technological Change: Machines Becoming Humans, or Humans Becoming Machines?”, Sir Francis Burt Oration (21 November 2023) (“Allsop”) at pp 12–13 and 19.
(19)       See, for example, John Tasioulas, “Ethics of Artificial Intelligence: What it is and why we need it”, 2023 Elson Ethics Lecture (4 October 2023) at pp 11 and 21. See also Article 22 of the European Union’s General Data Protection Regulation, which sets out a (qualified) right of data subjects “not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”: https://gdpr-info.eu/art-22-gdpr/.
(20)       See, for example, Simon Chesterman, “All Rise for the Honourable Robot Judge? Using Artificial Intelligence to Regulate AI”, NUS Law Working Paper No 2022/019 (October 2022) at pp 12 and 20; and Mireille Hildebrandt, “Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics” (2018) 68(1) University of Toronto Law Journal 12 at 23.
(21)       See, for example, Reuben Binns, “Human judgment in algorithmic loops: individual justice and automated decision-making” (2022) 16 Regulation & Governance 197 at 198 and 201; see also Anselmo Reyes and Adrian Mak, “Commercial Dispute Resolution and AI”, chapter 23 in The Cambridge Handbook of Private Law and Artificial Intelligence (Ernest Lim and Phillip Morgan eds) (Cambridge University Press, 2024) (“Reyes & Mak”) at p 518.
(22)       See, for example, Reyes & Mak at pp 529 and 531; and Allsop at p 23.
(23)       See, for example, Rebecca Crootof, “‘Cyborg Justice’ and the Risk of Technological-Legal Lock-In” (2019) 119(7) Columbia Law Review 233 at 237–238.
(24)       See Toh Siew Kee v Hoh Ah Lam [2013] SLR 284 at [35].
(25)       See Oliver Wendell Holmes Jr, The Common Law (Little, Brown & Co, 1881) at p 1.
(26)       On the problem of “complexification”, see, for example, Sundaresh Menon CJ, “The Complexification of Disputes in the Digital Age”, Goff Lecture 2021 (9 November 2021).
(27)       See Legal Systems in a Digital Age at para 26; Allsop at p 4.
(28)       See Legal Systems in a Digital Age at para 17. In the context of arbitration, see Sundaresh Menon CJ, “Arbitration and the Transnational System of Commercial Justice: Charting the Path Forward”, keynote address at the 25th Annual International Bar Association Arbitration Day (23 February 2024) at para 19.
(29)       See, for example, Mirko Bagaric, Dan Hunter and Nigel Stobbs, “Erasing the Bias Against Using Artificial Intelligence to Predict Future Criminality: Algorithms are Color Blind and Never Tire” (2020) 88(4) University of Cincinnati Law Review 1037 at 1065 and 1076–1077; and Reyes & Mak at pp 523–524 (referring to the example of Predictice (https://predictice.com/), a legal technology tool which claims to be able to identify potential biases of judges).
(30)       See Tiffany Hsu and Stuart A Thompson, “Disinformation Researchers Raise Alarms About A.I. Chatbots”, The New York Times (8 February 2023): https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html, quoting Gordon Crovitz (a co-chief executive of NewsGuard, a company that tracks online misinformation) as stating that ChatGPT could be “the most powerful tool for spreading misinformation that has ever been on the internet”.
(31)       See Sundaresh Menon CJ, “Answering the Call in the Age of Artificial Intelligence”, Mass Call Address 2023 (21 August 2023) (“Mass Call Address 2023”) at para 19.
(32)       For example, by issuing standing orders requiring lawyers to certify that AI-generated output has been checked for accuracy by a human being: see Jessiah Hulle, “AI Standing Orders Proliferate as Federal Courts Forge Own Paths”, Bloomberg Law (8 November 2023): https://news.bloomberglaw.com/us-law-week/ai-standing-orders-proliferate-as-federal-courts-forge-own-paths and Jacqueline Thomsen, “Lawyers Must Certify AI Review Under Fifth Circuit Proposal”, Bloomberg Law (22 November 2023): https://news.bloomberglaw.com/us-law-week/lawyers-must-certify-ai-review-under-fifth-circuit-proposal.
(33)       Mata might be contrasted with the example of Lord Justice Colin Birss using ChatGPT to summarise an area of law that he was familiar with, and then including that summary in his judgment – this was a task to which he “knew the answer and could recognise as being acceptable”: see Michelle Chin, “British judge taps ‘jolly useful’ ChatGPT for part of his ruling”, The Straits Times (17 September 2023): https://www.straitstimes.com/world/europe/british-judge-taps-jolly-useful-chatgpt-for-part-of-his-ruling.
(34)       See Role of the Courts at para 34.
(35)       See Sundaresh Menon CJ, opening remarks at the Singapore Judicial College – National Judicial College of Australia – New Zealand Institute of Judicial Studies Inaugural Judicial Education Roundtable (18 September 2023) (“Judicial Education Roundtable Remarks”) at para 7; and Sundaresh Menon CJ, “The Role of the Judiciary in a Changing World”, address at the Supreme Court of India Day Lecture Series 1st Annual Lecture (4 February 2023) at para 21.
(36)       See, for example, the “Asilomar AI Principles” coordinated by the Future of Life Institute and developed at the Beneficial AI 2017 Conference (11 August 2017): https://futureoflife.org/open-letter/ai-principles/. Principle 8 of the Asilomar AI Principles relates to “Judicial Transparency” – “[a]ny involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority”.
(37)       See Legal Systems in a Digital Age at para 47(b) and Felicity Bell, Lyria Bennett Moses, Michael Legg, Jake Silove and Monika Zalnieriute, Australasian Institute of Judicial Administration, “AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Courts Administrators” (“AIJA Guide”) at pp 32 and 37.
(38)       See Legal Systems in a Digital Age at para 47(a). For example, in the context of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool – used in some US jurisdictions to predict the likelihood that an accused will reoffend for the purposes of criminal bail and sentencing decisions – it has been found that African American defendants were more likely to be flagged as high risk despite not in fact being high risk, whereas White defendants were more likely to be flagged as low risk despite not in fact being low risk: see the AIJA Guide at p 23.
(39)       See the Mass Call Address 2023 at para 20 and Reyes & Mak at p 514.
(40)       See Allsop at p 26.
(41)       See, for example, the guidance published by the New Zealand judiciary (which identifies “red flags” that may indicate that lawyers or litigants have used generative AI to produce their materials, and provides guidance on the tasks that judges can use generative AI to perform (with suggested prompts), as well as tasks that require extra care): Courts of New Zealand, “Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals” (7 December 2023) at pp 4–6); and by the UK judiciary (which also explains the key limitations of AI and emphasises that any use of AI “must be consistent with the judiciary’s overarching obligation to protect the integrity of the administration of justice”): UK Courts and Tribunals Judiciary, “Artificial Intelligence (AI): Guidance for Judicial Office Holders” (12 December 2023).
(42)       See Role of the Courts at paras 3 and 29–30.
(43)       See Role of the Courts at para 32; Sundaresh Menon CJ, “International Mediation and the Role of the Courts”, speech to the Indonesian Judiciary (7 November 2023) at para 7; and Sundaresh Menon CJ, opening address at the Jones Day – Centre for Asian Legal Studies Professorial Lecture on the Rule of Law in Asia (18 January 2024) at paras 6–8.
(44)       See World Justice Project, Measuring the Justice Gap: A People-Centered Assessment of Unmet Justice Needs Around the World (2019) at pp 7–8: https://worldjusticeproject.org/our-work/research-and-data/accessjustice/measuring-justice-gap.
(45)       See World Justice Project, Grasping the Justice Gap: Opportunities and Challenges for People-Centered Justice Data (2021) at p 5: https://worldjusticeproject.org/our-work/publications/workingpapers/grasping-justice-gap.
(46)       See World Justice Project, Disparities, Vulnerability, and Harnessing Data for People-Centered Justice: WJP Justice Data Graphical Report II (2023) at pp 6, 19–22 and 41–45: https://worldjusticeproject.org/our-work/research-anddata/wjp-justice-data-graphical-report-ii.
(47)       See the 2023 Edelman Trust Barometer: Navigating a Polarized World – Global Report (18 January 2023) at pp 4 and 12: https://edl.mn/3X0QXQE.
(48)       See Outlook India, “Supreme Court Introduces AI Translation Tool for Regional Languages in Judicial Proceedings” (9 December 2023): https://www.outlookindia.com/national/supreme-court-introduces-ai-translation-tool-for-regional-languages-in-judicial-proceedings-news-335427; and Economic Times India, “AI backed SUVAS translation tool intended to make legalese simpler, court proceedings faster: Law minister” (11 August 2023): https://government.economictimes.indiatimes.com/news/technology/meity-launches-indian-web-browser-development-challenge-proposed-browser-to-have-cutting-edge-features/102589420.
(49)       See Rohini Mohan, “India’s Supreme Court tests using AI to transcribe hearings”, The Straits Times (15 March 2023): https://www.straitstimes.com/asia/south-asia/india-s-supreme-court-tests-using-ai-to-transcribe-hearings.
(50)       This Court Audio Services System integrates audio recording with automated AI-driven transcription, converting spoken words into written text in real time. Where the automated transcription is of sufficiently high quality, expensive human-operated real-time transcription services may be dispensed with; the transcription is also linked to the corresponding audio recording, so that the recording can be checked if any doubts arise over the transcription.
(51)       For example, the International Business Machines Corporation (IBM) has created an AI assistant named OLGA that enables judges in the German courts to sift through documents more quickly and use specific search criteria to locate relevant information in various documents, and also provides information on the suit to contextualise the information surfaced from the search: see Eckard Schindler, “Judicial systems are turning to AI to help manage vast quantities of data and expedite case resolution” (8 January 2024): https://www.ibm.com/blog/judicial-systems-are-turning-to-ai-to-help-manage-its-vast-quantities-of-data-and-expedite-case-resolution/.
(52)       See, for example, Rachel Curry, Consumer News and Business Channel (CNBC), “AI is making its way into the courtroom and legal process” (1 November 2023): https://www.cnbc.com/2023/11/01/ai-is-making-its-way-into-the-courtroom-and-legal-process.html.
(53)       See Lydia Lam, Channel News Asia, “Generative AI being tested for use in Singapore Courts, starting with small claims tribunal” (27 September 2023): https://www.channelnewsasia.com/singapore/artificial-intelligence-court-small-claims-singapore-chatgpt-3801756; and Lee Li Ying, “Small Claims Tribunals to roll out AI program to guide users through legal processes”, The Straits Times (27 September 2023): https://www.straitstimes.com/singapore/small-claims-tribunal-to-roll-out-ai-program-to-guide-users-through-legal-processes.
(54)       See, for example, Richard Susskind, The Times, “Forget the firms, we should be asking what legal AI means for clients” (15 December 2023).
(55)       See Sundaresh Menon CJ, “Technology and the Changing Face of Justice”, keynote lecture at the Negotiation and Conflict Management Group Alternative Dispute Resolution Conference 2019 (14 November 2019) (“Technology and the Changing Face of Justice”) at paras 43–44.
(56)       See Technology and the Changing Face of Justice at paras 49 and 51.
(57)       See Richard Susskind, Online Courts and the Future of Justice (Oxford University Press, 2019) at pp 66–70.
(58)       See the Judicial Education Roundtable Remarks at paras 11–12.
