Characteristica Universalis Lex: Artificial Intelligence and the Ghosts of LegalTech Past
For our first AI and Law talk of the university year, we welcome Dr Christopher Markou, Faculty of Law, University of Cambridge.
The talk will take place on Zoom:
Abstract
The question "is law computable?" immediately recalls the classic jurisprudential question "what is law?", posed by legal pragmatists and idealists alike. For tough-minded pragmatists, "what is law" might entail little more than a prediction of whether those in authority will or will not stop a planned action or penalise a completed one. This pragmatic approach appeals to business, including the burgeoning LegalTech industry, because efficiency (read: throughput) is the name of the game. After all, commercial clients don't concern themselves with esoteric legal values, and non-lawyer clients may not even recognise them. Rather, the question is really whether some law enforcement body or judge will stop, penalise, or reward the action. If law is reframed as the task of predicting behaviours and proactively intervening, the skills needed to practise law may become similarly circumscribed, more formulaic, and more readily computable.
But what does computable law really portend about the future of legal regimes premised on due process, equality of arms, and fairness?
Thought leaders in computational legal studies, and those straddling the line between legal academia and entrepreneurship, are quick to tout the ability of their models to best human experts at some narrow game of foretelling the future by doing yesterday's homework. Most often this involves predicting whether the U.S. Supreme Court or the European Court of Human Rights, for instance, will affirm an appealed judgment based on some set of variables about the relevant jurists. For reductionist projects in computational law (particularly those that seek to decide cases rather than complement legal practitioners), traces of the legal process are equivalent to the process itself. If a machine produces a judgement that is in some way persuasive, goes one refrain, we should accept it.
But do we not teach our students that, in law, the process of exercising legal judgement is inseparable from the resulting judgement? Isn't the process the exercise?
For enthusiastic LegalTech developers, the answer is "no". The words in a complaint and an opinion, for instance, are taken to be the essence of the proceeding, and variables gleaned from decisionmakers' past actions and affiliations are taken to determine their subsequent ones. In this behaviouristic rendering, litigants present pages of words to the decisionmaker; some set of pages better matches the decisionmaker's preferences; and the decisionmaker then writes a justification of the decision sufficient not to be overruled on appeal. From the perspective of the client, predictions that are 30 per cent more accurate than a coin flip, or 20 per cent more accurate than casually consulted experts, are not just useful; they are seen as the future. But there is more to law and legal process than can be computationally imputed, and there are limits to public trust in and acceptance of so-called "Robot Judges" and the automation of ever more aspects of legal process and judgement. The human and reputational toll of automating human discretionary authority "out of the loop" has become acutely clear from the Australian "RoboDebt" fiasco, the UK's use of a proprietary algorithm to award marks for classes curtailed by COVID, and Canada's use of biometrics to assess refugee claims.
This talk will first examine the history of computers and AI in legal contexts, tracing the arc from the hype around Legal Expert Systems (LES) in the 1980s and 1990s to the current generation of LegalTech applications. Drawing on first-hand accounts from lawyers, developers, and researchers, the talk will survey the technical, practical, and theoretical seeds of that failure and what can be learned from them. The talk will then turn to how concurrent developments in neuroscience, physics, biology, and data science are actualising a machinic ontology of the world whereby everything, including law, is computable. It will conclude with recommendations for research priorities in computational legal studies and suggestions for where to draw "red lines" around automating legal process or judgement.
About the speaker
Dr Christopher Markou is a Leverhulme Fellow and Lecturer in the Faculty of Law, University of Cambridge, an Associate at the Cambridge Centre for Business Research (CBR), Director of the AI, Law & Society LLM at King's College London, and a Fellow of the Royal Society of Arts. He writes widely on emerging technology policy and governance, with work featured in outlets such as Scientific American, Newsweek, and Wired, among others. He has been a keynote speaker at the Cheltenham Science Festival, the Cambridge Festival of Ideas, TED Talks, and the World Congress on Information Technology. He is co-editor (with Professor Simon Deakin, Cambridge) of the forthcoming volume "" (Hart 2020) and author of the forthcoming monograph Lex Ex Machina: From Rule of Law to Legal Singularity. Twitter:
The AI and Law Series
The AI and Law Series is brought to you by the Montreal Cyberjustice Laboratory; the McGill Student Collective on Technology and Law; the Private Justice and the Rule of Law Research Group; and the Autonomy Through Cyberjustice Technologies Project.
This event is eligible for inclusion as 1 hour of continuing legal education as reported by members of the Barreau du Québec.