What does a Decision become in the Era of Artificial Intelligence?
Join us for a lecture in the Artificial Intelligence and the Law series with Nicole Rigillo, PhD, Research Fellow at the Berggruen Institute and at Element AI.
Dr. Rigillo examines how applying machine learning to decisions made about human beings requires us to revise our understanding of what a decision actually is. This is an important question because, with a few exceptions, our laws, our expectations about reasoning, and our notions of accountability are built around humans as the primary agents responsible for decisions made about other human beings.
Abstract
Artificially intelligent systems are increasingly being used both to augment and to replace human decision-makers of all kinds: a human resources representative screening a job applicant, a judge assessing a prisoner’s bail request, or an immigration agent issuing a visa, for example. The automation of high-stakes decisions has led to concerns about bias, fairness, and transparency. But a question that remains largely unexplored is how the application of machine learning to decisions made about human beings asks us to revise our understanding of what a decision is.
This is not a trivial question considering that, with some exceptions, our laws, expectations about explanations, and notions of accountability have been developed around humans as the primary agents responsible for decisions made about other humans. This presentation first situates how human decisions have been framed in administrative law, cognitive science, and behavioural economics. It then draws on interviews with AI engineers to illustrate the major differences between human decision-making processes and the modes of reasoning used in second-wave AI.
A key issue here is the problem of interpretability, which has driven the development of a set of methods known as explainable AI, along with attendant concerns about the quality and utility of the explanations they produce. The presentation closes by arguing that the differences in modes of reasoning between humans and machines necessitate a rethinking of laws and notions of accountability to better account for the specificity of decisions made by artificially intelligent agents.
The speaker
Nicole Rigillo is an anthropologist and Research Fellow in the Berggruen Institute's Transformations of the Human program. She is based at Element AI in Montreal, where she engages AI scientists in dialogue on how artificial intelligence is changing what it means to be human.
Her current research centers on explainable AI and spaces of epistemic negotiation between humans and intelligent machines, ethical AI processes, and data collection in insurance and retail contexts. Her postdoctoral research at the University of Edinburgh examined how civic and environmental activists use WhatsApp to improve municipal governance in Bangalore, India, raising questions about the effects of encrypted dark social networks on democracy and the public sphere. Her PhD research at Ï㽶ÊÓƵ focused on how mandatory corporate social responsibility in India is altering an earlier model of welfare universalism by redistributing social responsibilities among groups of non-state actors.
The Artificial Intelligence and the Law series
This lecture series is a collaboration between the Laboratoire de cyberjustice de Montréal; the Collectif étudiant pour la technologie et le droit; the Justice privée et état de droit research group; the Centre des politiques en propriété intellectuelle de McGill; and the Autonomisation des Acteurs Judiciaires par la Cyberjustice et l'Intelligence Artificielle (AJC) project.
This activity is eligible for 1.5 hours of mandatory continuing education, as declared by members of the Barreau du Québec.