Machine Agencies is thrilled to welcome Dr. Luke Stark for the talk “Laws of Inference: Conceptual Limits for Automated Decision-Making”:
Regulation via the epistemological structure of an application space is one potential mechanism to address the social impact of rapid advances in machine learning (ML) and other artificial intelligence (AI) methods used for automated decision-making. Drawing on Carlo Ginzburg’s distinction between conjectural (abductive/inductive) and empirical (deductive) science, I argue that ML systems should be assessed for their conceptual assumptions as well as their proposed use cases. This assessment should be grounded both in the forms of inferential reasoning (inductive, deductive, and/or abductive) involved in a particular automated analysis and in the domain in which the analysis is performed. In the paper, I sketch out a matrix of inferential types and use case categories that serves as a first step towards a more granular AI governance regime. Given the shaky epistemological foundations and social toxicity of much automated conjecture about human activities and behavior, such conjectural use cases deserve heightened legal, technical, and social scrutiny.
When? Wednesday, February 15th, 12:00 PM – 1:30 PM EST
Where? Milieux Resource Room (EV 11.705)
Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at the University of Western Ontario. His work interrogates the historical, social, and ethical impacts of computing and artificial intelligence technologies, particularly those mediating social and emotional expression. His scholarship highlights the asymmetries of power, access, and justice that are emerging as these systems are deployed in the world, and the social and political challenges that technologists, policymakers, and the wider public face as a result.
The event is hosted at the Milieux Institute at Concordia University by the Machine Agencies Research Group.