Algorithms and machine learning are becoming part of an increasing number of life domains. As such, the question of normativity and algorithmic implementation is gaining importance in many disciplines, including legal scholarship and sociology. On 30 July 2019 Nikolaus Pöchhacker, currently a guest researcher at BOKU/Law, gave a presentation on his ongoing PhD project, which explores the entanglement of normative order and algorithmic agency in the development of a machine learning-powered recommender system for a public broadcaster. Based on an empirical study, he argued that algorithmic agency can be understood as a set of distributed interactions from which the algorithm as an object of inquiry emerges. Consequently, we have to study these interactions within the algorithm's ecology to understand the normativity of machine learning and other algorithms.
The case discussed by Nikolaus Pöchhacker was also interesting insofar as public broadcasters have a legal obligation to enable democratic discourse by providing access to the wide array of opinions and positions present in the public. Drawing on the ongoing discourse in academia and politics, the developer team explored ways of implementing a recommender system that follows the idea of 'diversity by design'. Findings from these experiments, however, suggest that effective governance of machine learning requires an ongoing translation of, and adaptation to, normative claims. Thus, translating the law into technical means and rendering machine learning agency accessible to normative reasoning can be conceptualized as an ongoing process of interaction which, to be stable, calls for institutionalized solutions. The talk therefore concluded that governance of technology under a given normative framework requires an ongoing interdisciplinary discussion between legal scholarship, social science, and computer science.