The LTS LunchTimeSeries on Law, Technology and Society launches into the 2019 summer semester!

Univ.-Prof. Dr. Iris Eisenberger, M.Sc. (LSE), Universität für Bodenkultur Wien, and Univ.-Prof. Dr. Konrad Lachmayer, Sigmund Freud Privatuniversität Wien, are organising the successful lecture series for the seventh consecutive semester.

In the 2019 summer semester, the series opens on 17 May with a talk by Prof. Claudia Müller-Birn, Freie Universität Berlin, on "Bringing the human to the center of algorithmic systems design: challenges and opportunities". The announcement is available here.

On 12 June, Prof. Sebastian Pfotenhauer, Technische Universität München (TUM), will be our guest with a talk on "Testing future Societies? Developing a framework for test beds and living labs as instruments of innovation governance". The announcement is available here.

The third talk will be given by Dr. Jack Stilgoe, University College London (UCL). On 27 June he will speak on "How experiments become futures - Social learning for self-driving cars". The announcement is available here.

After each talk we invite the audience to an open discussion. In the Anglo-American tradition, refreshments will be provided. The event is open to the public and attendance is free of charge.

Please register at law(at)boku.ac.at.

The full programme is available as a PDF here.

Bringing the Human back to the Centre of Algorithmic Systems Design. Challenges and Opportunities.

17.5.2019

From Mythology to a Fair Picture. When Hephaestus, the Greek god of smiths, moulded Pandora from clay, the people expected beauty and hope from her. Instead, she released nothing but plagues and misery from her jar. Frankenstein, the modern Prometheus, and his artificial human-turned-monster had an equally chilling effect. A similar story could be told about HAL 9000, the computer that betrayed its crew in Arthur C. Clarke's "2001: A Space Odyssey".

Images like these still shape people's ideas about artificial intelligence, Professor Claudia Müller-Birn emphasised at the very beginning of her talk. Media coverage often promotes these misconceptions, sometimes relying on studies that, in her view, do not give a fair picture of recent technical developments. In light of this, Müller-Birn, head of the Human-Centered Computing group at FU Berlin, takes on the task of elucidating what artificial intelligence, bots and machine learning are really about. She also wants to draw attention to the technology's potential dangers and show how to regulate it in order to make the best use of its opportunities.

Wikipedia's Social Algorithmic Governance. To explain what so-called artificial intelligence truly constitutes, she draws on her rich experience and intense study of Wikipedia and its social algorithmic governance. Wikipedia had a no-bots rule until 2002. The exponential growth of its active user base, however, made it impossible to review and manage articles manually, which led to the policy change and the introduction of bots. Emphatic about the true nature of bots, Claudia Müller-Birn stresses that these are not artificial intelligence; they are merely autonomous or semi-autonomous computer programmes that help editors with content maintenance, coordination work and community support. Bots interact with Wikipedia's core via its extensions in much the same way humans do, but through a programmatic interface rather than the graphical user interface designed for human editors.
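The difference between the two interfaces is easy to make concrete. Below is a minimal, illustrative sketch (not taken from the talk) of how such a programme might read an article's latest revision through the public MediaWiki Action API, assuming Python with the requests library; the bot name and contact address in the User-Agent are placeholders.

```python
import requests

# A bot does not use the graphical interface: it talks to the
# MediaWiki Action API that Wikipedia exposes for programmatic access.
API_URL = "https://en.wikipedia.org/w/api.php"

# Wikimedia asks automated clients to identify themselves via the
# User-Agent header; the name and contact here are placeholders.
HEADERS = {"User-Agent": "ExampleMaintenanceBot/0.1 (contact: operator@example.org)"}

def fetch_latest_revision(title: str) -> dict:
    """Fetch metadata of the most recent revision of an article."""
    params = {
        "action": "query",
        "format": "json",
        "titles": title,
        "prop": "revisions",
        "rvprop": "timestamp|user|comment",
        "rvlimit": 1,
    }
    response = requests.get(API_URL, params=params, headers=HEADERS, timeout=10)
    response.raise_for_status()
    # The API keys results by internal page ID; take the single result.
    pages = response.json()["query"]["pages"]
    (page,) = pages.values()
    return page["revisions"][0]

if __name__ == "__main__":
    print(fetch_latest_revision("Pandora"))
```

The descriptive User-Agent reflects Wikimedia's policy that automated clients identify themselves, which already anticipates the identifiability rule discussed below.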

How, then, does Wikipedia govern its bots? From 2002 onwards, Wikipedia developed a procedure for approving the tasks assigned to bots. An active user's proposal is closely scrutinised by peers, and a sometimes lively discussion about the pros and cons of the bot in question unfolds. First, a member of the so-called Bot Approvals Group may approve a trial period for the task. During this period, the bot's performance is supervised and members of the community can give feedback. At the end of the trial phase, the Bot Approvals Group approves or declines the request, grounding its decision in whether the task conforms to Wikipedia's standards and technical requirements. The approval process may also be accelerated or conducted more carefully depending on the bot operator's standing in the community. Generally, one's reputation among fellow Wikipedians and one's expertise are important criteria when it comes to granting access rights higher up in Wikipedia's hierarchy.

From Social Governance to Legislation. Policymakers could learn from these numerous peer-discussed bot proposals on Wikipedia when creating guidelines for the use of bots. Such guidelines are much needed indeed, Müller-Birn repeatedly emphasised, as legislation regulating the use of algorithms in a societally desirable way is currently lacking.

If a legislator were to draft a law on the governance of bots, Article 1, as its prime rule, could well be: "It is to be made sure that bots are identifiable as non-human entities at all times." Humans should familiarise themselves with this technology and cooperate effectively with machines while always remaining in charge. To this end, it is important that bots are not only identifiable as such, but also that the tasks they are assigned are transparent.
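Notably, Wikipedia already operationalises something close to this rule: approved bot accounts are placed in a dedicated "bot" user group, so their non-human status is machine-readable. Purely as an illustration (again assuming Python and the public MediaWiki Action API; the script name and contact address are placeholders), one can query that flag:

```python
import requests

API_URL = "https://en.wikipedia.org/w/api.php"
HEADERS = {"User-Agent": "BotFlagChecker/0.1 (contact: operator@example.org)"}

def is_flagged_bot(username: str) -> bool:
    """Return True if the account belongs to Wikipedia's 'bot' user group."""
    params = {
        "action": "query",
        "format": "json",
        "list": "users",
        "ususers": username,
        "usprop": "groups",
    }
    response = requests.get(API_URL, params=params, headers=HEADERS, timeout=10)
    response.raise_for_status()
    users = response.json()["query"]["users"]
    # Missing accounts carry no "groups" key, so default to an empty list.
    return bool(users) and "bot" in users[0].get("groups", [])

if __name__ == "__main__":
    # ClueBot NG is a well-known anti-vandalism bot on English Wikipedia.
    print(is_flagged_bot("ClueBot NG"))
```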

After her talk, a lively audience enquired about the flexibility of the regulatory framework and the power shift it might imply, as well as the culture of debate on Wikipedia. Another point of discussion was the design of algorithms such as the one employed by the Austrian public employment service, and who should have the legitimacy to decide on such designs.

Humans as the Centre of the Interaction. The essence of Professor Müller-Birn's talk was twofold. First, when drafting these much-needed policies, we should focus on systems built on human-machine cooperation, which are far more widely used than fully autonomous systems. Here, social governance also helps to conceptualise the dependencies between technical and social systems. Second, while striving for closer interaction between humans and machines in order to seize the technology's opportunities, we should remain aware of its risks. In doing so, we should spare no effort to ensure that humans remain in charge and are placed at the centre of algorithmic systems design. Again.

Johannes Huber, May 2019

The report is available as a PDF here.

Testing future Societies? Developing a framework for test beds and living labs as instruments of innovation governance

Speaker: Prof. Dr. Sebastian Pfotenhauer, Technische Universität München (TUM)

Date: Wednesday, 12 June 2019
Time: 12:00 - 13:30
Venue: Seminar room SR 01, ground floor, Feistmantelstraße 4, 1180 Vienna (Guttenberghaus)

Attendance is free of charge. Please register by Monday, 10 June 2019: law(at)boku.ac.at

"Test beds and living labs have emerged as a prominent approach to foster innovation across geographical regions and technical domains. They represent an experimental approach to innovation policy that aims at once to test, demonstrate, and advance new sociotechnical arrangements in a model environment under real-world conditions. In this talk, I develop an analytic framework for this distinctive approach to innovation. Our findings suggest that test beds do not simply test a technology under real-world conditions. Rather, they equally “test” society around a new set of technologies and associated modes of governance based on par-ticular visions of the future – occasionally against considerable resistance."

Sebastian Pfotenhauer is Assistant Professor of Innovation Research at the Munich Center for Technology in Society (MCTS) and the TUM School of Management, both at the Technical University of Munich. He is also the coordinator of the EU Horizon 2020 project SCALINGS. His research interests include science and innovation in international settings as well as co-creation and responsible innovation. He holds an S.M. in Technology Policy from MIT and a PhD in Physics from the University of Jena, Germany, and has received postdoctoral training at MIT and Harvard.

How experiments become futures - Social learning for self-driving cars

Speaker: Dr. Jack Stilgoe, University College London (UCL)

Date: Thursday, 27 June 2019
Time: 12:00 - 13:30
Venue: Seminar room SR 02, ground floor, Feistmantelstraße 4, 1180 Vienna (Guttenberghaus)

Attendance is free of charge. Please register by Tuesday, 25 June 2019: law(at)boku.ac.at

"Self-driving cars are currently learning to drive. Alongside well-publicised developments in machine learning, this also involves a more complicated process of social learning. Understanding and governing the politics of this technology means asking ‘Who is learning, what are they learning and how are they learning?’ On-road trials taking place in cities around the world offer a window into the social complexities of a debate that is often presented as technical. In his lecture, Jack Stilgoe will report on his previous research on the chaotic self-driving experimentation that has already taken place and describe the approach of his team’s new project “Driverless Futures?”."

Dr. Jack Stilgoe is an associate professor in Science and Technology Studies at UCL. He works on science and technology policy, particularly the governance of emerging technologies. Among other publications, he is the author of Experiment Earth: Responsible innovation in geoengineering. He leads “Driverless Futures?”, a three-year project funded by the Economic and Social Research Council to investigate how self-driving car technologies can be governed in the public interest.