Distributed Selves: Technology and Rights in the Digital Age
25.4.2018
On 25 April 2018, the LunchTimeSeries on Law, Technology, and Society (LTS) began its fifth consecutive semester. The auditorium was filled to capacity and extra chairs were brought in to accommodate the large turnout. Professor Iris Eisenberger, Institute of Law, University of Natural Resources and Life Sciences, Vienna, introduced the first LTS guest lecturer of summer term 2018: Professor Sheila Jasanoff, Pforzheimer Professor of Science and Technology Studies at the John F. Kennedy School of Government at Harvard University. In her lecture “Distributed Selves: Technology and Rights in the Digital Age”, Jasanoff advocated a legal approach to protecting human values in a pervasively digitalised world.
The question of how to protect human values in the digital age can be addressed from different perspectives: either by focusing on emerging technologies and their disruptive potential, or by focusing on the human values we would like to preserve. If we take the latter perspective, as Jasanoff did in her lecture, we put human beings rather than machines at the centre of our attention. This allows us to take a closer look at our different selves: the self of observable individual characteristics (the “phenotypic self”), the self of genetic and genomic information (the “biological self”), and the self that consists of the digital traces we leave behind (the “digital self”). In contrast to our phenotypic selves, our biological and digital selves are distributed, raising complex questions about the fate of human values in a digitalised world.
Jasanoff illustrated the dynamic nature of these questions with judgments of the United States Supreme Court concerning the Fourth Amendment to the US Constitution. The Fourth Amendment was originally intended to protect citizens from warrantless searches of their homes. In the past fifty years, however, the Supreme Court has faced the questions of whether wiretapping a public phone booth (Katz v. United States, 389 U.S. 347, 1967), searching through rubbish bags left on the street (California v. Greenwood, 486 U.S. 35, 1988), or accessing cell phones (Riley v. California, 573 U.S. ___, 2014) falls within the scope of the Fourth Amendment. Technological change has required the Supreme Court to rethink the definitions of private space and public space. Although its interpretation has changed in the face of technological development, the Fourth Amendment has continuously protected the human value embedded in it.
To re-integrate human values into our distributed selves, several instruments of society come to mind: markets, regulation, ethics, and the law. Jasanoff argued that the market cannot grasp the complex issues of distributed selves because of the limited range of values it considers. Product-focused and reactive regulation, in turn, is embedded in existing social values and is therefore an inadequate tool to protect those values. Ethics tends to privatise questions of value by turning public values into expertise, thus pulling them away from societal discourse prematurely. The law, by contrast, offers a basis for declaring which human values we consider worth preserving. While these commitments may be reinterpreted over time, the core values collectively enshrined in them remain.
The right to be forgotten is, albeit controversial, an example of the legal re-integration of such foundational values into our distributed selves in a digitalised world; a world that has the technological means to record our digital traces unforgivingly and permanently. In this light, the right to be forgotten is an attempt to prioritise what is societally desirable over what is technologically possible. It shows that it is the law that offers an appropriate place for such activism and the construction of imaginaries.
The audience discussion addressed, among other aspects, the democratisation of surveillance, the perception of the General Data Protection Regulation in the US, and the development of the different selves. Jasanoff wove these statements and questions together into her plea for the law as a suitable means of re-integration: the law allows us to discuss and democratically decide on the emergent values of society, it allows us to take account of temporal and socio-cultural dependencies and, most importantly, it offers a place to raise questions that are not being asked.
Thomas Buocz, April 2018
The report is available as a PDF here.
Law meets Data Science: Findings from the EUTHORITY Project
9.5.2018
The "LunchTimeSeries on Law, Technology and Society" (LTS) continued on Europe Day, 9 May 2018, with a topic which could not have been more suitable for that date: Dr. Nicolas Lampach lectured on "Law Meets Data Science: Findings from the EUTHORITY Project". The EUTHORITY Project is an interdisciplinary research project which combines legal analysis with empirical methods. Due to its interdisciplinary dimension, Lampach’s presentation was a great enrichment for the auditorium at the University of Natural Resources and Life Sciences, Vienna.
Lampach started his lecture by explaining the importance of the preliminary rulings of the European Court of Justice. As a fundamental mechanism, they ensure the uniform application of EU law. Lampach emphasised that domestic courts show major disparities in their referral behaviour. Those differences stem from various factors, such as economic, political and legal variables. The EUTHORITY project aims to identify those factors by collecting and analysing data in order to determine the position of domestic courts towards EU law.
What data does the project collect? It includes data on economic activity, political systems, legal traditions and judicial organisation, for instance. The project covers regional as well as national aspects. Currently, the EUTHORITY dataset comprises 40 variables, and the team plans to expand it to 80 or 85.
Lampach illustrated that the number of submitted references to the European Court of Justice differs among the member states. He took Germany and France as an example to show that even the founding states of the European Union show very different referral behaviour. Whereas Germany sends a high number of references, France appears to be more reluctant to do so. Moreover, there are not only differences among the member states, but also on a regional level within a state. To stick with the example of Germany and France: Germany shows a decentralised pattern wherein various domestic courts initiate proceedings for a preliminary ruling. France, on the contrary, is very centralised and sends references primarily from Paris.
Another outcome of the EUTHORITY research is a disparity between the referral behaviour of first level courts and peak courts. The project members observed that in the beginning, the first level courts submitted references more often than the peak courts. However, in the 1980s, this pattern changed: The peak courts became more active than other domestic judicial bodies.
Lampach cited two theories to explain this turning point: the Empowerment Theory and the Judicial Behaviour Theory. The Empowerment Theory argues that lower national courts enthusiastically began referring cases to the European Court of Justice to acquire new powers of judicial review; the peak courts later stopped this trend. According to the Judicial Behaviour Theory, workload and resources determine a judge’s output. In the beginning, the first level courts functioned as "start-ups" which embraced the preliminary ruling mechanism. But the pressure on the peak courts increased as cooperation with the European Court of Justice gained more adherents. Furthermore, the peak courts have a smaller caseload, more support from clerks and other legal assistants, and therefore a bigger capacity than lower domestic courts.
Not only the number of preliminary ruling proceedings but also their content shows disparities. The EUTHORITY project uses text mining as a data harvesting technique to obtain an overview of the topics discussed in the submitted references. Lampach illustrated that in cargo port regions, words such as "product", "trade" and "custom" appear often, while in other regions, "contract", "proceed" and "service" are more common. In more rural regions, the nouns "agriculture" and "fishery" appear more often in judgments, while in more urban regions, terms such as "law", "right" and "freedom" predominate. Furthermore, intra-EU trade has a major impact on referral behaviour: courts in member states that trade more with the rest of the EU seem to submit more references to the Court of Justice.
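To make the text-mining technique concrete, a minimal term-frequency comparison of this kind can be sketched in a few lines of Python. The toy sentences and the counting approach below are our own illustration; the lecture did not specify the project’s actual tooling:

```python
from collections import Counter
import re

# Toy texts standing in for the preliminary references of two regions.
# The real EUTHORITY data are the actual reference documents.
port_region_refs = [
    "The product was seized at customs before trade could resume.",
    "Customs duties on the imported product distort trade.",
]
urban_region_refs = [
    "The contract proceedings concern a cross-border service.",
    "A service contract dispute; the proceedings were stayed.",
]

def term_frequencies(texts):
    """Count lower-cased word tokens across a list of documents."""
    tokens = re.findall(r"[a-z]+", " ".join(texts).lower())
    return Counter(tokens)

port_tf = term_frequencies(port_region_refs)
urban_tf = term_frequencies(urban_region_refs)

# Compare how often characteristic words appear in each corpus.
for word in ("product", "trade", "customs", "contract", "service"):
    print(word, port_tf[word], urban_tf[word])
```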
The subsequent questions from the audience addressed topics such as current developments in “legal tech”, the challenges of data protection within the EUTHORITY project, and, more generally, whether data collection and machines have the capacity to produce profound scientific results in legal matters. But Lampach also encountered criticism from the audience concerning the project. One target was the choice of variables: several audience members argued that the chosen factors had a major impact on the research results. Text mining, too, was criticised: some viewed the method as too superficial and doubted that it could replace a profound legal analysis. Lampach and the audience engaged in a lively discussion that proved enriching for all sides.
Magdalena Nemeth, May 2018
The report is available as a PDF here.
Autonomous Cars - A Technical Introduction
5.6.2018
One of the hottest topics in the automotive sector at the moment is autonomous driving. Almost all major car manufacturers are working at full speed to take a pioneering role in this field. The public, too, is becoming increasingly aware of autonomous driving and is debating it. The recent accidents involving autonomous cars by Uber and Tesla, each of which cost a human life, play a major role here. It is therefore above all safety questions that arise in connection with autonomous vehicles.
Prof. Dr. Hermann Winner of the Technische Universität Darmstadt, who has himself filed more than 100 patents in the field of automotive engineering, introduced the audience to the technology of “autonomous driving” as part of the LunchTimeSeries on Law, Technology and Society (LTS) on 5 June 2018. He first pointed out that autonomous driving is by no means a brand-new phenomenon. Successful tests with automated vehicles took place as early as the mid-1990s, for example within Daimler’s “Prometheus” project. Apart from brief interventions by the safety driver, the car already drove autonomously on the motorway from Munich to the Danish city of Odense.
But what exactly does “autonomous driving” mean? SAE International divides a car’s autonomy into six levels. The higher the level, the more the driving task, and the responsibility for it, shifts from the human to the machine. The biggest step is the one from Level 2 to Level 3: up to Level 2, the driver must be able to intervene immediately in difficult situations; from Level 3 on, the machine retains control for a few seconds when a new situation suddenly arises, until the safety driver takes over.
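For orientation, the six levels can be summarised roughly as follows; the wording is a compressed paraphrase of the SAE J3016 taxonomy, not the official definitions:

```python
# Rough paraphrase of the six SAE J3016 levels of driving automation.
# Not the official wording; descriptions compressed for brevity.
SAE_LEVELS = {
    0: "No automation: the human driver does everything",
    1: "Driver assistance: the system assists with steering OR speed",
    2: "Partial automation: the system steers and accelerates, but the "
       "human must monitor constantly and intervene immediately",
    3: "Conditional automation: the system keeps control for a few "
       "seconds in sudden situations until the human takes over",
    4: "High automation: no human fallback needed within defined domains",
    5: "Full automation: no human driver required at all",
}

for level, description in SAE_LEVELS.items():
    print(f"Level {level}: {description}")
```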
On the current situation on the road, Winner explained that autonomous vehicles are already being tested in public spaces, travelling at low speed and with a safety driver in selected cities.
Winner divides the fields of application of autonomous vehicles into four categories: 1. the motorway pilot, which drives from entrance ramp to exit ramp; 2. the parking assistant, which searches for a parking space without the driver being present. Building on these, the next steps would be 3. the machine that can drive on all roads, and 4. the “vehicle on demand”, which could be used, for example, as a taxi or delivery service. The latter could be dispatched from a control centre, comparable to the railway system, and thus increase the importance of car sharing.
Mastering all these situations is only possible if the car can “see”, for which different sensors are used: for example lidar, which covers large distances but whose measurements are weakened by fog; ultrasonic sensors, which deliver results with an accuracy of 5-10 millimetres but have only a small field of view; and radar, which is hardly affected by fog but, owing to its low angular resolution, has problems capturing the outer dimensions of objects. Notwithstanding their individual shortcomings, these technologies, used together, enable a precise analysis of the surroundings. Nevertheless, uncertainties remain, for example when the car detects an obstacle where there is none, or fails to detect an obstacle and drives into it without braking. Detection alone is also not enough if the vehicle ultimately draws the wrong conclusions.
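The lecture named no concrete fusion algorithm; purely as an illustration of how complementary sensors can compensate for each other, a confidence-weighted combination of readings might be sketched as follows (the sensor names come from the lecture, but the numbers and the weighting rule are invented):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "lidar", "radar", "ultrasonic"
    distance_m: float  # estimated distance to the obstacle
    confidence: float  # 0.0 .. 1.0, degraded e.g. by fog for lidar

def fuse(detections, threshold=1.0):
    """Confidence-weighted average distance; None if total confidence
    is too low to trust that an obstacle is really there."""
    total = sum(d.confidence for d in detections)
    if total < threshold:
        return None  # treat as "no reliable obstacle detected"
    return sum(d.distance_m * d.confidence for d in detections) / total

# In fog, lidar confidence drops while radar stays high.
readings = [
    Detection("lidar", 31.0, 0.3),
    Detection("radar", 30.2, 0.9),
    Detection("ultrasonic", 30.5, 0.1),  # obstacle beyond its short range
]
print(fuse(readings))  # fused distance estimate in metres
```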
How safe is autonomous driving really? Developers argue for the necessity of autonomous cars on the grounds that the machine driver is safer than the human one. According to Winner, this cannot yet be proven before autonomous vehicles are put on the market. It should also be borne in mind that humans already drive very safely: statistically, each person causes only 1.4 accidents in the course of a lifetime. On German roads, 210 million kilometres are driven before a fatal accident occurs; on the motorway, this figure even rises to 660 million kilometres. Autonomous vehicles must measure up to, and indeed surpass, these impressive figures to vindicate the safety argument. A meaningful verdict on the error rate of autonomous vehicles, however, can only be reached after billions of kilometres have been driven on public roads.
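Winner’s point that billions of test kilometres are needed can be made plausible with a back-of-the-envelope estimate. Assuming fatal accidents follow a Poisson process (our own illustrative assumption, not a calculation presented in the lecture), a fatality-free test campaign would need roughly three times the human benchmark distance just to match it at 95% confidence, and ten times that to show a tenfold improvement:

```python
import math

# Human benchmark from the lecture: one fatal accident per
# 210 million km overall (660 million km on motorways).
human_rate = 1 / 210e6  # fatal accidents per km

# Under a Poisson assumption: if a fleet drives N km with zero
# fatalities, the 95%-confidence upper bound on its accident rate
# satisfies exp(-rate * N) = 0.05, i.e. N = -ln(0.05) / rate.
def km_needed(target_rate, confidence=0.95):
    return -math.log(1 - confidence) / target_rate

print(f"{km_needed(human_rate):.2e} km")       # ~6.3e8: just to match humans
print(f"{km_needed(human_rate / 10):.2e} km")  # ~6.3e9: to show 10x safer
```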
Regardless of the technical design and possible uses of autonomous vehicles, it is unclear how society will react to them. Will the investments of the car industry and of individual states pay off at all? Will road users switch to autonomous driving? It is also impossible to predict whether the existing legal framework covers all aspects of autonomous driving or whether adjustments will be necessary, especially as the machine will take on more responsibility in the future. Although the rules of civil and criminal law are merely reactive, Winner considers them at least a sufficient framework for testing and researching autonomous vehicles.
In the lively discussion that followed, Winner answered numerous questions from the interested audience and pointed to the dangers of systems being hacked. Particularly dangerous, he said, would be attacks on control centres, through which hundreds of vehicles could be brought under an attacker’s control.
In any case, Winner expects autonomous vehicles to arrive on the roads soon, and to change society and mobility profoundly. He nevertheless advises caution: in the technology sector, many things rest on assumptions and belief rather than knowledge, which is why it is important to view technical innovations sceptically and, given our limited state of knowledge, to try to expect the unexpected.
Martin Weinmann, June 2018
The report is available as a PDF here.
The Robot Judge: Law, Technology and Historical Patterns of Change
15.6.2018
On 15 June 2018, Professor Jørn Øyrehagen Sunde, Faculty of Law, University of Bergen, gave a presentation about “The Robot Judge: Law, Technology and Historical Patterns of Change” as part of the LunchTimeSeries on Law, Technology, and Society (LTS). Despite the upcoming examination week, an interested audience took the opportunity to learn more about this lecture’s thought-provoking topic. After a short personal introduction by Professor Iris Eisenberger, Institute of Law, University of Natural Resources and Life Sciences, Vienna, Professor Sunde was warmly welcomed by the audience.
In his presentation, Sunde first traced the history of the idea of a robot judge before examining historical patterns of change in the Norwegian legal system. He subsequently used the lessons drawn to assess possible fields of application for the robot judge. He concluded by highlighting potential obstacles that lawmakers and computer scientists need to consider before implementing a robot judge.
An interesting feature of the lecture was its unorthodox approach. According to Sunde, legal scholars nowadays often team up with engineers and other natural scientists to better understand emerging technologies. Yet he pursued a different sort of interdisciplinary approach and analysed the historical and cultural patterns of change that emerge whenever new technologies challenge the legal system in place. Professor Sunde also clarified that his goal was not to assess whether the development of a robot judge is good or bad from a moral point of view. Instead, he compared the (potential) emergence of a robot judge with other technological shifts in the past.
First, Sunde pointed out that fascination with machines goes back as far as the Age of Enlightenment. The earliest example he presented was the Schachtürke or “chess Turk”, invented at the end of the 18th century: a fake chess-playing machine whose creators claimed it was able to play chess on its own. While the claim attracted considerable public attention, the “chess Turk” in reality only worked with a human player hidden inside the construction; the technology for a mechanical chess player did not yet exist. In a certain way, the story of the Schachtürke also echoes the current discussion about the robot judge. In late 2016, several newspapers published reports about a study by Aletras et al. called “Predicting judicial decisions of the European Court of Human Rights: A Natural Language Processing perspective”. These reports exaggerated the study’s findings, and as a result the development of the robot judge seemed imminent to the general public. Eventually, this misleading news coverage prompted even the Norwegian government to ask scientists how the robot judge could be implemented in the Norwegian legal system.
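The study behind those headlines trained, in essence, a support vector machine on n-gram features extracted from the case texts. A minimal, purely illustrative sketch of such a classifier, with invented placeholder cases and labels, might look as follows:

```python
# A sketch in the spirit of Aletras et al.: an SVM over n-gram
# features of case texts. The two example cases are invented
# placeholders, not real ECtHR documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

case_texts = [
    "The applicant alleged ill-treatment in pre-trial detention ...",
    "The applicant complained about the length of civil proceedings ...",
]
violation_found = [1, 0]  # 1 = violation found, 0 = no violation

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),  # unigrams to trigrams
    LinearSVC(),
)
model.fit(case_texts, violation_found)
print(model.predict(["The applicant alleged ill-treatment ..."]))
```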
In the second and third parts of his presentation, Sunde took the audience on a stroll through roughly one thousand years of Norwegian legal history. In doing so, he showed that each major development in communication technology has also had a profound impact on the character of the law. The first example was the change from oral law to written law. This shift enabled the central authorities to promote internal legal unity and wide-scale legislation, thus rendering the law an instrument for targeted governance.
Building on this legal history, Sunde returned to the question of the robot judge. He argued that we can already observe the massive use of computer programmes in areas like public administration or contract design. However, he highlighted the obstacles we would face if the ongoing digitalisation led to the development of a robot judge. Among the concerns he mentioned was the independence of the judiciary: since a robot has to be programmed, its decisions are predetermined by the algorithms used; as a result, the robot judge would not be independent. Another issue raised was the public purpose of a court system: a justice system also needs to reflect the changing public sense of justice, yet it remains unclear if and how the robot judge could develop the necessary ability to learn morals, which in turn might lead to decisions that the public perceives as unjust. Lastly, the orality of proceedings might pose a practical problem for the robot judge: oral proceedings usually do not follow a strict form, but are a complex series of questions and answers, and it might therefore be difficult for a programme to process the arguments properly.
The subsequent discussion centred on the robot judge’s potential advantages and disadvantages. On the one hand, a part of the audience argued that such a programme might just turn out to be a tool assisting the human judge in doing his or her work more efficiently and more objectively. On the other hand, some expressed concerns that the robot judge might pose a threat to modern liberal democracies because a robot judge cannot engage in moral considerations. Another issue raised was the question of causality; Professor Sunde portrayed the law primarily as a phenomenon shaped by the communication technologies at hand. Yet, it remains debatable in how far the law in return affects the use of the available means of communication.
While some of the questions had to remain unanswered, the discussion showed once more the importance of interdisciplinary approaches for the successful governance of emerging technologies.
Michael Fürmann, June 2018
The report is available as a PDF here.
Peer-to-Peer Law and the Commons
28.6.2018
The last LTS lecture of the summer term was delivered by Professor Melanie Dulong de Rosnay. A research associate professor at the French National Centre for Scientific Research (CNRS) and former visiting fellow at the London School of Economics and Political Science (LSE), she heads the joint Information and Commons Research Group at CNRS/Paris Sorbonne and acts as a work package lead of the H2020 CAPS project netCommons on community wireless networks. Her research focuses on techno-legal infrastructure and policy for information and digital commons, but also touches upon algorithmic regulation, distributed architectures, peer production, open access, and licensing.
The first part of Melanie Dulong de Rosnay’s lecture outlined the dynamic relationship between law and technology, in particular the internet. Much of this relationship used to be fairly one-sided: For a long time, legal thinking was ill-equipped to face the staggering speed at which technology and innovation advanced. As a result, the binding norms regulating society and individual lives were no longer set by law alone, but increasingly by technology and code. Dulong de Rosnay calls this “The Myth of Digital Golems”, the impression that uncontrollable and unaccountable creatures are blindly enforcing the orders of their designers.
The advances introduced by both Cyberlaw (L. Lessig) and Lex Electronica were a turning point for legal thinking. Not only did they conceptualise how law could regulate code, but they also considered how legal values could be embedded in regulation by code. In a sense, this was an attempt to invert the relationship of technology and law from one of domination to one of cooperation.
Dulong de Rosnay proposed the concept of Peer-to-Peer Law as a further step in the evolving relationship between law and technology. In this hybrid model of regulation, law shall continue to “infect” code with its values. However, in a dialectical turn, some of code’s technical design features shall in turn be “exported” to law and its core concepts.
The second part of the lecture illustrated this alternative way of thinking through the example of peer-to-peer architectures, such as community networks (CNs) or distributed storage. These distributed architectures allow for fragmented and decentralised networks with an unstable group of participants (“peers”). In many cases these peers can be anonymous or pseudonymous, especially where CNs are used to promote privacy or to shield political activism from state control of the internet. Such technologies fundamentally challenge traditional legal reasoning, because most legal concepts rely on identifiable individual subjects whose actions can be ascribed to a time and place. Peer-to-peer architectures married to the idea of the Commons may open new avenues to (re)think core legal concepts, such as property and liability.
Property is traditionally thought of as a bundle of rights; it allows one to use (usus), enjoy the fruits of (fructus), and exclusively dispose of or even destroy (abusus) a certain good. Although the law has long had the means to cope with the fragmentation of property between multiple users, further development is possible. For instance, the Free Software and Creative Commons movements were able to “hack” copyright by dissolving it into components specifying the scope of rights for future, potential users. Environmental law has also developed ways to recognise the rights of a collective of users: Italy bans the privatisation of movement on water, and elsewhere rights to water or land have been awarded to collectives. Further, some legislatures have recently endowed natural features such as mountains and rivers with subjective rights that can be exercised by an unstable group of interested peers. For Dulong de Rosnay, such legal “hacks” constitute a particularly useful source of inspiration for the legal conceptualisation of the Commons, a good or set of goods accessible to anyone within a given (offline or online) community.
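The modular character of this copyright “hack” becomes visible when the licence components are written out as a data structure. The sketch below is schematic, modelling only the well-known Creative Commons building blocks, not the actual licence texts:

```python
from dataclasses import dataclass

# A sketch of the "dissolved" bundle of rights: each Creative Commons
# licence is assembled from a small set of standard components.
@dataclass(frozen=True)
class CCLicence:
    attribution: bool = True       # BY: credit the author
    non_commercial: bool = False   # NC: no commercial use
    no_derivatives: bool = False   # ND: no adaptations
    share_alike: bool = False      # SA: adaptations under same terms
    # Note: in the real licence suite, ND and SA are mutually exclusive.

    def code(self):
        parts = ["BY"] if self.attribution else []
        if self.non_commercial:
            parts.append("NC")
        if self.no_derivatives:
            parts.append("ND")
        if self.share_alike:
            parts.append("SA")
        return "CC-" + "-".join(parts)

print(CCLicence(non_commercial=True, share_alike=True).code())  # CC-BY-NC-SA
```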
Liability is another legal concept that may be disrupted by peer-to-peer networks and reinterpreted for the Commons. Dulong de Rosnay mentioned peer-to-peer car insurance schemes as an example of distributed liability in offline activities. However, the distribution of liability depends on the distribution of trust. In the context of online peer-to-peer networks with an unstable group of potentially unidentifiable peers, such a distribution of trust and liability is much more problematic and may, if possible at all, be undesirable.
The lecture fuelled a rich and engaging discussion. Debates focussed on ways to rekindle trust in today’s societies, the possibility of fully distributed digital networks, the added value provided by peer-to-peer architectures, as well as the role of law in facilitating and regulating them. The questions on recent developments in internet and copyright law were particularly topical; as technology has the potential to fundamentally challenge basic legal concepts, this is equally true for the relationship between the internet and copyright. Or shall we say was?
The internet is no longer the space where John Perry Barlow declared “your legal concepts of property, expression, movement, and context” not to apply. If “copyright has always been at war with technology” (Lessig, 2006, p. 172), then it seems today that copyright is not only prevailing, but using both technology and law to promote its purposes. For instance, the proposal for a new EU Copyright Directive includes a so-called “link tax” (Art. 11) and “upload filters” (Art. 13). Critics say these measures would mean the end of the internet as we know it and do “irreparable damage to our fundamental rights and freedoms”. There is no doubt that further developments in this domain will be thoroughly scrutinised by both the legal profession and society at large.
Daniel Romanchenko, July 2018
The report is available as a PDF here.