Distributed Selves: Technology and Rights in the Digital Age
25 April 2018
On 25 April 2018, the LunchTimeSeries on Law, Technology, and Society (LTS) began its fifth consecutive semester. The auditorium was filled to capacity and extra chairs were brought in to accommodate the large turnout. Professor Iris Eisenberger, Institute of Law, University of Natural Resources and Life Sciences, Vienna, introduced the first LTS guest lecturer of summer term 2018: Professor Sheila Jasanoff, Pforzheimer Professor of Science and Technology Studies at the John F. Kennedy School of Government at Harvard University.
In her lecture “Distributed Selves: Technology and Rights in the Digital Age”, Jasanoff advocated for a legal approach to protecting human values in a pervasively digitalised world.
One can address the question of how to protect human values in the digital age from different perspectives: either by focusing on the emerging technologies and their disruptive potential, or by focusing on what human values we would like to preserve.
If we take the latter perspective, as Jasanoff did in her lecture, we put human beings rather than machines at the centre of our attention. This allows us to take a closer look at our different selves: the self of observable individual characteristics (“phenotypic self”), the self of genetic and genomic information (“biological self”), and the self that consists of the digital traces we leave behind (“digital self”). In contrast to our phenotypic selves, our biological and digital selves are distributed, thus raising complex questions for the fate of human values in a digitalised world.
Jasanoff illustrated the dynamic nature of these questions with judgments of the United States Supreme Court concerning the Fourth Amendment to the US Constitution. The Fourth Amendment was originally intended to protect citizens from warrantless searches in their homes. Over the past fifty years, however, the Supreme Court has faced the question of whether wiretapping a public phone booth (Katz v. United States, 389 U.S. 347, 1967), searching through rubbish bags left on the street (California v. Greenwood, 486 U.S. 35, 1988), or accessing cell phones (Riley v. California, 573 U.S. ___, 2014) falls within the scope of the Fourth Amendment. Technological change has repeatedly required the Court to rethink the definitions of private space and public space. Yet although its interpretation has changed in the face of technological development, the Fourth Amendment has continuously protected the human value embedded in it.
To re-integrate human values into our distributed selves, different societal instruments come to mind: markets, regulation, ethics, and the law. Jasanoff argued that the market cannot grasp the complex issues of distributed selves because of the limited set of values it considers. Product-focused and reactive regulation, in turn, is embedded in existing social values and is therefore an inadequate tool to protect them. Ethics tends to privatise questions of value by turning public values into matters of expertise, thus pulling them out of societal discourse prematurely.
The law, by contrast, offers a basis for declaring what human values we consider worth preserving. While these commitments may be reinterpreted over time, the core values collectively enshrined in them remain.
The right to be forgotten is, albeit controversial, an example of the legal re-integration of such foundational values into our distributed selves in a digitalised world; a world that has the technological means to record our digital traces unforgivingly and permanently. In this light, the right to be forgotten is an attempt to prioritise what is societally desirable over what is technologically possible. It shows that it is the law that offers an appropriate place for such activism and for the construction of imaginaries.
The audience discussion addressed, among other aspects, the democratisation of surveillance, the perception of the General Data Protection Regulation in the US, and the development of the different selves. Jasanoff wove these statements and questions together into her plea for the law as a suitable means of re-integration: the law allows us to discuss and democratically decide on society's emergent values, it allows us to take account of temporal and socio-cultural dependencies and, most importantly, it offers a place to raise questions that are not being asked.
Thomas Buocz, April 2018
You can find the report in PDF format here.
Law meets Data Science: Findings from the EUTHORITY Project
9 May 2018
The "LunchTimeSeries on Law, Technology and Society" (LTS) continued on Europe Day, 9 May 2018, with a topic that could not have been more fitting for the date: Dr. Nicolas Lampach lectured on "Law Meets Data Science: Findings from the EUTHORITY Project". The EUTHORITY Project is an interdisciplinary research project that combines legal analysis with empirical methods, and this interdisciplinary dimension made Lampach's presentation a great enrichment for the audience at the University of Natural Resources and Life Sciences, Vienna.
Lampach began his lecture by explaining the importance of preliminary rulings of the European Court of Justice: as a fundamental mechanism, they ensure the uniform application of EU law. He emphasised that domestic courts show major disparities in their referral behaviour. Those differences stem from various factors, such as economic, political, and legal variables. The EUTHORITY project aims to identify these factors by collecting and analysing data in order to determine the position of domestic courts towards EU law.
What data does the project collect? It includes, for instance, data on economic activity, political systems, legal traditions, and judicial organisation, covering regional as well as national aspects. Currently, the EUTHORITY project works with 40 variables and plans to expand the set to between 80 and 85.
Lampach illustrated that the number of submitted references to the European Court of Justice differs among the member states. He took Germany and France as an example to show that even the founding states of the European Union show very different referral behaviour. Whereas Germany sends a high number of references, France appears to be more reluctant to do so. Moreover, there are not only differences among the member states, but also on a regional level within a state. To stick with the example of Germany and France: Germany shows a decentralised pattern wherein various domestic courts initiate proceedings for a preliminary ruling. France, on the contrary, is very centralised and sends references primarily from Paris.
Another outcome of the EUTHORITY research is a disparity between the referral behaviour of first-level courts and peak courts. The project members observed that, in the beginning, the first-level courts submitted references more often than the peak courts. In the 1980s, however, this pattern changed: the peak courts became more active than other domestic judicial bodies.
Lampach cited two theories to explain this turning point: the Empowerment Theory and the Judicial Behaviour Theory. The Empowerment Theory argues that lower national courts enthusiastically began referring cases to the European Court of Justice to acquire new powers of judicial review; the peak courts later curbed this trend. According to the Judicial Behaviour Theory, workload and resources determine a judge's output. In the beginning, the first-level courts functioned as "start-ups" that embraced the preliminary ruling mechanism. But the pressure on the peak courts increased as cooperation with the European Court of Justice gained more adherents. Furthermore, the peak courts have a smaller caseload, more support from clerks and other legal assistants, and therefore a bigger capacity than lower domestic courts.
Beyond the number of preliminary ruling proceedings, their content also shows disparities. The EUTHORITY project uses text mining as a data-harvesting technique to obtain an overview of the topics discussed in the submitted references. Lampach illustrated that in cargo port regions, words such as "product", "trade" and "custom" appear often, while in other regions, "contract", "proceed" and "service" are more common. In more rural regions, the nouns "agriculture" and "fishery" appear more often in judgments, while in more urban regions, terms such as "law", "right" and "freedom" predominate. Furthermore, intra-EU trade has a major impact on referral behaviour: courts in member states that trade more with the rest of the EU seem to submit more references to the Court of Justice.
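The kind of regional term-frequency comparison described above can be sketched in a few lines. The snippets and region names below are invented stand-ins; the EUTHORITY corpus itself is not reproduced in this report, and a real analysis would also remove stopwords and stem the remaining terms.

```python
from collections import Counter
import re

# Hypothetical snippets standing in for referral texts from two regions.
referrals = {
    "cargo_port_region": [
        "The customs duty on the imported product restricts trade.",
        "Trade in the product requires a customs declaration.",
    ],
    "urban_region": [
        "The contract clause limits the freedom to provide a service.",
        "The right to a fair proceeding is a fundamental freedom.",
    ],
}

def term_frequencies(texts):
    """Count lower-cased words across a list of texts."""
    counter = Counter()
    for text in texts:
        counter.update(re.findall(r"[a-z]+", text.lower()))
    return counter

for region, texts in referrals.items():
    print(region, term_frequencies(texts).most_common(3))
```

Even this toy version shows how characteristic vocabulary ("customs", "trade" versus "contract", "freedom") surfaces once texts are grouped by region.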
The subsequent questions from the audience addressed topics such as current developments in "legal tech", the challenges of data protection within the EUTHORITY project, and, more generally, whether data collection and machines have the capacity to produce profound scientific results in legal matters. But Lampach also encountered criticism of the project from the auditorium. One target was the choice of variables: several audience members argued that the chosen factors had a major impact on the research results. Text mining, too, was criticised: some viewed the method as too superficial and doubted that it could replace a profound legal analysis. Lampach and the audience engaged in a lively discussion that proved enriching for all sides.
Magdalena Nemeth, May 2018
You can find the report in PDF format here.
Autonomous Cars - A Technical Introduction
5 June 2018
Autonomous driving is currently one of the most discussed topics in the automotive sector. Almost all major car manufacturers are investing heavily and trying to take a pioneering role. Society, too, is increasingly aware of autonomous driving and is discussing it ever more intensely. The recent fatal accidents involving autonomous cars from Uber and Tesla have played a major role here and have raised safety concerns about autonomous vehicles.
Prof. Dr. Hermann Winner of the Technical University of Darmstadt, who has himself filed over 100 patents in the field of automotive engineering, introduced the technology of "autonomous driving" in the context of the LunchTimeSeries on Law, Technology, and Society (LTS) on 5 June 2018. First, he pointed out that autonomous driving is not a brand-new phenomenon. In the mid-1990s, successful tests with automated vehicles took place, for example as part of Daimler's "Prometheus" project. Apart from brief interventions by the control driver, the car already drove autonomously on the motorway from Munich to the Danish city of Odense.
But what exactly does the term "autonomous driving" mean? SAE International divides the autonomy of a car into six levels: the higher the level, the more the driving task, and the responsibility for it, shifts from the human to the machine. The biggest step is from level 2 to level 3. Up to level 2, the driver must be able to intervene immediately in difficult situations. From level 3 on, the machine retains control for a few seconds when a sudden new situation arises, until the control driver takes over.
Commenting on the current situation on the road, Winner pointed out that autonomous vehicles have already been tested on public roads in selected cities, travelling at low speeds and with a control driver on board.
Winner divides autonomous vehicle scenarios into four categories: 1. the freeway pilot, driving from ramp to ramp; 2. the parking assistant, which searches for a parking space without the driver being present; 3. the machine travelling on all roads; and 4. the "vehicle-on-demand", which can be used, for example, as a taxi or delivery service. The latter could, like trains in a railway system, be dispatched from a control centre, thereby increasing the importance of car sharing.
Coping with all these situations is only possible if the car can "see". This requires different sensors: for example lidar, which covers large distances but whose measurements are weakened by fog; ultrasonic sensors, which achieve an accuracy of 5-10 millimetres but have only a small field of view; or radar, which is hardly limited by fog but struggles to detect the outer dimensions of objects because of its low angular resolution. Despite their individual deficiencies, these technologies allow an accurate analysis of the environment when used together. Nevertheless, uncertainties remain, for example when the car detects an obstacle where there is none, or fails to recognise an obstacle and crashes into it without reacting. Detection alone is not enough if the vehicle draws the wrong conclusions from it.
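As a rough illustration of how readings from complementary sensors can be combined, the sketch below uses inverse-variance weighting, one standard fusion technique; the distance and noise figures are illustrative assumptions, not values from the lecture.

```python
def fuse(estimates):
    """Inverse-variance weighted mean of (distance_m, std_dev_m) pairs.

    Sensors with lower noise (smaller std dev) dominate the fused estimate.
    """
    weights = [1.0 / (sigma ** 2) for _, sigma in estimates]
    total = sum(weights)
    return sum(w * d for w, (d, _) in zip(weights, estimates)) / total

# Hypothetical distance readings to the same obstacle:
readings = [
    (12.4, 0.30),  # lidar: accurate at range, degraded here by fog
    (12.1, 0.05),  # ultrasonic: millimetre-level, but short range only
    (12.6, 0.50),  # radar: robust in fog, coarse angular resolution
]

print(f"fused distance: {fuse(readings):.2f} m")
```

The fused value lands close to the ultrasonic reading, since it is assumed to be the least noisy; in fog, where lidar degrades, the weighting shifts automatically towards radar and ultrasound.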
How safe is autonomous driving in the end? As an argument for the need for autonomous cars, developers claim that the machine driver is safer than the human driver. According to Winner, this cannot yet be proven, because autonomous vehicles have not been placed on the market so far. It should also be remembered that people drive very safely: statistically speaking, every human causes only 1.4 accidents in his or her entire life, and vehicles cover 210 million kilometres on German roads before a deadly accident happens. On the highway, this value rises to 660 million kilometres. Autonomous vehicles have to match and even surpass these impressive values to confirm the safety argument. However, a meaningful judgment on their susceptibility to error can only be made after billions of kilometres have been travelled on public roads.
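A back-of-the-envelope calculation shows why billions of test kilometres are needed. The mileage figures are those cited by Winner; the statistical "rule of three" (with zero events observed over n kilometres, the 95% upper confidence bound on the event rate is roughly 3/n) is our illustrative assumption.

```python
# Figures cited in the lecture
km_per_fatality_road = 210e6     # km per fatal accident, German roads overall
km_per_fatality_highway = 660e6  # km per fatal accident, German highways

# Rule of three (illustrative assumption): to show a fatality rate no worse
# than the human one with ~95% confidence, a fleet must drive about three
# times the human interval without a single fatality.
required_km_road = 3 * km_per_fatality_road
required_km_highway = 3 * km_per_fatality_highway

print(f"roads:    {required_km_road / 1e6:.0f} million km without a fatality")
print(f"highways: {required_km_highway / 1e9:.2f} billion km without a fatality")
```

The highway figure alone comes to almost two billion fatality-free kilometres, which is consistent with the observation that only billions of test kilometres allow a meaningful judgment.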
Regardless of the technical design and the possible applications of autonomous vehicles, it is unclear how society will react to them. Are the investments of the automobile industry and of individual countries worthwhile in the first place? Will road users switch to autonomous driving? It is also unclear whether the existing legal framework covers all aspects of autonomous driving or whether adjustments are necessary, in particular because the machine will assume more responsibility in the future. Although the rules of civil and criminal law are merely reactive, Winner sees them as at least a sufficient framework for testing and exploring autonomous vehicles.
In the subsequent discussion, Winner answered countless questions from the interested audience and pointed out the dangers of hacked systems. Control centres are especially vulnerable: if hacked, hundreds of vehicles could be brought under an attacker's control at the same time.
Winner is sure that autonomous vehicles will soon be present on the streets and will change society and mobility. In any case, he advises caution, because much in this field of technology rests on assumptions and beliefs rather than on knowledge. It is therefore important to observe technical innovations sceptically and to try to expect the unexpected on the basis of our limited knowledge.
Martin Weinmann, June 2018
You can find the report in PDF format here.
The Robot Judge: Law, Technology and Historical Patterns of Change
15 June 2018
On 15 June 2018, Professor Jørn Øyrehagen Sunde, Faculty of Law, University of Bergen, gave a presentation about “The Robot Judge: Law, Technology and Historical Patterns of Change” as part of the LunchTimeSeries on Law, Technology, and Society (LTS). Despite the upcoming examination week, an interested audience took the opportunity to learn more about this lecture’s thought-provoking topic. After a short personal introduction by Professor Iris Eisenberger, Institute of Law, University of Natural Resources and Life Sciences, Vienna, Professor Sunde was warmly welcomed by the audience.
In his presentation, Sunde traced back the idea of a robot judge before examining historical patterns of change in the Norwegian legal system. He subsequently used the lessons drawn to assess possible fields of application for the robot judge. He concluded by highlighting potential obstacles that lawmakers and computer scientists need to consider before implementing a robot judge.
An interesting feature of the lecture was its unorthodox approach. According to Sunde, legal scholars nowadays often team up with engineers and other natural scientists to better understand emerging technologies. He, however, pursued a different sort of interdisciplinary approach, analysing historical and cultural patterns of change at moments when new technologies challenged the legal system in place. Professor Sunde also clarified that his goal was not to assess whether the development of a robot judge is good or bad from a moral point of view. Instead, he compared the (potential) emergence of a robot judge with other technological shifts in the past.
First, Sunde pointed out that the fascination with machines goes back as far as the Enlightenment. The earliest example he presented was the Schachtürke or "chess Turk", invented at the end of the 18th century: a fake chess-playing machine whose creators claimed it could play chess on its own. While this claim attracted a lot of public attention, the chess Turk in reality only worked with a human player hidden inside the construction; the technology for a mechanical chess player did not yet exist. In a certain way, the story of the Schachtürke also echoes the current discussion about the robot judge. In late 2016, several newspapers published reports about a study by Aletras et al. called "Predicting judicial decisions of the European Court of Human Rights: A Natural Language Processing perspective". These reports exaggerated the study's findings, and as a result the development of the robot judge seemed imminent to the general public. Eventually, this misleading news coverage prompted even the Norwegian government to ask scientists how a robot judge could be implemented in the Norwegian legal system.
In the second and third parts of his presentation, Sunde took the audience on a stroll through roughly one thousand years of Norwegian legal history. In doing so, he showed that each major development in communication technology also had a severe impact on the character of the law. The first example thereof was the change from oral law to written law. This shift enabled the central authorities to promote internal legal unity and wide-scale legislation, thus rendering the law an instrument for targeted governance.
Building on the legal history, Sunde returned to the question of the robot judge. He argued that we can already observe the massive use of computer programmes in areas like public administration or contract design. However, he highlighted the obstacles we have to face if the ongoing digitalisation leads to the development of a robot judge. Among the concerns he mentioned was the independence of the judiciary. Since a robot has to be programmed, its decisions are predetermined by the algorithms used. As a result, the robot judge would not be independent. Another issue raised was the public purpose of a court system. That is to say that a justice system also needs to reflect the changing public sense of justice. However, it remains unclear if and how the robot judge could develop the necessary ability to learn morals. This in turn might lead to decisions that the public perceives as unjust. Lastly, the orality of the proceedings might be a practical problem for the robot judge. Usually, oral proceedings do not follow a strict form, but are a complex series of questions and answers. As a consequence it might be difficult for a programme to process the argument properly.
The subsequent discussion centred on the robot judge's potential advantages and disadvantages. On the one hand, part of the audience argued that such a programme might simply turn out to be a tool that helps the human judge work more efficiently and more objectively. On the other hand, some expressed concern that the robot judge might pose a threat to modern liberal democracies because it cannot engage in moral considerations. Another issue raised was the question of causality: Professor Sunde portrayed the law primarily as a phenomenon shaped by the communication technologies at hand, yet it remains debatable to what extent the law in turn affects the use of the available means of communication.
While some of the questions had to remain unanswered, the discussion showed once more the importance of interdisciplinary approaches for the successful governance of emerging technologies.
Michael Fürmann, June 2018
You can find the report in PDF format here.
Peer-to-Peer Law and the Commons
28 June 2018
The last LTS lecture of the summer term was delivered by Professor Melanie Dulong de Rosnay. A research associate professor at the French National Centre for Scientific Research (CNRS) and former visiting fellow at the London School of Economics and Political Science (LSE), she heads the joint Information and Commons Research Group at CNRS/Paris Sorbonne and serves as a work package lead of the H2020 CAPS project netCommons on community wireless networks. Her research focuses on techno-legal infrastructure and policy for information and digital commons, but also touches upon algorithmic regulation, distributed architectures, peer production, open access, and licensing.
The first part of Melanie Dulong de Rosnay’s lecture outlined the dynamic relationship between law and technology, in particular the internet. Much of this relationship used to be fairly one-sided: For a long time, legal thinking was ill-equipped to face the staggering speed at which technology and innovation advanced. As a result, the binding norms regulating society and individual lives were no longer set by law alone, but increasingly by technology and code. Dulong de Rosnay calls this “The Myth of Digital Golems”, the impression that uncontrollable and unaccountable creatures are blindly enforcing the orders of their designers.
The advances introduced by both Cyberlaw (L. Lessig) and Lex Electronica were a turning point for legal thinking. Not only did they conceptualise how law could regulate code, but they also considered how legal values could be embedded in regulation by code. In a sense, this was an attempt to invert the relationship of technology and law from one of domination to one of cooperation.
Dulong de Rosnay proposed the concept of Peer-to-Peer Law as a further step in the evolving relationship between law and technology. In this hybrid model of regulation, law shall continue to “infect” code with its values. However, in a dialectical turn, some of code’s technical design features shall in turn be “exported” to law and its core concepts.
The second part of the lecture illustrated this alternative way of thinking through the example of peer-to-peer architectures, such as community networks (CNs) or distributed storage. These distributed architectures allow for fragmented and decentralised networks with an unstable group of participants ("peers"). In many cases these peers can be anonymous or pseudonymous, especially where CNs are used to promote privacy or to shield political activism from state control of the internet. Such technologies fundamentally challenge traditional legal reasoning, because most legal concepts rely on identifiable individual subjects whose actions can be ascribed to a time and place. Peer-to-peer architectures married to the idea of the Commons may open new avenues for (re)thinking core legal concepts, such as property and liability.
Property is traditionally thought of as a bundle of rights; it allows one to use (usus), process (fructus), and dispose exclusively of, or even destroy (abusus), a certain good. Although the law has had the means to cope with the fragmentation of property between multiple users, further development is possible. For instance, the Free Software and Creative Commons movements were able to "hack" copyright by dissolving it into components specifying the scope of rights for future, potential users. Environmental law has also developed ways to recognise the rights of a collective of users: Italy bans the privatisation of movement on water, and elsewhere rights to water or land have been awarded to collectives. Further, some legislatures have recently endowed natural features such as mountains and rivers with subjective rights that can be exercised by an unstable group of interested peers. For Dulong de Rosnay, such legal "hacks" constitute a particularly useful source of inspiration for the legal conceptualisation of the Commons, a good or set of goods accessible to anyone within a given (offline or online) community.
Liability is another legal concept that may be disrupted by peer-to-peer networks and reinterpreted for the Commons. Dulong de Rosnay mentioned peer-to-peer car insurance schemes as an example for distributed liability in off-line activities. However, the distribution of liability depends on the distribution of trust. In the context of on-line peer-to-peer networks with an unstable group of potentially unidentifiable peers, such distribution of trust and liability is much more problematic and may – if at all possible – be undesirable.
The lecture fuelled a rich and engaging discussion. Debates focussed on ways to rekindle trust in today’s societies, the possibility of fully distributed digital networks, the added value provided by peer-to-peer architectures, as well as the role of law in facilitating and regulating them. The questions on recent developments in internet and copyright law were particularly topical; as technology has the potential to fundamentally challenge basic legal concepts, this is equally true for the relationship between the internet and copyright. Or shall we say was?
The internet is no longer the space where John Perry Barlow declared "your legal concepts of property, expression, movement, and context" not to apply. If "copyright has always been at war with technology" (Lessig, 2006, p. 172), then it seems today that copyright is not only prevailing, but using both technology and law to promote its purposes. For instance, the proposed new EU Copyright Directive includes a so-called "link tax" (Art. 11) and "upload filters" (Art. 13). Critics say these measures would mean the end of the internet as we know it and do "irreparable damage to our fundamental rights and freedoms". There is no doubt that further developments in this domain will be thoroughly scrutinised by both the legal profession and society at large.
Daniel Romanchenko, July 2018
You can find the report in PDF format here.