The Lunch Time Series on Law, Technology and Society (LTS) enters the Summer Term 2019!

For the seventh consecutive semester, Univ.-Prof. Dr. Iris Eisenberger, M.Sc. (LSE), Institute of Law at the University of Natural Resources and Life Sciences, Vienna, and Univ.-Prof. Dr. Konrad Lachmayer, Sigmund Freud University, Vienna, are organising the Lunch Time Series on Law, Technology and Society (LTS).

In the summer term of 2019, the series will start on 17 May with a lecture by Prof. Claudia Müller-Birn, Free University of Berlin, on "Bringing the human to the center of algorithmic systems design: challenges and opportunities". You can find the announcement here.

On 12 June 2019, Prof. Sebastian Pfotenhauer, Technical University of Munich (TUM), will give a lecture on “Testing future Societies? Developing a framework for test beds and living labs as instruments of innovation governance”. You can find the announcement here.

As our third guest this semester, we will welcome Dr. Jack Stilgoe, University College London (UCL), on 27 June 2019. He will talk about "How experiments become futures: Social learning for self-driving cars". You can find the announcement here.

After each lecture, there will be an opportunity for public discussion. Following the Anglo-American model of lunch time lectures, the Institute of Law will provide catering. The event is open to all and participation is free of charge.

Please register in advance at: law(at)boku.ac.at.

You can find the complete program of the semester here.

Bringing the Human back to the Centre of Algorithmic Systems Design. Challenges and Opportunities.

17.05.2019

From Mythology to a Fair Picture. When Hephaestus, the Greek god of smiths, moulded Pandora from clay, people expected beauty and hope from her. Instead, she released nothing but plagues and misery from her jar. Frankenstein, the modern Prometheus, and his artificial human-turned-monster had an equally chilling effect. A similar story could be told about HAL 9000, the computer that betrayed its crew in Arthur C. Clarke's “2001: A Space Odyssey”.

Images like these still shape people’s ideas about artificial intelligence, Professor Claudia Müller-Birn emphasised at the very beginning of her talk. Media coverage often promotes these misconceptions, sometimes relying on studies that, in her view, do not give a fair picture of recent technical developments. In light of this, Müller-Birn, head of the Human-Centered Computing group at FU Berlin, takes on the task of elucidating what artificial intelligence, bots and machine learning are really about. She also wants to draw attention to the technology’s potential dangers and to show how it can be regulated so as to make the best use of its opportunities.

Wikipedia’s Social Algorithmic Governance. In explaining what so-called artificial intelligence truly constitutes, she draws on her rich experience and intense study of Wikipedia and its social algorithmic governance. Wikipedia had a no-bots rule until 2002. The exponential growth of its active user base, however, made it impossible to review and manage articles manually, which led to a policy change and the introduction of bots. Emphatic about the true nature of bots, Müller-Birn stresses that what is being discussed is not artificial intelligence; rather, these are merely autonomous or semi-autonomous computer programmes that help editors with content maintenance, coordination work and community support. Bots interact with Wikipedia’s core via its extensions in much the same way humans do, but through a programmatic interface rather than the graphical user interface designed for humans.
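
To make this distinction concrete, here is a minimal sketch, added to this report for illustration rather than taken from the talk, of how a bot might read an article through the MediaWiki Action API, the machine-readable interface Wikipedia exposes alongside its human-facing website. The endpoint and parameters are those of the standard Action API; the function name, the example article and the use of the Python “requests” package are illustrative choices.

```python
# Illustrative sketch: a bot reads an article via the MediaWiki Action
# API instead of the graphical interface used by human editors.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

def fetch_wikitext(title: str) -> str:
    """Return the current wikitext of an article, as a maintenance bot might."""
    params = {
        "action": "query",        # a read request
        "format": "json",
        "titles": title,
        "prop": "revisions",      # ask for revision data ...
        "rvprop": "content",      # ... specifically the page content
        "rvslots": "main",
    }
    response = requests.get(API_URL, params=params, timeout=10)
    response.raise_for_status()
    pages = response.json()["query"]["pages"]
    page = next(iter(pages.values()))   # the result is keyed by page ID
    return page["revisions"][0]["slots"]["main"]["*"]

print(fetch_wikitext("Internet bot")[:300])
```

Production bots typically build on frameworks such as Pywikibot, which add authentication, rate limiting and the edit rights granted through the approval process described below.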

What, then, is there to gain from Wikipedia’s use of bots? From 2002 onwards, Wikipedia developed a procedure for approving the tasks assigned to bots. An active user’s proposal is scrutinised by peers, and a sometimes lively discussion of the pros and cons of the bot in question unfolds. First, a member of the so-called Bot Approvals Group may approve a trial period for the task. During this period, the bot’s performance is supervised and members of the community can give feedback. At the end of the trial phase, the Bot Approvals Group approves or declines the request, basing its decision on whether the task conforms to Wikipedia’s standards and technical requirements. The approval process may also be accelerated, or conducted more carefully, depending on the bot operator’s standing in the community. Generally, one’s reputation among fellow Wikipedians and one’s expertise are important criteria when it comes to granting access rights higher up in Wikipedia’s hierarchy.

From Social Governance to Legislation. Policymakers could draw on these numerous instances of peer-discussed proposals on Wikipedia to create guidelines regulating the use of bots. Such guidelines are much needed indeed, Müller-Birn repeatedly emphasised, as legislation regulating the use of algorithms in a societally desirable way is currently lacking.

If a legislator were to draft a law on the governance of bots, Article 1, as its prime rule, could well be: "It is to be made sure that bots are identifiable as non-human entities at all times." Humans should familiarise themselves with this technology and cooperate effectively with machines while always remaining in charge. To this end, it is important that bots are not only identifiable as such, but also that the tasks they are assigned to carry out are transparent.
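
Wikipedia already offers a small technical counterpart to such an identifiability rule: edits made by approved bot accounts carry a machine-readable "bot" flag in the site’s recent-changes feed. The sketch below, again an illustration added to this report rather than material from the talk, uses that flag to list recent bot edits via the standard MediaWiki Action API; the limit of five entries is an arbitrary choice.

```python
# Illustrative sketch: list recent edits that Wikipedia itself marks
# with the machine-readable "bot" flag.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "format": "json",
    "list": "recentchanges",       # the site-wide recent-changes feed
    "rcprop": "title|user|flags",  # include the edit flags in the result
    "rcshow": "bot",               # only edits flagged as bot edits
    "rclimit": "5",
}
data = requests.get(API_URL, params=params, timeout=10).json()
for change in data["query"]["recentchanges"]:
    print(f'{change["user"]} (flagged as bot) edited "{change["title"]}"')
```

In this sense, a legal identifiability requirement would generalise a mechanism that Wikipedia’s self-governance already provides at platform level.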

After her talk, a lively audience enquired about the flexibility of the regulatory framework and the power shift it might imply, as well as about the culture of debate on Wikipedia. Another point of discussion was the design of algorithms like the one employed by the Austrian job centre, and who should be legitimised to decide on such designs.

Humans as the Centre of the Interaction. The essence of Professor Müller-Birn’s talk was twofold. First, when drafting these much-needed policies, we should focus on systems based on human-machine cooperation, which are far more widely used than fully autonomous systems. Here, social governance also helps to conceptualise the dependencies between the technical and the social systems. Second, while striving for closer interaction between humans and machines in order to seize the technology’s opportunities, we should remain aware of its risks. In doing so, we should spare no effort to ensure that humans remain in charge and return to the centre of algorithmic systems design. Again.

Johannes Huber, May 2019

You can find the report in PDF format here.

How experiments become futures: Social learning for self-driving cars

27.06.2019

Tomato harvesting and self-driving cars

“This is a tomato harvester”, said Dr Jack Stilgoe, Associate Professor at University College London (UCL), gesturing towards his presentation slides. The slides showed an image of an enormous agricultural machine in action on a tomato plantation. While the audience was still trying to figure out the connection between the tomato harvester and social learning for self-driving cars, Stilgoe continued: “If you’re a juicy, tasty, and soft tomato, you have little chance of surviving the tomato harvester. The tomato harvester favours dense, hard, and absolutely tasteless tomatoes.”

Stilgoe further explained that the emergence of the tomato harvester has had far-reaching consequences for the tomato market and beyond. For example, apart from encouraging the production of low-quality tomatoes, it has significantly reduced the number of migrant farm workers. However, at the time the tomato harvester was first introduced, few understood it as “an artefact with politics”. Therefore, its effects on society have been thoroughly described and discussed only in retrospect. With self-driving cars, we are again about to miss the window for a timely and proper public discourse.

Where do we come from?

To show how self-driving cars have been imagined in recent history, Stilgoe used two examples.

A very early vision of a self-driving car was presented in a film at the General Motors “Motorama” auto show in 1956. The self-driving car in the film worked only on dedicated roads fitted with navigation aids for the car. Very much like the tomato harvester, self-driving cars have thus shaped their field of operations from early on.

In 2007, the US military’s Defense Advanced Research Projects Agency (DARPA) held public demonstrations of self-driving vehicles. At first glance, the vehicles looked pitiful at best: although driving at low speed, they still repeatedly crashed into each other and into obstacles. At second glance, however, the demonstration of vehicles driving by themselves – however badly they did so – showed that what had once seemed impossible was now inevitable.

Are we nearly there yet?

How far have we come since DARPA’s demonstration more than ten years ago?

What might give us a realistic impression of the progress made, or not made, since “Motorama” in 1956 is the new self-driving bus in the “Seestadt Aspern”, an area in the east of Vienna. Only recently, in June 2019, the “auto.Bus – Seestadt” started its public transport test service. Its operation is highly restricted: it has a maximum speed of 20 km/h, it operates in a newly built area without much traffic, dedicated stops have been built and equipped with transmitter units, and a human operator rides on the bus, ready to intervene if necessary.

However, countless YouTube videos of Tesla’s “Autopilot System” suggest a much more advanced stage of development. Many of these are even speeded up to make the driving speed appear higher. “The videos are mere performances and should be treated as such. They are doomed to succeed”, Stilgoe added. The problem is that such demonstrations make self-driving cars seem more sophisticated and safer than they really are, especially since neither the enthusiastic users nor the manufacturers themselves are keen to emphasise how immature the technology still is.

To illustrate the severe consequences of this overestimation, Stilgoe introduced the audience to Joshua Brown, the first person to die in a self-driving car accident, and to Elaine Herzberg, the first bystander to be killed in a self-driving car accident. In its investigation following Joshua Brown’s death, the NTSB (the United States National Transportation Safety Board) criticised Tesla for not doing enough to prevent the misuse of the “Autopilot System”.

Did we take a wrong turning?

The short history of self-driving cars has been shaped by innovation without permission and by premature experiments conducted by manufacturers. Self-driving cars were sold as finished products, but manufacturers did not take responsibility for the failures that occurred, as it was – after all – only “public beta phase” testing.

The “public beta phase” testing and its problems led straight to a lively discussion (lively despite the fact that it was 35 degrees Celsius in Vienna that day), in which the audience and Stilgoe touched upon a broad range of questions around the regulation of self-driving cars. Is there a right moment to expand the laboratory from a fenced-off territory to the public at large? How proactive could policy-makers be and how proactive are they in reality? Do we need a stricter framework for labelling zones of experimentation, so that at least bystanders are informed about the risk that comes with self-driving cars?

Although these questions are complex, coming up with answers and developing solutions is not impossible. What counts is that the questions are asked in time, because – for all we know – we might be the self-driving car equivalent of a juicy, tasty, and soft tomato.

Thomas Buocz, July 2019

You can find the report in PDF format here.

Testing future Societies? Developing a framework for test beds and living labs as instruments of innovation governance

12.06.2019

Test beds are usually thought of as places where we test technology under real-world conditions. However, in test beds we can equally well test how societies can be reconfigured for a new set of technologies. With this statement at the beginning of his talk, entitled “Testing future societies? Developing a framework for test beds and living labs as instruments of innovation governance”, Professor Sebastian Pfotenhauer, Technical University of Munich (TUM), immediately gave away an important conclusion.

Defining test beds

Test beds (also known as “living labs” or “real-world laboratories”) are a prominent innovation instrument deployed by companies and research institutions. They aim to provide a controlled experimental space for collecting feedback on a new technological invention under realistic conditions. Once the invention has been refined in this safe setting, it can be expanded (“scaled up”) from the test bed to other parts of society. Examples of test beds are Masdar City in the United Arab Emirates, Sidewalk Toronto in Canada and the Catalonian Living Lab in Spain.

In the context of test beds, it is important to point out that when scientists build their perception of humanity into an experimental space, they also frame future society. Test beds therefore not only shape technology but also reinvent the way we perceive society. Pfotenhauer illustrated this point with two examples of test beds in Germany.

Two case studies of the testing of future energy systems

Germany’s “Energiewende” (energy transition) has given rise to many test beds as part of the work on more sustainable ways of living for the future. Two prominent examples are the “European Energy Forum” (EUREF), an urban smart-energy campus in Berlin, and the “Energy Avantgarde Anhalt” (EAA), a regional renewable energy network in Saxony-Anhalt.

EUREF has quickly become a flagship initiative for a new, sustainable way of urban living. With its hip and “berlinesque” flair, it attracts a young and innovative crowd. Because the property is privately owned and its infrastructure is fenced off, it is fairly easy to carry out experiments in this controlled environment. While some consider the EUREF a genuine testing space, others criticise that it has become nothing more than a showcase for public demonstrations.

The EAA, on the other hand, is not fenced off; it lies in a rural environment and is spread over a much larger territory than the EUREF. Located in the German state with the highest average age and the second-lowest GDP per capita, this test bed faces obstacles rather different from those encountered by the EUREF. Moreover, the EAA is – in contrast to the EUREF – relatively open to any citizen wishing to join the project. However, not many have seized this opportunity.

Three tensions of test beds in society

Because of the unique combination of defining elements that makes up a test bed, test beds are subject to different expectations that cannot all be met at the same time within the same project. This creates certain tensions, three of which Pfotenhauer elaborated on during his talk.

Controlled experimentation versus messy co-creation – Although scientists usually want to work in a controlled, replicable environment, they also need a certain unpredictability to test how a product functions in unexpected situations.

Testing versus demonstrating viability – Not every experiment ends with a positive result. Trial and error is an acceptable scientific method. However, if a test bed is meant to demonstrate progress, a public failure under the gaze of an interested crowd is usually not desirable.

Unique real-world settings versus scalable solutions – Test beds are under pressure to perform under highly specific local conditions, but at the same time to produce results that are applicable to similar problems everywhere.

Legal protection of society

There are many practical considerations when it comes to implementing a test bed in research. Do we want an opt-in or an opt-out system? How practical is either of these if the only way to opt out is by avoiding the test bed in question?

In any case, the legal protection of the individuals affected by the testing is crucial for the acceptance of test beds. According to Pfotenhauer and his SCALINGS colleague Iris Eisenberger, this can be facilitated through different regimes.

First, it is possible to require informed formal consent for participation in a test bed, comparable to the consent one gives in a medical context. This regime is often used, for example, when testing robots that interact with individuals. Second, affected persons can rely on judicial protection if they have legal standing in court; the law can determine under what circumstances a person affected by a test bed has such standing. Third, legal protection can be achieved not on an individual but on a general level, if the legislator defines specific conditions that every test bed must meet. Nevertheless, the legislator’s conditions must be in accordance with the fundamental rights of the individuals involved on both sides. The key challenge here is to balance the innovator’s rights (for example, to protection of property, to research and to conduct a business) against the affected individuals’ rights (for example, to bodily integrity and privacy).

At the end of the lecture, the audience discussed, among other things, where the current craze for calling every laboratory a “living lab” comes from, and whether obtaining consent from participants hinders the progress of research. The ethical boundaries of test beds, and in particular the comparison between animal testing and “human testing”, prompted many follow-up questions.

Melisa Krawielicki, June 2019

You can find the report in PDF format here.