The LTS LunchTimeSeries on Law, Technology and Society launches into the 2019/20 winter semester!

Univ.-Prof. Dr. Iris Eisenberger, M.Sc. (LSE), Universität für Bodenkultur Wien, and Univ.-Prof. Dr. Konrad Lachmayer, Sigmund Freud Privatuniversität Wien, are organising the successful lecture series for the eighth consecutive semester.

In the 2019/20 winter semester, the series opens on 25 October with a lecture by Prof. Brice Laurent, MINES ParisTech, on "European Objects: The troubled dreams of harmonization". The announcement is available here.

On 13 December, Prof. Arianna Vedaschi, Bocconi University, will be our guest with a lecture on "Artificial Intelligence in the Age of Terrorism: A Tipping Point for Constitutional Law". The announcement is available here.

The third lecture will be given by Prof. Sarah de Rijcke, Leiden University, on 22 January, on the question "Bend until it breaks? On interactions between research evaluation, research conduct, and science-society relations". The announcement is available here.

Each lecture is followed by an open public discussion. In the Anglo-American tradition, refreshments will be provided. The event is open to all; attendance is free of charge.

Please register at law(at)boku.ac.at.

The full programme is available here as a PDF.

European Objects: The troubled dreams of harmonization

25.10.2019

Over the last three years, Brexit has been an unavoidable news item in the UK and beyond. As it touches on a seemingly endless myriad of issues, Brexit is also a suitable hook for discussing practically any topic. Accordingly, Boris Johnson, the then UK Prime Minister, soon appeared on the slides projected behind Professor Brice Laurent, MINES ParisTech, during his LTS lecture at BOKU University. Pictured at the 2019 Conservative Party leadership race, Boris Johnson was holding a packaged herring in his right hand – a product referred to as a “kipper” in the UK – using it as an example of EU overregulation. Johnson’s accusation prompted a strong denial from the European Commission, whose spokesperson emphasized that while various EU provisions did in fact apply to fish products, the regulation referred to in Johnson’s speech was not one of them.

Objects, Politics and Law

Apart from being slightly odd, the “kipper-gate” affair is remarkable for being a controversy surrounding an object: the kipper is a packaged fish not only in a material sense, but also in a regulatory (i.e. legal) sense. The difficulty of conceiving of objects in this dual – factual as well as regulatory – sense points, according to Professor Laurent, “to the difficulty in considering technical/regulatory objects as political objects that matter”.

Beyond fish, many things matter for EU regulators: chemicals, financial products, food, etc. They matter because the regulation of these objects is deemed important for the proper functioning of the European Single Market, the raison d’être of most EU institutions. In regulating these objects, European regulators however also (re-)define them, thus creating what could be called “European objects”.

And, because these “European objects” are not merely factual but in large parts political, they invite closer examination of the ways they are defined by various formal or informal actors and subsequently regulated by EU legislators, as well as of the imaginaries these objects reveal: “Regulatory actions on European objects are attached to long-term perspectives of desirable European futures”, Professor Laurent stressed.

Dreams of a Disentangled Market

The European Single Market needs to be freed from the entanglements of the member states’ national markets to fulfil its potential – at least that is arguably the EU’s view. EU institutions are therefore pursuing what Professor Laurent called the “dream of a disentangled market”. One way to disentangle the market is to disentangle the objects within it, i.e. to harmonise them. Harmonisation is not always unproblematic, however. To illustrate the point, Professor Laurent gave the examples of French Languedoc wine and Cypriot Halloumi cheese, both of which had gained protected geographical status in the EU. That protection meant that these products needed to be defined, and the definitions chosen by the EU ended up excluding some of the products of Languedoc wine producers and Cypriot cheese makers. The “local” was thus transformed into a “European local”.

Other attempts at disentangling the European market were undertaken in sectors such as energy (e.g. European Union Emission Trading Scheme), finance (e.g. banking union) or tobacco (e.g. health warnings).

Whether these individual attempts were successes or not is a discussion worth having; the point made by Professor Laurent, however, went further. According to him, there are indications that harmonisation is a convenient way for EU institutions to extend their power beyond their original mandate – the most pointed example probably being that the EU arguably pursues health policy through the harmonised regulation of tobacco products.

The Unique Voice of European Science

Professor Laurent also voiced concerns about similarly problematic tendencies among EU institutions to ground their legitimacy in claims of scientific objectivity. He explained that these tendencies become most evident during or in the aftermath of “crises”: the Fukushima nuclear disaster led to EU-managed nuclear risk and safety assessments (stress tests), while the financial crisis led to an increase in the powers of the European Central Bank (ECB). On both occasions, a large-scale political issue was turned into a problem of expertise to be evaluated and managed by a European epistemic authority, be it centralised like the ECB or not.

From Failures of Harmonisation to Failures of Imagination

The dreams of harmonisation are not troubled because they aim at delegating ever more powers to EU institutions. They are troubled because the politics involved in that delegation too often seem to be side-lined, and alternatives are neither publicly discussed nor made explicit. “The dream of a disentangled market and the dream of a unique voice of science are only painfully realised, and their alternatives are not articulated as consistent political projects”, Professor Laurent concluded. What seems crucial is not to prevent harmonisation, but rather to find ways of imagining and articulating alternative harmonisation projects that could subsequently be subject to public debate and more transparent decision-making processes.

Daniel Romanchenko, November 2019

The report is available here as a PDF.

Artificial Intelligence in the Age of Terrorism: A Tipping Point for Constitutional Law

13.12.2019

In the post-9/11 era of jihadist terrorism, the security situation has changed dramatically. Terrorism, by its very nature, follows no clear pattern, so predicting future incidents seems almost impossible. In addition, it is difficult, if not impossible, to punish terrorists. Therefore, preventing terrorism altogether becomes all the more important. During their LTS lecture, Prof. Arianna Vedaschi and Chiara Graziani explored the opportunities and legal challenges of using artificial intelligence (AI) to prevent terrorism.

Maintaining a high level of security in the face of terrorist threats has two sides to it: an objective one and a subjective one. Facing terrorism is not only about making people safe; it is also about making them feel safe.

Mass surveillance (acquiring large amounts of data) and AI (looking for patterns in data) have already been used for both these purposes. Are they compatible with protecting fundamental and human rights, however? As always, tensions arise between security interests and individual liberties. Is the current proportionality test employed by the European Court of Justice (ECJ) able to resolve that tension?

There are several reasons for these concerns. First, like humans, AI can make mistakes and, in doing so, infringe human rights. Secondly, AI could discriminate against certain groups of people – whether due to biased input, due to patterns discerned by the AI itself, or due to the sheer fact that AI is not equally good at recognizing the faces of people from different groups. Thirdly, there seems to be a lack of transparency and responsibility: Who is the agent in charge of an automated decision, and how can we challenge the outcome of an algorithm? Finally, the context-dependent interpretation of a situation has so far remained a shortcoming of AI.

Is mass surveillance hence incompatible with human rights? No, or, at least, not per se, says the ECJ. In its decision on Passenger Name Records (PNR) data, the ECJ held that mass surveillance is possible in principle, provided the provisions are specific and precise, and that decisions are reviewed by a human being. In practice, however, AI cannot be used without infringing human rights.[1]

Categories of counter-terrorism measures. There is a variety of ways in which AI and big data (which are not the same thing, but often come into play together) can be used to counter terrorism.

Metadata are an important part of big data analysis. They contain information about other data and are, as such, neutral. Combined with other (meta)data, however, they can be used to profile people with the aim of predicting and preventing events like terror attacks.

Facial recognition technology creates a biometric template based on a person’s facial features. Screening faces in public and comparing them to a database allows for the automated identification of individuals.[2]

Content on the internet can be screened with the help of AI to detect and prevent dangerous activities. As this is regularly also done by private actors, the question of how to regulate public–private partnerships in this area is highly topical. Closely related are attempts to de-radicalize individuals showing suspicious behaviour online: strategies like the redirect method[3] identify users who are susceptible to radicalization and redirect them to content presenting a more balanced view on a given topic.

Lastly, AI can be used in the financial sector to impede the financing of terrorism. The EU has adopted various legislative acts on money laundering; the use of AI for these purposes, however, has not yet been addressed, leaving the decision of whether and how to deploy it to the financial institutions themselves.
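The template-matching logic at the heart of automated identification can be reduced to a very simple idea: compare a probe template against every enrolled template and accept the best match above a threshold. The sketch below is purely illustrative – the three-dimensional feature vectors and the `identify` helper are made up for this example; real systems use high-dimensional learned embeddings:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two feature vectors ("biometric templates").
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(probe, database, threshold=0.95):
    # Compare the probe against every enrolled template and return
    # the best match above the threshold, or None if nobody matches.
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

database = {"alice": [0.9, 0.1, 0.3], "bob": [0.2, 0.8, 0.5]}
print(identify([0.88, 0.12, 0.31], database))  # prints: alice
```

The threshold embodies the trade-off discussed in the lecture: lowering it catches more true matches but also produces more false identifications – precisely the kind of mistake that can infringe the rights of the person misidentified.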

Constitutional law analysis. Making use of modern technology allows for higher standards of safety. For a constitutional lawyer, however, several questions arise that need to be addressed.

Principle of Proportionality. Several provisions of EU law limit restrictions on the enjoyment of individuals’ rights and liberties. According to Article 52 CFREU, any limitation on rights and liberties must be provided for by law. It must also pursue a legitimate aim and be necessary to achieve that aim. Finally, the measure must be proportionate, meaning that the severity of the infringement and the gains from the restriction must strike a certain balance. In several decisions, the ECJ has found mass surveillance to fail the proportionality test; the indiscriminate retention of communications data, for example, has been deemed excessive.[4]

Non-discrimination. Using facial recognition algorithms, AI can be employed to differentiate between suspect and non-suspect individuals. Because such systems are not equally well equipped, for instance, to recognize female as opposed to male faces, they could discriminate against certain groups of people. Biased input to self-learning machines, moreover, leads to biased learning outcomes.

Expanding role of the private sector. The private sector often has more resources and more advanced technology at its disposal than the public sector. This could lead to the outsourcing of functions that are considered essentially state functions; a quasi-judicial role might be handed over to private parties. There are practical arguments in favour of involving private actors; a lack of transparency and accountability, on the other hand, speaks against them. Powerful private parties dealing with these issues would also take over the task of defining key terms, and outsourcing such vital functions could pose a threat to our concept of state sovereignty. Lastly, it is doubtful whether private actors operating for profit are suited to providing public security.

Lack of traditional sources of law. Besides hard law (e.g. Art 7 of Directive 2016/681) and institutional soft law sources (e.g. Guidelines on Artificial Intelligence and Data Protection by the Council of Europe), private soft law emerges as the real master of AI and counter-terrorism. The pivotal role of private soft law is especially problematic as it cannot be subject to judicial review.

The very lively discussion following this multifaceted talk attested to the importance of the topic. Questions ranged from generally contesting AI’s ability to interpret complex human language to the sensitive issue of classifying potentially radical users. One audience member mentioned a very recent decision of the Austrian Constitutional Court pertaining to mass surveillance: the court had declared parts of the so-called Safety Package unconstitutional, in part because the indiscriminate surveillance also allowed for the discovery of minor criminal behaviour – in light of which the infringement of fundamental rights was not proportionate.[5]

Way forward? Given the potential of AI and big data for counter-terrorism, renouncing these tools does not seem to be an option. The threats to human rights are obvious as well and, so far, mass surveillance has been judged to be disproportionate by the ECJ. Will an adaptation of the proportionality test lead to a reconciliation of conflicting interests? Only time – and society’s preferred balance between security and freedom – will tell.

Johannes Huber, January 2020

The report is available here as a PDF.


[1] PNR, Opinion 1/15 of the Court (Grand Chamber) of 27 July 2017, ECLI:EU:C:2017:592. PNR concerns automated risk assessments of airline passengers based on information they provide.

[2] The use of facial recognition by UK police to identify individuals in crowds was deemed lawful by the High Court in Cardiff: [2019] EWHC 2341 (Admin), 4 September 2019.

[4] Digital Rights Ireland, Judgment of the Court (Grand Chamber) of 8 April 2014, ECLI:EU:C:2014:238.

[5] Judgment of the Austrian Constitutional Court of 11 December 2019, G 72-74/2019-48, G 181-182/2019-18.

Bend until it breaks? On interactions between research evaluation, research conduct, and science-society relations

22.01.2020

The rise of statistics in society can be traced back through the twentieth and nineteenth centuries, and perhaps even earlier. Some commentators view big data as adding an entirely new qualitative dimension to these developments. Today we are certainly witnessing the datafication of just about everything. The drive for ‘objective’ criteria for measurement, classification and evaluation has undoubtedly also had an impact on academia. Sarah de Rijcke, Professor of Science, Technology and Innovation Studies and Scientific Director of the Centre for Science and Technology Studies (CWTS) at Leiden University, is one of the leading scholars in the social studies of research evaluation. Her work examines the repercussions of research evaluation not only for the academic profession and research practices, but more broadly for policy and decision-making as well as the manifold relations between science and society.

Bridging the Gap

In her lecture, Professor de Rijcke explained how the metric system was layered on top of the academic system over the course of the 1970s and 1980s. This represented a shift in how quality was defined in academic work. Perhaps it is also a symptom of the growing distance between the academic ‘work floor’ and the higher levels of decision-making: to bridge that gap, decision-makers came to rely increasingly on numbers – to the detriment of contextual considerations.

Promise and Delivery

There are good reasons to use metrics and in general to evaluate scientific research. One of the central aims is to encourage excellence by rewarding scientific output. Another aim is to encourage science with societal relevance by establishing a link between research and policy priorities: Here, the underlying interventionist agenda becomes manifest. Public demands for science to follow societal priorities are legitimate, not least because academic institutions in most countries still largely depend on public funding. Professor de Rijcke therefore emphasised that one of the motivations behind evaluation is to “help science deliver its promise to society”.

Bend Until It Breaks?

However, Professor de Rijcke did not fail to mention the other side of the argument: research evaluation can have detrimental consequences and arguably does not ‘deliver on all of its promises’ either. The dependence on metrics pressures researchers into publishing, which is not always conducive to quality and depth. Quantification also tends to produce a dynamic known as the ‘Matthew effect’, in which advantages in publishing opportunities and funding accumulate, leading to ever further advantages: "the rich get richer and the poor get poorer". In this context, the question of whether ‘excellence’ can even be defined in any meaningful sense becomes ever more pertinent. It is also questionable whether metrics are apt to measure ‘relevance’ at all, and whether they in fact exacerbate existing path dependencies in publishing and funding. There is, furthermore, well-founded scepticism about whether established funding criteria – such as interdisciplinarity, team science or mission-oriented research, as in the case of the EU – are transposable and can be fulfilled equally by all academic disciplines. The underlying question that concerns Professor de Rijcke is thus: how far can research evaluation go before it stops being beneficial and starts being detrimental to the overall ‘health’ of science and academia?
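The cumulative-advantage dynamic behind the ‘Matthew effect’ can be made tangible with a toy simulation – a purely hypothetical sketch, not part of Professor de Rijcke’s lecture – in which each round of ‘funding’ is awarded with probability proportional to the funding a group already holds, so that small early leads compound:

```python
import random

def simulate_matthew_effect(rounds=1000, groups=5, seed=42):
    """Award one unit of funding per round; the chance of winning is
    proportional to funding already accumulated (cumulative advantage)."""
    random.seed(seed)
    funding = [1.0] * groups  # every group starts out equal
    for _ in range(rounds):
        winner = random.choices(range(groups), weights=funding)[0]
        funding[winner] += 1.0
    return sorted(funding, reverse=True)

# Despite identical starting conditions, the final distribution is skewed.
print(simulate_matthew_effect())
```

Even though all groups begin with the same endowment, one or two of them typically end up with most of the funding: an early, purely random win raises the probability of winning every later round.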

Evaluation and ‘Fluid’ Knowledge

Professor de Rijcke’s current ERC-funded research project examines how evaluation shapes the field of ocean science. The aim is to better understand how research agendas in ocean science are being shaped and developed, to “describe the values that guide concrete science policy steering efforts with respect to the role of ocean science in and for society” and to “develop new concepts to theoretically grasp whether and how research evaluation shapes knowledge-making”.

Daniel Romanchenko, February 2020

The report is available here as a PDF.