From the Cloud to the Edge: New Ways for Data Appropriation
19.10.2018
“Data is the new oil. It must be collected, refined, and transmitted – preferably without being leaked.” With this analogy, Professor Alain Strowel set the stage for his talk “From the Cloud to the Edge: New Ways for Data Appropriation” on 19 October 2018 at the University of Natural Resources and Life Sciences, Vienna.
Whereas oil reserves are shrinking, the amount of data in our world is enormous and grows larger every minute. A single autonomous car, for example, generates up to 4,000 gigabytes of data per day. How should we regulate control of, and access to, this valuable resource? Who should own it? How should it be stored? And should the legislator regulate it by law or leave it to private autonomy?
Big Data, from the cloud to the edge
Strowel started his lecture by briefly introducing the concept of Big Data as large amounts of data that are interpreted by data analysis tools designed to “cope with data abundance as opposed to data scarcity”.
(Big) data can be stored “in the cloud” or “on the edge”. Whereas cloud storage keeps data in one centralised place (e.g., on so-called “server farms”), edge computing decentralises data, bringing processing power closer to the source of the data. Strowel explained the tension between the two storage concepts with the examples of smart metering and autonomous cars. Efficient smart metering (e.g., for tracking household energy consumption) relies on a centralised treatment of data and on simple end devices. Autonomous cars, however, require high local processing power, because a short response time is crucial for the passengers’ safety. From a data protection perspective, diverging data storage designs (here: centralised cloud storage versus decentralised edge computing) can have far-reaching legal consequences. For example, data breaches are more severe if they happen in centralised data silos. Conversely, the more complex the data-collecting device on the edge is, the more privacy concerns arise for its user.
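The latency argument can be made concrete in a few lines of code. The following sketch is purely illustrative (the time budgets and all function names, such as upload_to_cloud and classify_obstacle_locally, are our own assumptions rather than part of Strowel’s talk), but it shows why a smart meter can remain a simple end device while an autonomous car cannot afford a round trip to a central server:

```python
import time

CLOUD_ROUND_TRIP_S = 0.15   # assumed network round trip to a central server
BRAKING_DEADLINE_S = 0.05   # assumed time budget for a safe braking decision

def smart_meter_reading(kwh: float) -> None:
    """Cloud design: a simple end device ships raw readings to a central server."""
    upload_to_cloud({"kwh": kwh, "ts": time.time()})  # all processing happens remotely

def autonomous_car_step(sensor_frame: bytes) -> str:
    """Edge design: the decision must be taken locally, within the safety deadline."""
    if CLOUD_ROUND_TRIP_S > BRAKING_DEADLINE_S:
        return classify_obstacle_locally(sensor_frame)  # on-board model, no network
    return ask_cloud(sensor_frame)  # only viable if the network is fast enough

# Stubs so the sketch runs on its own; a real system would do actual work here.
def upload_to_cloud(payload: dict) -> None:
    pass

def classify_obstacle_locally(frame: bytes) -> str:
    return "brake"

def ask_cloud(frame: bytes) -> str:
    return "brake"

print(autonomous_car_step(b""))  # "brake", decided on board
```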
Data property and data appropriation
Among other data-related conflicts, the tension between the cloud and the edge can be described using Strowel’s “data appropriation triangle”. The three “corner questions” of data appropriation are: How much data regulation should be contractually modifiable? What needs to be laid down as binding law? And where are the limits of legal regulation – where is privacy by technological design necessary?
In other words, the triangle comprises three perspectives on data protection: property rights, contractual means, and technological or practical measures.
Strowel subsequently focussed on the “property rights” corner. According to Strowel, the data subject’s rights, such as the right to data portability under Article 20 of the General Data Protection Regulation (GDPR), are part of a property-characterised understanding of data. In this context, the audience also discussed ways of categorising data. According to Strowel, the current EU legislative landscape mostly distinguishes between confidential and public data as well as personal and non-personal data. However, Strowel is convinced that these categories are too broad and that we should distinguish between types of data more precisely.
Types of data and data regulation in the EU
As an example of current developments in EU data legislation, Strowel picked the case of text and data mining (TDM) in light of the Draft Directive on copyright in the Digital Single Market. TDM is the process of extracting data and patterns from large datasets. If applied to image recognition, this tool brings data “from the eye to the machine”.
Strowel argued that TDM should not be covered by copyright. For example, autonomous cars in London will routinely create visual representations of the well-known red double-decker buses. Some images of that motif are protected by copyright. However, the car does not care whether it encounters a suitable motif for clichéd London photography. It merely recognises the bus as an object that is best not driven into.
Copyright law pursues the objective of protecting the exploitation of a work in its capacity as a work. TDM, on the contrary, does not reproduce a work “as a work” but only extracts selected data points (such as the size and position of an obstacle). Hence, TDM does not qualify as an exploitation of protected works.
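A toy sketch can illustrate this distinction. Assuming a generic object detector (the Detection structure and the hard-coded output below are hypothetical, chosen only to mirror Strowel’s argument), a TDM pipeline keeps a handful of numbers per image and never stores or redistributes the copyrighted expression itself:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # e.g. "bus"
    x: int        # position of the obstacle in the frame
    y: int
    width: int    # size of the obstacle
    height: int

def mine_frame(image_pixels: list[list[int]]) -> list[Detection]:
    """TDM step: reduce a (possibly copyrighted) image to bare data points.

    A real system would run a trained detector here; this stub only shows
    that the output contains no reproduction of the image "as a work",
    merely the extracted facts the car needs to avoid a collision.
    """
    return [Detection(label="bus", x=120, y=40, width=220, height=310)]

detections = mine_frame(image_pixels=[[0]])  # the pixels are discarded after this call
print(detections)  # [Detection(label='bus', x=120, y=40, width=220, height=310)]
```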
Relevance of data regulation in today’s world
The subsequent group discussion covered a broad spectrum of topics, including the categorisation of data, the problems of overly complicated privacy guidelines, and the possibilities and limitations of models such as the data appropriation triangle. The lively discussion illustrated once more how relevant and how pressing the questions of data protection law are in today’s world.
Thomas Buocz/Katja Schirmer, October 2018
Human Control of Machine Intelligence
13.12.2018
“The greatest challenges of appropriately regulating Artificial Intelligence (AI) are social rather than technical.” With this statement, Professor Joanna J. Bryson, University of Bath (UK), set the stage for her lecture on “Human Control of Machine Intelligence” on 13 December 2018 at the University of Natural Resources and Life Sciences, Vienna. In her lecture, Bryson explained why we need to hold humans accountable if we want to maintain control over AI.
Defining Artificial Intelligence
Because humans identify as intelligent, we often assume that intelligence means being “human-like”. According to Bryson, this is not the case when we are talking about AI. Instead of imagining AI as an artificial human being, we should picture AI as an intelligent artefact, deliberately built to facilitate our intentions.
Bryson defined intelligence as the capacity to do the right thing at the right time, translating perception into action. This requires computation, which is the physical transformation of information using time, energy and space. In the course of evolution, humanity’s winning strategy has been sharing knowledge and mining each other’s prior experience. Recently, machine learning has taken up this successful strategy: AI’s computational potential has been growing significantly, feeding on human knowledge. However, this pace of growth will slow down once AI reaches the frontiers of human knowledge.
Maintaining control through transparency and accountability
Before AI becomes intellectually equal to humans, we should find a way to maintain control over it. Bryson argued that only those in control – human beings – should be held accountable.
Human accountability requires AI transparency. The goal of AI transparency is not complete comprehension of AI but the provision of sufficient information to hold human beings accountable. By analogy, understanding human behaviour does not require knowing how the individual neurons in the brain are connected. As long as humans can understand the reasons behind an AI’s behaviour, there is no need to know every bit of computed data.
In this context, Bryson referred to the example of deaths caused by driverless cars. Accounts of the accident and of the car’s perception are available within a short period of time, since driverless cars are regulated as part of the automobile industry. Because we can retrace how the accident occurred, it is easier to determine who is to be held accountable. In contrast, shell companies are an example of badly distributed accountability and control, which makes it difficult to enforce legal penalties.
A transparent design of AI makes it possible to say what went wrong when an error occurs. This requires documenting the software engineering process as well as logging the AI’s training data and performance. As an example of AI transparency, Bryson introduced a live visualization interface used for observing AI reasoning. This visualization helps naïve users to better comprehend the actions performed by simple robots.
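What such logging could look like in practice is sketched below. This is only a minimal illustration of the idea of an auditable decision trail (the TransparentModel wrapper and its field names are our own invention, not Bryson’s tooling): every decision is stored together with the inputs, the model version and a timestamp, so that an investigator can later retrace why the system acted as it did.

```python
import json
import time

class TransparentModel:
    """Wraps a decision-making model and logs every decision it takes."""

    def __init__(self, model, model_version: str, log_path: str):
        self.model = model
        self.model_version = model_version
        self.log_path = log_path

    def decide(self, inputs: dict) -> str:
        decision = self.model(inputs)
        record = {
            "ts": time.time(),                    # when the decision was taken
            "model_version": self.model_version,  # which software took it
            "inputs": inputs,                     # what the system perceived
            "decision": decision,                 # what it decided to do
        }
        with open(self.log_path, "a") as log:     # append-only audit trail
            log.write(json.dumps(record) + "\n")
        return decision

# Usage with a trivial stand-in model:
model = TransparentModel(lambda x: "brake" if x["obstacle"] else "drive",
                         model_version="0.1-demo", log_path="decisions.log")
print(model.decide({"obstacle": True}))  # "brake", plus one line in decisions.log
```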
Regulating AI through legal personhood?
Bryson is of the opinion that the humans who build and use AI should be held accountable rather than the machines themselves. Therefore, she opposes granting legal personhood to AI. She argues that our justice system works by dissuading people from illegal behaviour through the dysphoria caused by social or physical isolation; applying this system to machines will not work. The idea of punishing AI for wrongdoing with human penalties such as imprisonment relies on the belief that AI is or will be humanlike, which Bryson strongly rejects. If humanity wants to maintain control, responsibility must remain in human hands.
Bryson is convinced that what is needed is not necessarily more laws on AI but better tools to enforce the existing ones. Every aspect of an artefact follows human design decisions. Therefore, regulators should motivate developers to produce clear and safe code for AI. Bryson concludes that once we start legislating and adjudicating for human accountability, AI transparency will inevitably follow.
Group discussion
After the lecture, the audience discussed the role of accountability and transparency of AI in more depth with Bryson. Regarding a question from the audience concerning due diligence and AI standards, Bryson advocated transnational regulation and named the EU as a particularly promising framework for tackling challenges such as the regulation of AI. Among other aspects, the roles of corporations and governments in minimizing AI risks were debated. Drawing on her wide knowledge of the legal, technical and social aspects of AI, Bryson led an inspiring discussion that gave rise to many new ideas.
Nino Gamsjäger, January 2019
Criminal Liability in the Use of Autonomous Systems – New Impetus for the Concept of Negligence?
17.12.2018
Whether on the road, in medicine or in industry – autonomous systems, i.e. systems endowed with a certain scope for decision-making, are encountered in a rapidly growing number of areas of life. They can support decisions in the most varied fields or take decisions on their own. Some autonomous systems learn by means of so-called deep learning, that is, on the basis of knowledge they have acquired independently. Starting from the question “Why does my autonomous car not work the way I want it to?”, Beck delved ever deeper into criminal law in order to identify the areas in which the use of autonomous systems raises questions and problems. She structured her thoughts into four levels: problems of causality, the blurring of dichotomies, possible supervisory bodies and, finally, blockchain technology as a basis for such supervisory bodies.
Level 1 – Questions and problems of causality
With regard to objective attribution, one question is how far a permitted risk may extend with this new technology and what role standards of conduct play. A further point concerns the interruption of attribution by third parties: can a machine that is part of the decision interrupt the chain of attribution? Concerning subjective foreseeability and avoidability, Beck addressed the “human in the loop” model, which builds on the idea that a human remains part of the decision chain. Section 1b of the German Road Traffic Act (StVG), for instance, stipulates that the driver always remains responsible for resuming the driving tasks. In practice, this raises problems: it is questionable what benefit a self-driving car offers, and whether it can reasonably be expected of drivers at all, if despite the vehicle’s autonomy they must constantly be prepared to take over control immediately. In effect, they would have to be more attentive than ordinary driving requires. With regard to the programming of self-driving vehicles, questions arise concerning the concrete foreseeability of complications for developers, the role of external standards as an indication of a possible breach of their duty of care, and the application of the principle of legitimate reliance to autonomous systems.
Level 2 – Blurring of dichotomies
On the second level, Beck dealt with the blurring of opposing categories, specifically the relationship between state and non-state law and the question of standard-setting as a way of relieving the state. One must then consider the effects on the individual duty of care that result from the convergence of law and non-law in the form of external standards.
Levels 3 and 4 – A supervisory body based on blockchain technology?
Under headlines such as “Programmierter Rassismus” (“programmed racism”) or “Mit Daten werden Maschinen intelligent – und rassistisch” (“data makes machines intelligent – and racist”), newspapers describe the phenomenon that autonomous systems adopt biases from the data fed into them and subsequently take discriminatory decisions. At a time when autonomous systems are becoming ever more relevant in more and more areas of life, a supervisory body is needed. This was the subject of level three of Beck’s lecture. One option would be an algorithm supervisory authority. Such an institution would be endowed with considerable power, which raises the question of how it should be legitimised and organised: state or non-state, national or international?
As a concluding thought and, at the same time, as the fourth level, Beck developed the idea of implementing such a supervisory body as a blockchain, i.e. a decentralised, transparent and immutable chain of data packets.
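The properties Beck invokes here (decentralised, transparent, tamper-evident) follow from the way blocks are linked by cryptographic hashes. A minimal sketch of that chaining, purely illustrative and not a production design (the example audit events are invented):

```python
import hashlib
import json
import time

def make_block(data: dict, prev_hash: str) -> dict:
    """Create a block whose hash covers its content and its predecessor's hash."""
    block = {"ts": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain: list[dict]) -> bool:
    """Verify that every block still points at its predecessor's hash."""
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
               for i in range(1, len(chain)))

# A tiny chain of audit records: altering any earlier block changes its hash
# and breaks every later "prev_hash" link, which makes tampering evident.
chain = [make_block({"event": "model v1 certified"}, prev_hash="0" * 64)]
chain.append(make_block({"event": "training data audited"}, chain[-1]["hash"]))
print(chain_is_valid(chain))  # True
```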
Discussion: possible uses and developments
The lecture was followed by an engaging discussion in which the future effects of deep learning on criminal law were debated controversially. In particular, the participants considered which tasks this development creates for the state. The discussion also covered the use of artificial intelligence as a tool for criminal justice, for example in compiling personalised measures for the resocialisation of offenders or in predicting their dangerousness.
Ferdinand Hönigsberger, January 2019