Ethics of AI in Context

Unlawful AI…“until proven otherwise”: The New Turn on AI Accountability from the EU Regulation and Beyond

In recent years, legal scholars and computer scientists have widely discussed how to achieve AI accountability and fairness. The first attempts focused on a right to an explanation of algorithms, but that approach has often proven unfeasible and fallacious: there is no legal consensus across jurisdictions on the existence of such a right or on what counts as a satisfactory explanation, and causal explanations of deep learning models face technical limits. Several scholars have therefore shifted their attention from the legibility of algorithms to the evaluation of the “impacts” of such autonomous systems on human beings, through “Algorithmic Impact Assessments” (AIAs).

Building on AIA frameworks, this paper advances a policy proposal for a test to “justify” (rather than merely explain) algorithms. In practical terms, it proposes a regime of “unlawfulness by default” for AI systems: an ex-ante model in which AI developers bear the burden of proof to justify, on the basis of the outcome of their Algorithmic Impact Assessment, that their autonomous system is not discriminatory, not manipulative, not unfair, not inaccurate, not illegitimate in its legal bases or purposes, does not use an unnecessary amount of data, and so on.

In the EU, the GDPR and the newly proposed AI Regulation already tend toward a sustainable environment of desirable AI systems, an ambition broader than “transparent” or “explainable” AI: they also require AI that is “fair”, “lawful”, “accurate”, purpose-specific, data-minimizing, and “accountable”.

This might be achieved through a practical “justification” process and statement, by which the data controller demonstrates in concrete terms the legality of an algorithm, i.e., its compliance with all data protection principles (under the GDPR: lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality, and accountability). This justificatory approach might also resolve several persistent problems in the AI explanation debate, e.g., the difficulty of “opening” black boxes, the transparency fallacy, and the legal obstacles to enforcing a right to individual explanations.

From a policy-making perspective, this paper proposes a pre-approval model in which algorithm developers, before launching their systems onto the market, perform a preliminary risk assessment of their technology followed by a self-certification. If the risk assessment shows that a system is high-risk, a request for approval (addressed to a strict regulatory authority, such as a Data Protection Authority) should follow. In other words, we propose a presumption of unlawfulness for high-risk models, with AI developers bearing the burden of proof to justify why their algorithm is not illegitimate (and thus not unfair, not discriminatory, not inaccurate, etc.).
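
Purely as an illustration of the burden-shifting logic described above, and not as part of the authors' proposal, the decision flow might be sketched in a few lines of Python; every name, criterion, and flag below is a hypothetical simplification.

```python
# Minimal sketch of the proposed "unlawfulness by default" decision flow.
# All names and criteria are hypothetical simplifications for illustration.
from dataclasses import dataclass
from enum import Enum, auto


class Status(Enum):
    UNLAWFUL_BY_DEFAULT = auto()  # presumption for unjustified high-risk systems
    SELF_CERTIFIED = auto()       # low-risk: developer self-certifies
    APPROVED = auto()             # regulator accepted the developer's justification


@dataclass
class ImpactAssessment:
    """Outcome of a (hypothetical) Algorithmic Impact Assessment."""
    high_risk: bool
    # Principles the developer bears the burden of proof to demonstrate:
    non_discriminatory: bool
    non_manipulative: bool
    fair: bool
    accurate: bool
    legitimate_purpose: bool
    data_minimized: bool

    def justified(self) -> bool:
        """True only if every principle is affirmatively demonstrated."""
        return all([
            self.non_discriminatory, self.non_manipulative, self.fair,
            self.accurate, self.legitimate_purpose, self.data_minimized,
        ])


def pre_approval(assessment: ImpactAssessment, regulator_approves: bool) -> Status:
    """Low-risk systems proceed on self-certification; high-risk systems
    remain unlawful by default until the developer's justification is
    accepted by the regulator."""
    if not assessment.high_risk:
        return Status.SELF_CERTIFIED
    if assessment.justified() and regulator_approves:
        return Status.APPROVED
    return Status.UNLAWFUL_BY_DEFAULT
```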

The EU AI Regulation seems to move in this direction: it proposes a model of partial unlawfulness-by-default. However, it is still too lenient. The category of high-risk AI systems is too narrow (it excludes, for example, commercial manipulation leading to economic harms, emotion recognition, general vulnerability exploitation, and AI in the healthcare field), and the sanction for non-conformity with the Regulation is monetary, not a prohibition.

► Please register here.

This is an online event, available on the Centre for Ethics YouTube Channel. Channel subscribers will receive a notification at the start. (For other events in the series, and to subscribe, visit YouTube.com/c/CentreforEthics.)

Frank Pasquale
Law
Brooklyn Law School

Gianclaudio Malgieri
Law & Technology
EDHEC Business School

Tue, Nov 30, 2021
12:30 PM - 02:00 PM
Centre for Ethics, University of Toronto
200 Larkin