Article by Sandra Wachter
This Article examines the legal status of algorithmic groups in North American and European anti-discrimination doctrine, law, and jurisprudence. Algorithmic groups do not currently enjoy legal protection unless they can be mapped onto an existing protected group, and such linkage is rare in practice. In response, this Article examines three possible pathways to expand the scope of anti-discrimination law to include algorithmic groups.
This Article then proposes a new theory of harm, the “theory of artificial immutability,” that aims to close the gap between legal doctrine and emergent forms of algorithmic discrimination by bringing AI groups within the scope of the law. The theory describes how algorithmic groups act as de facto immutable characteristics in practice. This Article identifies five sources of artificial immutability in AI: (1) opacity, (2) vagueness, (3) instability, (4) involuntariness and invisibility, and (5) a lack of social concept. Each of these erodes the key elements of good decision criteria.
To remedy this, greater emphasis must be placed on whether people have control over decision criteria and whether they can achieve important goals and steer their path in life. This Article concludes with reflections on how the law can be reformed to account for artificial immutability, drawing on a fruitful overlap with prior work on the “right to reasonable inferences.”
About the Author
Sandra Wachter. Professor of Technology and Regulation, Oxford Internet Institute, University of Oxford, 1 St. Giles, Oxford, OX1 3JS, United Kingdom.
E-mail: sandra.wachter@oii.ox.ac.uk.
Citation
97 Tul. L. Rev. 149