The Legal Status Of AI As A Juridical Person: A Step Too Far?
- IJLLR Journal
- Jul 19
Mrunalini K Sonkusare, OP Jindal Global University
ABSTRACT
The question of whether advanced artificial intelligence (AI) systems should ever be treated as legal persons has recently entered policy debates, fueled by suggestions such as the European Parliament’s 2017 call to examine “electronic personhood” for autonomous robots.[1] This article argues that extending juridical personhood to AI is both legally unwarranted and pragmatically dangerous. It surveys the concept of legal personhood – historically limited to natural persons and human-created entities (e.g. corporations) – and shows that AI lacks the defining features on which existing forms of legal personhood rest. Legal analyses highlight how granting personhood to machines would undermine accountability: AI “persons” could become entities that are either “accountable but unfunded” or “fully financed but unaccountable,” weakening protection for actual human victims. Ethical arguments emphasize that AI lacks moral agency and consciousness, making any legal rights or duties conferred on it incoherent. The comparative section reviews multiple legal systems (including the US, EU, Japan, China, and others) and finds that none have adopted AI personhood; many have even barred it (e.g. Utah’s new law), focusing instead on liability or safety regulation. In sum, scholars and legislators are widely skeptical of the idea: leading experts have urged that the very proposal of “electronic person” status be “discarded from both a technical perspective and a normative viewpoint.”[2] The article concludes that existing legal frameworks, suitably adapted, can address the challenges posed by AI, and that creating a new class of legal person for machines would be premature and perilous.
