Protecting Identity in the Age of AI: A Media Law Analysis of Deepfake Scams and Digital Impersonation
- IJLLR Journal
Deepraj Bagate & Harsha Rajani
ABSTRACT
The rapid progress of Artificial Intelligence has transformed media production, making possible “deepfakes” — highly realistic synthetic productions. While such technologies hold transformative potential for entertainment, education, and innovation, their misuse poses a serious risk to individual identity, privacy, and societal trust. Deepfake scams and digital impersonation facilitate fraud, reputational harm, and misinformation that often lie beyond effective legal recourse. This paper critically analyses the adequacy of media law in combating the challenges of AI-driven identity manipulation, using India’s media law as its primary reference.
Using doctrinal and comparative methodology, the paper examines constitutional safeguards, statutory provisions under the Information Technology Act, 2000, and evolving jurisprudence on personality rights in India. It argues that legal responses remain piecemeal, reactionary, and poorly suited to the scale, speed, and complexity of deepfake-enabled harms. The study also undertakes a comparative assessment of regulatory strategies in the United States and the European Union, identifying the competing considerations of free speech, data protection, and platform liability.
Through an exploration of emerging case studies, the paper highlights significant regulatory gaps, including the absence of legislation specifically addressing deepfakes, difficulties in attribution and enforcement, and the limited effectiveness of intermediary liability regimes. It contends that laws which respond only after damage has occurred are no longer sufficient at a time when synthetic videos can cause instant and irreparable harm.
In response, the paper proposes a multi-layered regulatory framework that combines legal reform with technological and institutional interventions. This includes recognising deepfakes as a distinct form of legal harm, codifying personality rights in statute, imposing transparency and due diligence requirements on digital platforms, and adopting technological measures such as watermarking and detection tools. The paper ultimately advocates a balanced approach that protects individual identity and dignity without infringing freedom of expression, while encouraging responsible innovation within the digital ecosystem.
Keywords: Artificial Intelligence, Deepfakes, Digital Impersonation, DPDP, Identity Protection, Intermediary Liability, Media Law, Personality Rights, Synthetic Media.
