Synthetic Violence: Deepfakes, Gender Harm, And The Limits Of AI Governance




Aradhya Jain, Advocate, District Courts, Patiala


ABSTRACT


Generative AI and synthetic media have rapidly transformed from experimental tools into core infrastructure for digital life, yet this paper argues that their most urgent implications lie not in efficiency gains but in the escalation of gendered and epistemic harm. Focusing on deepfakes and other forms of synthetic media, it shows how these technologies intensify technology‐facilitated gender‐based violence (TFGBV), including non‐consensual sexual imagery, sextortion, impersonation, and targeted harassment, while simultaneously eroding the conditions for trusting information, a phenomenon conceptualized as epistemic pollution and the “truth recession.”


Drawing on Fricker’s framework of testimonial and hermeneutical injustice, the analysis demonstrates how AI‐enabled abuse disproportionately targets women and marginalized communities, undermining their credibility, silencing their testimony, and denying them the conceptual resources needed to name and contest new forms of violence. The paper situates these harms within broader structures of synthetic violence, algorithmic bias, and platform capitalism: biased datasets and engagement‐driven recommender systems reproduce racial and gender hierarchies, while platform economies and moderation failures turn social media intermediaries into active amplifiers of misogyny and disinformation rather than neutral hosts. It then critiques contemporary AI and platform governance as a set of fragmented “regulatory islands”: the EU’s risk‐based AI Act, the United States’ sectoral patchwork, the UK’s principles‐based model, China’s security‐oriented deep synthesis rules, and regulatory uncertainty elsewhere, which collectively fail to address cross‐border TFGBV and digital misogyny.


In response, the paper advances a gender‐responsive AI governance agenda grounded in survivor‐centred frameworks, gender‐sensitive moderation, safety‐by‐design, transparency, accountability, stronger consent and data protections, digital literacy, and intersectional oversight across the AI lifecycle. Taken together, these interventions reframe AI governance as a project of redistributing epistemic and digital power, positioning the prevention of gendered synthetic violence as a central, rather than peripheral, objective of global AI regulation.


Keywords: Generative AI; Deepfakes; TFGBV; Epistemic Injustice; Algorithmic Misogyny; Platform Governance; Feminist AI Governance; Digital Harm.



Indian Journal of Law and Legal Research

Abbreviation: IJLLR

ISSN: 2582-8878

Website: www.ijllr.com

Accessibility: Open Access

License: Creative Commons 4.0

Licensing:

All research articles published in The Indian Journal of Law and Legal Research are fully open access, i.e. immediately and freely available to read, download, and share. Articles are published under the terms of a Creative Commons license, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited.

Disclaimer:

The opinions expressed in this publication are those of the authors. They do not purport to reflect the opinions or views of the IJLLR or its members. The designations employed in this publication and the presentation of material therein do not imply the expression of any opinion whatsoever on the part of the IJLLR.