Synthetic Violence: Deepfakes, Gender Harm, And The Limits Of AI Governance
- IJLLR Journal
Aradhya Jain, Advocate, District Courts, Patiala
ABSTRACT
Generative AI and synthetic media have rapidly transformed from experimental tools into core infrastructure for digital life, but this paper argues that their most urgent implications lie not in efficiency gains but in the escalation of gendered and epistemic harm. Focusing on deepfakes and other forms of synthetic media, it shows how these technologies intensify technology‐facilitated gender‐based violence (TFGBV), including non‐consensual sexual imagery, sextortion, impersonation, and targeted harassment. At the same time, they erode the conditions for trusting information, a phenomenon conceptualized here as epistemic pollution and the “truth recession.”
Drawing on Fricker’s framework of testimonial and hermeneutical injustice, the analysis demonstrates how AI‐enabled abuse disproportionately targets women and marginalized communities, undermining their credibility, silencing their testimony, and denying them the conceptual resources needed to name and contest new forms of violence. The paper situates these harms within broader structures of synthetic violence, algorithmic bias, and platform capitalism: biased datasets and engagement‐driven recommender systems reproduce racial and gender hierarchies, while platform economies and moderation failures turn social media intermediaries into active amplifiers of misogyny and disinformation rather than neutral hosts. It then critiques contemporary AI and platform governance as a set of fragmented “regulatory islands” — the EU’s risk‐based AI Act, the United States’ sectoral patchwork, the UK’s principles‐based model, China’s security‐oriented deep synthesis rules, and regulatory uncertainty elsewhere — regimes that collectively fail to address cross‐border TFGBV and digital misogyny.
In response, the paper advances a gender‐responsive AI governance agenda grounded in survivor‐centred frameworks, gender‐sensitive moderation, safety‐by‐design, transparency, accountability, stronger consent and data protections, digital literacy, and intersectional oversight across the AI lifecycle. Taken together, these interventions reframe AI governance as a project of redistributing epistemic and digital power, positioning the prevention of gendered synthetic violence as a central, rather than peripheral, objective of global AI regulation.
Keywords: Generative AI; Deepfakes; TFGBV; Epistemic Injustice; Algorithmic Misogyny; Platform Governance; Feminist AI Governance; Digital Harm.
