
Artificial Intelligence As A New Weapon Of Cybercrime: Legal Security In Regulating Rapid AI Deployment




Brish Kumar Pankaj, B.A. LL.B. (Hons.), Law College Dehradun, Uttaranchal University, Dehradun, Uttarakhand, India.

Dr. Aishwarya Singh, Assistant Professor, Law College Dehradun, Uttaranchal University, Dehradun, Uttarakhand, India.


ABSTRACT


Artificial intelligence (AI) acts as a force multiplier for cybercrime in India, increasingly challenging legal security when deployment cycles outpace legal and forensic response. This study develops a legal-security framework for rapid AI deployment by combining doctrinal analysis of Indian cyber and criminal statutes, delegated intermediary obligations, data protection rules, and leading constitutional and evidentiary case law with a structured mapping of the AI-enabled attack chain (data poisoning, model theft, prompt injection, and downstream laundering). The study finds that AI is mainly used to accelerate the commission of traditional crimes (impersonation, cheating, extortion, invasion of privacy, delivery of malware, and payment fraud) by lowering the skills required and scaling multilingual persuasion, while at the same time heightening the risks of plausible deniability and synthetic evidence. India's regulatory architecture remains anchored in the Information Technology Act, 2000 and conditional intermediary safe harbor, but recent rulemaking has leaned toward time-critical governance: the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 define synthetically generated information, require prominent labelling and persistent metadata, and reduce actual-knowledge takedown compliance to three hours. The Digital Personal Data Protection framework and the Digital Personal Data Protection Rules, 2025 add parallel duties around purpose limitation and breach intimation, while CERT-In's directions on cyber incident reporting set a six-hour operational timeline for coordinated response. Key gaps persist in attribution, admissibility, and remedies.
Investigations often lack the preserved prompt histories, API access logs, and platform-side provenance needed to prove intent, and courts face rising authenticity disputes as deepfakes and fabricated logs blur the "original" electronic record. The paper concludes that legal security in the AI era requires supply-chain duties that follow control over models and tool integrations, evidence-preserving containment procedures that are auditable and judicially reviewable, and victim-centered remedies (freezing, takedown, and injunctions) that operate within hours rather than weeks.


Keywords: AI-enabled cybercrime; synthetically generated information (SGI); deepfakes; intermediary due diligence; electronic evidence; DPDP Rules 2025; CERT-In incident reporting; rapid injunctive relief.



Indian Journal of Law and Legal Research

Abbreviation: IJLLR

ISSN: 2582-8878

Website: www.ijllr.com

Accessibility: Open Access

License: Creative Commons 4.0


Licensing: 

 

All research articles published in the Indian Journal of Law and Legal Research are fully open access, i.e., immediately and freely available to read, download, and share. Articles are published under the terms of a Creative Commons license, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited.

 

Disclaimer:

The opinions expressed in this publication are those of the authors. They do not purport to reflect the opinions or views of the IJLLR or its members. The designations employed in this publication and the presentation of material therein do not imply the expression of any opinion whatsoever on the part of the IJLLR.
