
Deepfakes: Threats to Computer Fraud and Security under Indian Legal Provisions




Neha Verma, Ph.D. Research Scholar, Royal School of Law and Administration, The Assam Royal Global University, Guwahati, Assam.


ABSTRACT


Deepfake technology, which uses artificial intelligence to create highly realistic fake videos, audio, and images, is rapidly becoming a tool for cybercriminals. In India, this technology is being exploited to facilitate various forms of fraud, including identity theft, financial scams, and social engineering attacks. These threats put individuals at risk and compromise the security of businesses and government systems. This paper examines how deepfakes contribute to computer fraud and security risks, and explores how Indian laws can address these challenges. Deepfakes allow criminals to impersonate people convincingly, making it easier for them to carry out fraudulent activities such as stealing money or spreading false information. These digital forgeries are particularly dangerous because they target systems that rely on facial recognition, voice authentication, and other forms of digital identity verification, leaving such systems increasingly vulnerable to manipulation. In India, laws such as the Information Technology Act, 2000 (IT Act) provide legal tools to deal with cybercrimes, including fraud and identity theft linked to deepfakes. Sections 66C and 66D of the IT Act, for example, criminalize identity theft and impersonation, while the Indian Penal Code (IPC) also contains provisions on cheating and fraud. The Digital Personal Data Protection Act, 2023 (DPDPA), which focuses on privacy and data protection, could also help regulate the misuse of personal data in deepfake content. However, the current legal system faces significant challenges in addressing these crimes, as deepfake technology evolves quickly and often outpaces law enforcement's ability to respond. This paper also examines the difficulties law enforcement faces in detecting deepfakes and suggests that better training, stronger laws, and more advanced detection technologies are needed to protect against these growing threats. The paper concludes by recommending that India strengthen its legal frameworks, raise public awareness, and adopt AI-powered tools to combat deepfake fraud and safeguard digital security.


Keywords: Deepfake Technology, Artificial Intelligence, Cybercrimes, Fraud, Security Systems.



Indian Journal of Law and Legal Research

Abbreviation: IJLLR

ISSN: 2582-8878

Website: www.ijllr.com

Accessibility: Open Access

License: Creative Commons 4.0


Licensing:

All research articles published in The Indian Journal of Law and Legal Research are fully open access, i.e. immediately and freely available to read, download, and share. Articles are published under the terms of a Creative Commons license, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited.

 

Disclaimer:

The opinions expressed in this publication are those of the authors. They do not purport to reflect the opinions or views of the IJLLR or its members. The designations employed in this publication and the presentation of material therein do not imply the expression of any opinion whatsoever on the part of the IJLLR.
