AI Liability For Crimes: Comparing India’s IT Act And The EU’s AI Directive On Defamation




Satadru Majumder, B.A. LL.B. (Hons.), Xavier Law School, St. Xavier’s University, Kolkata, India


ABSTRACT


The recent runaway success of artificial intelligence, particularly large language models such as ChatGPT, has introduced new wrinkles into the legal environment, especially in the law governing defamation produced by artificial intelligence. In this paper, the researcher conducts a comparative study of the liability regimes under the Indian Information Technology Act, 2000 and the proposed European Union AI Liability Directive. It examines the legal question of whether a generative AI system such as ChatGPT can be held liable when it independently generates a defamatory statement about a real person in response to a user's prompt.


In India, the statutory machinery is barely ready to address the harms that AI propagates. Section 79 of the IT Act was designed to shield passive intermediaries, not generative systems capable of creating content on their own. Although Sections 499 and 500 of the Indian Penal Code impose liability for defamatory statements, those provisions were framed around intentional human conduct and sit uneasily with defamatory output produced by AI processes that lack intent altogether. Case law such as Shreya Singhal v. Union of India further complicates liability by introducing an "actual knowledge" standard for intermediaries that is unrealistic and unworkable when applied to AI behaviour.


The EU approach, embodied in the AI Liability Directive, is arguably more progressive. It imposes strict liability on developers of high-risk AI systems and shifts the burden of proof from the victim to the developer. Under this more transparent approach, claimants can obtain compensation by merely proving that they have been harmed, while, conversely, developers must prove that they have indeed met the applicable safety criteria.


Drawing on real-world examples, such as misrepresentations generated by ChatGPT about real people, the paper highlights how the same lawsuit would fare differently across jurisdictions. In India, outdated law places procedural and legal obstacles in the victim's path, whereas the EU Directive provides an easier route to justice and, moreover, a stronger incentive for developers to prevent risks.


The paper concludes by emphasizing that India needs to revise its legislative framework as soon as possible. Following the EU's example, India can develop a comparable system that fosters innovation in AI while protecting against the reputational consequences of AI-generated defamation.



Indian Journal of Law and Legal Research

Abbreviation: IJLLR

ISSN: 2582-8878

Website: www.ijllr.com

Accessibility: Open Access

License: Creative Commons 4.0

Licensing: 

 

All research articles published in The Indian Journal of Law and Legal Research are fully open access, i.e. immediately and freely available to read, download and share. Articles are published under the terms of a Creative Commons license, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

 

Disclaimer:

The opinions expressed in this publication are those of the authors. They do not purport to reflect the opinions or views of the IJLLR or its members. The designations employed in this publication and the presentation of material therein do not imply the expression of any opinion whatsoever on the part of the IJLLR.
