Artificial Intelligence And Intellectual Property
- IJLLR Journal
- Aug 30
Ritu Raj Singh, Bharati Vidyapeeth New Law College, Pune
ABSTRACT
Rapid advances in Artificial Intelligence, particularly generative AI, are unsettling the foundations of intellectual property law. As AI systems evolve, the creative output in works, inventions, and even legal reasoning that once came exclusively from humans is increasingly driven by AI. Legal systems must now reconsider how they recognize authors and inventors and how they assign responsibility for AI's actions. This paper examines the intersection of AI and IP, focusing on who owns AI-generated content and who is liable when AI infringes intellectual property or breaches other legal rules.

The central ownership question is whether AI-created works can be protected by copyright at all. In most major jurisdictions, including the US, the EU, and India, copyright attaches only to works created by humans. The US Copyright Office, for example, has consistently required human creativity, while some statutes provide that the person who organizes the creation of a computer-generated work may be treated as its author. Such provisions are difficult to apply when the AI operates with a high degree of autonomy.

Liability for AI-related IP infringement is quickly becoming one of the most contested areas. Disputes arise chiefly in two situations: when copyrighted material is used to train AI models without permission or payment, and when AI outputs inadvertently infringe existing IP rights. High-profile lawsuits have been brought by artists and media companies against generative AI developers. Allocating responsibility, whether to the AI developer for the model's design and training data sourcing, to the user for their prompts and commercial use, or even to data providers, is complex, and the "black box" nature of many AI algorithms makes it harder still.

Beyond direct IP infringement, there are broader concerns about liability for AI-generated content. These include the risk of false or defamatory statements exposing publishers to legal claims, product liability where AI-designed products cause harm, and data privacy problems when AI models process personal data.

A range of legal and policy responses is being explored and implemented to address this changing environment. These include revising existing IP frameworks by redefining concepts such as "fair use" for AI training data and developing robust licensing models for both AI training inputs and outputs. Many support a human-in-the-loop approach to ensure humans remain in control and comply with current IP rules. At the same time, there is a growing discussion about new legislation: creating special rights specifically for AI-generated content, revisiting the basic definitions of authorship and inventorship to account for AI's role, mandating disclosure of AI training data for transparency, and establishing clear models for apportioning liability among the parties involved in AI.
