Algorithmic Decision-Making And Corporate Criminal Liability In India: Re-Examining Fault Attribution In The AI Era
- IJLLR Journal
- Aug 21
Nitish R Daniel, Assistant Professor of Law at Barkatullah University and Ph.D. research scholar at National Law University Delhi.
ABSTRACT
Rapid adoption of algorithmic decision-making is transforming how Indian corporations price products, screen borrowers, trade securities and manage supply chains. Yet when an autonomous system causes social harm through biased lending, errant trading or unsafe recommendations, India’s century-old fault attribution doctrines strain to respond. This article offers the first systematic re-examination of corporate criminal liability in India through the lens of artificial intelligence (AI). Part II maps four orthodox theories (vicarious liability, identification, aggregate knowledge and the corporate-fault model) and demonstrates why each rests on assumptions of human agency that algorithmic autonomy disrupts. Part III isolates three doctrinal pressure points unique to AI: the evidentiary collapse of mens rea, the tension between strict-liability offences and opacity-by-design, and the fragility of causal chains in self-learning systems. In Part IV, the analysis turns to India’s positive law through a close reading of the Companies Act 2013, the Information Technology Act and a cross-section of sector-specific regimes, showing the inadequacy of present penal provisions to address AI harms. Statutes typically hinge liability on “persons in charge of, and responsible to, the company”, a formulation ill-suited to contexts where no natural person can reasonably foresee or control machine-generated outputs. Judicial glosses provide limited relief and risk inconsistent outcomes. Synthesizing these doctrinal and statutory gaps, the article advances a reform blueprint. Key recommendations include: statutory duties of algorithmic governance (explainability, audit trails, bias testing) as predicates for compliance-based defences; a risk-tiered strict-liability matrix for high-impact deployments; calibrated reversal of the burden of proof on organisational due diligence; and safe-harbour incentives for open-source transparency.
By recalibrating fault attribution around organisational risk management rather than elusive human intent, India can preserve criminal law’s deterrent force without stifling innovation in the AI era.
