Algorithmic Bias And Gender Discrimination In AI: The Need For Legal Accountability




Ashifa A Saheed, School of Legal Studies, Cochin University of Science and Technology, Kochi, Kerala


ABSTRACT


“With great power comes great responsibility.”


Artificial intelligence is the power bank of the 21st century. AI has made significant progress in simplifying complex human life. From chatbots to AI agents that make autonomous decisions based on preferences and inputs, AI has indeed made significant strides in every arena of life. Yet even when AI holds the brush, humans decide what to paint. AI is trained on large volumes of data and preferences, which enables it to make accurate predictions and decisions. AI model training is the process of feeding curated data to selected algorithms, helping the system refine its responses and produce accurate results. Successful AI model training starts with quality data that accurately and consistently represents real-world and authentic situations.


AI is prone to reproducing preferences and prejudices in its output when biased and stereotypical data is used to train its models. AI systems apply predetermined biases against women and other sexual minorities in hiring bots, suggested content and generated statements. The over-representation of men in the design of these technologies could quietly undo decades of advances in gender equality. Resume screening by artificial intelligence systems is the most discussed instance of gender discrimination, leading to the automatic rejection of female candidates and gender minorities. A system may have an inbuilt capacity to sort and eliminate women according to their age, marital status and even the likelihood of a near-term pregnancy. All this shows that the gender biases prevailing in society can be amplified by automated systems and machine learning technologies. Legally regulating the perpetuation of discrimination by such models would, in turn, restrict the discriminatory and detrimental practices and preferences projected into society. This paper is an effort to analyse the depth of gender discrimination by AI models and the efficiency of regulatory frameworks worldwide to curtail it.


Keywords: AI Screening, Gender Bias, Amplification, Hiring Models, AI Training, Automated System, AI Laws.



Indian Journal of Law and Legal Research

Abbreviation: IJLLR

ISSN: 2582-8878

Website: www.ijllr.com

Accessibility: Open Access

License: Creative Commons 4.0

Licensing:

All research articles published in the Indian Journal of Law and Legal Research are fully open access, i.e., immediately and freely available to read, download and share. Articles are published under the terms of a Creative Commons licence, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

Disclaimer:

The opinions expressed in this publication are those of the authors. They do not purport to reflect the opinions or views of the IJLLR or its members. The designations employed in this publication and the presentation of material therein do not imply the expression of any opinion whatsoever on the part of the IJLLR.
