The Rise Of Explainable AI (XAI): Bridging The Gap Between Legal Compliance And Managerial Transparency
- IJLLR Journal
- Aug 4, 2024
Ms. Honey Verma, Ph.D. Scholar, Manipal University, Jaipur
Mr. Uddhav Soni, BBA LLB (H), UPES, Dehradun
ABSTRACT:
Artificial Intelligence (AI) systems are increasingly prevalent in modern life. Despite their growing sophistication, many AI systems remain "black boxes" that hide their decision-making from users and regulators. The GDPR imposes transparency and explainability requirements on automated decision-making, and the opacity of such systems makes these requirements difficult to satisfy. This paper explores the gaps between the legal and technical perspectives on AI transparency and asks how these differences can be reconciled to develop effective regulatory frameworks for AI systems. We find that the transparency gap results from significant disagreement between technical norms and legal requirements over what constitutes openness. Transparency, interpretability, and explainability are three fundamental concepts that should be clearly defined and scoped in order to enhance legal certainty and guide the development of XAI systems. Since the legislation does not specify how algorithmic transparency is to be verified, it remains unclear how providers' transparency can be certified while still allowing them to self-regulate.