A SURVEY ON EXPLAINABLE AI: TECHNIQUES AND CHALLENGES

Authors

  • Sai Teja Boppiniti, Sr. Data Engineer and Sr. Research Scientist, Department of Information Technology, FL, USA

DOI:

https://doi.org/10.26662/ijiert.v7i3.pp57-66

Keywords:

Explainable AI, interpretability, transparency, post-hoc methods, intrinsic methods, machine learning, neural networks, AI ethics, decision-making, XAI challenges.

Abstract

Explainable Artificial Intelligence (XAI) is a rapidly evolving field aimed at making AI systems more interpretable and transparent to human users. As AI technologies become increasingly integrated into critical sectors such as healthcare, finance, and autonomous systems, the need for explanations behind AI decisions has grown significantly. This survey provides a comprehensive review of XAI techniques, categorizing them into post-hoc and intrinsic methods, and examines their application in various domains. Additionally, the paper explores the major challenges in achieving explainability, including balancing accuracy with interpretability, scalability, and the trade-off between transparency and complexity. The survey concludes with a discussion on the future directions of XAI, emphasizing the importance of interdisciplinary approaches to developing robust and interpretable AI systems.
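To make the post-hoc category concrete: a minimal sketch of one widely used post-hoc technique, permutation feature importance, which treats a fitted model as a black box and measures how much its accuracy degrades when each feature's values are shuffled. The model, dataset, and parameters below are illustrative stand-ins, not taken from the survey.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: with shuffle=False, only the first 2 of 5 features
# are informative; the rest are noise.
X, y = make_classification(n_samples=300, n_features=5, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Post-hoc explanation: shuffle each feature in turn and record the
# resulting drop in accuracy. Larger drops mean the model relied on
# that feature more heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Because the method only needs predictions, it applies to any model, which is exactly the appeal of post-hoc approaches over intrinsic ones; the trade-off, as the abstract notes, is that such explanations approximate the model's behavior rather than expose its actual reasoning.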

Published

2020-03-31

Section

Engineering and Technology