Explainable Deep Learning AI: Methods and Challenges

  • Book

  • February 2023
  • Elsevier Science and Technology
  • ID: 5638197

Explainable Deep Learning AI: Methods and Challenges presents the latest work of leading researchers in the XAI field, offering an overview of the area along with several novel technical methods and applications that address explainability challenges for deep learning AI systems. The book first surveys XAI and then covers a number of specific technical works and approaches for deep learning, ranging from general XAI methods to specific XAI applications, and concluding with user-oriented evaluation approaches. It also explores the main categories of explainable AI for deep learning, which has become a necessary condition in many applications of artificial intelligence.

Groups of methods such as back-propagation-based and perturbation-based approaches are explained, and their application to various kinds of data classification is presented.
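As a rough illustration of the perturbation-based family mentioned above, the sketch below computes an occlusion-style saliency map: regions whose masking most reduces the target-class probability are marked as most important. The classifier here (dummy_predict_proba) and its interface are hypothetical stand-ins for illustration only, not code from the book.

import numpy as np

def occlusion_map(image, predict_proba, target_class, patch=8, baseline=0.0):
    """Slide a patch over the image, mask it, and record the drop in the
    target-class probability; larger drops mark more important regions."""
    h, w = image.shape[:2]
    reference = predict_proba(image)[target_class]
    heatmap = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            score = predict_proba(occluded)[target_class]
            heatmap[y:y + patch, x:x + patch] = reference - score
    return heatmap

# Hypothetical black-box classifier: "prefers" bright pixels in the image
# centre, just to make the sketch self-contained and runnable.
def dummy_predict_proba(img):
    centre = img[12:20, 12:20].mean()
    p = 1.0 / (1.0 + np.exp(-(centre - 0.5) * 10))
    return np.array([1.0 - p, p])

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0                      # bright square in the centre
saliency = occlusion_map(img, dummy_predict_proba, target_class=1)
print(round(saliency.max(), 3))              # largest drop occurs over the bright square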

Please note: this is an On Demand product; delivery may take up to 11 working days after payment has been received.

Table of Contents

1. Introduction
2. Explainable Deep Learning: Methods, Concepts and New Developments
3. Compact Visualization of DNN Classification Performances for Interpretation and Improvement
4. Explaining How Deep Neural Networks Forget by Deep Visualization
5. Characterizing a scene recognition model by identifying the effect of input features via semantic-wise attribution
6. A Feature Understanding Method for Explanation of Image Classification by Convolutional Neural Networks
7. Explainable Deep Learning for decrypting disease signature in Multiple Sclerosis
8. Explanation of CNN Image Classifiers with Hiding Parts
9. Remove to Improve?
10. Explaining CNN classifier using Association Rule Mining Methods on time-series
11. A Methodology to compare XAI Explanations on Natural Language Processing
12. Improving Malware Detection with Explainable Machine Learning
13. AI Explainability. A Bridge between Machine Vision and Natural Language Processing
14. Explainable Deep Learning for Multimedia Indexing and Retrieval
15. User Tests and Techniques for the Post-Hoc Explainability of Deep Learning Models
16. Conclusion

Authors

Jenny Benois-Pineau, Professor, LaBRI/University of Bordeaux, France. Jenny Benois-Pineau is a professor of computer science at the University of Bordeaux and head of the "Video Analysis and Indexing" research group of the "Image and Sound" team of LaBRI UMR 5800, Université de Bordeaux / CNRS / IPB-ENSEIRB. She was deputy scientific director of theme B of the French national research unit CNRS GDR ISIS (2008-2015) and is currently in charge of international relations at the College of Sciences and Technologies of the University of Bordeaux. She obtained her doctorate in Signals and Systems in Moscow and her Habilitation to Direct Research in Computer Science and Image Processing at the University of Nantes, France. Her subjects of interest include image and video analysis and indexing, and artificial intelligence methods applied to image recognition.

Romain Bourqui, Associate Professor, LaBRI/University of Bordeaux, France. Since 2009, Romain Bourqui has been an Associate Professor in the Computer Science Department of the IUT ("Technical School") of the University of Bordeaux (Talence), France. He is also deputy director of the BKB ("Bench to Knowledge and Beyond") team of LaBRI.

Dragutin Petkovic, Professor, Computer Science Department, San Francisco State University, USA. Dragutin Petkovic is a professor in the Computer Science Department at San Francisco State University, USA.

Georges Quenot, Senior Researcher, CNRS, France. Georges Quenot is a senior researcher at CNRS and leader of the Multimedia Information Indexing and Retrieval (MRIM) group at the Laboratory of Informatics of Grenoble.