*The three stages of AI explainability: pre-modelling explainability, explainable modelling, and post-modelling explainability.*

Post-modelling explainability

Currently, AI models are often developed with only predictive performance in mind. Thus, the majority of the XAI literature is dedicated to explaining pre-developed models. This bias of focus, along with the recent popularity of XAI research, has resulted in the development of numerous and diverse post-hoc explainability methods, and the sheer variety of approaches makes this vast body of literature challenging to navigate.

In order to make sense of the post-hoc explainability methods, we propose a taxonomy, a way of breaking down these methods that shows their common structure, organized around four key aspects: the target, what is to be explained about the model; the drivers, what is causing the thing you want explained; the explanation family, how the explanation information about the drivers causing the target is communicated to the user; and the estimator, the computational process of actually obtaining the explanation.

For instance, the popular Local Interpretable Model-agnostic Explanations (LIME) approach provides an explanation for an instance prediction of a model (the target) in terms of input features (the drivers), using importance scores (the explanation family) computed through local perturbations of the model input (the estimator); a code sketch at the end of this section illustrates this mapping.

*The proposed taxonomy of post-hoc explainability methods, including the four aspects of target, drivers, explanation family, and estimator.*

In the following sections we will discuss this Four-Aspect Taxonomy of post-hoc Explainability (FATE) in detail. Moreover, we provide examples of related methods as we review the literature.

The target specifies the object of an explainability method, which makes it the most important aspect of the FATE taxonomy. Targets can vary in terms of their type, scope, and complexity. We will present an overview of each of these three sources of target variation.

An AI ecosystem includes users with various roles, and the type of explanation target is often determined by the role-specific goals of those end users. In general, there are two types of targets: inside vs. outside, which can also be understood as mechanistic vs. functional. AI experts often require a mechanistic explanation of some component inside a model. For instance, model creators might need to understand how the layers of a deep network respond to input data in order to debug or validate the model. In contrast, non-experts often require a functional explanation of how some output outside the model is produced. For instance, model examiners might need to understand how input data is used by a model to make a prediction in order to ensure the model is trustworthy, is not biased, and is compliant with regulation.
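To make the LIME example above concrete, here is a minimal sketch of how its four FATE aspects show up in code. It uses the open-source `lime` package with a scikit-learn classifier; the Iris dataset, the random forest model, and the parameter values are illustrative assumptions, not part of the original discussion.

```python
# A minimal sketch mapping LIME onto the four FATE aspects.
# Assumes the open-source `lime` and `scikit-learn` packages; the
# dataset and model are illustrative choices.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Estimator: LIME perturbs inputs around the instance and fits a
# local surrogate model to the black-box predictions.
explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Target: the model's prediction for a single instance.
instance = data.data[0]
explanation = explainer.explain_instance(
    instance,              # the instance whose prediction is explained
    model.predict_proba,   # the black-box prediction function
    num_features=4,        # drivers: the input features
)

# Explanation family: importance scores for the driving features.
for feature, score in explanation.as_list():
    print(f"{feature}: {score:+.3f}")
```

Each printed line pairs a driver (a feature condition) with its importance score, which is exactly the explanation family that LIME communicates to the user.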