In the recent past there has been a trend of supplementing complex machine learning (ML) and deep learning (DL) models with explanations for the decisions they take. This notion, termed Explainable AI (XAI), has emerged to counter the criticism that complex AI models take opaque decisions affecting our daily lives, such as diagnosing a disease or approving a loan.
There are several taxonomies of approaches to XAI, based for instance on interpretability versus explainability, or on model-specific versus model-agnostic explainability.
Here we delve into the dimension of model-specific versus model-agnostic explainability. The two differ on whether the XAI approach utilizes the specific structure of the ML model being explained or is independent of it, much like the white-box versus black-box approaches to testing software applications: model-specific approaches are white box, while model-agnostic approaches are black box.
Model-specific approaches work from the details of the internal structure of the machine learning or deep learning model being applied. These techniques are tied to a specific model architecture, for example a particular convolutional neural network (CNN). They exploit the internals of the model, such as the layers and weights of a neural network, and use a reverse-engineering approach to explain how the specific DL or ML algorithm arrives at its decision.
The advantage of model-specific approaches is that knowing the internal workings of the model allows a deeper understanding of the decision and helps devise more customized explanations. The flip side is the need to work through the entire structure of the model, which can compromise performance because the ML or DL model effectively has to be re-traversed or re-created. Common model-specific approaches for deep learning models include those based on deconvolution, which traverses the path of a CNN (from image data to the final class) in reverse order (from the final class back to the original image), pointing out the specific regions of the image that contribute to the decision. Extensions of deconvolution-based approaches include guided backpropagation, Grad-CAM, Score-CAM, and Grad-CAM++.
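The white-box idea behind these gradient-tracing methods can be illustrated with a minimal sketch. The code below is not Grad-CAM itself; it is a hypothetical two-layer network with hand-picked weights, where we use full knowledge of the internals (weights and the ReLU activation pattern) to backpropagate the output score to the input, yielding a saliency value per input feature.

```python
import numpy as np

def forward(x, W1, W2):
    """Tiny two-layer network: ReLU hidden layer, linear output score."""
    h = np.maximum(0.0, W1 @ x)   # hidden activations
    return W2 @ h, h              # class score and cached activations

def input_gradient(x, W1, W2):
    """White-box saliency: backpropagate the score through the known weights."""
    _, h = forward(x, W1, W2)
    relu_mask = (h > 0).astype(float)   # derivative of ReLU at the cached activations
    return (W2 * relu_mask) @ W1        # d(score)/d(x)

# Hand-picked (hypothetical) weights: the score depends strongly on input feature 0.
W1 = np.array([[2.0, 0.1, 0.0],
               [0.0, 0.1, 0.1]])
W2 = np.array([3.0, 0.5])
x = np.array([1.0, 1.0, 1.0])

saliency = np.abs(input_gradient(x, W1, W2))
print(saliency)   # feature 0 dominates the explanation
```

Note that the explanation is only possible because we can read the model's weights and activations; this is exactly what makes the approach model-specific.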
Model-agnostic methods, on the other hand, do not take the structure of the model into account. They can be applied to any machine learning algorithm and treat the model as a black box. They obtain explanations by perturbing and mutating the input data and measuring how sensitive the model's output is to these mutations relative to the original input. This yields interesting insights into which localized regions of the input the model is most sensitive to.
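The perturb-and-measure loop can be sketched in a few lines. Here the model is a stand-in hypothetical function that we only call, never inspect; each feature is nudged in turn and the change in output is recorded as that feature's sensitivity.

```python
import numpy as np

def black_box(x):
    """Stand-in black-box model (hypothetical): we only call it, never inspect it."""
    return 5.0 * x[2] + 0.2 * x[0]

def perturbation_sensitivity(f, x, eps=0.5):
    """Model-agnostic probe: nudge each feature and record |output change|."""
    base = f(x)
    sens = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps        # mutate one feature at a time
        sens[i] = abs(f(xp) - base)
    return sens

x = np.array([1.0, 1.0, 1.0])
sens = perturbation_sensitivity(black_box, x)
print(sens)   # feature 2 shows the largest sensitivity
```

The same loop works unchanged whether `black_box` wraps a linear model, a random forest, or a deep network, which is the defining property of the model-agnostic school.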
Popular model-agnostic techniques in XAI include Shapley values (as in SHAP) and LIME. Both work on the black-box idea of perturbing inputs and are not specific to any kind of ML model; they can be applied to any ML model. Model-agnostic methods do not affect the performance of the ML model, and they do not require retraining it.
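LIME's core idea, fitting a simple linear surrogate to the black box in a small neighborhood of the instance being explained, can be sketched without the library itself. The black-box function, sampling scale, and kernel below are illustrative assumptions, not LIME's exact defaults.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    """Nonlinear black box (hypothetical); the explainer never looks inside it."""
    return np.sin(x[..., 0]) + x[..., 1] ** 2

def lime_style_explain(f, x0, n_samples=500, scale=0.1):
    """Fit a locally weighted linear surrogate around x0 (LIME's core idea)."""
    X = x0 + rng.normal(0.0, scale, size=(n_samples, len(x0)))      # local perturbations
    y = f(X)
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))   # proximity kernel
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X - x0])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]   # local feature attributions (slopes of the surrogate)

x0 = np.array([0.0, 1.0])
attr = lime_style_explain(black_box, x0)
# Near x0 the true local gradient is (cos(0), 2*1) = (1, 2),
# so the surrogate's slopes should land close to those values.
```

The slopes of the fitted surrogate serve as the local feature attributions: they say how the black box responds to each feature in the vicinity of this particular input, without any access to the model's internals.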
When it comes to usage, model-specific methods are desirable when more pinpointed explanations are needed, for instance localization in medical images in healthcare. In contrast, in scenarios where the ML models are costly to inspect or re-create, model-agnostic methods are preferred.
Where both are feasible, it is advisable to use a combination of the two schools of techniques to get a comprehensive explanation of the decisions.
Dr Srinivas Padmanabhuni, CTO - TestAIng.com