Explainable AI

Introduction

Neural networks are considered black-box models due to their complex architecture. While they have great predictive power, it is particularly hard to understand how they arrive at a particular decision. In contrast, linear regression and decision tree models are inherently explainable. For example, linear regression assumes a linear relationship between the target and the input features, so each coefficient of the model indicates the direction of influence of the corresponding independent variable on the dependent variable. Similarly, decision trees have a simple rule-based structure: at each node, an input feature is compared against a threshold (e.g., is X > 5). Furthermore, the linear regression and the