Way beyond SHAP: an XAI overview

Explainable AI (XAI) is crucial to make artificial intelligence (AI) more understandable and accessible.


“Explainable AI” (XAI) is an increasingly important area of research aimed at guaranteeing that machine learning systems can be understood and interpreted by humans. This is crucial to ensure the trust and transparency of AI models, especially in critical applications such as healthcare, finance, and justice.

On February 1, we held another edition of our Data Science & Machine Learning Meetup here at Nubank, and the theme this time was exactly the one mentioned above: “Explainable AI”. One of the many enriching talks at the Meetup was given by Wellington Monteiro, Professor at PUCPR and Lead Machine Learning Engineer at Nubank.

He clarified that XAI isn’t restricted to SHapley Additive exPlanations (SHAP) alone: it’s a lot more. Do you want to know everything about this subject? Keep reading this article!

Balancing accuracy and interpretability

The central theme of the discussion was that “Explainable AI” (XAI) is not just about SHAP. The aim was to highlight the importance of considering a variety of techniques and approaches to make artificial intelligence (AI) more understandable and accessible.

Wellington Monteiro started by explaining the meaning of the term XAI (Explainable AI) and mentioned the soaring interest in the area since 2019, as evidenced by the steep increase in scientific publications, which highlights the need to better understand and explain AI models to different stakeholders. He also addressed the trade-off between accuracy and interpretability: simpler models can often be intrinsically understood by humans but perform worse than more complex architectures, while complex models such as ensembles and neural networks can perform better but make it harder to understand how they arrived at a given prediction. This trade-off also applies to XAI techniques: simpler explanations are not necessarily accurate, and detailed explanations are often hard for humans to understand.

To overcome this challenge, the presenter proposed considering several techniques to balance this trade-off, rather than relying only on SHAP, one of the most well-known. In addition, he reminded the audience that data scientists often test different machine learning architectures to achieve better statistical and cost metrics for their problems. Therefore, when analyzing XAI alternatives, we should also be open to multiple techniques, including, but not limited to, SHAP.

Much more than SHAP: XAI overview

The talk then focused on demystifying the idea that XAI is synonymous with SHAP, a popular visualization technique for explaining AI models. Monteiro highlighted that a wide variety of AI and XAI techniques exist and stressed the importance of adapting explanations to the specific needs of different audiences, such as end users, regulatory agencies, statistical technicians, internal teams, and executives. SHAP provides a visual explanation based on game theory, computed over a subset of the data. As a result, for large datasets the feature rankings are often an approximation rather than an exact representation of the model’s behavior.
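To make the game-theory connection concrete, here is a minimal sketch (not the SHAP library itself) that computes exact Shapley values for a hypothetical three-feature toy model. The model, baseline, and value function below are illustrative assumptions; the point is that the exact formula sums over every feature subset, which is why SHAP resorts to sampling and approximation on real datasets.

```python
# Exact Shapley values for a toy model, computed from the game-theory
# definition. The exact sum visits all 2^n feature subsets, which is
# why practical tools like SHAP approximate it for large n.
from itertools import combinations
from math import factorial

def toy_model(x0, x1, x2):
    # Hypothetical model with an interaction term; x2 is irrelevant.
    return 2 * x0 + 3 * x1 + x0 * x1

def value(subset, instance, baseline):
    # v(S): prediction with features in S taken from the instance,
    # all other features fixed at a baseline value.
    args = [instance[i] if i in subset else baseline[i]
            for i in range(len(instance))]
    return toy_model(*args)

def shapley(i, instance, baseline):
    n = len(instance)
    others = [j for j in range(n) if j != i]
    total = 0.0
    for size in range(n):
        for s in combinations(others, size):
            # Classic Shapley weight |S|! (n - |S| - 1)! / n!
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            total += weight * (value(set(s) | {i}, instance, baseline)
                               - value(set(s), instance, baseline))
    return total

instance, baseline = (1.0, 2.0, 5.0), (0.0, 0.0, 0.0)
phis = [shapley(i, instance, baseline) for i in range(3)]
# Efficiency property: contributions sum to f(x) - f(baseline).
print(phis)  # → [3.0, 7.0, 0.0]
```

Note how the interaction term `x0 * x1` is split evenly between the two features involved, while the irrelevant feature gets exactly zero: properties that approximate, subset-based rankings can only reproduce imperfectly.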

The discussion addressed both intrinsically explainable models and black box models, stressing that SHAP is just one example of a visualization technique. Wellington Monteiro also mentioned other XAI techniques, such as local explanations, model simplification, and text explanations.

In addition, examples of older techniques, such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP, as well as a more recent library for applying XAI called ELI5 (Explain Like I’m 5), were presented. Then, a practical example with a dataset often used in XAI research (the US Census of 1991) was showcased, illustrating how significantly the explanations generated by different XAI techniques can change. Finally, Monteiro emphasized the importance of adapting explanations to the specific context of the company or university and demonstrated how these differences between explanations, or the closed-mindedness of selecting just one explanation technique, can negatively impact decisions.
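The core idea behind LIME can be sketched in a few lines, without the `lime` library itself: sample perturbations around one instance, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as the local explanation. The black-box model, kernel width, and sample count below are illustrative assumptions, not the library’s defaults.

```python
# LIME-style local surrogate, from scratch: a weighted linear fit
# around a single instance approximates the model's local behavior.
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Hypothetical nonlinear model to be explained.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x = np.array([0.0, 1.0])               # instance to explain
Z = x + rng.normal(0, 0.5, (500, 2))   # local perturbations
y = black_box(Z)

# Proximity kernel: samples closer to x get higher weight.
w = np.exp(-np.sum((Z - x) ** 2, axis=1) / 0.25)

# Weighted least squares for the surrogate y ≈ a + b0*z0 + b1*z1.
A = np.hstack([np.ones((len(Z), 1)), Z])
W = np.diag(w)
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

# Near x, the true local gradients are cos(0) = 1 and 2*1 = 2,
# so the surrogate coefficients land close to [1, 2].
print("local explanation:", coef[1:])
```

The surrogate coefficients are only a local approximation: repeat this at a different instance and the explanation changes, which is exactly the behavior the talk illustrated when comparing techniques on the same dataset.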

The talk also addressed the distrust that can arise when presenting different XAI techniques whose results differ significantly from human expectations, reiterating that there is no right or wrong answer. Wellington Monteiro also cited examples of graphic methods, such as PDP (Partial Dependence Plots), and counterfactual techniques, which, given a desired prediction, attempt to make minimal changes to the input data to reach it. In addition, sensitivity analyses and a myriad of Python libraries for implementing these techniques were mentioned.
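A counterfactual search can be illustrated with a deliberately simple sketch: starting from a rejected input, greedily apply the small change that most improves the score until the model’s output crosses the desired threshold. The credit-style scoring function, step size, and feature names are hypothetical; real counterfactual libraries use more principled optimization, but the goal, a minimal change that flips the outcome, is the same.

```python
# Greedy counterfactual search on a hypothetical scoring model:
# find a nearby input whose score reaches the desired target.
def score(income, debt):
    # Toy credit model: higher income helps, higher debt hurts.
    return 0.5 * income - 0.8 * debt

def counterfactual(x, target=1.0, step=0.05, max_iter=1000):
    income, debt = x
    for _ in range(max_iter):
        if score(income, debt) >= target:
            break
        # Candidate small changes; debt cannot go below zero.
        candidates = [(income + step, debt),
                      (income, max(0.0, debt - step))]
        # Keep the single change that most improves the score.
        income, debt = max(candidates, key=lambda c: score(*c))
    return income, debt

x = (1.0, 1.0)              # score = -0.3: below the target
cf = counterfactual(x)      # nearby input that reaches score >= 1.0
print(x, "->", cf)
```

Because reducing debt improves the score faster than raising income here, the search pays down debt first and only then raises income: the kind of actionable “what would need to change” answer that makes counterfactuals attractive for end users.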

Monteiro also mentioned that Python and R are popular languages for producing AI and XAI models. He also listed several relevant XAI libraries, including SHAP, FastSHAP, Lime, ALE (Accumulated Local Effects), and InterpretML, which offer a wide range of techniques and resources to help professionals make AI more explainable and understandable.
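The partial dependence idea behind the PDP plots mentioned above is simple enough to compute by hand, which these libraries essentially automate: fix one feature at each grid value for every row of the dataset, then average the model’s predictions. The model and data below are illustrative assumptions.

```python
# Partial dependence of a model on one feature, computed manually:
# for each grid value v, set x0 = v in every row and average predictions.
import random

random.seed(0)

def model(x0, x1):
    # Hypothetical model with a nonlinear effect of x0.
    return x0 ** 2 + 0.5 * x1

data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]

def partial_dependence(grid):
    pd = []
    for v in grid:
        # Replace x0 with the grid value; keep x1 as observed.
        preds = [model(v, x1) for _, x1 in data]
        pd.append(sum(preds) / len(preds))
    return pd

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
# The averaged curve recovers the quadratic shape of x0's effect.
print(list(zip(grid, [round(p, 2) for p in partial_dependence(grid)])))
```

Plotting these averages against the grid gives the familiar PDP curve; ALE, also cited in the talk, refines the same idea to cope with correlated features.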

Throughout the talk, books and articles were recommended for those wishing to deepen their knowledge. These references cover everything from theoretical foundations to practical applications, allowing those interested in the subject to explore the different facets of XAI. Monteiro also mentioned his research on developing new XAI techniques using multi-objective optimization to balance the conflicting goals of accuracy and interpretability.

What the future holds

Finally, we were invited to reflect on the future of XAI and the technical difficulties that still need to be overcome. Wellington Monteiro highlighted the continuous need for research and development of new techniques and approaches to address the inherent challenges of interpretability and explainability of AI models.

Additionally, he emphasized the importance of sharing knowledge and collaboration between professionals and academics, so that XAI can become a more natural and efficient practice in the field of artificial intelligence.

In summary, the talk highlighted the importance of not choosing SHAP merely because other people use it, and of considering a variety of XAI techniques when developing new machine learning models, to ensure that AI is accessible and understandable for a wide range of audiences. By exploring practical examples, libraries, and approaches, Monteiro demonstrated how professionals can adapt explanations to meet the needs of their companies or institutions. Furthermore, the talk emphasized the current state of research and development in the field of XAI, where new techniques are increasingly being proposed to tackle technical challenges and ensure that AI becomes more transparent and responsible.

It’s important to note that no single XAI technique is a one-size-fits-all solution, and different methods may be more appropriate for different contexts. It’s also essential to adapt explanations to the specific needs of different audiences. For example, end users may need explanations that are easy to understand and free of technical jargon, while technical experts may require more detailed explanations, including code and mathematical formulae.

XAI has a wide range of applications across various industries. For example, regulatory agencies may use XAI to ensure that AI models make ethical and fair decisions, data scientists can use it to debug and improve models, and businesses can use it to gain insights into how their models make decisions and to build more transparent and trustworthy systems.

As AI becomes more pervasive in our society, the need for interpretability will only increase. XAI techniques can help bridge the gap between AI models and human understanding, allowing us to build more trustworthy and ethical systems. However, it’s important to continue exploring and refining these techniques to ensure they are effective, practical, and accessible to a wide range of users. In conclusion, XAI is an important and rapidly evolving field that has the potential to address some of the most pressing challenges in AI today.

While SHAP is a popular and effective XAI technique, various other methods can be used to achieve interpretability. By continuing to develop and refine these techniques, we can ensure that AI is used ethically and responsibly and that we can all benefit from the many advancements AI offers.
