Introduction to Explainable AI
Explainable AI Made Simple: A Guide for Beginners
What makes certain customers more loyal to a brand than others? Ever wondered why some employees stay motivated and thrive in their roles while others leave for new opportunities? Why do some industries recover quickly from economic downturns while others struggle for years? What distinguishes a company’s approach to customer retention from its competitors’? How does one marketing campaign resonate more deeply with a target audience than another? What drives disparities in employee satisfaction across different departments?
And beyond business: for fans of Jane Austen, which modern author captures her wit and elegance, and why? How does Brie’s flavor compare to Gouda? What makes one meme go viral while another fades into obscurity? Why are certain road trip snack combinations more satisfying than others? What makes one basketball player sink buzzer-beaters while another misses under pressure? And why do some stadiums boost team performance while others seem to jinx them?
In this two-hour course, we won’t answer all these questions, but we’ll explore how explainable AI can shed light on them. With the right tools and techniques, we can uncover the reasoning behind AI's predictions, helping us interpret data clusters, explain classifications, and understand the factors driving decisions. Whether analyzing customer churn, identifying student clusters, or understanding article groupings, explainable AI helps us see not just the "what" but also the "why."
Most of the training material is presented in the videos, but a concise tutorial is also available for download in a single document. In the videos, we use Orange, a free and powerful data mining toolbox. While you don't need Orange to complete this course, if you're interested, you can download it and explore a video series with training materials on how to use it.
The course builds upon our Introduction to Machine Learning tutorial, which covers clustering, predictive modeling, and text and image mining. If you're new to machine learning and AI, we recommend starting with that tutorial before proceeding with this course.
The material in this course was developed by the Biolab group at the University of Ljubljana and is offered under the Creative Commons CC BY-NC-ND license.
Chapter 1: Explainable Clusters
Machine learning is a way for computers to learn patterns from data and use these patterns to solve problems or make decisions, like predicting what movie you might enjoy or grouping similar items together. Explainable AI goes a step further—it helps us understand how these decisions are made, making the process clear and trustworthy. Instead of being a "black box," explainable AI shows us the "why" behind the results, so we can better trust and use its insights.
One of the simplest techniques in machine learning is finding groups, or clusters, in data. Clustering helps us discover hidden patterns, like identifying students who excel in sports versus those who shine in academics. But once we create these groups, it’s essential to understand why they were formed and what makes each group unique. This ability to explain and interpret clusters is a great starting point for learning about explainable AI, and that’s exactly what we’ll explore in this first video.
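If you would like to see the idea in code, below is a minimal sketch in Python (the course itself uses Orange, so this is purely an optional illustration). It groups a few invented students by their grades with agglomerative, i.e. hierarchical, clustering and then summarizes each cluster's average grades, a tabular stand-in for comparing box plots per subject. The student names and grades are made up.

```python
import pandas as pd
from sklearn.cluster import AgglomerativeClustering

# Hypothetical grades for six students in three subjects.
grades = pd.DataFrame(
    {
        "English": [95, 92, 60, 55, 70, 68],
        "History": [90, 88, 58, 52, 72, 66],
        "Algebra": [55, 60, 93, 96, 71, 69],
    },
    index=["Ana", "Ben", "Cai", "Dee", "Eli", "Fay"],
)

# Hierarchical (agglomerative) clustering into three groups.
clusters = AgglomerativeClustering(n_clusters=3).fit_predict(grades)
grades["cluster"] = clusters

# Explain the clusters: average grades per cluster reveal what makes
# each group distinct (e.g., humanities-strong vs. math-strong).
print(grades.groupby("cluster").mean().round(1))
```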
Here’s a list of key concepts we have covered in this lesson, along with brief descriptions:
- Explainable AI: A branch of AI focused on making machine learning models understandable for humans by revealing the reasoning behind their decisions.
- Hierarchical Clustering: A method for grouping similar data points into clusters by repeatedly merging the most similar groups, producing a hierarchy of clusters that share common characteristics. For example, grouping students based on their grades in different subjects.
- Clusters: Groups formed by clustering algorithms that represent data points with similar traits. In this lesson, students were grouped into clusters based on their performance in subjects like English, History, and Algebra.
- Cluster Explanation: The process of interpreting and understanding why specific data points belong to a cluster. This is achieved using tools like box plots to visualize and compare characteristics across clusters.
- Box Plot Visualization: A graphical method used to show the distribution of data attributes (e.g., grades) within and across clusters, highlighting key differences and patterns.
- Cluster Characterization: Summarizing clusters by identifying the unique traits that define each group. For instance, clusters were characterized as students excelling in humanities, sports, or natural sciences.
Now, please complete the following quiz:
What is the purpose of explainable AI? (1pt.)
What is clustering in machine learning? (1pt.)
Why is it important to explain clusters? (1pt.)
What is the purpose of a box plot in clustering? (1pt.)
In the student grades example from the video, what characterized Cluster C2? (1pt.)
Chapter 2: Data Maps
Imagine having a dataset with thousands of rows and dozens of columns, each filled with numbers and categories describing people, objects, or events. How can we make sense of this complexity? This is where data maps, also called projections or embeddings, become essential. Data maps allow us to visualize complex datasets in a simple, two-dimensional space. In these maps, each dot represents a data point, like a person or product, and dots that are close together represent similar data points. This makes it easier to see patterns, groupings, and outliers at a glance.
Data maps are especially important in exploratory data analysis, where the goal is to understand the structure and characteristics of the data. By reducing the dimensions of the dataset—for example, summarizing dozens of variables into just two—data maps make it possible to visually explore the relationships between data points. This visual exploration is a powerful tool for spotting trends, identifying clusters, and gaining insights, all of which are key for explainable AI.
In this video, we’ll look at how to create and interpret data maps using a technique called t-SNE. You’ll see how we can transform a complex customer dataset into a simple, visual map that reveals meaningful clusters. By understanding these clusters and their characteristics, we can uncover patterns that support smarter decisions. Let’s dive in!
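As a small preview of what the video demonstrates in Orange, here is a minimal sketch of building a data map with t-SNE in Python using scikit-learn. The customer table here is randomly generated, so this particular map will not show meaningful clusters; with real data, nearby dots would correspond to similar customers.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 19))  # 300 hypothetical customers, 19 features

# Reduce 19 dimensions to 2 so the data can be drawn as a map.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

plt.scatter(embedding[:, 0], embedding[:, 1], s=10)
plt.title("t-SNE data map: nearby dots are similar data points")
plt.show()
```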
Here’s a list of key concepts we have covered in this lesson, along with brief descriptions:
- Data Maps (Projections/Embeddings): Visual representations of complex datasets in two dimensions, where similar data points are plotted close together, making it easier to explore patterns and relationships.
- Dimensionality Reduction: The process of simplifying data by reducing many variables (features) to just a few, such as converting 19 features into 2, while retaining the most meaningful patterns and relationships.
- t-SNE (t-Distributed Stochastic Neighbor Embedding): A specific dimensionality reduction technique that creates a two-dimensional map, ensuring that similar data points remain close together. This makes it easier to visualize and understand patterns in complex datasets.
- Exploratory Data Analysis: The process of visually examining datasets to uncover patterns, relationships, and insights, which helps in decision-making and building explainable AI systems.
The approach we used for explaining both hierarchical clustering and data maps is the same: selecting a group of data points and identifying what makes them distinct from the rest. In both cases, whether explaining hierarchical clusters or groups on a data map, we relied on box plots to visualize the distribution of features within the selected group, enabling comparisons to other clusters or the overall dataset. We also ranked features by how well they differentiate the group, making it easy to pinpoint the most significant characteristics.
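Here is a minimal sketch of that ranking step, assuming we already know which points were selected on the map: it scores each feature with a two-sample t-test between the selected group and the rest (Orange's box plot ranking follows the same spirit, though its exact scoring may differ). The feature names and data are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
feature_names = ["tenure", "monthly_charges", "support_calls"]
X = rng.normal(size=(200, 3))

selected = np.zeros(200, dtype=bool)
selected[:50] = True      # pretend these 50 points were selected on the map
X[selected, 0] += 2.0     # make "tenure" clearly different for that group

# Rank features by how strongly they separate the group from the rest.
scores = []
for j, name in enumerate(feature_names):
    t, _ = stats.ttest_ind(X[selected, j], X[~selected, j])
    scores.append((name, abs(t)))

for name, score in sorted(scores, key=lambda s: s[1], reverse=True):
    print(f"{name}: |t| = {score:.1f}")
```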
It's quiz time!
What is a data map in machine learning? (1pt.)
What is the purpose of t-SNE in data maps? (1pt.)
How did we explain clusters in the data map? (1pt.)
What tool did we use to rank features for cluster explanation? (1pt.)
Why is it important to explain clusters in data maps? (1pt.)
Chapter 3: Trees
Classification is a fundamental task in machine learning where the goal is to assign data points to specific categories or classes. For example, we might want to predict whether a customer will leave a service (churn) or stay, whether an email is spam or not, or whether a patient has a certain medical condition. To make these predictions, machine learning algorithms analyze patterns in existing data and build models that can classify new, unseen data.
One effective technique for building classifiers is the classification tree. A classification tree works by asking a series of questions about the features of the data, such as "Does the customer have a month-to-month contract?" or "How long has the customer been with the company?" Each question splits the data into smaller groups, eventually leading to predictions about the class. What makes classification trees particularly interesting is their readability—they are like flowcharts that can be visually interpreted. If the tree is not too large or complex, it provides a clear explanation of how decisions are made.
This combination of simplicity and interpretability makes classification trees a powerful tool in explainable AI. In this video, we’ll explore how a classification tree can help predict customer churn in a telecom dataset. You’ll see how the tree is trained, how predictions are made, and, most importantly, how the tree’s structure explains the reasoning behind its decisions. Let’s get started!
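For those who want to try this outside Orange, here is a minimal sketch of training and reading a classification tree with scikit-learn. The tiny churn-like table is invented; the printed rules are the flowchart-style explanation the chapter describes.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame(
    {
        "month_to_month": [1, 1, 0, 0, 1, 0, 1, 0],   # 1 = month-to-month contract
        "tenure_months":  [2, 5, 40, 60, 1, 24, 3, 36],
        "churn":          [1, 1, 0, 0, 1, 0, 1, 0],   # 1 = customer left
    }
)
X, y = data[["month_to_month", "tenure_months"]], data["churn"]

# Keep the tree shallow so it stays readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The printed rules are a textual version of the tree's flowchart.
print(export_text(tree, feature_names=list(X.columns)))
```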
Key concepts covered in this lesson are:
- Classification: A machine learning task where the goal is to assign predefined labels to data points based on their features. For example, predicting whether a customer will leave or stay with a company based on their behavior and demographics.
- Class (Target Variable): The outcome we want to predict, like whether a customer will "churn" (leave) or "not churn" (stay).
- Classification Tree: A machine learning model that works like a flowchart. It asks a series of simple questions about the data, splitting it step by step until it makes a prediction. For example, "Does the customer have a long-term contract?" might be a question the tree asks.
- Readable and Explainable Models: Classification trees are a prime example of readable machine learning models because they show how decisions are made step by step. This makes them great for explainable AI, especially when the tree is small and not overly complex. Of course, classification trees are not the only readable models; other techniques like decision rules and logistic regression also offer interpretability.
- Feature Importance in Trees: The tree helps highlight which features, such as contract type or tenure, are most important for making predictions. This gives us insight into what drives decisions.
- Reviewing the Tree: By looking at the structure of the tree, we can understand why a certain prediction was made. For example, if the tree predicts that a customer is likely to leave, we can trace the decision path to see the exact conditions that led to this outcome.
- Tree Depth: A way to control how detailed and complex the classification tree becomes. A shallow tree is easier to read and explain, while a deeper tree may be harder to interpret.
- Explaining Predictions: The tree not only makes predictions but also provides a clear explanation for them, showing the step-by-step reasoning behind each decision.
Understanding classification models from machine learning is crucial because these models often influence important decisions, like determining whether someone qualifies for a loan, predicting health outcomes, or identifying customers at risk of leaving a service. In cases like these, it’s not enough to know what the model predicts—we need to understand why. This is especially important in sensitive areas like healthcare or finance, where people’s lives and livelihoods are directly affected. While some domains, like recommending a movie, may prioritize accuracy over explanation, transparency can still matter. In legal or societal contexts, for instance, it is especially important to show how the model works, what data it used, and why it made the prediction it did, particularly in cases where a decision is challenged—such as why a loan was denied. Knowing this builds trust, ensures fairness, and helps identify biases or errors that could otherwise go unnoticed.
Time for some questions.
What is the purpose of classification in machine learning? (1pt.)
What makes classification trees useful for explainable AI? (1pt.)
Why is it important to understand which features are important in a classification model? (1pt.)
Why is it important to explain machine learning models in areas like finance or healthcare? (1pt.)
In which scenarios might explanation of machine learning models be more important than just accuracy? (1pt.)
Chapter 4: Nomograms
Machine learning offers a variety of techniques for making predictions, and explainable AI provides tools to help us understand how these predictions are made. While classification trees are a great example of readable and interpretable models, they are not the only method available. In fact, some techniques are even simpler to interpret and are widely used in specialized fields like medicine and healthcare, where clear explanations are crucial. One such method is the nomogram.
A nomogram is a graphical representation of a predictive model that assigns points to individual features based on their importance in determining an outcome. These points are then summed up to calculate the probability of a specific outcome. For example, in healthcare, a nomogram might predict the likelihood of a patient recovering based on factors like age, blood pressure, and medical history. What makes nomograms particularly valuable is their ability to rank features and visually explain how each one contributes to the final prediction.
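To make the point-summing idea concrete, here is a simplified sketch using logistic regression, where each feature's contribution in log-odds plays the role of the nomogram's points and the summed total maps to a probability. This is only an illustration of the underlying principle, not the nomogram implementation used in the video, and the tiny churn-like dataset is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["month_to_month", "tenure_years"]
X = np.array([[1, 0.2], [1, 0.5], [0, 3.0], [0, 5.0], [1, 0.1], [0, 2.0]])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = churn

model = LogisticRegression().fit(X, y)

customer = np.array([1, 0.3])               # month-to-month, short tenure
points = model.coef_[0] * customer          # per-feature "points" (log-odds)
total = model.intercept_[0] + points.sum()  # summed points
probability = 1 / (1 + np.exp(-total))      # mapped to churn probability

for name, p in zip(feature_names, points):
    print(f"{name}: {p:+.2f}")
print(f"predicted churn probability: {probability:.2f}")
```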
In this video, we’ll explore nomograms using the same telecom dataset as before, focusing on customer churn predictions. You’ll see how nomograms simplify complex data, rank features by importance, and provide clear explanations for individual predictions. Let’s dive in and discover why this tool is so powerful for explainable AI!
Key concepts covered in this lesson include:
- Nomograms: A graphical tool used to visualize and explain machine learning predictions. Each feature is assigned a number of points based on its contribution to the outcome, and these points are summed to calculate a probability.
- Feature Ranking in Nomograms: Nomograms rank features by their importance in predicting the outcome, making it easy to see which factors are most influential. For example, contract type and tenure might be key factors for predicting customer churn.
- Individual Predictions: Nomograms allow us to examine individual data points, showing how each feature contributes to the prediction for a specific case. This makes them especially useful for personalized predictions, such as evaluating customer churn risk.
- Comparison to Classification Trees: Unlike classification trees, which use a step-by-step decision flow, nomograms provide a linear representation of feature importance. Both are explainable, but nomograms often offer a simpler, more direct visualization.
- What-If Analysis: Nomograms support "what-if" scenarios by allowing users to adjust feature values and observe how predictions change, making them a powerful tool for exploring and explaining model behavior.
Did you know that over 30,000 research papers in biomedicine published in the past 15 years reference the use of nomograms? This powerful tool has become a cornerstone in predictive modeling for healthcare, offering intuitive visualizations that help doctors and researchers make better-informed decisions. One of the most famous early applications is the Partin nomogram, which predicts the probability of organ-confined prostate cancer based on clinical features like PSA levels, Gleason score, and clinical stage. It remains a vital tool for urologists worldwide in guiding treatment decisions.
A key figure in advancing nomograms is Michael W. Kattan, a pioneer who has collaborated extensively with oncologists to develop influential predictive tools. His groundbreaking work includes nomograms for predicting outcomes in prostate cancer, breast cancer, and other diseases, making complex statistical models accessible to clinicians. Kattan’s work underscores how explainable AI tools like nomograms can bridge the gap between sophisticated machine learning and practical, life-saving applications.
Nomograms are often considered a hidden jewel of explainable machine learning. Their clear, visual approach to prediction has inspired a wave of innovations in explainable AI, influencing the design of modern tools that make machine learning models transparent and actionable. By learning about nomograms, you’re exploring a concept with a rich history and a bright future in AI-driven healthcare.
Let us review our understanding of the role nomograms play in explainable AI.
What is a nomogram in explainable AI? (1pt.)
How does a nomogram show feature importance? (1pt.)
What makes nomograms different from classification trees? (1pt.)
What is the main use of the Partin nomogram? (1pt.)
Why are nomograms valued in explainable AI? (1pt.)
Chapter 5: Accuracy
In machine learning, choosing the right model often involves balancing two important factors: explainability and accuracy. While simpler models like classification trees and Naive Bayesian classifiers are easy to understand and explain, they may not always achieve the highest accuracy. On the other hand, complex models like gradient boosted trees or neural networks can provide more accurate predictions, but their inner workings are often too intricate to interpret easily.
This trade-off is particularly important in real-world applications where both accuracy and understanding matter. For example, in business, healthcare, or legal contexts, decisions often require not only accurate predictions but also clear explanations of how those predictions were made. This ensures fairness, builds trust, and allows for informed decision-making. In this video, we’ll compare simpler and more complex models, explore their strengths and weaknesses, and learn about techniques like SHAP values that help make even the most complex models more interpretable. Let’s dive in!
This lesson was dense :) and it covered many important concepts at the intersection of model complexity, accuracy, and explainability.
- Explainability vs. Accuracy Trade-Off: The challenge of balancing simpler, interpretable models that are easy to explain (e.g., classification trees) with complex models that achieve higher accuracy but are harder to interpret (e.g., gradient boosted trees, neural networks).
- Simple Machine Learning Models: Techniques like classification trees and Naive Bayesian classifiers that prioritize readability and transparency, making it easier to understand how predictions are made.
- Complex Machine Learning Models: Advanced techniques like gradient boosted trees and neural networks that often achieve higher accuracy but lack interpretability. These models are made up of many components, such as layers or multiple decision trees, making them harder to explain.
- SHAP (SHapley Additive exPlanations): A method for explaining complex models by assigning importance scores to features, showing how much each feature contributes to a specific prediction. SHAP helps bridge the gap between accuracy and explainability.
- Feature Importance: The idea of identifying which data features most significantly impact a model’s predictions. Understanding feature importance helps in interpreting both simple and complex models.
- Visualization of SHAP Values: A tool for graphically representing how each feature influences the prediction for a given data instance. This makes even highly complex models, like gradient boosted trees and neural networks, more interpretable.
SHAP, or SHapley Additive exPlanations, was developed to address the challenge of understanding complex machine learning models, like neural networks, which often act as "black boxes." These models can make highly accurate predictions but don’t easily reveal how or why they arrived at their conclusions. One way to explain such models is by studying their sensitivity, which examines how changes to specific features in the data affect the prediction. SHAP takes this further by assigning each feature an importance score, showing its contribution to a prediction for an individual data point.
The foundations of SHAP were laid by Erik Štrumbelj, who formalized the connection between Shapley values—originating from cooperative game theory—and machine learning prediction explanations. This groundwork provided a robust framework for interpreting non-linear models, where the relationship between inputs and outputs is complex and not easily represented by a straight line or simple equation. SHAP, building on these ideas, has become the most widely used approach for explaining complex models, combining sensitivity analysis with a clear, actionable way to understand predictions.
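Here is a minimal sketch of computing SHAP values in Python, assuming the shap package and scikit-learn are installed. The data is synthetic, with the outcome driven mostly by the first two features, so their SHAP importances should come out largest.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer assigns each feature a contribution to each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute contribution per feature = a global importance summary.
print(np.abs(shap_values).mean(axis=0))
```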
Time for a quiz.
What is the main trade-off in machine learning discussed in this section? (1pt.)
What is the purpose of SHAP in machine learning? (1pt.)
What does sensitivity analysis examine in machine learning models? (1pt.)
Why are models like gradient boosted trees and neural networks considered 'black boxes'? (1pt.)
Why is understanding feature importance valuable in explainable AI? (1pt.)
Chapter 6: Language
Explainable AI has traditionally focused on making machine learning models transparent and easy to understand through techniques like clustering, classification trees, and nomograms. These methods have been valuable for creating models that decision-makers can trust. But now, a new chapter is unfolding in explainable AI with the rise of large language models like GPT, Llama, and Gemini. These powerful tools, trained on vast amounts of data, are transforming how we explain and interpret machine learning models.
Large language models can do more than predict outcomes—they can, for instance, help refine explanations, making complex ideas clearer for non-experts. Whether it’s ranking features in a dataset or summarizing large collections of text, these models offer intuitive insights and bridge the gap between technical analysis and actionable understanding. In this video, we’ll explore how large language models can take explainability to the next level, opening up exciting possibilities for uncovering patterns across a wide range of data types. Let’s dive in!
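One simple way to picture this is building a prompt that embeds a model's feature ranking and asks an LLM to explain it in plain language. The sketch below only constructs the prompt; the ranking numbers are invented, and the resulting text could be sent to whichever chat-style LLM you have access to.

```python
# Hypothetical feature ranking from a churn model.
ranked_features = [
    ("contract type (month-to-month)", 0.42),
    ("tenure", 0.31),
    ("monthly charges", 0.12),
]

lines = "\n".join(f"- {name}: importance {score:.2f}" for name, score in ranked_features)
prompt = (
    "A churn prediction model ranked these customer features by importance:\n"
    f"{lines}\n"
    "Explain in plain language, for a non-technical manager, "
    "what these rankings suggest about why customers leave."
)
print(prompt)  # send this to an LLM of your choice
```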
In this section, we explore how large language models enhance explainable AI by refining complex data into clear, actionable insights. Here are the key concepts introduced in this chapter:
- Large Language Models (LLMs): Advanced AI systems like GPT, Llama, and Gemini, trained on vast datasets, that generate human-like text and can refine complex explanations.
- Feature Interpretation and Refinement: Using LLMs to translate technical feature rankings, such as those from machine learning models, into explanations that are easier for non-experts to understand.
- Combining Data Exploration with LLMs: By embedding ranked features or text clusters into prompts, LLMs can generate explanations that integrate visual data exploration and machine learning insights.
- The Evolution of Explainable AI: Explainable AI is still in its early stages, and we have yet to develop systems that can fully explain their decisions in a clear, faithful, and human-understandable way. However, LLMs have great potential to help us move closer to this goal by refining and communicating complex insights effectively.
Here is our last set of questions.
What is a large language model (LLM)? (1pt.)
How can large language models help explain feature importance? (1pt.)
What is one role of LLMs in analyzing large text datasets? (1pt.)
How can LLMs combine data exploration with machine learning insights? (1pt.)
Why are LLMs considered part of a new era in explainable AI? (1pt.)
Chapter 7: Looking Ahead
We are just at the beginning of a new era in explainable AI. Systems that not only analyze data but also provide explanations in clear, textual, or verbal form are already emerging. These advanced systems will soon perform automatic data exploration, narrating their findings while seamlessly integrating additional knowledge from related domains. What seemed unthinkable even just ten years ago—an AI capable of both analyzing and explaining data in a human-like way—is now within reach.
Explainable AI is no longer just about making complex models understandable; it’s evolving into a broader discipline that aims to explain everything—from patterns in data to insights gained from exploration. This transformation promises to make AI not only more accessible but also more transparent and ethically aligned, fostering trust and accountability in its applications. The journey ahead is definitely exciting, and the possibilities are endless. The tools we are starting to use today will shape the way we interact with and understand AI in the future.