
AI Fundamentals for Decision-Makers

A Self-Study Course For Decision-Makers

This course offers a practical and accessible introduction to machine learning and its role in artificial intelligence, tailored specifically for decision-makers. Through clear explanations and real-world examples, it demystifies the key concepts behind these technologies, showing how they can drive better decisions and unlock new opportunities. With no prior knowledge of statistics or computer science required, this concise course fits easily into your busy schedule, taking just one to two hours to complete.

The material in this course was developed by the Biolab group at the University of Ljubljana and is shared under the Creative Commons CC BY-NC-ND license.

Chapter 1: Machine Learning

AI is everywhere, transforming how we work, live, and make decisions. It offers tools to analyze patterns, predict outcomes, and uncover opportunities in nearly every aspect of life. If you’re a decision-maker, here are some examples of questions AI can help address across various fields:

In the Public Sector:

  • Among cities implementing green policies, which one’s approach aligns most closely with your region’s goals?
  • How can we predict which public initiatives will achieve the highest citizen engagement?
  • Which urban development plan shares similarities with Dubai’s ambitious transformation?
  • How can we identify regions with similar socio-economic characteristics to target resources effectively?
  • What factors contribute most to improving life expectancy or education levels in a specific area?

In Industry and Finance:

  • Among your regional offices, which one’s growth patterns mirror your flagship location?
  • What hiring strategy leads to the retention of your longest-serving employees?
  • Which segment of your customer base is as loyal as your premium subscribers?
  • What’s the tone of feedback from your recent customer surveys, and how can you respond effectively?
  • How can we forecast revenue growth based on past financial trends and external market data?

AI techniques such as machine learning, deep learning, and generative AI turn data into actionable insights, helping leaders solve complex problems and create new opportunities.

In this course, we won’t directly answer these questions, but we’ll explore approaches that could help. With the right data—such as performance metrics, policy impacts, or customer trends—these challenges become manageable. Machine learning analyzes similarities, identifies patterns, and groups entities, uncovering actionable insights. Deep learning, a branch of machine learning, uses advanced neural networks to model complex relationships and predict outcomes. Generative AI creates new content, enabling the generation of reports, innovative solutions, or creative outputs. Together with data science, the practice of extracting and interpreting insights from data, these tools empower decision-makers to tackle challenges, develop strategies, and make informed decisions.

All these techniques fall under the umbrella of artificial intelligence. AI is everywhere, from recommending products and predicting trends to guiding policy and improving operational efficiency. At its core, AI relies on machine learning models to make sense of patterns in data. These models are only as good as the data they are trained on, highlighting the importance of understanding the relationship between data and AI. This course is designed to provide a practical, conceptual understanding of machine learning and AI, using relatable examples to help you make informed, data-driven choices.

Before we delve into examples, let’s start with a few definitions:

  • Artificial Intelligence: Artificial intelligence refers to systems designed to perform tasks that typically require human intelligence, such as decision-making, pattern recognition, or understanding natural language. Today, most AI systems are built using machine learning algorithms.

  • Machine Learning: Machine learning is a subset of artificial intelligence where algorithms identify patterns in data and use these patterns to make predictions or decisions. At its core, a machine learning algorithm takes input data, processes it to recognize trends or relationships, and outputs results, improving its accuracy over time as it learns from more data (a minimal sketch follows these definitions).

  • Deep Learning: Deep learning is a type of machine learning that uses neural networks, which are structured like layers of interconnected nodes. These networks act like a series of complex decision-making steps, processing large datasets to uncover patterns too intricate for traditional algorithms.

  • Generative Artificial Intelligence: Generative artificial intelligence, also built on machine learning, focuses on creating new content, such as text, images, or music. By learning from existing data, these systems can generate realistic and often creative outputs.

  • Neural Networks: A neural network is a machine learning model inspired by the structure of the human brain, consisting of layers of interconnected nodes, or "neurons." Each node processes information and passes it to the next layer, enabling the network to learn and make predictions. Neural networks are the building blocks of deep learning systems.
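
If you are curious what "learning from data" looks like in practice, here is a minimal sketch in Python using the scikit-learn library. The tiny employee dataset is invented purely for illustration, and no coding is required to follow this course.

```python
# A minimal sketch of the machine-learning loop: input data -> learn
# patterns -> output predictions. Assumes scikit-learn is installed;
# the tiny dataset below is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [years with company, training hours]
X = [[1, 5], [2, 40], [8, 10], [10, 60], [3, 8], [7, 55]]
y = [1, 0, 1, 0, 1, 0]  # 1 = employee left, 0 = employee stayed

model = LogisticRegression()
model.fit(X, y)                   # the algorithm learns from past examples
print(model.predict([[4, 20]]))   # predict the outcome for a new employee
```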

You are now ready for some introductory questions. As you progress through this course, you’ll have the opportunity to check your understanding with light, engaging quizzes. Let’s begin!

Which of the following problems do you NOT need AI to solve? (1pt.)

How are artificial intelligence and machine learning related? (1pt.)

Data science provides the methods and tools to process data, which AI uses to learn and make decisions.

Why is data important in artificial intelligence and data science? (1pt.)

Decision-making is the process of selecting a course of action from available options. While it often relies on evidence and analysis, decisions can also be made intuitively or without detailed evaluation.

How can artificial intelligence assist in decision-making? (1pt.)

Machine learning is at the heart of artificial intelligence, enabling systems to learn from data and make informed decisions. Understanding its concepts and applications is a vital step toward leveraging AI effectively, responsibly, and sensibly in your domain.

Chapter 2: Data

AI and machine learning start with data—it’s the foundation of everything. In this lesson, we’ll introduce different types of data that support decision-making, from simple tables like student grades to more complex datasets, such as socio-economic indicators, employee records, and even images and text. You’ll learn how these diverse datasets are used to uncover patterns, group similar items, and build predictive models, setting the stage for how AI can be applied to real-world problems.

It’s now time to watch the video! This short video will guide you through examples of different datasets and how they are used in machine learning. Let’s get started!

In the videos, we use Orange, a free and powerful machine learning and data visualization toolbox. While you don't need Orange to complete this course, if you're interested, you can download it and explore a video series with training materials on how to use it.

Here’s a list of key concepts we have covered in this lesson, along with brief descriptions:

  1. Tabular Data: This is the most common type of data in machine learning; it is organized in rows (instances or examples) and columns (attributes or features), often stored in spreadsheets (a small illustration follows this list).

  2. Meta-Features: These are additional pieces of information in datasets, such as names or geographical positions; they provide context but are not always used as input for models.

  3. Unstructured Data: This refers to datasets like images or text; these require transformation into numerical formats using modern machine learning techniques.

  4. Representation of Data: Complex data types like text and images are transformed into numerical representations; this is often achieved using pre-trained models that extract meaningful features.

  5. Diversity of Data Sources: Machine learning works with diverse datasets; examples include student grades, socio-economic indicators, employee records, and news articles.

  6. Modern AI Capabilities: AI has advanced to handle various types of data; it now processes everything from structured tables to unstructured text and images, offering powerful tools for decision-making.
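
For readers who like to see things concretely, here is a small sketch of a tabular dataset with mixed data types and a meta-feature, using the pandas library; all the values are invented for illustration.

```python
# A tiny tabular dataset: rows are instances, columns are attributes.
# Assumes pandas is installed; all values are invented for illustration.
import pandas as pd

data = pd.DataFrame({
    "name": ["Ana", "Boris", "Cilka"],        # meta-feature: labels the rows
    "region": ["North", "South", "North"],    # categorical attribute
    "age": [34, 51, 29],                      # numeric attribute
    "engagement_score": [0.82, 0.45, 0.91],   # numeric attribute
})

print(data)          # three instances described by mixed data types
print(data.dtypes)   # each column has its own type (text, number, ...)
```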

Now, please complete the following quiz:

Which type of dataset would likely help identify trends in global education? (1pt.)

Handling complex datasets with different types of information, such as numbers and categories, is essential for capturing the full scope of real-world problems and making accurate predictions.

What does a dataset with mixed data types look like? (1pt.)

What is an example of unstructured data that decision-makers might use? (1pt.)

Which is a common application of image data in manufacturing? (1pt.)

Involving a machine learning engineer at the right stage of process design helps decision-makers align business goals with technical requirements, ensuring all stages are prepared for the effective introduction of AI.

At which stage should machine learning scientists be involved in constructing a database? (1pt.)

Data is the foundation of AI and machine learning, enabling us to uncover patterns, make predictions, and solve real-world problems. By understanding the types of data and how to prepare them, you take the first step toward leveraging AI for informed decision-making.

Chapter 3: Finding Groups

A data point represents a single item or instance in your dataset, such as a customer, employee, or product. For example, a data point could be an individual customer described by their age, income, and purchase history.

Clustering, or finding groups in data, is one of the most exciting and useful techniques in machine learning—and it’s incredibly relevant for decision-makers. Imagine uncovering hidden patterns in your organization’s data: identifying groups of customers with similar preferences, segmenting markets based on purchasing behavior, or finding clusters of customers with high churn risk. For policymakers, clustering can help identify regions with similar socio-economic challenges, enabling more targeted interventions and resource allocation. Clustering groups data points without needing predefined labels, offering actionable insights that can inform strategies, improve processes, and create value across your operations. Whether it’s financial data, customer profiles, employee records, or policy data, clustering reveals the structure within the data that might otherwise go unnoticed.

Now it’s time to dive into the video! You’ll see how clustering works and how it can be useful for making more informed decisions.

This video shows how clustering (finding groups in the data) can uncover hidden patterns in your data, helping you identify valuable groups like customer segments or employee profiles to support smarter decisions.

Here’s a list of key concepts we have covered in this lesson, along with brief descriptions:

  1. Unsupervised Learning: A machine learning approach that finds patterns in data without relying on predefined categories or labels. For example, when grouping customers based on purchasing behavior, the algorithm identifies patterns on its own, rather than being told which customers belong to which group in advance.

  2. Clustering: A way to group data points based on similarities; can help decision-makers identify patterns in customer behavior, employee performance, or market trends to guide strategies.

  3. Measuring Similarity: Comparing data points based on their shared characteristics to determine how alike they are; this ensures that the identified clusters are meaningful and actionable, for example, for making business decisions.

  4. Finding Neighbors: Identifying the most similar data points to a chosen example; useful, for instance, for understanding customers with similar preferences or employees with similar work styles.

  5. Hierarchical Clustering: A method for grouping data that creates a visual hierarchy of clusters; can help decision-makers see how groups are related and identify clear divisions in their data (see the sketch after this list).

  6. Dendrograms: A visual tool that shows how data points group together in clusters; this tool allows decision-makers to easily spot natural groupings and relationships.

  7. t-SNE Maps: A visualization technique that simplifies complex data into a two-dimensional map, making it easier for decision-makers to see patterns in large datasets like regional socio-economic indicators or customer feedback.

  8. Exploring Clusters: Analyzing groups to understand what makes them unique; for example, identifying why certain customers prefer specific products or why some teams perform better.

  9. Understanding Clusters: Breaking down clusters to identify their key characteristics, allowing decision-makers to, for instance, tailor strategies or allocate resources effectively.

  10. Real-World Applications: Using clustering to solve practical challenges like segmenting customers, optimizing teams, or prioritizing business efforts across different markets.
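
To make hierarchical clustering and dendrograms from the list above concrete, here is a minimal sketch in Python, assuming the scipy and matplotlib libraries; the regional indicators are invented for illustration.

```python
# A minimal sketch of hierarchical clustering with a dendrogram.
# Assumes scipy and matplotlib; the regional data is invented.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

# Hypothetical socio-economic indicators for six regions:
# [unemployment rate (%), median income (k EUR)]
regions = ["A", "B", "C", "D", "E", "F"]
X = np.array([[4.2, 28], [4.5, 27], [11.0, 17],
              [10.4, 18], [6.8, 23], [7.1, 22]])

# In practice, features are usually normalized first so that no single
# attribute dominates the similarity measure.
Z = linkage(X, method="ward")    # build the hierarchy of clusters
dendrogram(Z, labels=regions)    # similar regions merge low in the tree
plt.ylabel("distance between clusters")
plt.show()
```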

Now, please complete the following quiz:

What is the main purpose of clustering in decision-making? (1pt.)

Which example illustrates finding neighbors in data about car manufacturers? (1pt.)

You have data on car manufacturers, including production costs, energy usage, and defect rates. How can clustering help in decision-making? (1pt.)

A dendrogram is a visualization for hierarchical clustering, one of the most commonly used clustering methods—and it often fits nicely into decision-makers' reports! However, it works best with smaller datasets due to computational and visualization limitations.

What does a dendrogram show in clustering? (1pt.)

Which is an example of how clustering can be used in public policy? (1pt.)

Data embedding techniques, like t-SNE, transform complex high-dimensional data into two dimensions for easier visualization. The approach was co-developed by Laurens van der Maaten and Geoffrey Hinton; Hinton was recently awarded the Nobel Prize in Physics for his groundbreaking work in artificial intelligence.

What is the role of t-SNE in clustering? (1pt.)

Clustering helps decision-makers uncover hidden groups and patterns in their data, making it easier to identify customer segments, optimize team structures, or allocate resources effectively. By applying clustering techniques, you can turn complex data into actionable insights that drive smarter strategies and better outcomes.

Chapter 4: Making Predictions

Making predictions is one of the most impactful ways machine learning can support decision-making. Imagine being able to predict which employees might leave your company, helping HR take proactive steps, forecasting which products are likely to succeed in the market, or identifying which public initiatives are most likely to achieve high citizen engagement. In this lesson, we’ll explore predictive modeling, a key technique in supervised learning, and show how it can be applied to real-world challenges. You’ll see examples of how models like Naive Bayes and logistic regression work, and why evaluating their accuracy is crucial to choosing the right tool for the task.

Now it’s time for the video!

Don’t worry if predictive modeling sounds complex—it’s more approachable than you might think. In this video, we’ll guide you step by step through how machine learning models make predictions and how they can be used to support smarter decisions.

Here are the key concepts covered in this section on predictive modeling, with explanations to help you understand their importance:

  1. Predictive Modeling: Using machine learning to predict future outcomes, such as identifying employees at risk of leaving, forecasting market trends, or predicting the success of public policies based on socio-economic data.

  2. Supervised Learning: A set of machine learning approaches where models are trained on historical data with known outcomes to make predictions.

  3. Key Features: The specific descriptors in the data, like age or income, that models rely on most to make predictions. Models can be partly explained by listing these descriptors, and this kind of explanation can also help us better understand the underlying data.

There are many machine learning algorithms for classification, which are essentially step-by-step computational methods that process data to make predictions or decisions. Logistic regression is one of the simplest algorithms, but others include random forests and neural networks. Machine learning engineers often choose algorithms based on their accuracy, while also considering the need for interpretability and the ability to explain predictions.

  1. Classification Models: Tools such as logistic regression or the naive Bayes classifier that build actionable models to classify data into categories, enabling decision-makers to act on predictions. Choosing the right classification model is crucial and often requires balancing accuracy against interpretability (a minimal sketch follows this list).

  2. Accuracy: A measure of how well a model predicts outcomes, giving decision-makers and analysts confidence in the model's reliability.

  3. Cross-Validation: A method for testing how well a model works by dividing data into training and testing sets, ensuring predictions are robust.

  4. Confusion Matrix: A simple way to evaluate a classification model by showing where it succeeds or fails in making predictions.

  5. Training Data: The data used to teach a model, highlighting the importance of quality and completeness in data.
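
Here is a minimal sketch of how a classifier is trained and evaluated, assuming the scikit-learn library and one of its bundled toy datasets; a real project would, of course, use your own data.

```python
# A minimal sketch of training and evaluating a classification model.
# Assumes scikit-learn; uses a bundled toy dataset for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, confusion_matrix

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

# Cross-validation: each instance is predicted by a model that never
# saw that instance during training, so the estimate is honest.
predictions = cross_val_predict(model, X, y, cv=5)

print("accuracy:", accuracy_score(y, predictions))
print(confusion_matrix(y, predictions))  # rows: actual, columns: predicted
```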

Now, it's time to take the quiz:

What is the main purpose of a classification model in machine learning? (1pt.)

What does supervised learning require to make predictions? (1pt.)

Why is measuring accuracy important when using classification models? (1pt.)

What is the role of cross-validation in machine learning? (1pt.)

If HR wants to identify and prioritize employees who are likely to leave the company, which factor should they consider when choosing a model? (1pt.)

Which one of the following scenarios would benefit from predictive modeling and classification? (1pt.)

Classification empowers decision-makers to predict outcomes and categorize data, helping them anticipate challenges, identify opportunities, and make informed choices. By leveraging predictive modeling techniques, you can transform raw data into actionable insights that guide strategy and improve decision-making outcomes.

Chapter 5: Images and Text

Images and text are everywhere in decision-making, from analyzing customer reviews and drafting public communication materials to identifying patterns in satellite imagery or diagnosing medical conditions. For policymakers, they can be used to analyze citizen feedback, monitor urban development through satellite images, or assess the sentiment of news coverage on key issues. Machine learning allows us to apply clustering or predictive modeling to this unstructured data, just as we do with structured datasets like spreadsheets. The key is converting images or text into numbers—a process called embedding—which transforms these complex objects into representations that machine learning can act on.

While the video uses simple examples like images of dogs and daily news, the same principles apply to managing any kind of visual or textual data. Whether it's analyzing customer feedback, organizing product catalogs, identifying trends in industry reports, or assessing public sentiment on proposed legislation, the techniques of classification and clustering work here too—turning unstructured data into actionable insights.

Here are the main concepts introduced in this lesson on analyzing images and text, with explanations to help you understand their relevance:

Artificial neural networks have been studied since the 1940s, but they only became widely popular with advancements in computational power and data availability. Convolutional neural networks, specifically designed for image analysis, have revolutionized the field in just the last decade. The breakthrough came with models like AlexNet in 2012, which demonstrated their power for tasks like image recognition.

  1. Embedding: The process of converting unstructured data, like images or text, into numerical representations. These embeddings allow machine learning models to analyze and interpret data that does not fit into traditional rows and columns (see the sketch after this list).

  2. Neural Networks: Advanced machine learning models made up of layers of interconnected nodes, which are like combining many logistic regression models to solve more complex problems. Neural networks are particularly effective for handling unstructured data.

  3. Convolutional Neural Networks: A specific type of neural network designed to process images by identifying patterns such as shapes and textures. These networks are often pre-trained on large datasets and used to generate embeddings for image data.

  4. Clustering and Classification for Unstructured Data: Using clustering to group similar objects, such as news articles or images, and classification to categorize new data based on patterns identified in embeddings.

  5. Applications of Embedding: Embeddings created by neural networks can be used for practical tasks, such as identifying product categories from images, diagnosing conditions using medical images, or finding trends and insights in large collections of text.
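
As a concrete illustration of embedding followed by clustering, here is a minimal sketch assuming the sentence-transformers and scikit-learn libraries; the pre-trained model name is one common choice, and the headlines are invented for illustration.

```python
# A minimal sketch: embed short texts, then cluster the embeddings.
# Assumes sentence-transformers and scikit-learn; headlines are invented.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

headlines = [
    "Central bank raises interest rates again",
    "Stock markets rally on strong earnings reports",
    "New vaccine shows promise in early trials",
    "Hospital waiting times continue to fall",
]

# A pre-trained model turns each text into a vector of numbers.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(headlines)

# Once texts are numbers, standard clustering applies.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
print(labels)  # e.g. [0, 0, 1, 1]: finance vs. health stories
```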

Now let's move on to the quiz:

Why do we need to convert images and text into numbers for machine learning? (1pt.)

There are many pre-trained neural network models available for embedding nowadays. Some are designed for specific domains, such as medical imaging or legal documents, while others aim to be general-purpose, working across domains and problems. Choosing the right model depends on the type of data and the task at hand.

What is the role of neural networks in analyzing images and text? (1pt.)

How can clustering be applied to image data in a business setting? (1pt.)

What is the purpose of using t-SNE maps in machine learning? (1pt.)

Which of the following is a practical use of embeddings in text analysis? (1pt.)

Machine learning allows decision-makers to make sense of unstructured data like images and text, turning it into valuable insights. With the help of pre-trained models built on vast datasets, businesses can easily analyze visual and textual information to uncover patterns, make predictions, and drive smarter decisions in a wide range of applications. Policymakers can also leverage these tools to assess public sentiment, monitor urban development, or evaluate the impact of policies on diverse communities.

Chapter 6: Foundation Models

Artificial intelligence has advanced rapidly, with foundation models and generative AI leading the way. Earlier, we explored how AI predicts outcomes and uncovers patterns using embeddings to process images and text. Foundation models expand on these concepts with broad, flexible capabilities, trained on massive datasets, making them adaptable to a wide range of tasks without starting from scratch.

Unlike specialized models such as convolutional neural networks, which focus on narrow tasks like detecting defects or analyzing medical scans, foundation models are broader in scope. They can recognize a product, suggest improvements, predict market success, and even generate ads, showcasing their transformative versatility.

Generative AI, a key application of foundation models, takes this further by creating new content—writing reports, designing graphics, or simulating scenarios. For example, it can assist city planners in drafting policies, visualizing impacts, and generating insights. These tools enable leaders to think creatively, make informed decisions, and seize new opportunities.

The following video provides an overview of the differences between machine learning, deep learning, and foundation models, highlighting how these technologies build upon each other to transform decision-making.

This video explores the evolution from machine learning to deep learning and foundation models, explaining their distinctions, connections, and transformative role in advancing AI technologies.

Below are some of the key concepts essential to understanding this topic.

  1. Foundation Models: Large, pre-trained AI models that can perform a wide variety of tasks across different domains. They are versatile and require minimal fine-tuning, making them useful for applications ranging from text analysis to image interpretation.

  2. Generative AI: A type of AI that creates new content, such as writing text, generating images, or designing products. It uses patterns learned from training data to produce entirely new outputs, supporting creative and strategic tasks.

  3. Broad Scope: Unlike traditional models designed for specific tasks, foundation models are flexible and capable of addressing multiple challenges across industries, from customer analysis to policy simulation.

  4. Adaptability: The ability of foundation models to generalize knowledge from vast training data, allowing them to be applied to new and diverse problems without extensive retraining.

  5. Creating Content: A capability of generative AI to produce meaningful and original outputs, such as drafting documents, designing visuals, or simulating future scenarios, which supports innovative decision-making.

The development of foundation models gained momentum around 2018 with advancements in large-scale neural networks like BERT and GPT. Today, these models are more versatile and accessible than ever, driving innovation across industries and public sectors.

Foundation models are reshaping how organizations approach artificial intelligence by offering ready-to-use tools capable of handling diverse challenges. Unlike traditional AI solutions that require building systems for each specific task, foundation models are pre-trained on extensive datasets and can adapt to a wide range of applications with minimal adjustments. Their appeal lies in their ability to provide quick, scalable solutions to complex problems, making them a game-changer for industries and public institutions alike. They cater to various needs, including:

  • GPT-Neo (EleutherAI): An open-source model for generating reports or summarizing policies, ideal for business and government use.
  • BioBERT (Open Source): Designed for analyzing biomedical literature, essential for healthcare and life sciences.
  • ChemBERTa (Hugging Face): Specialized for chemistry, aiding in drug development and material science.
  • Stable Diffusion (Open Source): Useful for creating images, supporting tasks in marketing, design, and public communication.
  • SAM (Meta): Excels in image segmentation, making it valuable for manufacturing quality control and visual data analysis.
  • Bloom (BigScience): A multilingual model for tasks such as translation and global policy drafting.

These examples showcase the diversity of foundation models, many of which are open-source and freely available. Their versatility allows organizations to enhance productivity, address challenges, and innovate across industries like medicine, manufacturing, education, and public policy without needing to invest heavily in training AI systems from scratch.
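
To show how low the barrier to entry can be, here is a minimal sketch of calling a pre-trained foundation model such as GPT-Neo from the list above, assuming the Hugging Face transformers library; the small model variant and the prompt are chosen purely for illustration.

```python
# A minimal sketch of generating text with a pre-trained foundation model.
# Assumes the Hugging Face transformers library; the small GPT-Neo variant
# and the prompt are illustrative choices.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125m")
result = generator(
    "Three ways cities can increase citizen engagement:",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```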

Foundation models are powerful tools that can drive innovation and efficiency, but decision-makers should understand their capabilities and limitations. These models are pre-trained, saving time and resources, but their effectiveness depends on the quality of their training data and how well they align with specific tasks. Organizations should evaluate whether to use open-source models, purchase licenses, or collaborate with AI providers based on their needs. Additionally, ethical considerations, such as ensuring fairness, transparency, and data security, are crucial when integrating these models into decision-making processes. While these concepts are essential, they are broad enough to warrant a separate course for deeper exploration.

What is the primary purpose of foundation models? (1pt.)

Open-source models like GPT-Neo and Stable Diffusion offer cost-effective and transparent solutions for tasks like translation or policy analysis. Choosing between open-source, licensed, or collaborative approaches depends on your organization’s needs.

How can organizations access and use foundation models? (1pt.)

What is a key benefit of using foundation models? (1pt.)

What ethical considerations should decision-makers keep in mind when using foundation models? (1pt.)

Which of the following is an example of how foundation models can be applied in government? (1pt.)

Foundation models provide decision-makers with a powerful, versatile tool to address complex challenges across industries and public sectors. By leveraging pre-trained models built on extensive datasets, organizations can save time, enhance productivity, and unlock innovative solutions. These models empower leaders to generate insights, create content, and adapt strategies, making them invaluable for informed decision-making in an increasingly data-driven world.

Chapter 7: Looking Ahead

Congratulations on completing the course! You’ve gained valuable insights into how artificial intelligence, from clustering and predictive modeling to foundation models and generative AI, can transform decision-making. These tools empower you to uncover patterns, make data-driven predictions, and create innovative solutions. Whether you’re applying these concepts to challenges in your organization or exploring AI’s broader potential, this is just the beginning. Stay curious, keep exploring, and embrace the opportunities AI offers to shape a smarter, more efficient future!