Explainable Machine Learning
Introductory Course
*** Cancelled due to personal difficulties of the lecturer ***
Abstract
Machine learning models are often perceived as ‘black boxes’. Explainable machine learning (XML), a.k.a. explainable artificial intelligence (XAI), methods allow us to inspect their inner workings and understand their predictions. They can reveal new insights about the data as well as hidden biases, increase the transparency, trust and safety of ML applications, and promote their adoption by domain experts and the general public alike. In high-stakes applications (e.g. medical or safety-critical systems) XAI is becoming a regulatory requirement. The course will provide an overview of the principles, methods, applications, limitations and challenges of XAI. We will cover the more inherently interpretable ‘white box’ methods, as well as methods tailored to the more opaque deep neural networks and general-purpose methods applicable to any model. We will give examples of successful applications and discuss the advantages and disadvantages of each approach, as well as the limitations and open problems of XAI as a field. The course will be accompanied by Python tutorials.
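As a small taste of the general-purpose, model-agnostic methods mentioned above, the following sketch (an illustration written for this description, not the course's tutorial material; the dataset and model are placeholders) computes permutation feature importance for a "black box" classifier with scikit-learn.

```python
# Minimal sketch: permutation feature importance, a model-agnostic explanation
# method.  Assumes scikit-learn is installed; dataset and model are arbitrary.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # the "black box"

# Shuffle each feature in turn and measure the drop in test accuracy:
# large drops indicate features the model relies on for its predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```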
Prerequisites
The course requires a solid understanding of ML principles & terminology. A basic knowledge of calculus (e.g., concepts of function, partial derivative, gradient) and probability (e.g., concepts of probability distribution, conditional probability) is desirable. Basic programming knowledge in Python will allow the students to make the most of the accompanying Jupyter Notebook tutorials.
Learning to behave via imitation
Advanced Course
Abstract
The objective of this course is to transfer knowledge acquired over the last five years on modelling the behaviour of moving objects via deep reinforcement learning and imitation learning. The course will provide background knowledge on deep reinforcement learning and imitation learning and will present challenges and ways to address them, including multimodality (i.e. modelling different patterns of behaviour to perform a task), sparse rewards and long-term horizons, and, perhaps most crucially, balancing optimality against safety with respect to constraints.
Prerequisites
Attendees are expected to be final-year MSc students or first-year PhD students, with basic knowledge of Markov Chains and Markov Decision Processes, Sequential Decision Making with Reinforcement Learning, and basic knowledge of supervised learning methods for classification and regression.
Introduction to Constraint Satisfaction
Introductory Course
Abstract
Constraint programming is a technology for declarative description and solving of hard combinatorial problems, such as scheduling. It represents one of the closest approaches to the Holy Grail of automated problem solving: the user states the constraints over the problem variables and the system finds an instantiation of variables satisfying the constraints and representing the solution of the problem. The course overviews major constraint satisfaction techniques and shows how they can be used to solve practical problems.
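To illustrate the declarative style described above, here is a minimal sketch (our own toy example, not the course's material): the user states variables, domains and constraints, and a naive backtracking search finds a satisfying instantiation. Real constraint solvers add constraint propagation and much more.

```python
# Minimal sketch of constraint satisfaction by backtracking search.
def backtrack(assignment, variables, domains, constraints):
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
        del assignment[var]
    return None

# Map colouring: adjacent regions must receive different colours.
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}

def different(x, y):
    # The constraint is trivially satisfied until both variables are assigned.
    return lambda a: x not in a or y not in a or a[x] != a[y]

constraints = [different("A", "B"), different("B", "C"), different("A", "C")]
print(backtrack({}, variables, domains, constraints))
```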
Prerequisites
The course expects basic knowledge of foundations of computer science, in particular understanding algorithms and data structures.
Quantification: Predicting Class Frequencies via Supervised Learning
Introductory Course
Abstract
Quantification is the task of training predictors of class frequencies via supervised learning, and is interesting in all applications of classification in which the final goal is not determining to which class individual unlabelled datapoints belong, but estimating the distribution of the unlabelled datapoints across the classes of interest. Example disciplines whose interest in labelling datapoints is at the aggregate level (rather than at the individual level) are the social sciences, market research, ecological modelling, and epidemiology. While quantification may be solved by classifying each unlabelled datapoint and counting how many such datapoints have been labelled with each class, it is now undisputed that this “classify and count” (CC) method yields suboptimal quantification accuracy. As a result, quantification is no longer considered a mere byproduct of classification, and has evolved into a task of its own. The goal of this course is to introduce young researchers to the methods, algorithms, evaluation measures, and evaluation protocols used in the field of quantification.
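The gap between “classify and count” and a corrected quantifier can be seen in a small sketch (our own illustration, not the course's code): ACC, the classic “adjusted classify and count” method, corrects CC using the classifier's estimated true and false positive rates (here estimated, crudely, on the training data; in practice one would use held-out data or cross-validation).

```python
# Minimal sketch: classify-and-count (CC) vs adjusted classify-and-count (ACC).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, weights=[0.7], random_state=0)
X_train, X_unlab, y_train, y_unlab = train_test_split(X, y, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# CC: classify each unlabelled datapoint and count the predicted positives.
cc = clf.predict(X_unlab).mean()

# ACC: correct CC using the classifier's true/false positive rates.
train_pred = clf.predict(X_train)
tpr = train_pred[y_train == 1].mean()
fpr = train_pred[y_train == 0].mean()
acc = np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0)

print(f"true prevalence {y_unlab.mean():.3f}   CC {cc:.3f}   ACC {acc:.3f}")
```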
Prerequisites
The prerequisites of the course are basic knowledge of linear algebra, probability theory, and supervised learning. The course will appeal not only to students whose main interest is machine learning, but also to data science students in general, because of the variety of fields to which quantification can be applied.
Deep Reasoning in AI with Answer Set Programming
Introductory Course
Abstract
Answer Set Programming (ASP) is a logic-based Knowledge Representation and Reasoning (KRR) paradigm that eases the fast prototyping of algorithms for complex problems. Indeed, ASP finds a natural application in solving Deep Reasoning problems characterized by search spaces of exponential size, which is the typical case for combinatorial search and combinatorial optimization. However, while taking one’s first steps in ASP is easy, becoming proficient with the most advanced linguistic constructs and scaling to realistic-size instances is not necessarily a walk in the park. In this course we will show how to use ASP at several levels, from the basic use of ASP systems for computing answer sets of an ASP program, to more sophisticated use cases in which ASP itself is just one wheel (albeit a crucial one) in broader and more complex machinery. We will also devote space to ASP internals, secure coding and explainability concerns (XAI).
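As a taste of the basic use of an ASP system from Python, the following sketch (our own illustration, assuming the clingo Python package is installed) enumerates the answer sets of a small graph-colouring program; the program and names are made up for this example.

```python
# Minimal sketch: computing answer sets of a toy graph-colouring program
# with the clingo Python API (pip install clingo).
import clingo

program = """
node(1..3). edge(1,2). edge(2,3). edge(1,3).
colour(red; green; blue).
% Guess exactly one colour per node, forbid equal colours on an edge.
1 { assign(N,C) : colour(C) } 1 :- node(N).
:- edge(N,M), assign(N,C), assign(M,C).
"""

ctl = clingo.Control(["0"])                 # "0": enumerate all answer sets
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda model: print("Answer set:", model))
```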
Prerequisites
The course targets students, young researchers, and other non-specialists with a background in computer programming. A basic understanding of Python and SQL code is expected, along with some very basic knowledge of complexity theory, first-order logic, and Boolean satisfiability.
Formal Aspects of Strategic Reasoning and Game Playing
Introductory Course
Abstract
Strategic reasoning is an active research area in Multi-Agent Systems (MAS). Theoretical results in this area are now being used in many exciting domains, including software tools for information system security, robot teams with sophisticated adaptive strategies, and mechanism design. In recent years, there has been an extensive and fruitful effort to provide cross-fertilization and build a bridge between formal methods, Game Theory, and logics for MAS. Recent results show that this research field provides the right tools to answer fundamental questions and to develop theoretical paradigms and practical tools to help design correct systems. This course presents important background and novel techniques on formal methods and logics for modeling and verifying strategic ability and game-playing in MAS. We begin with relatively simple approaches and progressively move to more sophisticated ones that deal with open systems, strategic reasoning, imperfect information, and quantitative goals.
Prerequisites
The course is self-contained, and the participants are only expected to have basic background in logic.
Integrated Knowledge-based and Data-driven Reasoning, Control, and Learning in Robotics: Robustness, Rationality and Explainable Agency
Advanced Course
Abstract
This advanced course seeks to bring participants to the state of the art in integrated systems that sense and interact with the physical world using knowledge-based and data-driven methods for reasoning, control, and learning. In particular, we will explore systems that support: (a) reasoning with prior commonsense domain knowledge and learned models based on non-monotonic logics; (b) leveraging generic knowledge and specific observations using machine learning methods and foundation models; (c) robust and efficient decision-making based on heuristic methods that prioritize adaptive satisficing over optimization; and (d) a methodology to establish that the system’s behavior satisfies desired properties, and to provide on-demand relational descriptions as explanations in response to different types of questions. We will use practical examples drawn from robotics, computer vision, and multiagent systems to ground these concepts and discuss how the interplay between representation, reasoning, control, and learning can help address the underlying fundamental challenges.
Prerequisites
This is an advanced course that seeks to bring participants to the state of the art in a cutting edge research area. It is appropriate for senior-level undergraduate students and postgraduate students who satisfy the following prerequisites: (i) basic proficiency in mathematical logic and logic-based formalisms (e.g., first-order logic), (ii) proficiency in core mathematical concepts (e.g., linear algebra, calculus, probability theory) and (iii) basic understanding of machine learning concepts and some related algorithms.
Unlocking Data Insights: Introduction to Data-Centric AI
Introductory Course
Abstract
Artificial Intelligence (AI) has historically relied on data and algorithms. However, the traditional model-centric AI paradigm has mainly treated data as a static entity: data is collected first, pre-processed, and then kept fixed, while most of the development time is spent optimizing the learned models. This approach has resulted in increasingly complex and opaque models that demand large amounts of training data. In contrast, the emerging data-centric AI is dedicated to systematically and algorithmically generating optimal data to feed Machine Learning (ML) models. The primary objective of data-centric AI approaches is to continually enhance data quality, enabling a level of model accuracy previously deemed unattainable through model-centric techniques alone. This course aims to delve into the key concepts and open challenges of data-centric AI and its impact on the future of AI and ML, providing an overview of the techniques and solutions in the data-centric paradigm.
Prerequisites
Basic understanding of key data science concepts, including exploratory data analysis, data visualization and data understanding. Familiarity with basic machine learning concepts can be beneficial, but it may depend on the focus of the course.
Logic-based specification and verification of multi-agent systems
Advanced Course
Abstract
Formal, logic-based approaches and methods are becoming increasingly popular and important in the modeling and analysis of the performance of agents and coalitions in multi-agent systems (MAS). In this course I will first present and discuss some of the most popular and useful logical frameworks for modelling and specification of MAS, including the alternating-time temporal logic ATL and some variations and extensions of it. I will then discuss models with perfect and imperfect information as well as the role of agents’ memory. I will then present some specific topics of recent and ongoing research, including MAS with quantitative and qualitative objectives, homogeneous dynamic MAS, and dynamic resource allocation problems in MAS modelled as generalized dining philosophers games. The emphasis of the course will be on understanding the logic-based approach for formal specification of MAS and the associated algorithmic model checking methods for formal verification and strategy synthesis.
Prerequisites
The course is intended for an audience with some background in (classical) formal logic; basic knowledge of modal and temporal logics; a general idea of formal specification and verification; and a general appreciation of logic-based approaches to modelling and analyzing multi-agent scenarios. The intended level is advanced master or doctoral, but it can be adjusted downwards, depending on the actual audience. The course should appeal to a wide audience of students and researchers.
Introduction to computational argumentation semantics
Introductory Course
Abstract
Argumentation is a process of constructing arguments and attacks between them. It has recently drawn the attention of numerous scholars thanks to its features and applications. This course starts with a general introduction to argumentation theory. Then, we study the main motivation and ideas behind extension-based semantics and labellings. After that, we review the quantitative and qualitative approaches for evaluating individual arguments in an argumentation framework. We analyse and compare all those approaches using principles from the literature. We conclude the course by providing an overview of multi-disciplinary questions, open problems, possibilities and future challenges.
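As a small concrete illustration of an extension-based semantics (our own sketch, not the course's code), the grounded extension of an abstract argumentation framework can be computed as the least fixed point of the characteristic function F(S) = {a : S defends a}.

```python
# Minimal sketch: grounded extension of an abstract argumentation framework.
def grounded_extension(arguments, attacks):
    def defends(S, a):
        # Every attacker of a must itself be attacked by some member of S.
        return all(any((c, b) in attacks for c in S)
                   for (b, target) in attacks if target == a)

    S = set()
    while True:
        new_S = {a for a in arguments if defends(S, a)}
        if new_S == S:
            return S
        S = new_S

# Example: a attacks b, b attacks c.  Grounded extension: {a, c}.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```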
Prerequisites
The course is self-contained and accessible to any student familiar with the general field of Knowledge Representation and Reasoning. Basic familiarity with propositional logic might be useful for a deeper understanding of some formalisations.
Fairness and Explainability: Models, measurements and mitigation strategies
Advanced Course
Abstract
Algorithmic fairness and explainability are foundational elements for achieving responsible AI. In this course, we will cover core principles of both, putting emphasis on their interplay. We will start with specific application examples of unfair algorithms. Then, we will present the various approaches towards formalising algorithmic fairness as well as several pre-processing, in-processing and post-processing approaches for mitigating unfairness. We will also focus on application- and context-specific fairness models, namely for rankings and recommendations, entity resolution tasks and graphs. In the next part of the course, we will present the fundamental types of explanations and methods for producing them, focusing on those appropriate to be used in the context of fairness. Finally, we will investigate the connections between fairness and explanations, answering two important questions: (a) can we use explanations to model, understand, and mitigate unfairness, and (b) are explanations and the methods that produce them fair?
Prerequisites
This course is intended to be an advanced course targeting primarily graduate students. We expect a foundational understanding of Machine Learning concepts and models, as well as knowledge of algorithms, basic statistics, and probability.
Learning Paradigms for Hybrid Decision-Making
Introductory Course
Abstract
The focus of the course lies on frameworks that combine human and AI components to enable synergistic collaboration in solving a task, namely, making a decision. The interpretation of concepts revolving around the definition of a Hybrid Decision-Making (HDM) process can vary widely across the literature and across multiple perspectives, since both the hybrid and the decision-making terms may convey different meanings with respect to, for instance, the nature and the number of the agents involved, their blending, the domain of the decision-making process, and their impact on people. This course aims to thoroughly analyze possible variants of HDM from a conceptual and, especially, technical point of view. This will involve describing and contextualizing the distinctive features of HDM systems. The first goal of the course is to present a primer of learning paradigms for HDM and provide an overview of the ongoing research in this emerging field along three dimensions: Human Oversight, Learning to Abstain, and Learning Together. The second goal is to promote discussion of the limitations of current paradigms and their potential for fostering synergistic collaboration between humans and Artificial Intelligence (AI).
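As a minimal illustration of the “Learning to Abstain” dimension (our own sketch, with a placeholder dataset and classifier, not the course's code), a model can defer to a human whenever its confidence falls below a threshold, trading coverage for accuracy.

```python
# Minimal sketch: selective prediction with abstention below a confidence threshold.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, flip_y=0.15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

confidence = clf.predict_proba(X_test).max(axis=1)   # model confidence per instance
for tau in (0.5, 0.7, 0.9):
    keep = confidence >= tau                          # below tau: abstain / defer to a human
    acc = (clf.predict(X_test[keep]) == y_test[keep]).mean()
    print(f"threshold {tau}: coverage {keep.mean():.2f}, accuracy on accepted {acc:.2f}")
```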
Prerequisites
The target audience is assumed to have the following knowledge: machine learning principles, AI systems design, mathematical analysis and optimization. Optionally, a background in cognitive science, explainable AI and human-machine interaction is helpful. The course will touch on introductory and advanced concepts on these topics. Nevertheless, a concise summary of the necessary prerequisite knowledge will be provided.
The Legislation Game: Introduction to Legal Issues in Artificial Intelligence and Large Language Models
Introductory Course
Abstract
This course will introduce the participants to the dynamic legal discourse surrounding Artificial Intelligence (AI), with a particular focus on Large Language Models (LLM). Through a blend of interactive theoretical discussions and case studies from EU and US law, the audience will discover how legal frameworks such as Intellectual Property (and especially Copyright), Data Protection or Civil Liability (Tort Law) are affected by technological developments, and to what extent AI and LLMs are already regulated by law. An important part of the course will be devoted to the most recent developments in EU law, such as the Data Act and the long-awaited Artificial Intelligence Act. The course is intended for people with no background in law, but lawyers who want to learn more about the subject are also welcome.
Prerequisites
There are no prerequisites. Multiple legal frameworks will be discussed, making the course appealing to students without any background in legal studies, as well as to those with legal training who wish to broaden their perspective on AI-related issues.
From Quantity to Quality: The Role of Large Datasets in Language AI Evolution
Introductory Course
*** Cancelled due to personal difficulties of the lecturer ***
Abstract
This talk will explore the transformative impact of large pre-training datasets on the evolution and efficiency of Large Language Models (LLMs). By examining a series of breakthrough models, such as T5, BERT, and the GPT series through GPT-3, GPT-J, LLaMA, and Falcon, the presentation aims to highlight how the scale and quality of datasets have been pivotal in enhancing language capabilities. It will delve into various architectures, from Encoder-Decoder and Encoder-only to Decoder-only frameworks, and discuss the critical role of foundational datasets like C4, BooksCorpus, WebText, the Pile, and the RefinedWeb. These datasets underscore technological milestones and the importance of data diversity, volume, and quality in achieving significant advances in model performance. The talk will provide insights into the interplay between data and algorithmic innovation, charting the journey of LLMs towards more sophisticated language understanding and generation capabilities.
Prerequisites
This introductory course is designed for students, young researchers, and professionals with a basic understanding of machine learning and artificial intelligence principles. No advanced expertise in deep learning or natural language processing is required; however, participants should be familiar with fundamental concepts in AI, such as neural networks and basic data handling techniques. A general curiosity about the workings of language models and enthusiasm for exploring the impact of large datasets in AI development will enrich the learning experience. The course aims to provide a comprehensive foundation, enabling attendees to grasp the complexities of Large Language Models (LLMs) and their underlying technologies, thus fostering a sound understanding suitable for interdisciplinary applications and further study in the field.
Game-Theoretic Approach to Temporal Synthesis
Advanced Course
Abstract
This course introduces AI reactive synthesis for tasks (goals) expressed over finite traces instead of states. Drawing upon the methodologies of Formal Methods, we will consider tasks and environment specifications expressed in LTL and its finite-trace variant LTLf. We will review the main results and algorithmic techniques for solving reactive synthesis. Then, we will draw connections with their game-theoretic solution techniques. The main catch is that working with these logics can be based on devising suitable 2-player games and finding strategies, i.e., plans, to win them. We will cover the following topics: Games on Graphs, Temporal Logics, LTL, LTLf, game-theoretic techniques for LTLf objectives, and Reactive Synthesis. This course is partially based on the work carried out in ERC Advanced Grant WhiteMech, EU ICT-48 TAILOR, and the PNRR MUR project PE0000013-FAIR.
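To hint at the game-theoretic machinery involved, the sketch below (our own illustration, not the course's material) computes the attractor of a target set in a two-player game on a graph, the basic step behind winning strategies for reachability objectives such as those obtained from LTLf goals via automata; the example game is made up.

```python
# Minimal sketch: attractor computation in a two-player game on a graph.
def attractor(nodes, edges, player_of, target):
    """Nodes from which player 0 can force a visit to the target set."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            succs = {u for (w, u) in edges if w == v}
            if player_of[v] == 0 and succs & attr:
                attr.add(v)                       # player 0 can choose a move into attr
                changed = True
            elif player_of[v] == 1 and succs and succs <= attr:
                attr.add(v)                       # player 1 cannot avoid attr
                changed = True
    return attr

nodes = {"s0", "s1", "s2", "goal"}
edges = {("s0", "s1"), ("s0", "s2"), ("s1", "goal"), ("s2", "s0"), ("goal", "goal")}
player_of = {"s0": 1, "s1": 0, "s2": 0, "goal": 0}
print(attractor(nodes, edges, player_of, {"goal"}))   # {'goal', 's1'}
```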
Prerequisites
The course targets Graduate and PhD Students who wish to acquire knowledge on current research from Formal Methods for Artificial Intelligence. The attendees are expected to have a basic understanding of Discrete Mathematics, Graph Theory, Logics, and Automata.
Universal Models and the Chase Procedure
Advanced Course
Abstract
Many database problems that involve rule-like constraints, e.g., containment of queries under constraints, checking logical implication of constraints, computing data exchange solutions, and ontological query answering, to name a few, can be solved by simply exhibiting a universal model of the given database D and set of constraints Σ. Intuitively, a universal model is a representative of all the models of D and Σ. A fundamental tool that allows us to build universal models is the so-called chase procedure, which takes as input a database D and a set of rule-like constraints Σ, and adds new tuples to D as dictated by the rules of Σ. After discussing the central notion of universal model, we will introduce the main variants of the chase procedure (oblivious and restricted), focussing on tuple-generating dependencies, discuss their key differences, and show that they indeed build universal models. We will then focus on the challenge of non-termination of the chase procedure under tuple-generating dependencies. In particular, we will discuss the problem of deciding whether the chase procedure terminates, and also introduce the core chase, a sophisticated variant of the chase procedure that is complete for computing finite universal models.
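A minimal sketch (our own illustration, not the course's code) of oblivious chase steps for the single tuple-generating dependency person(X) → ∃Y. parent(X,Y), person(Y): fresh labelled nulls stand in for the existential variable, and iterating this particular TGD forever also illustrates the non-termination issue discussed above.

```python
# Minimal sketch: a few rounds of the oblivious chase for one TGD.
from itertools import count

fresh = (f"_N{i}" for i in count())          # generator of fresh labelled nulls

def chase(db, rounds=3):
    fired = set()                            # triggers already fired (each fires exactly once)
    for _ in range(rounds):                  # bounded here; the real chase may never terminate
        new = set()
        for fact in db:
            if fact[0] == "person" and fact not in fired:
                fired.add(fact)
                x = fact[1]
                y = next(fresh)              # fresh labelled null for Y
                new |= {("parent", x, y), ("person", y)}
        db = db | new
    return db

print(sorted(chase({("person", "alice")})))
```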
Prerequisites
The proposed course is intended for graduate students with working knowledge of first-order logic and computability theory (e.g., decidability and undecidability concepts and techniques). Rudimentary knowledge of databases, ontologies, and description logics is desirable but not necessary. The course will appeal to graduate students who are interested in the broader area of knowledge representation and reasoning. There is no single reference containing the topics that will be covered in this course. Prior to the beginning of the course, the instructors will provide a comprehensive set of references and also prepare a set of lecture notes that will be distributed to the enrolled students at the beginning of the course.
Harnessing Scientific AI for Knowledge Discovery in the Open Research Knowledge Graph
Advanced Course
Abstract
This course delves into the transformative potential of the Open Research Knowledge Graph (ORKG) in redefining scholarly communication through structured descriptions of research contributions and comparative analysis based on salient research properties. By integrating scientific AI, specifically domain-specific information extraction services and Large Language Models (LLMs), the ORKG facilitates the creation of semantically rich, machine-readable data that enhances research discoverability and comparison. We explore how we intertwine search engines, LLMs and knowledge to provide researchers with better-targeted information extraction from scholarly articles. Furthermore, we discuss how the extracted data is integrated into the ORKG by leveraging human knowledge to ensure the accuracy and relevance of the information added to the graph. This session aims to equip researchers with the knowledge to leverage these advancements for effective knowledge discovery and management in the digital age of scholarly research.
Prerequisites
This advanced course is tailored for graduate students, researchers, and professionals who have a foundational understanding of Artificial Intelligence (AI), machine learning, and data science principles, as well as a basic familiarity with the challenges and processes of scholarly communication. Participants should possess a working knowledge of key AI concepts and techniques, such as natural language processing and information extraction, to fully engage with the course's content. While a basic understanding of semantic web technologies and knowledge graphs is beneficial, it is not a strict prerequisite. Prior exposure to or experience with scholarly databases, digital libraries, or research management systems will enhance the learning experience but is not required for participation.
Explainable AI via Argumentation: Theory & Practice
Introductory Course
Abstract
Explanations play a central role in AI, either in providing some form of transparency to black-box machine learning systems or, more generally, in supporting the results of an AI system in order to help users understand, accept and trust the operation of the system. The course will present how Argumentation can serve as a basis for Explainable AI (XAI) and how this can be applied to Decision Making and Machine Learning for AI applications. It will present the role and basic quality requirements of explanations of AI systems and how these can be met in argumentation-based systems. It will cover the necessary theory of argumentation, a software methodology for argumentation-based explainable systems and the use of practical tools in argumentation for realizing such systems. Students will gain hands-on experience in using these tools and in developing a realistic XAI decision-making system.
Prerequisites
The course requires general background on Computing and AI.
Self-Governing Multi-Agent Systems
Introductory Course
Abstract
Self-organising multi-agent systems are a powerful engineering paradigm for developing cyber-physical systems using distributed computational intelligence. Applications include, amongst others, smart grids, ad hoc networks, cloud computing, and information processing. However, basing agent behaviour on voluntary compliance with conventional rules that can be re-negotiated and modified at run-time produces several challenges in self-governance. These include knowledge management, enforceability, social influence, collective action, and sustainability. There are also some well-known political problems, such as the “iron law of oligarchy”, the “paradox of self-amendment”, and path dependency, to be addressed. In this inter-disciplinary course, students will learn the foundations of self-organising multi-agent systems, and understand deep issues concerning self-governance and ‘democracy’. They will be able to apply concepts and theories from philosophy, psychology and political science, in order to specify learning and reasoning algorithms, and to code social simulators. The implications for self-governing socio-technical systems combining human and computational intelligence are also discussed.
Prerequisites
The expected level of attendees would be graduate students in whichever year, and post-doctoral research assistants. There are no prerequisites, although some background in the basics of Artificial Intelligence (knowledge representation in computational logic, algorithms for machine reasoning and machine learning) would be helpful, but is not essential. Some background in distributed systems would also be helpful, but again is not essential.
Multi-Agent Systems and Evolution
Introductory Course
Abstract
Evolutionary Game Theory (EGT) provides an important framework to study behavior in multi-agent systems. It combines ideas from evolutionary biology and population dynamics with the game-theoretical modeling of strategic interactions. EGT offers a powerful mechanism to characterize not only behavior at equilibrium, but also the dynamics of the population before an equilibrium is reached. Its importance is highlighted by the numerous high-level publications that have enriched different fields, ranging from biology and the social sciences to AI, over many decades. In this course we introduce the main concepts of the field and how they can be used to study the emergence of cooperation in hybrid human-AI interactions, in both unstructured (well-mixed) and structured populations.
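As a small illustration of population dynamics before equilibrium (our own sketch, not the course's material), the replicator equation for a two-strategy donation game shows the share of cooperators declining in a well-mixed population; the benefit and cost values are arbitrary.

```python
# Minimal sketch: replicator dynamics for cooperate (C) vs defect (D).
import numpy as np

b, c = 3.0, 1.0                                  # benefit and cost of cooperation
payoff = np.array([[b - c, -c],                  # row: C against (C, D)
                   [b,      0.0]])               # row: D against (C, D)

x = np.array([0.6, 0.4])                         # initial frequencies of C and D
dt = 0.01
for _ in range(1000):
    fitness = payoff @ x                         # expected payoff of each strategy
    avg = x @ fitness                            # average payoff in the population
    x = x + dt * x * (fitness - avg)             # replicator equation: dx_i = x_i (f_i - avg)

print(f"final frequencies: C = {x[0]:.3f}, D = {x[1]:.3f}")   # defection takes over when well-mixed
```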
Prerequisites
Basic knowledge of dynamical systems, complex networks, and Markov chains.
Logic-Based Explainable Artificial Intelligence
Advanced Course
Abstract
The last decade witnessed remarkable advances in machine learning (ML) that are having far-reaching societal impact. By all accounts, such impact is expected to become even more prominent in the future. Nevertheless, a threat to the widespread deployment of ML models is their complexity. Human decision-makers are unable to fathom the decisions made by complex ML models. This not only makes debugging a challenge, but it is also a source of distrust in those models. Explainable artificial intelligence (XAI) aims to help human decision-makers understand the decisions made by ML models. However, the best-known XAI approaches do not provide formal guarantees of rigor, and this can cast distrust instead of building trust. As a result, recent years have witnessed the emergence of formal approaches for explaining the operation of ML models, referred to as formal explainability in AI (FXAI). The explanations obtained with FXAI are logic-based and offer guarantees of rigor that are unmatched by other XAI approaches. This course offers an in-depth contact with the underpinnings of formal explainability in AI.
Prerequisites
The course targets researchers interested in artificial intelligence, machine learning, and in formal approaches for explaining decision making. The course also targets junior and senior graduate students working on artificial intelligence. The course is mostly self-contained, but some basic contact with symbolic AI will be a plus.
Machines Climbing Pearl’s Ladder of Causation
Introductory Course
Abstract
Following the success of the “Machines Climbing Pearl’s Ladder of Causation” course in its inaugural form at ESSAI 2023 in Ljubljana (Slovenia), we are eager to let said machines climb once again towards the promises of causal AI in a 2nd edition at ESSAI 2024 in Athens (Greece). Deep learning, artificial intelligence’s primary engine, has several issues: it is data-hungry and lacks interpretability and explainability. A principled approach to overcoming these weaknesses is causal modeling and inference, a mathematical framework well aligned with human-like cognition. In this course, we will show how causality can help machine learning models ascend the ladder of causation, moving beyond the mere identification of statistical associations (rung 1 inferences) to provide more insightful and valuable interventional and counterfactual explanations (rung 2 and 3 inferences). We will start with the very basic notions of causality, then move to the question of how to discover and represent causal knowledge in machine learning models. After covering the identification and estimation of causal effects, we will present the current state of research in causality, concluding with a hands-on session where participants can take a practical deep dive into causal models.
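A tiny numerical sketch (our own illustration, with arbitrary probabilities) of the step from rung 1 to rung 2: simulate a confounded system Z → X, Z → Y, X → Y and recover the effect of an intervention do(X) by back-door adjustment over Z.

```python
# Minimal sketch: association (rung 1) vs intervention (rung 2) via back-door adjustment.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.binomial(1, 0.5, n)                      # confounder
x = rng.binomial(1, 0.2 + 0.6 * z)               # treatment depends on Z
y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * z)     # outcome depends on X and Z (true effect of X: 0.3)

# Rung 1: the plain associational difference is biased by the confounder Z.
assoc = y[x == 1].mean() - y[x == 0].mean()

# Rung 2: back-door adjustment  P(Y=1 | do(X=x)) = sum_z P(Y=1 | X=x, Z=z) P(Z=z).
def p_y_do(x_val):
    return sum(y[(x == x_val) & (z == zv)].mean() * (z == zv).mean() for zv in (0, 1))

print(f"associational difference: {assoc:.3f}")
print(f"causal effect of do(X=1) vs do(X=0): {p_y_do(1) - p_y_do(0):.3f}")   # close to 0.3
```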
Prerequisites
We expect the students to have basic knowledge of probability theory, including the axioms of probability, conditional independence, and Bayes’ Rule. Occasionally, topics from statistics and machine learning will be covered in the course, so some familiarity with those will be helpful, but is not necessary.
AI Governance in Europe: Navigating the AI Act, Establishing AI Offices and Upholding European Principles
Introductory Course
Abstract
This course aims to provide a journey to the heart of AI governance in Europe. Tailored for non-specialists, this program introduces participants to essential legal and regulatory insights while focusing on groundbreaking EU developments, notably the AI Act and the impending launch of the European AI Office. The key objective is to foster a nuanced understanding of the regulation of AI technologies and of the special regimes and bans of the AI Act, while also covering international principles on AI ethics and modern European principles on AI, such as transparency, trustworthiness and explainability, as well as traditional ones, such as the free movement of goods. As an AI legal expert and President of Rythmisis, the Greek Institute for AI Law, I am well-equipped to guide participants through this transformative learning experience, offering both theoretical insights and actionable strategies for navigating the evolving AI governance landscape. The course’s ultimate goal is to empower the next generation of AI leaders with the knowledge and skills needed to navigate, innovate, and succeed in the European AI landscape in any sector (health, education, commerce, economics, etc.).
Prerequisites
This course is designed for professionals, policymakers, legal experts, and individuals interested in gaining a comprehensive understanding of AI governance in Europe. While a background in law, policy, or related fields is beneficial, there are no specific prerequisites for participants. The course is crafted to accommodate individuals at various stages of their careers, including those with limited prior knowledge of AI regulation.
Agent-Based Simulation in Complex Networks
Introductory Course
Abstract
Agent-based models are a promising approach for dealing with adaptive complex systems, which are characterized by a collective behavior that leads to emergent phenomena. Networks constitute a mathematical framework for studying complex, emergent, and self-organized environments, in which the relations among the participating entities play a central role in the functioning of the community. The aim of this tutorial is to introduce students to the area of complex networks. The first part of the course covers theoretical concepts regarding the structure and dynamics of the best-known network models and the scale-free and small-world phenomena. The second part is a practical one in which these models will be implemented using NetLogo and Python.
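For a flavour of the practical part, the following sketch (our own illustration, assuming the networkx package; the parameters are arbitrary) generates the two classic models mentioned above: a Watts–Strogatz small-world network and a Barabási–Albert scale-free network.

```python
# Minimal sketch: generating small-world and scale-free networks with NetworkX.
import networkx as nx

small_world = nx.watts_strogatz_graph(n=1000, k=6, p=0.05, seed=1)   # ring lattice with rewiring
scale_free = nx.barabasi_albert_graph(n=1000, m=3, seed=1)           # preferential attachment

print("small world: clustering =", round(nx.average_clustering(small_world), 3),
      "| avg. path length =", round(nx.average_shortest_path_length(small_world), 2))
print("scale free: max degree =", max(dict(scale_free.degree()).values()))
```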
Prerequisites
The course is appropriate for graduates of degrees with basic programming and mathematical competences (computer science, engineering, physics, or mathematics, among others). Knowledge of programming (elementary Python skills are recommended), classic graph theory, and basic statistics is recommended.
Large Language Models, Societal Harms, and their Mitigation
Introductory Course
Abstract
Numerous recent studies have highlighted societal harms that can be caused by language technologies deployed in the wild. In fact, several surveys, tutorials, and workshops have discussed the risks of harms in specific contexts (e.g., detecting and mitigating gender bias in NLP models). This course will present a unified typology of technical approaches for mitigating harms of language generation models. The course is based on an extensive survey that proposes such a typology. The course will provide an overview of potential social issues in language generation, including toxicity, social biases, misinformation, factual inconsistency, and privacy violations. Our primary focus will be on how to systematically identify risks, and how to eliminate them at various stages of model development, from data collection, to model development, to inference/language generation. Through this tutorial, we aim to equip AI and NLP researchers and engineers with a suite of practical tools for mitigating safety risks from pretrained language generation models.
Prerequisites
We expect that students at the MSc and PhD level who have some familiarity with basic machine learning and NLP techniques will be able to easily follow the tutorial, so these are the minimum prerequisites. Of course, some familiarity with LLMs will certainly help (which we assume will be almost a given, as most students will have interacted with popular ones like ChatGPT at some point).
Algorithms for Causal Probabilistic Graphical Models
Introductory Course
Abstract
This course will cover methods for reasoning about Causal Probabilistic Graphical Models (e.g., Causal Bayesian Networks, influence diagrams). We will cover methods for evaluating or approximating probabilistic queries in these models (e.g., conditional probabilities, causal effects, etc.), primarily from the perspective of a fully known causal model, but will also address issues of estimating quantities from observational data. In particular, we will begin with the formalisms of Bayesian Networks, Causal Structural Diagrams, and Influence Diagrams, and the process and computational complexity of exact inference by variable elimination. We will then discuss the major frameworks for answering or estimating probabilistic queries, including conditioning-based methods (e.g., cycle-cutset conditioning and AND/OR search), optimization-based methods (variational approximations and decomposition bounds), and sampling methods (Monte Carlo and MCMC). Finally, we will discuss estimating causal queries using observational data, including the widely-used class of estimand methods, and model learning-based approaches.
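A minimal sketch (our own illustration, with arbitrary conditional probability tables) of exact inference by variable elimination on the three-node chain A → B → C, computing P(C).

```python
# Minimal sketch: variable elimination on the chain A -> B -> C.
import numpy as np

p_a = np.array([0.6, 0.4])                       # P(A)
p_b_given_a = np.array([[0.9, 0.1],              # P(B | A), rows indexed by A
                        [0.2, 0.8]])
p_c_given_b = np.array([[0.7, 0.3],              # P(C | B), rows indexed by B
                        [0.1, 0.9]])

phi_b = p_a @ p_b_given_a                        # eliminate A: factor over B
p_c = phi_b @ p_c_given_b                        # eliminate B: distribution over C
print("P(C) =", p_c)
```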
Prerequisites
Familiarity with basic concepts of probability theory, and knowledge of basic computer science, algorithms, and programming principles.
Neural-symbolic Knowledge Representation and Reasoning
Introductory Course
Abstract
Ontologies and Knowledge Graphs (KGs) have been extensively explored as means of symbolic knowledge representation and reasoning (KRR), leading to a wide range of insights, formalisms, tools, and applications in information systems, AI, life sciences and so on. More recently, ontology and KG embeddings have been designed and investigated: they aim to represent entities (concepts, relationships) in a sub-symbolic (vector) space while preserving (some of) their semantics, and promise to transfer the encapsulated knowledge to deep learning models and to enable the integration of statistical methods. In this course, we will mainly (i) introduce symbolic KRR using ontologies and KGs with technologies such as Resource Description Framework (RDF), RDF Schema (RDFS), Web Ontology Language (OWL) and Description Logic, and (ii) present the development of ontology and KG embeddings with their concepts, methods and usage in knowledge engineering, for augmenting machine learning, and for addressing link prediction problems.
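To give a flavour of KG embeddings, the following sketch (our own illustration, with untrained random vectors and made-up entity names) shows the idea behind translational models such as TransE: a triple (h, r, t) is scored by how close the translated head h + r lies to the tail t.

```python
# Minimal sketch: TransE-style scoring of knowledge graph triples.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entities = {e: rng.normal(size=dim) for e in ["athens", "greece", "paris", "france"]}
relations = {"capital_of": rng.normal(size=dim)}

def score(h, r, t):
    # Higher (less negative) is better: plausible triples should satisfy h + r ≈ t.
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

# With untrained vectors the scores are arbitrary; training would pull
# plausible triples towards higher scores.
print(score("athens", "capital_of", "greece"))
```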
Prerequisites
This course is suitable for AI students of different levels, but is especially developed for (i) those who are keen to learn about KRR with ontologies, including Web Ontology Language and Description Logic, (ii) those who have a knowledge of ontologies but want to learn about their integration with machine learning, and (iii) those who have a knowledge of semantic embeddings such as word embeddings and knowledge graph embeddings but want to extend their understanding to more complex semantics. This course will appeal to students outside of its main discipline KRR, such as students from machine learning and from AI for sciences. It is expected that the students already have some basic knowledge of AI foundations including machine learning and logic in computer science.
Probabilistic Circuits: Tractable Representations for Learning and Reasoning
Advanced Course
Abstract
Decision making in the real world involves reasoning in the presence of uncertainty, calling for a probabilistic approach. Often, these reasoning processes can be rather complex and involve background knowledge given by logical or arithmetic constraints. Moreover, in sensitive domains like health-care and economic decision making, the results of these queries are required to be of high quality, as approximations without guarantees would make the decision making process brittle. Despite all their recent successes, deep probabilistic models, such as VAEs, normalizing flows and diffusion models, are intractable and fall short of the above requirements. This Advanced Course is the continuation of the successful and well-received Introductory Course “Probabilistic Circuits: Deep Probabilistic Models with Reliable Reasoning” presented at ESSAI’23 in Ljubljana, and will provide in-depth knowledge and skills to address the above challenges with the powerful framework of probabilistic circuits (PCs). PCs have emerged as a “lingua franca” of tractable probabilistic modeling and have strong connections to a wide field of AI disciplines such as logical reasoning, deep learning and other machine learning techniques. Capitalising on these connections in a wide scope is the central goal of the Advanced Course. After an introduction and recap of the PC framework, we will teach algorithms and implementation techniques to actively work with PCs in research and applications. We will then discuss a wide range of hybrid techniques combining PCs with deep learning models and mixing exact and approximate inference. Further, we will introduce PCs as an excellent tool to implement neurosymbolic systems, as PCs are at an interesting intersection between neural networks, logic and probability. Finally, we will highlight further connections between probabilistic circuits, knowledge graphs, decision trees, random forests, causal models, and databases. After this course, students will have in-depth knowledge of algorithmic details concerning PCs and a good overview of the role of PCs in the field of AI and computer science.
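A tiny hand-built sketch (our own illustration, not the course's material) of a probabilistic circuit over two binary variables: a sum node over two product nodes with Bernoulli leaves, where any marginal is computed exactly in a single bottom-up pass by setting marginalised leaves to 1.

```python
# Minimal sketch: a smooth, decomposable probabilistic circuit over X1, X2.
def leaf(p, value):
    # Bernoulli leaf; value None means the variable is marginalised out.
    return 1.0 if value is None else (p if value == 1 else 1.0 - p)

def circuit(x1, x2):
    # Sum node with weights 0.3 / 0.7 over two product nodes.
    left = leaf(0.9, x1) * leaf(0.2, x2)
    right = leaf(0.1, x1) * leaf(0.8, x2)
    return 0.3 * left + 0.7 * right

print("P(X1=1, X2=1) =", circuit(1, 1))
print("P(X1=1)       =", circuit(1, None))   # exact marginal, same single pass
```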
Prerequisites
We require basic statistics and probability theory as is typically covered in Bachelor degrees, covering topics such as maximum likelihood estimation, parametric families, Monte Carlo, etc. Ideally, the assumed academic level of our audience would be PhD students who are already acquainted with probabilistic machine learning, Bayesian methods, or, as this is an advanced course, probabilistic circuits. Less experienced students might first watch (parts of) our 5-day introductory course recorded at ESSAI’23. Furthermore, advanced AI experts who wish to get a quick introduction to PCs will also benefit from this course. At any rate, we are happy to synchronize and homogenize our notation with other courses at ESSAI, so as to facilitate learning for students across courses.
Practical AI for Autonomous Robots
Introductory Course
Abstract
Creating artificially intelligent autonomous robots poses unique challenges in transferring AI techniques to a practical, real-world, real-time domain. The software must handle the limited computation power of autonomous robots along with the uncertainty and noise produced by their sensors and actuators. This software must integrate algorithms at multiple levels of abstraction, from the low-level information of the sensors to high-level reasoning. This course revises the very successful 2023 ESSAI iteration and focuses on the design and development of practical AI software architectures for autonomous robotic systems, localisation, mapping, vision and audio processing, and task planning. This course will feature both theoretical aspects of AI in the robotics domain and practical experiments on a simulated robot platform.
Prerequisites
Participants are expected to have high-level familiarity with common techniques in Artificial Intelligence such as: (i) Basic search such as A*; (ii) state-action planning; and (iii) computer vision. Familiarity with Python programming is beneficial. More in-depth understanding of AI techniques and algorithms is beneficial but not required. The course is designed for students with knowledge in AI, but without any knowledge in the field of robotics.
Tensor Computations for Machine Learning
Advanced Course
*** Cancelled due to visa problems of the lecturer ***