Logic-based Learning (C304)


Logic-based Learning is an area of Artificial Intelligence at the intersection of Knowledge Representation and Machine Learning, concerned with the automated construction of logic-based programs from examples and existing domain knowledge.

This course provides an in-depth understanding of the current state of the art in logic-based learning, starting from its key foundational concepts and principles and moving to the most recent advances, with particular emphasis on the different algorithms, their pros and cons, and the available systems, together with an introduction to successful applications and open research challenges.

The course will be taught by Alessandra Russo and Mark Law.

The course is offered to 3rd-year undergraduates and all Master's students, including MRes students.

Lecture notes and tutorial exercises are available on CATE.

Course Aims

During this course the students will:

  • acquire knowledge of the main foundations and principles of logic-based learning;
  • become familiar with state-of-the-art algorithms and develop basic skills for formalising a logic-based learning task to solve a given learning problem;
  • explore how probabilistic and logic-based learning can be integrated;
  • understand the applicability of these algorithms by considering examples of real-world case studies in different application domains.

Learning Outcomes

Knowledge and Understanding

Upon successful completion of this course, students will have developed an in-depth knowledge of:

  • basic concepts of three classes of logic-based reasoning: deduction, abduction and induction;
  • fundamental characteristics and principles of logic-based learning approaches: bottom-up, top-down and meta-level learning;
  • basic semantic frameworks for logic-based learning: monotonic, non-monotonic, brave and cautious induction;
  • logic-based learning algorithms and their properties: expressive power, limitations, soundness and completeness;
  • logic-based learning from noisy data;
  • principle of probabilistic inference: distribution semantics and probabilistic logic programming;
  • principle of probabilistic rule learning.
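
As a flavour of the kind of task studied in the course, an inductive learning task pairs background knowledge with positive and negative examples and asks for a hypothesis that, together with the background, entails every positive example and no negative one. The sketch below uses the classic family-relations illustration (the predicates are chosen purely for illustration, not taken from the course material):

```prolog
% Background knowledge B
parent(ann, bob).
parent(bob, carol).

% Positive example E+:  grandparent(ann, carol).
% Negative example E-:  grandparent(bob, ann).

% A hypothesis H that an ILP system could induce, such that
% B together with H entails E+ and does not entail E-:
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
```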

Intellectual and Practical Skills

Upon successful completion of this course, students will have developed a wide range of skills necessary for modelling problem domains in terms of learning tasks. They will have:

  • acquired in-depth familiarity with the current state of the art of logic-based machine learning;
  • acquired in-depth familiarity with several state-of-the-art logic-based learning systems. Students will be able to practise exercises using these systems through the course web portal “ILP Frameworks”.


Students are required to have a basic knowledge of Logic and Prolog.
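
For reference, the assumed Prolog background amounts to writing facts and rules and posing queries; a minimal illustrative fragment:

```prolog
% Facts and a recursive rule defining reachability over edges
edge(a, b).
edge(b, c).
path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).

% Query:  ?- path(a, c).   succeeds via edge(a,b) and path(b,c)
```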

Recommended textbooks and papers


  • Prolog Programming for Artificial Intelligence, Ivan Bratko, Pearson 2012.
  • Logical and Relational Learning, Luc de Raedt, Springer 2008.
  • Answer Set Solving in Practice, Martin Gebser, Roland Kaminski, Benjamin Kaufmann and Torsten Schaub, 2012 (available here).
  • Knowledge Representation, Reasoning and the Design of Intelligent Agents, Michael Gelfond & Yulia Kahl, 2014.


  • The Aleph Manual, Ashwin Srinivasan, University of Oxford (available here).
  • The stable model semantics for logic programming, Michael Gelfond and Vladimir Lifschitz, ICLP/SLP, Vol. 88, 1988 (available here).
  • A User’s Guide to gringo, clasp, clingo, and iclingo, M. Gebser, R. Kaminski, B. Kaufmann, M. Ostrowski, T. Schaub, S. Thiele, 2010 (available here).
  • Hybrid Abductive Inductive Learning: a Generalisation of Progol, O. Ray, K. Broda and A. Russo, ILP 2003 (available here).
  • Refining Complete Hypotheses in ILP, Ivan Bratko, ILP 1999 (available here).
  • Inductive Logic Programming as an Abductive Search, Domenico Corapi, Alessandra Russo and Emil Lupu, ICLP 2010 (available here).
  • Inductive Logic Programming in Answer Set Programming, Domenico Corapi, Alessandra Russo and Emil Lupu, ILP 2011, Lecture Notes in Computer Science Volume 7207, 2012, pp 91-97 (available here).
  • Brave induction: a logical framework for learning from incomplete information, Chiaki Sakama, Katsumi Inoue, Machine Learning July 2009, Volume 76, Issue 1, pp 3-35 (available here).
  • Inductive Learning of Answer Set Programs, Mark Law, Alessandra Russo, Krysia Broda, JELIA 2014, LNAI 8761, pp. 311–325, 2014 (available here).
  • TopLog: ILP using a logic program declarative bias, Stephen Muggleton, José Santos and Alireza Tamaddoni-Nezhad, ICLP 2008 (available here).
  • Inverse Entailment and Progol, Stephen Muggleton, New Generation Computing, 13:245-286, 1995 (available here).
  • Theory Completion using Inverse Entailment, Stephen Muggleton and C.H. Bryant, in Proceedings of the 10th International Workshop on Inductive Logic Programming (ILP 2000), pages 130-146, Springer-Verlag, Berlin, 2000 (available here).
  • Inference and Learning in Probabilistic Logic Programs using Weighted Boolean Formulas, D Fierens et al., TPLP, 15(3), pp. 358-401, 2015 (available here).
  • Probabilistic (Logic) Programming Concepts, Luc De Raedt and Angelika Kimmig, Machine Learning Journal, Vol. 100(1), pp. 5-47, 2015 (available here).
  • Probabilistic Rule Learning, Luc De Raedt and Ingo Thon, ILP Inductive Logic Programming, pp 47-58, 2010 (available here).
  • Learning Logical Definitions from Relations, J.R. Quinlan, Machine Learning Journal 5, 239-266, 1990 (available here).
  • Inducing Probabilistic Relational Rules from Probabilistic Examples, Luc De Raedt et al., IJCAI 2015 (available here).


I am interested in supervising undergraduate and Master's projects, as well as PhD research topics, related to the theory and/or application of logic-based learning.