Logic-based Learning is an area of Artificial Intelligence at the intersection of Knowledge Representation and Machine Learning, concerned with the automated construction of logic-based programs from examples and existing domain knowledge.
This course provides an in-depth understanding of the current state of the art in logic-based learning. It starts from the key foundational concepts and principles and moves to the most recent advances, with particular emphasis on the different algorithms, their pros and cons, available systems, and an introduction to successful applications and open research challenges.
The course is offered to 3rd-year undergraduates and master's students, including MRes students.
Lecture notes and tutorial exercises are available in CATE.
During this course, students will:
- acquire knowledge of the main foundations and principles of logic-based learning;
- become familiar with state-of-the-art algorithms and develop basic skills for formalising a logic-based learning task to solve a given learning problem;
- explore how probabilistic and logic-based learning can be integrated;
- understand the applicability of these algorithms by considering examples of real-world case studies in different application domains.
Knowledge and Understanding
Upon successful completion of this course, students will have developed an in-depth knowledge of:
- basic concepts of three classes of logic-based reasoning: deduction, abduction and induction;
- fundamental characteristics and principles of logic-based learning approaches: bottom-up, top-down and meta-level learning and predicate invention;
- basic semantic frameworks for logic-based learning: monotonic, non-monotonic, brave and cautious induction;
- logic-based learning algorithms and their properties: expressive power, limitations, soundness and completeness;
- principles of probabilistic inference: stochastic models and stochastic logic-based inference;
- scope of applicability of these machine learning techniques.
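The three reasoning modes listed above (deduction, abduction and induction) can be contrasted on a single toy rule. The sketch below is illustrative only and is not taken from the course material; the function names and the simplified single-rule representation are assumptions made for this example.

```python
# Toy contrast of the three reasoning modes around one rule, flies(X) :- bird(X).
# This is an illustrative sketch, not a course algorithm.

def deduce(rule_body, rule_head, facts):
    """Deduction: rule + facts => conclusions (derive the head for matching facts)."""
    return {(rule_head, x) for (pred, x) in facts if pred == rule_body}

def abduce(rule_body, rule_head, observations):
    """Abduction: rule + observation => explanation (hypothesise the body fact)."""
    return {(rule_body, x) for (pred, x) in observations if pred == rule_head}

def induce(facts, examples):
    """Induction: facts + examples => rule (propose head :- body if the body
    fact holds for every individual mentioned in the examples)."""
    bodies = {pred for (pred, _) in facts}
    heads = {pred for (pred, _) in examples}
    return {(h, b) for h in heads for b in bodies
            if all((b, x) in facts for (_, x) in examples)}

facts = {("bird", "tweety"), ("bird", "polly")}
examples = {("flies", "tweety"), ("flies", "polly")}

print(deduce("bird", "flies", facts))                  # flies(tweety), flies(polly)
print(abduce("bird", "flies", {("flies", "tweety")}))  # bird(tweety)
print(induce(facts, examples))                         # flies(X) :- bird(X)
```

Deduction runs the rule forward, abduction runs it backward to explain an observation, and induction generalises the examples into the rule itself, which is the task studied throughout this course.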
Intellectual and Practical Skills
Upon successful completion of this course, students will have developed a wide range of skills necessary for modelling problem domains in terms of learning tasks. They will have:
- acquired in-depth familiarity with several state-of-the-art logic-based learning systems. Students will be able to practice exercises using these systems through the course web portal “ILP Frameworks”;
- set up an experimental context for evaluating one logic-based learning approach against others, e.g. the use of cross-validation and heuristics for improving performance.
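As a concrete picture of the evaluation setup mentioned above, the sketch below shows a minimal k-fold cross-validation loop over labelled examples. It is an illustrative assumption, not course code: the `learner` interface (a function from training data to a predictor) and the trivial majority-class learner are invented for this example.

```python
# Minimal k-fold cross-validation sketch (illustrative; not a course API).

def k_fold_splits(examples, k=5):
    """Yield (train, test) partitions; each example is held out exactly once."""
    folds = [examples[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [e for j, fold in enumerate(folds) if j != i for e in fold]
        yield train, test

def cross_validate(learner, examples, k=5):
    """Mean held-out accuracy of `learner`, a function train -> predict."""
    accuracies = []
    for train, test in k_fold_splits(examples, k):
        predict = learner(train)
        correct = sum(predict(x) == y for x, y in test)
        accuracies.append(correct / len(test))
    return sum(accuracies) / len(accuracies)

def majority_learner(train):
    """Baseline learner: always predict the most frequent training label."""
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority
```

Running two learners through `cross_validate` on the same folds gives the kind of like-for-like comparison this skill refers to; a logic-based learner would simply replace `majority_learner` with a function that induces a hypothesis and returns its coverage test.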
Students are required to have a basic knowledge of Logic and Prolog.
Recommended textbooks and papers
- Prolog Programming for Artificial Intelligence, Ivan Bratko, Pearson 2012.
- Logical and Relational Learning, Luc de Raedt, Springer 2008
- Answer Set Solving in Practice (available here)
- Knowledge Representation, Reasoning and the Design of Intelligent Agents, Michael Gelfond & Yulia Kahl, 2014.
- The Aleph Manual, Ashwin Srinivasan, University of Oxford (available here).
- The stable model semantics for logic programming, Michael Gelfond and Vladimir Lifschitz, ICLP/SLP, Vol. 88, 1988 (available here).
- A User’s Guide to gringo, clasp, clingo, and iclingo, M. Gebser, R. Kaminski, B. Kaufmann, M. Ostrowski, T. Schaub, S. Thiele, 2010 (available here).
- Hybrid Abductive Inductive Learning: a Generalisation of Progol, O. Ray, K. Broda and A. Russo, ILP 2003 (available here).
- Refining Complete Hypotheses in ILP, Ivan Bratko, ILP 1999 (available here).
- Inductive Logic Programming as an Abductive Search, Domenico Corapi, Alessandra Russo and Emil Lupu, ICLP 2010 (available here).
- Inductive Logic Programming in Answer Set Programming, Domenico Corapi, Alessandra Russo and Emil Lupu, ILP 2011, Lecture Notes in Computer Science Volume 7207, 2012, pp 91-97 (available here).
- Brave induction: a logical framework for learning from incomplete information, Chiaki Sakama, Katsumi Inoue, Machine Learning July 2009, Volume 76, Issue 1, pp 3-35 (available here).
- Inductive Learning of Answer Set Programs, Mark Law, Alessandra Russo, Krysia Broda, JELIA 2014, LNAI 8761, pp. 311–325, 2014 (available here).
- TopLog: ILP using a logic program declarative bias, Stephen Muggleton, José Santos and Alireza Tamaddoni-Nezhad, ICLP 2008 (available here).
- Inverse Entailment and Progol, Stephen Muggleton, New Generation Computing, 13:245-286, 1995 (available here).
- Theory Completion using Inverse Entailment, Stephen Muggleton and C.H. Bryant, In Proceedings of the 10th International Workshop on Inductive Logic Programming (ILP 2000), pages 130-146, Springer-Verlag, Berlin, 2000 (available here).
- Meta-interpretive learning: application to grammatical inference, S.H. Muggleton, D. Lin, N. Pahlavi, and A. Tamaddoni-Nezhad, Machine Learning, 94:25-49, 2014 (available here).
- Meta-interpretive learning of Higher-Order Dyadic Datalog, S.H. Muggleton, D. Lin and A. Tamaddoni-Nezhad, Machine Learning Journal, 2014 (available here).
- Bias reformulation for one-shot function induction, D. Lin, E. Dechter, K. Ellis, J. Tenenbaum and S.H. Muggleton, ECAI 2014 (available here).
- Stochastic Logic Programs, Stephen Muggleton, in L. de Raedt, editor, Advances in Inductive Logic Programming, pages 254-264, IOS Press 1996 (available here).
- Metabayes: Bayesian meta-interpretative learning using higher-order stochastic refinement, S.H. Muggleton, D. Lin, J. Chen, and A. Tamaddoni-Nezhad. In Gerson Zaverucha, Vitor Santos Costa, and Aline Marins Paes, editors, Proceedings of the 23rd International Conference on Inductive Logic Programming (ILP 2013), pages 1-17, Berlin, 2014. Springer-Verlag. LNAI 8812 (available here).
I am very interested in supervising undergraduate and master's projects, as well as PhD research topics, related to the theory and/or application of logic-based learning.