Contact: Alessio Lomuscio
Autonomous, multi-agent systems are distributed systems where the
components, or agents, communicate and interact in order to meet their
design objectives. Examples include self-driving cars, robotic modules
for unmanned space exploration, swarms of flying machines, and
autonomous underwater vehicles.
A key concern in the development of safe and reliable autonomous systems
is that they should be correct with respect to the rich specifications
describing their intended behaviours. These typically go beyond plain
reachability, liveness, and safety properties to involve concepts such as
the knowledge of the agents and their joint abilities.
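As an illustration (the atomic propositions below are hypothetical,
chosen only to make the specifications concrete), a temporal-epistemic
property may require that an autonomous car knows about any obstacle on
its path, while an ATL property may state that a swarm of drones has a
joint strategy to reach a target; in LaTeX notation:

% Temporal-epistemic (hypothetical propositions): whenever an obstacle
% is present, the car knows that it is present.
\[ AG\,\bigl(\mathit{obstacle} \rightarrow K_{\mathit{car}}\,\mathit{obstacle}\bigr) \]

% ATL (hypothetical propositions): the swarm has a joint strategy to
% eventually reach the target.
\[ \langle\!\langle \mathit{swarm} \rangle\!\rangle\, F\, \mathit{targetReached} \]

Neither property is expressible in plain temporal logic: the first
refers to an agent's knowledge, the second to the strategic ability of
a coalition of agents.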
The aim of our research is to develop methodologies and toolkits for
the verification of autonomous systems.
Specifically, we currently work on the following objectives:
* Development of efficient model checking techniques for autonomous
systems against expressive specification languages, including epistemic
logic, ATL, and strategy logic.
* Parameterised model checking for the verification of infinite-state,
open autonomous systems.
* Implementation and open-source release of verification toolkits.
* Verification of autonomous systems generated by high-level languages
for multi-agent systems.
* Verification of concrete systems with industrial and academic partners.
More information on possible research topics and related publications
is available at: