PI: Ekaterina Komendantskaya, Heriot-Watt University (HWU). Co-I: Guy Katz, Hebrew University of Jerusalem, Israel (HUJI).

  • Deep Neural Networks (DNNs) are a crucial part of many modern Artificial Intelligence (AI) applications.
  • Verification of DNNs is not straightforward: when we deploy DNNs, the ideal data distribution is typically unknown or unattainable.
  • In the absence of a clear specification, it becomes problematic to guarantee the absence of “bugs”.

Deep Neural Networks (DNNs) are a crucial part of many modern Artificial Intelligence (AI) applications, mostly as components responsible for “perception tasks”, e.g. processing signals, images, speech or text. However, verification of DNNs is not straightforward: when we deploy DNNs, the ideal data distribution is typically unknown or unattainable. As a result, DNNs are by definition only approximately correct, only as good as the data they were trained on, and carry no additional semantic meaning.

In the absence of a clear specification, it becomes problematic to guarantee the absence of “bugs”, or to identify the components responsible for deviations from the intended model behaviour.

The goal of this project was to address the currently under-defined semantics of neural networks (“the problem of the missing spec”) by identifying and formalising a wider range of DNN verification properties suitable for autonomous systems.
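
As an illustration of the kind of property in question, consider the classical ε-ball robustness of a classifier f around an input x, one of the robustness formulations studied as a verification property in [1]: for a chosen perturbation radius ε,

  for all x̂ with ‖x̂ − x‖ ≤ ε, class(f(x̂)) = class(f(x)),

i.e. every input within distance ε of x must receive the same classification as x itself. This formula is only an illustrative sketch; [1] examines several such formalisations of robustness and their consequences for training and verification.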

PUBLICATIONS.
[1] M. Casadio, E. Komendantskaya, M. L. Daggitt, W. Kokke, G. Katz, G. Amir and I. Refaeli. Neural Network Robustness as a Verification Property: A Principled Case Study. In CAV'22, the International Conference on Computer-Aided Verification (part of FLOC'22).
[2] W. Kokke, E. Komendantskaya, D. Kienitz, R. Atkey and D. Aspinall. Neural Networks, Secure by Construction: An Exploration of Refinement Types. In APLAS'20, the 18th Asian Symposium on Programming Languages and Systems.