PI: Ekaterina Komendantskaya, Heriot-Watt University, Co-I: David Aspinall, University of Edinburgh
- The conflict between continuous methods (which enable data classification in multi-dimensional real space) and discrete methods (used by solvers and provers) poses a challenge for neural network (NN) verification.
- This project aims to turn this conflict from a disadvantage into a capability.
- For a given task, a continuum of NN models can serve as suitable classifiers, each giving reasonable prediction accuracy.
- Given the task of verifying a neural network, we are no longer required to treat the object as immutable, i.e., we are allowed to verify and deploy a different NN instead.
Most challenges encountered in neural network verification are due to the conflict between continuous and discrete methods. Conventionally, we assume that the object we verify is uniquely defined, often hand-written, and therefore needs to be verified as-is. Neural networks are different: often there is a continuum of models that can serve as suitable classifiers, and we usually do not have much preference for any of them, as long as they give reasonable prediction accuracy. Given the task of verifying a neural network, we are no longer required to think of the object as immutable, i.e., we are allowed to verify and deploy a different neural network instead.
This opens up new possibilities for verification and justifies several methods of NN transformation, including NN size reduction, piece-wise linearisation of activation functions either during or after training, the use of constraint-aware loss functions during training, and interleaving verification with adversarial training, which improves NN safety. Thus, verification becomes part of the object's construction. We also assume that the training-verification cycle may repeat potentially indefinitely, especially if NNs are retrained on new data. We call this approach continuous verification.
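To make this concrete, the sketch below shows one way a constraint-aware loss term could be interleaved with adversarial training in Python (TensorFlow is used because it appears in our toolchain below); the FGSM perturbation, the output-drift penalty, and the parameters eps and alpha are illustrative assumptions, not the project's actual training setup.

```python
import tensorflow as tf

def fgsm_perturb(model, x, y, eps):
    # Fast Gradient Sign Method: a one-step adversarial perturbation of x.
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            y, model(x), from_logits=True)  # assumes the model outputs logits
    grad = tape.gradient(loss, x)
    return x + eps * tf.sign(grad)

def constraint_aware_loss(model, x, y, eps, alpha=0.5):
    # Standard cross-entropy plus a penalty on how far the model's output
    # drifts inside the eps-ball, approximated at an FGSM-perturbed point.
    ce = tf.keras.losses.sparse_categorical_crossentropy(
        y, model(x), from_logits=True)
    x_adv = fgsm_perturb(model, x, y, eps)
    drift = tf.norm(model(x_adv) - model(x), axis=-1)
    return tf.reduce_mean(ce + alpha * drift)
```

Minimising the drift term nudges the trained network towards the robustness constraint, so part of the verification burden is already discharged during the construction of the object.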
However, to be truly successful, this methodology needs proper programming language support. Ideally, the programmer should only need to specify the basic neural network parameters and the desired verification constraints, leaving the fine-tuning of the training-verification cycle to the integrated tools.
We take a type-theoretic view of these problems, and have conducted our first successful experiments in building a verification infrastructure around these ideas. For this, we initially used F* and Liquid Haskell, functional languages with refinement types. We have also explored different ways of integrating neural network verification into other mainstream languages (e.g., our Z3 verifier that works with TensorFlow models in Python, and the Agda extension that allows us to perform neural network verification via refinement types).
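To illustrate the Z3 route, the sketch below encodes a toy two-layer ReLU network symbolically and asks the solver whether any input in an eps-ball around a point x_hat violates an output property; the weights, x_hat, and eps are made-up values here, whereas the actual verifier extracts them from a trained TensorFlow model.

```python
from z3 import Real, Solver, If, And, sat

# Toy 2-2-1 ReLU network with fixed (illustrative) weights.
W1 = [[1.0, -1.0], [0.5, 2.0]]
b1 = [0.0, -1.0]
W2 = [1.0, -1.0]
b2 = 0.5

def relu(e):
    return If(e > 0, e, 0)  # piece-wise linear encoding of the activation

x = [Real('x0'), Real('x1')]
h = [relu(sum(W1[i][j] * x[j] for j in range(2)) + b1[i]) for i in range(2)]
y = sum(W2[i] * h[i] for i in range(2)) + b2

# Property: the output stays positive on the eps-ball around x_hat.
# We assert its negation; unsatisfiability means the property holds.
x_hat, eps = [0.2, 0.3], 0.05
s = Solver()
s.add(And(*[And(x[i] >= x_hat[i] - eps, x[i] <= x_hat[i] + eps)
            for i in range(2)]))
s.add(y <= 0)
print("counterexample:", s.model() if s.check() == sat else "none (robust)")
```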
In the second half of the project, we explored the problem of NN verification as program synthesis. We performed an in-depth comparison of approaches to improving NN robustness, including their relationships, assumptions, interpretability, and after-training verifiability. We also looked at constraint-driven training, a general approach designed to encode arbitrary constraints, and showed that not all definitions of robustness are directly encodable in it.
Finally, we performed experiments to compare the applicability and efficacy of these training methods at ensuring that the network obeys the different robustness definitions. These results highlight that encoding even such a simple piece of knowledge as NN robustness into training is fraught with difficult choices and pitfalls.
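For intuition, the sketch below gives sampling-based paraphrases of three robustness definitions of the kind compared in [1] (classification, standard, and Lipschitz robustness); the function names and parameters are ours, xs stands for sample points drawn from the eps-ball around x_hat, and a verifier would quantify over the whole ball rather than over samples.

```python
import numpy as np

def classification_robust(f, x_hat, xs):
    # Every sampled point gets the same predicted class as x_hat.
    c = np.argmax(f(x_hat))
    return all(np.argmax(f(x)) == c for x in xs)

def standard_robust(f, x_hat, xs, delta):
    # Outputs at sampled points stay delta-close to f(x_hat).
    y_hat = f(x_hat)
    return all(np.linalg.norm(f(x) - y_hat, ord=np.inf) <= delta for x in xs)

def lipschitz_robust(f, x_hat, xs, L):
    # Output change is bounded by L times the input change.
    y_hat = f(x_hat)
    return all(np.linalg.norm(f(x) - y_hat) <= L * np.linalg.norm(x - x_hat)
               for x in xs)
```

Which of these a loss function can encode directly, and at what cost to accuracy, is exactly where the difficult choices arise.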
RELATED GRANT. AISEC: AI Secure and Explainable by Construction, EPSRC grant EP/T026952/1 (£800K)
PUBLICATIONS. [1] M. Casadio, E. Komendantskaya, M. L. Daggitt, W. Kokke, G. Katz, G. Amir, and I. Refaeli. Neural Network Robustness as a Verification Property: A Principled Case Study. In CAV'22, the 34th International Conference on Computer-Aided Verification (part of FLoC'22). [2] W. Kokke, E. Komendantskaya, D. Kienitz, R. Atkey, and D. Aspinall. Neural Networks, Secure by Construction: An Exploration of Refinement Types. In APLAS'20, the 18th Asian Symposium on Programming Languages and Systems.